Available Tools
Reference of tools available to the nctl ai agent.
The Nirmata Personal Agent (nctl ai) runs on your workstation and integrates directly into your development workflow, offering specialized guidance and automation without requiring cluster access or cloud services.
nctl ai is built with a security-first design: it only accesses directories you explicitly allow, loads only built-in skills plus any skills you explicitly provide (with the --skills option), and asks for your confirmation before performing any operation. See Security for details.
Install nctl using Homebrew:
brew tap nirmata/tap
brew install nctl
For more installation options, see nctl installation.
Run the personal agent in interactive mode:
nctl ai
You will be prompted to enter your business email:
Using nctl AI requires authentication with Nirmata Control Hub to access
AI-enabled services. Please enter your business email to sign up for a
free trial, or sign in to your account
Enter email: ****@******.com
A verification code has been sent to your email.
Enter verification code: ******
Email verified successfully!
Your credentials have been fetched and successfully saved.
👋 Hi, I am your Nirmata AI Platform Engineering Assistant!
I can help you automate security, compliance, and operational best practices
across your clusters and pipelines.
💡 Here are some tasks I can do for you, or ask anything:
▶ scan clusters
▶ generate policies and tests
▶ optimize costs
💡 type 'help' to see commands for working in nctl ai
───────────────────────────────────────────────────────────────────────────────────────
>
───────────────────────────────────────────────────────────────────────────────────────
Try one of the sample prompts shown above, or ask your own question.
Non-Interactive Mode:
You can also provide a prompt directly for single-shot requests:
nctl ai --prompt "create a policy that requires all pods to have resource limits"
See Command Reference for full details.
nctl ai is a personal agent specializing in Kubernetes, Policy as Code, and Platform Engineering, and provides comprehensive support across these domains.
nctl ai is built with a security-first approach. The agent operates within strict boundaries and always asks for permission before performing operations.
By default, nctl ai can only access the current working directory. To grant access to additional directories, use the --allowed-dirs flag:
nctl ai --allowed-dirs "/path/to/policies,/tmp"
You can also set the NIRMATA_AI_ALLOWED_DIRS environment variable:
export NIRMATA_AI_ALLOWED_DIRS="/path/to/policies,/tmp"
nctl ai
The agent will refuse to read, write, or execute files outside of the allowed directories, ensuring your filesystem remains protected.
Before performing any operation that modifies your system (writing files, executing commands, applying Kubernetes resources), nctl ai prompts for explicit confirmation. This ensures you remain in control of all changes.
For automated workflows where manual confirmation is not practical, you can disable permission checks:
nctl ai --skip-permission-checks --prompt "scan my cluster"
To allow destructive operations (e.g., deleting resources) in non-interactive mode, both --prompt and --skip-permission-checks must be combined with the --force flag:
nctl ai --force --skip-permission-checks --prompt "delete unused configmaps"
Warning: Use --skip-permission-checks and --force with caution. These flags bypass safety prompts and should only be used in trusted automation pipelines.
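For unattended pipelines, a small wrapper can make the bypass deliberate rather than accidental. This is an illustrative sketch, not an nctl feature: the RUN_UNATTENDED variable is a convention invented for this script.

```shell
#!/bin/sh
# Illustrative guard: only bypass confirmations when explicitly requested.
# RUN_UNATTENDED is this script's own convention, not an nctl setting.
if [ "${RUN_UNATTENDED:-false}" = "true" ]; then
  nctl ai --skip-permission-checks --prompt "scan my cluster"
else
  echo "refusing to run unattended: set RUN_UNATTENDED=true to proceed"
fi
```

Without the explicit opt-in, the script refuses to run and never reaches the permission-bypassing invocation.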
| Feature | Default Behavior | Override |
|---|---|---|
| File system access | Current working directory only | --allowed-dirs |
| Tool execution | Requires user confirmation | --skip-permission-checks |
| Destructive operations | Blocked in non-interactive mode | --force (requires --skip-permission-checks and --prompt) |
| Skill loading | Built-in skills only | --skills |
nctl ai provides built-in session and task management so you can pause, resume, and track work across multiple interactions.
Sessions automatically capture your conversation history, tool calls, and results. You can resume any previous session to continue where you left off.
Interactive commands:
| Command | Description |
|---|---|
| sessions | List all available sessions |
| save | Save current session |
| new | Create a new session |
| resume <id> | Resume a specific session (or latest) |
| exit / quit / q | Save session and exit |
| exit-nosave | Exit without saving |
CLI flags:
# Resume the most recent session
nctl ai --resume-session latest
# Resume a specific session by ID
nctl ai --resume-session 20260210-0206
# List all available sessions
nctl ai --list-sessions
# Delete a session by ID
nctl ai --delete-session 20260210-0206
Sessions work with any provider (Nirmata, Anthropic, Bedrock, etc.) and are saved periodically during conversation. Use Ctrl+D to explicitly save and exit, or Ctrl+C to exit without saving (the session ID is displayed for later resuming).
nctl ai tracks tasks automatically during complex, multi-step operations. The agent creates and updates a task list as it works, giving you visibility into progress.
Interactive commands:
| Command | Description |
|---|---|
| tasks | Show current todo list and task progress |
| task <N> | Show detailed information for task N (including worker output) |
The task list updates in real time as the agent works through multi-step workflows like cluster scanning, policy generation, or compliance assessments.
By default, nctl ai uses Nirmata Control Hub as its AI provider. However, you can configure it to work with other AI providers using the --provider flag.
The default provider uses Nirmata Control Hub for AI services. This requires authentication as described in the Quickstart section.
nctl ai --prompt "generate a policy to require pod labels"
Configuration:
Set the environment variable with your Anthropic API key:
export ANTHROPIC_API_KEY=<your-api-key>
Usage:
nctl ai --provider anthropic --prompt "What is Kubernetes? Answer in one sentence."
Notes:
Configuration:
Set the environment variable with your Google AI API key:
export GEMINI_API_KEY=<your-api-key>
Usage:
nctl ai --provider gemini --prompt "what is 5+5? answer in one word"
Notes:
- The environment variable must be GEMINI_API_KEY (not GOOGLE_API_KEY)
- The default model is gemini-2.5-pro

Configuration:
Set the environment variables with your Azure OpenAI endpoint and API key:
export AZURE_OPENAI_ENDPOINT="https://<your-resource-name>.openai.azure.com/"
export AZURE_OPENAI_API_KEY="<your-api-key>"
Usage:
nctl ai --provider azopenai --model gpt-4o --prompt "what is 5+5? answer in one word"
Notes:
- Select the model via the --model flag (e.g., gpt-4o, gpt-4, gpt-35-turbo)

Amazon Bedrock uses AWS credentials for authentication. Ensure you have a valid AWS profile configured with appropriate Bedrock access permissions.
Configuration:
Step 1: Login to AWS SSO (if using SSO):
aws sso login --profile your-profile-name
Step 2: Set your AWS profile environment variable:
export AWS_PROFILE=your-profile-name
Step 3: Verify your credentials are working:
aws sts get-caller-identity
You should see output similar to:
```
{
  "UserId": "AROA4JFRUINQC7VCOQ7UD:user@example.com",
  "Account": "123456789012",
  "Arn": "arn:aws:sts::123456789012:assumed-role/YourRole/user@example.com"
}
```
Usage:
nctl ai --provider bedrock --model us.anthropic.claude-sonnet-4-5-20250929-v1:0 --prompt "Your prompt here"
Notes:
- Select the model via the --model flag (defaults to Claude Sonnet 4 if not specified)
- Model IDs use the format region.provider.model-name-version:variant (e.g., us.anthropic.claude-sonnet-4-5-20250929-v1:0)
- Model IDs must start with the us. prefix (e.g., us.anthropic.claude-...); without the prefix, you'll get an "on-demand throughput isn't supported" error

| Provider | Environment Variables | Model Selection | Notes |
|---|---|---|---|
| Nirmata (default) | Authentication via nctl login | Automatic | Includes access to Nirmata platform features |
| Anthropic | ANTHROPIC_API_KEY | Automatic | Best for Claude-specific features |
| Google Gemini | GEMINI_API_KEY | Default: gemini-2.5-pro | Free tier available with rate limits |
| Azure OpenAI | AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY | Required via --model | Enterprise-ready with Azure integration |
| Amazon Bedrock | AWS_PROFILE (or AWS credentials) | Required via --model | AWS-native with IAM authentication |
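Since each provider only needs the right environment variables and flags, a wrapper script can centralize the choice per environment. A minimal sketch: NCTL_AI_PROVIDER is this wrapper's own variable (not an nctl setting), while the flags and model IDs are the ones documented above. The sketch echoes the command instead of executing it, so it is safe to run as-is.

```shell
#!/bin/sh
# NCTL_AI_PROVIDER is this wrapper's own variable, not an nctl setting.
provider="${NCTL_AI_PROVIDER:-nirmata}"
case "$provider" in
  nirmata)   flags="" ;;
  anthropic) flags="--provider anthropic" ;;
  gemini)    flags="--provider gemini" ;;
  azopenai)  flags="--provider azopenai --model gpt-4o" ;;
  bedrock)   flags="--provider bedrock --model us.anthropic.claude-sonnet-4-5-20250929-v1:0" ;;
  *) echo "unknown provider: $provider" >&2; exit 1 ;;
esac
# Echo instead of executing, so the sketch is safe to run as-is:
echo nctl ai $flags --prompt "scan clusters"
```

With NCTL_AI_PROVIDER unset, the wrapper falls through to the default Nirmata provider and prints the plain `nctl ai` invocation.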
You can configure nctl ai to route requests through AI/LLM proxy services, for example gateways that add centralized logging, caching, or cost controls.
Each provider supports proxy configuration through a base URL environment variable:
Anthropic with Proxy:
export ANTHROPIC_API_KEY=<your-api-key>
export ANTHROPIC_BASE_URL=http://your-proxy:8000
nctl ai --provider anthropic --prompt "Your prompt here"
Google Gemini with Proxy:
export GEMINI_API_KEY=<your-api-key>
export GEMINI_BASE_URL=http://your-proxy:8000
nctl ai --provider gemini --prompt "Your prompt here"
Azure OpenAI with Proxy:
export AZURE_OPENAI_API_KEY=<your-api-key>
export AZURE_OPENAI_ENDPOINT=http://your-proxy:8000
nctl ai --provider azopenai --model gpt-4o --prompt "Your prompt here"
Notes:
The agent has access to tools for command execution, Kyverno and policy workflows, file system operations, Slack and email, and task management. See the Available Tools reference for the full list in a searchable table.
Examples:
List Slack channels:
nctl ai --prompt "list my slack channels"
Send a message to a channel:
nctl ai --prompt "scan my cluster and send the report to dev-general channel"
nctl ai loads specialized skills dynamically based on your task (policy generation, cluster assessment, troubleshooting, cost management, and more). See the Available Skills reference for the full list in a table.
Built-in skills are curated and safe. They require read-only permissions and do not write to external URLs. They follow all security best practices.
You can also add your own Skills to customize the agent.
The Model Context Protocol (MCP) allows you to extend nctl ai with additional capabilities by connecting external MCP servers. These servers can provide specialized tools, resources, and functionality beyond the built-in features.
Configuration
To configure MCP servers, create a configuration file at ~/.nirmata/nctl/mcp.yaml:
```
servers:
  - name: resend-email
    command: node
    args:
      - /path/to/directory/mcp-send-email/build/index.js
    env:
      RESEND_API_KEY: your_api_key_here
      SENDER_EMAIL_ADDRESS: example@email.com
      REPLY_TO_EMAIL_ADDRESS: another_example@email.com
    capabilities:
      tools: true
      prompts: false
      resources: false
      attachments: true
```
Configuration Options
- name: Unique identifier for the MCP server
- command: Executable command to start the server (e.g., node, python, binary path)
- args: Array of command-line arguments passed to the server
- env: Environment variables required by the server (API keys, configuration values, etc.)
- capabilities: Defines what features the server provides:
  - tools: Server provides callable tools/functions
  - prompts: Server provides prompt templates
  - resources: Server provides data resources
  - attachments: Server can handle file attachments

Common Use Cases
MCP servers can extend nctl ai with capabilities like:
Note: Make sure the MCP server executable is installed and accessible at the specified path before adding it to the configuration.
You can extend nctl ai with custom domain knowledge and best practices by creating skill files. Skills provide specialized guidance that the personal agent dynamically loads based on the task context.
Use the --skills flag to load skills from any local directory:
nctl ai --skills "/path/to/custom-skills"
You can load multiple skill directories:
nctl ai --skills "/path/to/team-skills,/path/to/project-skills"
You can also set the NIRMATA_AI_SKILLS environment variable to always load your custom skills:
export NIRMATA_AI_SKILLS="/path/to/custom-skills"
nctl ai
Skills placed in the ~/.nirmata/nctl/skills directory are loaded automatically without requiring the --skills flag:
```
~/.nirmata/nctl/skills/
├── kyverno-cli-tests/
│   └── SKILL.md
└── my-custom-skill/
    └── SKILL.md
```
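Adding an auto-loaded skill is just a matter of creating a directory with a SKILL.md in it. A sketch (using a local ./skills stand-in directory so it can be run anywhere; point it at ~/.nirmata/nctl/skills for real use — the skill name and contents here are illustrative):

```shell
# Scaffold a minimal skill; the directory name and contents are illustrative.
skills_dir="./skills"   # stand-in for ~/.nirmata/nctl/skills
mkdir -p "$skills_dir/my-custom-skill"
cat > "$skills_dir/my-custom-skill/SKILL.md" <<'EOF'
# My Custom Skill
- Always add team and environment labels to generated policies.
EOF
ls "$skills_dir/my-custom-skill"
```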
Each skill is a Markdown file (named SKILL.md) containing domain knowledge, instructions, and best practices. Here’s an example:
Example: ~/.nirmata/nctl/skills/kyverno-cli-tests/SKILL.md
# Kyverno Tests (Unit Tests)
Kyverno CLI tests are used to validate policy behaviors against sample "good" and "bad" resources. Carefully follow the instructions and best practices below when running Kyverno CLI tests:
- Always use the supplied tools to generate and execute Kyverno tests.
- **Testing:** When creating test files for Kyverno policies, always name the test file as "kyverno-test.yaml".
- **Test Execution:** After generating a Kyverno policy, test file (kyverno-test.yaml), and Kubernetes resource files, always run the "kyverno test" command to validate that the policy works correctly with the test scenarios.
- **Test Results:** All Kyverno tests must `Pass` for a successful outcome. Stop when all tests pass.
- Only test for `Audit` mode. Do not try to update policies and test for `Enforce` mode.
## Test File Organization
Organize Kyverno CLI test files in a `.kyverno-test` sub-directory where the policy YAML is contained.
```
pod-security/
├── disallow-privileged-containers/
│   ├── disallow-privileged-containers.yaml
│   └── .kyverno-test/
│       ├── kyverno-test.yaml
│       ├── resources.yaml
│       └── variables.yaml
└── other-policies/
```
Skills can also include executable scripts (bash, Python, etc.) that the agent can run locally on your workstation for custom automation and validation workflows.
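For example, a skill like the Kyverno test skill above might ship a small helper script that checks each policy directory for its test file. This is a hypothetical helper following the layout convention shown earlier, not something nctl provides; the sample `mkdir`/`touch` lines just set up a runnable demo layout.

```shell
#!/bin/sh
# Hypothetical skill helper: verify every policy directory has a Kyverno test.
# Create a sample layout first so the sketch is runnable as-is:
mkdir -p pod-security/disallow-privileged-containers/.kyverno-test
touch pod-security/disallow-privileged-containers/.kyverno-test/kyverno-test.yaml

missing=0
for dir in pod-security/*/; do
  if [ ! -f "${dir}.kyverno-test/kyverno-test.yaml" ]; then
    echo "missing kyverno-test.yaml in ${dir}"
    missing=1
  fi
done
if [ "$missing" -eq 0 ]; then
  echo "all policies have tests"
fi
```

The agent can invoke such a script as part of a skill's validation workflow, keeping the check logic versioned alongside the skill itself.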
When you interact with nctl ai, the personal agent automatically:
- loads the built-in skills
- loads any skills in ~/.nirmata/nctl/skills
- loads skills from any configured --skills paths

Note: Skills are loaded dynamically based on context. You don't need to restart nctl ai after adding or modifying skill files.
After successful authentication, you can also access the Nirmata Control Hub web interface.
Alternatively, you can sign up for a 15-day free trial and log in manually using the CLI:
nctl login --userid YOUR_USER_ID --token YOUR_API_TOKEN
Run the agent as an MCP server using stdio transport (default):
nctl ai --mcp-server
For Cursor and Claude Desktop, edit ~/.cursor/mcp.json or ~/Library/Application Support/Claude/claude_desktop_config.json:
```
{
  "mcpServers": {
    "nctl": {
      "command": "nctl",
      "args": ["ai", "--mcp-server", "--token", "YOUR_NIRMATA_TOKEN"]
    }
  }
}
```
You can also run the MCP server over HTTP for remote or networked setups:
nctl ai --mcp-server --mcp-server-transport http --mcp-server-port 8080
The authoritative reference for nctl ai flags and examples is the nctl ai command documentation. That page is maintained to match the CLI.
- Type help in interactive mode for a full list of commands and capabilities.
- Run nctl ai --help for the latest usage, examples, and flags from your installed version.