AI Platform Assistant

AI-powered personal agent for platform engineers — policy development, testing, and Kubernetes operations from your terminal.

The Nirmata Personal Agent (nctl ai) runs on your workstation and integrates directly into your development workflow, offering specialized guidance and automation without requiring cluster access or cloud services.

nctl ai is built with a security-first design: it only accesses directories you explicitly allow, loads only built-in skills plus any skills you explicitly provide (via the --skills option), and asks for your confirmation before performing any operation. See Security for details.

Quickstart

Install nctl using Homebrew:

brew tap nirmata/tap
brew install nctl

For more installation options, see nctl installation.

Run the personal agent in interactive mode:

nctl ai

You will be prompted to enter your business email to:

  • sign up for a free trial
  • or sign in to your account
Using nctl AI requires authentication with Nirmata Control Hub to access 
AI-enabled services. Please enter your business email to sign up for a 
free trial, or sign in to your account

Enter email: ****@******.com

A verification code has been sent to your email.
Enter verification code: ******

Email verified successfully!
Your credentials have been fetched and successfully saved.

👋 Hi, I am your Nirmata AI Platform Engineering Assistant!

I can help you automate security, compliance, and operational best practices 
across your clusters and pipelines.

💡 Here are some tasks I can do for you, or ask anything:
  ▶ scan clusters
  ▶ generate policies and tests
  ▶ optimize costs

💡 type 'help' to see commands for working in nctl ai

───────────────────────────────────────────────────────────────────────────────────────
>
───────────────────────────────────────────────────────────────────────────────────────

Try some sample prompts like:

  • scan my cluster
  • generate a policy to require pod labels
  • summarize violations across my clusters
  • perform a Kyverno health check

Non-Interactive Mode:

You can also provide a prompt directly for single-shot requests:

nctl ai --prompt "create a policy that requires all pods to have resource limits"

See Command Reference for full details.

Key Capabilities

nctl ai is a personal agent specializing in Kubernetes, Policy as Code, and Platform Engineering. It provides comprehensive support across these domains:

Policy as Code

  • Generate Kyverno policies from natural language descriptions
  • Create and execute comprehensive Kyverno CLI and Chainsaw tests
  • Generate policy exceptions for failing workloads
  • Upgrade Kyverno policies from older versions to CEL
  • Convert policies from OPA/Sentinel to Kyverno

Platform Engineering

  • Troubleshoot Kyverno engine, webhook, and controller issues
  • Get policy recommendations for your environments
  • Manage compliance across clusters
  • Manage Nirmata agents across your clusters
  • Install and configure Kyverno and other controllers

Security

nctl ai is built with a security-first approach. The agent operates within strict boundaries and always asks for permission before performing operations.

Allowed Directories

By default, nctl ai can only access the current working directory. To grant access to additional directories, use the --allowed-dirs flag:

nctl ai --allowed-dirs "/path/to/policies,/tmp"

You can also set the NIRMATA_AI_ALLOWED_DIRS environment variable:

export NIRMATA_AI_ALLOWED_DIRS="/path/to/policies,/tmp"
nctl ai

The agent will refuse to read, write, or execute files outside of the allowed directories, ensuring your filesystem remains protected.
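
For scripted setups, the environment variable can be assembled from a list of directories. A minimal sketch (the directory names are hypothetical examples):

```shell
#!/bin/sh
# Build a comma-separated allowed-dirs list from a space-separated
# set of directories, then export it for nctl ai to pick up.
DIRS="/srv/policies /tmp/scratch"   # hypothetical directories
ALLOWED=""
for d in $DIRS; do
  if [ -z "$ALLOWED" ]; then
    ALLOWED="$d"
  else
    ALLOWED="$ALLOWED,$d"
  fi
done
export NIRMATA_AI_ALLOWED_DIRS="$ALLOWED"
echo "$NIRMATA_AI_ALLOWED_DIRS"     # prints /srv/policies,/tmp/scratch
# nctl ai   # the agent now sees only these directories
```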

Permission Checks

Before performing any operation that modifies your system (writing files, executing commands, applying Kubernetes resources), nctl ai prompts for explicit confirmation. This ensures you remain in control of all changes.

For automated workflows where manual confirmation is not practical, you can disable permission checks:

nctl ai --skip-permission-checks --prompt "scan my cluster"

To allow destructive operations (e.g., deleting resources) in non-interactive mode, both --prompt and --skip-permission-checks must be combined with the --force flag:

nctl ai --force --skip-permission-checks --prompt "delete unused configmaps"

Warning: Use --skip-permission-checks and --force with caution. These flags bypass safety prompts and should only be used in trusted automation pipelines.
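
In a trusted automation pipeline, a guarded step might look like the following sketch (the report file name is a hypothetical example):

```shell
#!/bin/sh
# Hypothetical nightly CI step: run a non-interactive scan with
# confirmation prompts disabled. Use only in pipelines you control.
set -u
REPORT="scan-report.txt"            # hypothetical report file name
if command -v nctl >/dev/null 2>&1; then
  nctl ai --skip-permission-checks --prompt "scan my cluster" > "$REPORT"
else
  # Fail soft when nctl is not on the PATH (e.g., a build agent
  # without the CLI installed).
  echo "nctl not installed; skipping AI scan" > "$REPORT"
fi
cat "$REPORT"
```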

Security Summary

Feature                  Default Behavior                  Override
File system access       Current working directory only    --allowed-dirs
Tool execution           Requires user confirmation        --skip-permission-checks
Destructive operations   Blocked in non-interactive mode   --force (with --skip-permission-checks and --prompt)
Skill loading            Built-in skills only              --skills

Session & Task Management

nctl ai provides built-in session and task management so you can pause, resume, and track work across multiple interactions.

Session Management

Sessions automatically capture your conversation history, tool calls, and results. You can resume any previous session to continue where you left off.

Interactive commands:

Command           Description
sessions          List all available sessions
save              Save current session
new               Create a new session
resume <id>       Resume a specific session (or latest)
exit / quit / q   Save session and exit
exit-nosave       Exit without saving

CLI flags:

# Resume the most recent session
nctl ai --resume-session latest

# Resume a specific session by ID
nctl ai --resume-session 20260210-0206

# List all available sessions
nctl ai --list-sessions

# Delete a session by ID
nctl ai --delete-session 20260210-0206

Sessions work with any provider (Nirmata, Anthropic, Bedrock, etc.) and are saved periodically during conversation. Use Ctrl+D to explicitly save and exit, or Ctrl+C to exit without saving (the session ID is displayed for later resuming).

Task Management

nctl ai tracks tasks automatically during complex, multi-step operations. The agent creates and updates a task list as it works, giving you visibility into progress.

Interactive commands:

Command    Description
tasks      Show current todo list and task progress
task <N>   Show detailed information for task N (including worker output)

The task list updates in real time as the agent works through multi-step workflows like cluster scanning, policy generation, or compliance assessments.

AI Provider Configuration

By default, nctl ai uses Nirmata Control Hub as its AI provider. However, you can configure it to work with other AI providers using the --provider flag.

Nirmata (Default)

The default provider uses Nirmata Control Hub for AI services. This requires authentication as described in the Quickstart section.

nctl ai --prompt "generate a policy to require pod labels"

Anthropic Claude

Configuration:

Set the environment variable with your Anthropic API key:

export ANTHROPIC_API_KEY=<your-api-key>

Usage:

nctl ai --provider anthropic --prompt "What is Kubernetes? Answer in one sentence."

Google Gemini

Configuration:

Set the environment variable with your Google AI API key:

export GEMINI_API_KEY=<your-api-key>

Usage:

nctl ai --provider gemini --prompt "what is 5+5? answer in one word"

Notes:

  • Environment variable is GEMINI_API_KEY (not GOOGLE_API_KEY)
  • Default model: gemini-2.5-pro
  • Free tier rate limit: approximately 2 requests per minute
  • Get your API key from Google AI Studio

Azure OpenAI

Configuration:

Set the environment variables with your Azure OpenAI endpoint and API key:

export AZURE_OPENAI_ENDPOINT="https://<your-resource-name>.openai.azure.com/"
export AZURE_OPENAI_API_KEY="<your-api-key>"

Usage:

nctl ai --provider azopenai --model gpt-4o --prompt "what is 5+5? answer in one word"

Notes:

  • Requires both endpoint URL and API key to be configured
  • You must specify the model with the --model flag (e.g., gpt-4o, gpt-4, gpt-35-turbo)
  • Get your credentials from Azure Portal

Amazon Bedrock

Amazon Bedrock uses AWS credentials for authentication. Ensure you have a valid AWS profile configured with appropriate Bedrock access permissions.

Configuration:

Step 1: Login to AWS SSO (if using SSO):

aws sso login --profile your-profile-name

Step 2: Set your AWS profile environment variable:

export AWS_PROFILE=your-profile-name

Step 3: Verify your credentials are working:

aws sts get-caller-identity

You should see output similar to:

{
    "UserId": "AROA4JFRUINQC7VCOQ7UD:user@example.com",
    "Account": "123456789012",
    "Arn": "arn:aws:sts::123456789012:assumed-role/YourRole/user@example.com"
}

Usage:

nctl ai --provider bedrock --model us.anthropic.claude-sonnet-4-5-20250929-v1:0 --prompt "Your prompt here"

Notes:

  • Requires valid AWS credentials with Bedrock access permissions
  • Supports Claude models from Anthropic available through Bedrock
  • Ensure your AWS account has Bedrock model access enabled in the target region
  • Specify the model with the --model flag; if omitted, it defaults to Claude Sonnet 4
  • Model IDs follow the format region.provider.model-name-version:variant and must include a region prefix such as us. (e.g., us.anthropic.claude-sonnet-4-5-20250929-v1:0). Without the prefix, you’ll get an “on-demand throughput isn’t supported” error.
  • For more information, see Amazon Bedrock Documentation

Provider Comparison

Provider            Environment Variables                         Model Selection           Notes
Nirmata (default)   Authentication via nctl login                 Automatic                 Includes access to Nirmata platform features
Anthropic           ANTHROPIC_API_KEY                             Automatic                 Best for Claude-specific features
Google Gemini       GEMINI_API_KEY                                Default: gemini-2.5-pro   Free tier available with rate limits
Azure OpenAI        AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY   Required via --model      Enterprise-ready with Azure integration
Amazon Bedrock      AWS_PROFILE (or AWS credentials)              Required via --model      AWS-native with IAM authentication

Using AI/LLM Proxies

You can configure nctl ai to route requests through AI/LLM proxy services. This is useful for:

  • Centralizing API key management
  • Implementing rate limiting and cost controls
  • Adding observability and monitoring
  • Load balancing across multiple providers
  • Using self-hosted AI gateways

Each provider supports proxy configuration through a base URL environment variable:

Anthropic with Proxy:

export ANTHROPIC_API_KEY=<your-api-key>
export ANTHROPIC_BASE_URL=http://your-proxy:8000

nctl ai --provider anthropic --prompt "Your prompt here"

Google Gemini with Proxy:

export GEMINI_API_KEY=<your-api-key>
export GEMINI_BASE_URL=http://your-proxy:8000

nctl ai --provider gemini --prompt "Your prompt here"

Azure OpenAI with Proxy:

export AZURE_OPENAI_API_KEY=<your-api-key>
export AZURE_OPENAI_ENDPOINT=http://your-proxy:8000

nctl ai --provider azopenai --model gpt-4o --prompt "Your prompt here"

Notes:

  • The proxy must be compatible with the provider’s API format
  • Popular proxy solutions include LiteLLM, OpenLLM, and enterprise gateways
  • Ensure your proxy is properly configured to forward requests to the actual AI provider
  • The base URL should include the protocol (http:// or https://) and port if needed
  • When using a proxy, set AZURE_OPENAI_ENDPOINT to your proxy URL instead of your Azure endpoint.
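
For example, a LiteLLM-based gateway might be configured with a model_list entry like the sketch below; the model names and routing here are illustrative, not a verified setup:

```yaml
# Hypothetical LiteLLM proxy config (config.yaml). Requests for the
# "claude-sonnet" alias are forwarded to Anthropic using the API key
# from the proxy host's environment.
model_list:
  - model_name: claude-sonnet
    litellm_params:
      model: anthropic/claude-sonnet-4-5
      api_key: os.environ/ANTHROPIC_API_KEY
```

With the proxy running (for example via litellm --config config.yaml --port 8000), point ANTHROPIC_BASE_URL at http://localhost:8000 so nctl ai routes through it.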

Available Tools

The agent has access to tools for command execution, Kyverno and policy workflows, file system operations, Slack and email, and task management. See the Available Tools reference for the full list in a searchable table.

Examples:

List Slack channels:

nctl ai --prompt "list my slack channels"

Send a message to a channel:

nctl ai --prompt "scan my cluster and send the report to dev-general channel"

Available Skills

nctl ai loads specialized skills dynamically based on your task (policy generation, cluster assessment, troubleshooting, cost management, and more). See the Available Skills reference for the full list in a table.

Skills Safety

Built-in skills are curated: they require only read-only permissions, do not write to external URLs, and follow security best practices.

You can also add your own Skills to customize the agent.

Adding Tools

The Model Context Protocol (MCP) allows you to extend nctl ai with additional capabilities by connecting external MCP servers. These servers can provide specialized tools, resources, and functionality beyond the built-in features.

Configuration

To configure MCP servers, create a configuration file at ~/.nirmata/nctl/mcp.yaml:

servers:
  - name: resend-email
    command: node
    args:
      - /path/to/directory/mcp-send-email/build/index.js
    env:
      RESEND_API_KEY: your_api_key_here
      SENDER_EMAIL_ADDRESS: example@email.com
      REPLY_TO_EMAIL_ADDRESS: another_example@email.com
    capabilities:
      tools: true
      prompts: false
      resources: false
      attachments: true

Configuration Options

  • name: Unique identifier for the MCP server
  • command: Executable command to start the server (e.g., node, python, binary path)
  • args: Array of command-line arguments passed to the server
  • env: Environment variables required by the server (API keys, configuration values, etc.)
  • capabilities: Defines what features the server provides:
    • tools: Server provides callable tools/functions
    • prompts: Server provides prompt templates
    • resources: Server provides data resources
    • attachments: Server can handle file attachments

Common Use Cases

MCP servers can extend nctl ai with capabilities like:

  • Sending emails and notifications
  • Interacting with external APIs and services
  • Accessing databases and data sources
  • Integration with cloud platforms
  • Custom business logic and workflows

Note: Make sure the MCP server executable is installed and accessible at the specified path before adding it to the configuration.

Adding Skills

You can extend nctl ai with custom domain knowledge and best practices by creating skill files. Skills provide specialized guidance that the personal agent dynamically loads based on the task context.

Loading Custom Skills

Use the --skills flag to load skills from any local directory:

nctl ai --skills "/path/to/custom-skills"

You can load multiple skill directories:

nctl ai --skills "/path/to/team-skills,/path/to/project-skills"

You can also set the NIRMATA_AI_SKILLS environment variable to always load your custom skills:

export NIRMATA_AI_SKILLS="/path/to/custom-skills"
nctl ai

Default Skills Directory

Skills placed in the ~/.nirmata/nctl/skills directory are loaded automatically without requiring the --skills flag:

~/.nirmata/nctl/skills/
  ├── kyverno-cli-tests/
  │   └── SKILL.md
  └── my-custom-skill/
      └── SKILL.md
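
As a sketch, a custom skill can be installed into the default directory with a couple of shell commands (the skill name and guidance below are hypothetical examples):

```shell
#!/bin/sh
# Create a hypothetical custom skill in the default skills directory;
# nctl ai will pick it up automatically on the next run.
SKILL_DIR="$HOME/.nirmata/nctl/skills/my-custom-skill"
mkdir -p "$SKILL_DIR"
cat > "$SKILL_DIR/SKILL.md" <<'EOF'
# My Custom Skill

Always add the `team: platform` label to generated policies.
EOF
cat "$SKILL_DIR/SKILL.md"
```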

Creating a Skill File

Each skill is a Markdown file (named SKILL.md) containing domain knowledge, instructions, and best practices. Here’s an example:

Example: ~/.nirmata/nctl/skills/kyverno-cli-tests/SKILL.md

# Kyverno Tests (Unit Tests)

Kyverno CLI tests are used to validate policy behaviors against sample "good" and "bad" resources. Carefully follow the instructions and best practices below when running Kyverno CLI tests:

- Always use the supplied tools to generate and execute Kyverno tests.
- **Testing:** When creating test files for Kyverno policies, always name the test file as "kyverno-test.yaml".
- **Test Execution:** After generating a Kyverno policy, test file (kyverno-test.yaml), and Kubernetes resource files, always run the "kyverno test" command to validate that the policy works correctly with the test scenarios.
- **Test Results:** All Kyverno tests must `Pass` for a successful outcome. Stop when all tests pass.
- Only test for `Audit` mode. Do not try to update policies and test for `Enforce` mode.

## Test File Organization

Organize Kyverno CLI test files in a `.kyverno-test` sub-directory where the policy YAML is contained.

```
pod-security/
  ├── disallow-privileged-containers/
  │   ├── disallow-privileged-containers.yaml
  │   └── .kyverno-test/
  │       ├── kyverno-test.yaml
  │       ├── resources.yaml
  │       └── variables.yaml
  └── other-policies/
```

Skills can also include executable scripts (bash, Python, etc.) that the agent can run locally on your workstation for custom automation and validation workflows.
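
The kyverno-test.yaml file referenced by this skill follows the Kyverno CLI test schema. A minimal sketch, with illustrative policy, rule, and resource names that must match your actual files:

```yaml
# Sketch of a Kyverno CLI test file (kyverno-test.yaml); the policy,
# rule, and resource names below are illustrative examples.
apiVersion: cli.kyverno.io/v1alpha1
kind: Test
metadata:
  name: disallow-privileged-containers
policies:
  - ../disallow-privileged-containers.yaml
resources:
  - resources.yaml
results:
  - policy: disallow-privileged-containers
    rule: check-privileged          # hypothetical rule name
    kind: Pod
    resources:
      - good-pod                    # hypothetical resource name
    result: pass
```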

Skill Best Practices

  • Clear Structure: Use headings and lists to organize information
  • Actionable Guidance: Provide specific, actionable instructions
  • Examples: Include code examples and sample outputs
  • Context: Explain when and why to use specific approaches
  • Avoid Ambiguity: Be explicit about requirements and expectations
  • Executable Scripts: Include scripts that can be run locally to automate workflows

How Skills Work

When you interact with nctl ai, the personal agent automatically:

  1. Analyzes your request to determine the relevant domain
  2. Loads applicable skills from the default directory and any --skills paths
  3. Applies the guidance and best practices from those skills
  4. Provides responses aligned with your custom knowledge base

Note: Skills are loaded dynamically based on context. You don’t need to restart nctl ai after adding or modifying skill files.

Accessing Nirmata Control Hub

After successful authentication, you can also access the Nirmata Control Hub web interface:

  1. Navigate to https://nirmata.io
  2. Use the same email address you provided during nctl setup
  3. Use the password you created in the authentication process

Alternatively, you can sign up for a 15-day free trial and log in manually using the CLI:

nctl login --userid YOUR_USER_ID --token YOUR_API_TOKEN

Integrating with MCP clients such as Cursor and Claude Code

Run the agent as an MCP server using stdio transport (default):

nctl ai --mcp-server

For Cursor and Claude Desktop, edit ~/.cursor/mcp.json or ~/Library/Application Support/Claude/claude_desktop_config.json:

{
  "mcpServers": {
    "nctl": {
      "command": "nctl",
      "args": ["ai", "--mcp-server", "--token", "YOUR_NIRMATA_TOKEN"]
    }
  }
}

You can also run the MCP server over HTTP for remote or networked setups:

nctl ai --mcp-server --mcp-server-transport http --mcp-server-port 8080

Command Reference

The authoritative reference for nctl ai flags and examples is the nctl ai command documentation. That page is maintained to match the CLI.

  • In interactive mode: type help for a full list of commands and capabilities.
  • From the terminal: run nctl ai --help for the latest usage, examples, and flags from your installed version.

Available Tools

Reference of tools available to the nctl ai agent.

Available Skills

Reference of built-in skills loaded by nctl ai for policy, clusters, and operations.