Getting Started

An introduction to the Cloud Admission Controller and Cloud Scanner

This section provides quick start guides for the Cloud Controller and its core features: the cloud admission controller and the cloud scanner. These guides are intended for proof-of-concept and testing purposes only and are not suitable for production environments. For production-ready installation instructions, refer to the installation page.

Prerequisites

Before you begin, complete the following steps:

  1. A Kubernetes cluster. You can quickly create a cluster using the following command:

    kind create cluster
    
  2. Install the ClusterPolicyReport and ClusterEphemeralReport CRDs:

    kubectl apply -f https://raw.githubusercontent.com/kyverno/kyverno/refs/heads/main/config/crds/policyreport/wgpolicyk8s.io_clusterpolicyreports.yaml
    kubectl apply -f https://raw.githubusercontent.com/kyverno/kyverno/refs/heads/main/config/crds/reports/reports.kyverno.io_clusterephemeralreports.yaml
    
  3. Set up AWS credentials to scan the account. You can update the scanner.awsConfig section in the values.yaml file as shown below:

    scanner:
      awsConfig:
        accessKeyId: <AWS_ACCESS_KEY_ID>
        secretAccessKey: <AWS_SECRET_ACCESS_KEY>
        sessionToken: <AWS_SESSION_TOKEN>
    

Replace <AWS_ACCESS_KEY_ID>, <AWS_SECRET_ACCESS_KEY>, and <AWS_SESSION_TOKEN> with your AWS credentials. (One way to obtain temporary credentials is shown after this list.)

  4. Deploy the Cloud Control Helm chart into your Kubernetes cluster. The chart path below assumes you are running the command from a local checkout of the repository:

    helm install cloud-control ./charts/cloud-controller --create-namespace --namespace nirmata
    
  5. Verify Installation:

    kubectl get pods -n nirmata
    

    The output should show the cloud controller pods deployed in the nirmata namespace.

    NAME                                                  READY   STATUS    RESTARTS   AGE
    cloud-control-admission-controller-57cf7b745b-8bhkj   1/1     Running   0          103s
    cloud-control-reports-controller-864bcbc488-j5xkr     1/1     Running   0          103s
    cloud-control-scanner-7b7c8fd977-hmlqp                1/1     Running   0          103s
    
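If you only have long-lived credentials and prefer short-lived ones for this walkthrough, one option (a sketch, assuming the AWS CLI is already configured with permission to call STS) is to request a session token and copy the AccessKeyId, SecretAccessKey, and SessionToken values from the returned Credentials object into scanner.awsConfig:

aws sts get-session-token --duration-seconds 3600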

Cloud Admission Controller

This section provides a step-by-step guide on how to use the admission controller to intercept AWS requests and apply policies to them.

Setting up a Proxy

To intercept AWS requests, you need to create a proxy server that listens on a specific port. The proxy server applies your policies to incoming requests and forwards only compliant requests to AWS.

In this example, we will create a proxy server that listens on port 8443 and intercepts all requests destined for AWS. It checks these requests against the policies labeled app: kyverno, as specified by the policy selector, and forwards only compliant requests to AWS.

apiVersion: nirmata.io/v1alpha1
kind: Proxy
metadata:
  name: proxy-sample
spec:
  port: 8443
  caKeySecret:
    name: cloud-control-admission-controller-svc.nirmata.svc.tls-ca
    namespace: nirmata
  urls:
    - ".*.amazonaws.com"
  policySelectors:
    - matchLabels:
        app: kyverno

The admission controller automatically generates self-signed CA certificates. These certificates are stored as a Secret in the nirmata Namespace.

To retrieve the Secret name, run the following command:

kubectl get secrets -n nirmata

The output should show the generated secret:

NAME                                                        TYPE                 DATA   AGE
cloud-control-admission-controller-svc.nirmata.svc.tls-ca   kubernetes.io/tls    2      4m28s

The cloud-control-admission-controller-svc.nirmata.svc.tls-ca Secret contains the required CA certificate. As shown in the above Proxy configuration, the spec.caKeySecret field references this Secret.
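
Save the Proxy manifest shown above to a file (for example, proxy.yaml, a filename assumed here) and apply it to the cluster:

kubectl apply -f proxy.yaml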

The proxy server is now running within your Kubernetes cluster, listening on port 8443. To use this proxy from your local machine, you need to establish a connection between your local port 8443 and the proxy server’s port 8443 within the cluster. This is achieved using port forwarding.

kubectl port-forward svc/cloud-control-admission-controller-svc 8443:8443 -n nirmata

With this command running, any traffic sent to localhost:8443 on your machine is forwarded to the proxy server in the cluster. This allows you to interact with the proxy and enforce your policies on AWS requests as if the proxy server were running locally.
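
Keep the port-forward running in a separate terminal. As a quick connectivity check (a sketch: the -k flag skips certificate verification because the CA certificate has not been downloaded yet, and AWS will reject the unsigned request, but any HTTP response confirms the proxy path works):

curl -k -x http://localhost:8443 https://ecs.us-east-1.amazonaws.com/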

ValidatingPolicies

We will create a ValidatingPolicy to ensure that ECS clusters include the group tag. The policy is labeled app: kyverno to match the policy selector specified in the Proxy configuration. Because its failure action is Enforce, the policy blocks non-compliant requests rather than forwarding them to AWS.

apiVersion: nirmata.io/v1alpha1
kind: ValidatingPolicy
metadata:
  name: ecs-cluster
  labels:
    app: kyverno
spec:
  failureAction: Enforce
  admission: true
  rules:
    - name: check-tags
      identifier: payload.clusterName
      match:
        all:
        - (metadata.provider): "AWS"
        - (metadata.service): "ecs"
        - (metadata.action): "CreateCluster"
      assert:
        all:
        - message: A 'group' tag is required
          check:
            payload:
              (tags[?key=='group'] || `[]`):
                (length(@) > `0`): true
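
In the assertion, the JMESPath expression tags[?key=='group'] selects any tags whose key is group, the || `[]` fallback substitutes an empty list when no tags are present, and the length check requires at least one match. Save the policy to a file (for example, ecs-cluster-policy.yaml, a filename assumed here) and apply it so the proxy's policy selector picks it up:

kubectl apply -f ecs-cluster-policy.yaml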

Using the AWS CLI

You need to configure your AWS CLI to route requests through the proxy server. This involves setting two environment variables:

  1. HTTPS_PROXY: This tells the AWS CLI to send all requests through the controller acting as a local proxy.

    export HTTPS_PROXY=http://localhost:8443
    
  2. AWS_CA_BUNDLE: The controller uses a self-signed security certificate. This variable tells the AWS CLI to trust that certificate.

    First, you need to download the certificate:

    kubectl get secrets -n nirmata cloud-control-admission-controller-svc.nirmata.svc.tls-ca -o jsonpath="{.data.tls\.crt}" | base64 --decode > ca.crt
    

    Then, set the environment variable:

    export AWS_CA_BUNDLE=ca.crt
    

    Because the cloud admission controller uses a self-signed certificate rather than one issued by a publicly trusted CA, the AWS CLI will not trust it by default. Setting AWS_CA_BUNDLE to the path of the controller's CA certificate (ca.crt) tells the AWS CLI to trust that certificate when verifying the proxy's TLS connection. Without it, the AWS CLI would reject the connection as untrusted.

Once configured, your AWS CLI commands will be checked against the defined policies before being sent to AWS.

Example: Creating an ECS Cluster

The following examples demonstrate how the admission controller enforces a policy requiring all ECS clusters to have a group tag.

  1. Create an ECS cluster without the group tag:

    aws ecs create-cluster --cluster-name bad-cluster
    

    The output should be similar to the following:

    An error occurred (406) when calling the CreateCluster operation: ecs-cluster.check-tags bad-cluster: -> A 'group' tag is required
    -> all[0].check.payload.(tags[?key=='group'] || `[]`).(length(@) > `0`): Invalid value: false: Expected value: true
    

    As expected, the request was blocked since it violates the ValidatingPolicy that requires all ECS clusters to have the group tag.

  2. Create an ECS cluster with the group tag:

    aws ecs create-cluster --cluster-name good-cluster --tags key=group,value=test key=owner,value=test
    

    The output should be similar to the following:

    {
        "cluster": {
            "clusterArn": "arn:aws:ecs:us-east-1:844333597536:cluster/good-cluster",
            "clusterName": "good-cluster",
            "status": "ACTIVE",
            "registeredContainerInstancesCount": 0,
            "runningTasksCount": 0,
            "pendingTasksCount": 0,
            "activeServicesCount": 0,
            "statistics": [],
            "tags": [
                {
                    "key": "owner",
                    "value": "test"
                },
                {
                    "key": "group",
                    "value": "test"
                }
            ],
            "settings": [
                {
                    "name": "containerInsights",
                    "value": "disabled"
                }
            ],
            "capacityProviders": [],
            "defaultCapacityProviderStrategy": []
        }
    }
    

    The request was successful since it complies with the ValidatingPolicy that requires all ECS clusters to have the group tag.

Cloud Scanner

This section provides a step-by-step guide on how to run the scanner to scan your AWS account. In this example, we are going to scan ECS services in the us-east-1 region.

ValidatingPolicies

We will create two ValidatingPolicies: one matches ECS clusters and the other matches ECS task definitions. Both policies check for the presence of the group tag.

apiVersion: nirmata.io/v1alpha1
kind: ValidatingPolicy
metadata:
  name: check-task-definition-tags
spec:
  scan: true
  rules:
    - name: check-task-definition-tags
      identifier: payload.family
      match:
        all:
        - (metadata.provider): AWS
        - (metadata.region): us-east-1
        - (metadata.service): ecs
        - (metadata.resource): TaskDefinition
      assert:
        all:
        - message: >- 
            ECS task definitions must have a 'group' tag
          check:
            payload:
              (tags[?key=='group'] || `[]`):
                (length(@) > `0`): true
---
apiVersion: nirmata.io/v1alpha1
kind: ValidatingPolicy
metadata:
  name: check-ecs-cluster-tags
spec:
  scan: true
  rules:
    - name: check-tags
      identifier: payload.clusterName
      match:
        all:
        - (metadata.provider): "AWS"
        - (metadata.region): us-east-1
        - (metadata.service): "ecs"
        - (metadata.resource): "Cluster"
      assert:
        all:
        - message: A 'group' tag is required
          check:
            payload:
              (tags[?key=='group'] || `[]`):
                (length(@) > `0`): true
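
Save both policies to a file (for example, scanner-policies.yaml, a filename assumed here) and apply them; they must exist before the scan is configured in the next steps:

kubectl apply -f scanner-policies.yaml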

ECS Clusters and Task Definitions

To test the scanner, we will create ECS clusters and task definitions, some with and some without the required group tag, so that both compliant and non-compliant resources exist. If you are continuing from the admission controller walkthrough, unset the HTTPS_PROXY and AWS_CA_BUNDLE environment variables first; otherwise the Enforce-mode admission policy will block the non-compliant requests below. You can verify the created resources with the commands shown after this list.

  1. Create an ECS cluster named bad-cluster without the group tag:

    aws ecs create-cluster --cluster-name bad-cluster
    
  2. Register a task definition named bad-task without the group tag:

    aws ecs register-task-definition \
    --family bad-task \
    --container-definitions '[{"name": "my-app", "image": "nginx:latest", "essential": true, "portMappings": [{"containerPort": 80, "hostPort": 80}]}]' \
    --requires-compatibilities FARGATE \
    --cpu 256 \
    --memory 512 \
    --network-mode awsvpc
    
  3. Create an ECS cluster named good-cluster with the group tag:

    aws ecs create-cluster --cluster-name good-cluster --tags key=group,value=development
    
  4. Register a task definition named good-task with the group tag:

    aws ecs register-task-definition \
    --family good-task \
    --container-definitions '[{"name": "my-app", "image": "nginx:latest", "essential": true, "portMappings": [{"containerPort": 80, "hostPort": 80}]}]' \
    --requires-compatibilities FARGATE \
    --cpu 256 \
    --memory 512 \
    --network-mode awsvpc \
    --tags '[{"key": "group", "value": "production"}]'
    
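Before configuring the scan, you can confirm that the resources were created (assuming your AWS CLI is configured for the us-east-1 region):

aws ecs list-clusters --region us-east-1
aws ecs list-task-definitions --region us-east-1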

AWSAccountConfig

To scan AWS resources, you need to define the scope of the scan by creating an AWSAccountConfig custom resource. This configuration specifies the target AWS account ID, regions, and services. It’s important to create the necessary policies before applying the AWSAccountConfig to ensure they are ready when the scanner starts.

apiVersion: nirmata.io/v1alpha1
kind: AWSAccountConfig
metadata:
  name: aws-scan
spec:
  scanInterval: 1h
  accountID: "123456789012"
  accountName: "mariamfahmy"
  regions:
    - us-east-1
  services:
    - ECS
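
Save the configuration to a file (for example, aws-scan.yaml, a filename assumed here) and apply it. You can then follow the scanner logs while the scan runs; the deployment name below matches the pod listing from the installation step:

kubectl apply -f aws-scan.yaml
kubectl logs -n nirmata deployment/cloud-control-scanner -f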

When the AWSAccountConfig resource is created, the scanner is triggered and scans the specified AWS account for ECS resources in the us-east-1 region. Policy reports are then generated for the scanned resources.

View Reports

In this example, the scanner will generate four ClusterPolicyReports, one for each of the bad-cluster, bad-task, good-cluster, and good-task resources. The reports show the compliance status of each resource based on the ValidatingPolicies.

To view the generated reports, run the following command:

kubectl get clusterpolicyreports

The output should show the generated reports:

NAME                                                              KIND                NAME             PASS   FAIL   WARN   ERROR   SKIP   AGE
1a468eba2818db9333ede8428bf6c910d467db5d5fc1b36adc535ce32cea2c5   ECSCluster          good-cluster     1      0      0      0       0      4s
1c4a23fad560fe66ee7d139b456451d8aaf7c75a9aaa12007a37f1d6f3056b9   ECSCluster          bad-cluster      0      1      0      0       0      4s
91696bc8dbb327de99c4d34c579de8bd71e2ef45ad325d10d39d690ad14776c   ECSTaskDefinition   bad-task__2      0      1      0      0       0      4s
cf987d912032e51712ad73a2067a1c5ffee16d8872575166c0739ffedfc0766   ECSTaskDefinition   good-task__2     1      0      0      0       0      4s

To view the details of a specific report, run the following command:

kubectl get clusterpolicyreports 1c4a23fad560fe66ee7d139b456451d8aaf7c75a9aaa12007a37f1d6f3056b9 -o yaml

The output should be similar to the following:

apiVersion: wgpolicyk8s.io/v1alpha2
kind: ClusterPolicyReport
metadata:
  labels:
    app.kubernetes.io/managed-by: cloud-control-point
    cloud.policies.nirmata.io/account-id: "123456789012"
    cloud.policies.nirmata.io/account-name: mariamfahmy
    cloud.policies.nirmata.io/last-modified: "1731585775"
    cloud.policies.nirmata.io/provider: AWS
    cloud.policies.nirmata.io/region: us-east-1
    cloud.policies.nirmata.io/resource-id: 1c4a23fad560fe66ee7d139b456451d8aaf7c75a9aaa12007a37f1d6f3056b9
    cloud.policies.nirmata.io/resource-name: bad-cluster
    cloud.policies.nirmata.io/resource-type: Cluster
    cloud.policies.nirmata.io/service: ecs
    cloud.policies.nirmata.io/ttl: 1h10m0s
  name: 1c4a23fad560fe66ee7d139b456451d8aaf7c75a9aaa12007a37f1d6f3056b9
results:
- message: |-
    -> A 'group' tag is required
     -> all[0].check.payload.(tags[?key=='group'] || `[]`).(length(@) > `0`): Invalid value: false: Expected value: true    
  policy: check-ecs-cluster-tags
  result: fail
  rule: check-tags
  scored: true
  source: cloud-control
  timestamp:
    nanos: 0
    seconds: 1731585775
scope:
  apiVersion: nirmata.io/v1alpha1
  kind: ECSCluster
  name: bad-cluster
summary:
  error: 0
  fail: 1
  pass: 0
  skip: 0
  warn: 0
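
Each report is named after a hash of the resource, but the labels shown above let you look up reports by resource name instead of the hash. For example, to list the reports for bad-cluster:

kubectl get clusterpolicyreports -l cloud.policies.nirmata.io/resource-name=bad-cluster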

For more details on the cloud scanner, refer to the Cloud Scanner section.