Cloud Admission Controller
What is an Admission Controller
In Kubernetes, an admission controller is a key component that intercepts requests to the Kubernetes API server, validating or mutating resource configurations before they are persisted in the cluster. Admission controllers are designed to enforce policies, ensuring that any new or modified resources—such as pods, services, or deployments—meet certain compliance and security standards.
For example, an admission controller can prevent a deployment if it doesn’t adhere to specified security policies, such as disallowing images with certain vulnerabilities or ensuring that all containers are running with minimal privileges. By catching non-compliant configurations at this stage, admission controllers protect the system from risky or unintended changes, adding an essential layer of governance.
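In Kubernetes, this kind of enforcement is commonly implemented with a policy engine such as Kyverno (which also appears later in this document's Proxy CR example). As an illustration, a minimal Kyverno ClusterPolicy that blocks privileged containers might look like the following sketch (the specific rule and messages are illustrative):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged
spec:
  validationFailureAction: Enforce   # block non-compliant requests at admission
  rules:
  - name: check-privileged
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "Privileged containers are not allowed."
      pattern:
        spec:
          containers:
          # anchor syntax: if securityContext/privileged is set, it must be false
          - =(securityContext):
              =(privileged): "false"
```

With this policy in Enforce mode, a `kubectl apply` of a Pod with `privileged: true` is rejected by the API server before the Pod is ever persisted.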
What an Admission Controller Means for the Cloud
While admission controllers are standard in Kubernetes, there has traditionally been no equivalent for cloud environments. In the cloud, resources are often provisioned dynamically and across multiple providers, making it challenging to enforce consistent governance and prevent misconfigurations before they impact the environment.
Introducing admission controller-like functionality to the cloud, as Nirmata’s Cloud Control Point does, brings the same level of preventive governance to cloud resources. This means that every new resource created in a cloud environment can be evaluated against policies in real time, whether it’s a virtual machine, a database instance, or a storage bucket. This capability helps prevent misconfigurations and ensures cloud resources are provisioned securely, adhering to organizational policies and compliance requirements.
Benefits of Cloud Admission Controller
Implementing an admission controller for the cloud offers several significant benefits:
- Proactive Prevention of Misconfigurations: By intercepting and evaluating resources before they are fully created or modified, an admission controller prevents misconfigurations from reaching production. This is critical for avoiding security vulnerabilities, compliance violations, and unintended costs.
- Consistent Governance Across Cloud Environments: With an admission controller in place, policies are consistently enforced across multiple cloud providers and services. This ensures that governance standards are met regardless of where resources are hosted.
- Increased Operational Efficiency and Reduced Risk: An admission controller reduces the need for reactive fixes or costly rollbacks, catching issues early and minimizing the risk of human error.
Example Use Case
Imagine an organization that has a policy requiring all cloud storage buckets to be encrypted. Without an admission controller, a team member could accidentally create an unencrypted bucket, which might go unnoticed and expose sensitive data. With Cloud Control Point acting as an admission controller, any new storage bucket is evaluated against this policy, and if encryption is missing, the bucket creation is blocked. This preventive control ensures that only compliant configurations are allowed, greatly reducing security risks.
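Conceptually, the policy in this use case could be expressed as a rule matching bucket-creation calls and requiring an encryption setting. The sketch below is illustrative only: the policy kind, field names, and rule structure are assumptions, not the exact Nirmata schema.

```yaml
# Illustrative sketch: field names are assumptions, not the exact Nirmata schema.
apiVersion: nirmata.io/v1alpha1
kind: Policy
metadata:
  name: require-bucket-encryption
spec:
  admission: true          # evaluate at request time, before the bucket exists
  rules:
  - name: check-encryption
    match:
      provider: AWS
      service: s3
      action: CreateBucket
    validate:
      message: "Storage buckets must have server-side encryption enabled."
```

When a matching CreateBucket request arrives without encryption configured, the proxy returns an error response instead of forwarding the call to the cloud provider.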
Components
Proxy Controller
The Proxy Controller monitors for the creation of Proxy Custom Resources (CRs), which define configurations for the proxy server. Each Proxy CR specifies settings like target URLs for interception and applicable policies. Upon detecting a new Proxy CR, the Proxy Controller automatically deploys a proxy server instance tailored to these configurations.
Example of a Proxy CR:
```yaml
apiVersion: nirmata.io/v1alpha1
kind: Proxy
metadata:
  name: proxy-sample
spec:
  port: 8443
  caKeySecret:
    name: cloud-admission-controller-service.nirmata.svc.tls-ca
    namespace: nirmata
  urls:
  - ".*.amazonaws.com"
  policySelectors:
  - matchLabels:
      app: kyverno
```
Configuration Options
| Field | Description |
|---|---|
| port | The port on which the proxy listens for HTTPS connections. In this example, the port is set to 8443. |
| caKeySecret | The Kubernetes Secret containing the CA certificate and private key for the proxy. |
| caKeySecret.name | The name of the Kubernetes Secret holding the Certificate Authority (CA) certificate and key. |
| caKeySecret.namespace | The namespace where the caKeySecret is located. |
| urls | A list of URL patterns to intercept and proxy. In this example, the configuration targets any subdomain of amazonaws.com. |
| policySelectors | Selectors to match specific policies in the Kubernetes environment. In this example, it matches resources labeled app: kyverno. |
Explanation of caKeySecret and Its Role in MITM
The caKeySecret field is critical for enabling the man-in-the-middle (MITM) functionality of the proxy. This field references a Kubernetes Secret containing a Certificate Authority (CA) certificate and its associated private key, allowing the proxy to establish and decrypt HTTPS connections transparently.
The Cloud Admission controller automatically generates the CA certificate and private key when deploying the proxy server; however, you can also use a custom CA by providing the certificate and key in a Kubernetes Secret.
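A custom CA supplied this way is an ordinary Kubernetes Secret. The sketch below shows one plausible shape; the `kubernetes.io/tls` type and the `tls.crt`/`tls.key` key names follow the standard Kubernetes TLS Secret convention and are assumptions here, not a confirmed requirement of the controller.

```yaml
# Illustrative sketch: the secret name/namespace match the caKeySecret
# reference in the Proxy CR above; key names follow the standard
# kubernetes.io/tls convention (an assumption, not confirmed by the docs).
apiVersion: v1
kind: Secret
metadata:
  name: cloud-admission-controller-service.nirmata.svc.tls-ca
  namespace: nirmata
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded CA certificate>
  tls.key: <base64-encoded CA private key>
```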
- Certificate Authority (CA): The CA acts as a trusted intermediary that the proxy uses to issue certificates dynamically for intercepted HTTPS connections. When a client attempts to connect to a URL that matches one in the urls field, the proxy generates a temporary certificate signed by the CA, which the client will trust if it recognizes this CA.
- Private Key: The CA’s private key enables the proxy to sign these certificates and establish trusted connections on the fly. Without the private key, the proxy would be unable to mimic the HTTPS connection to the target server.
Why a CA is Required
In a man-in-the-middle configuration, the proxy intercepts secure HTTPS traffic, which is encrypted. The CA and its private key allow the proxy to decrypt this traffic securely. By generating signed certificates for each session, the proxy ensures that clients do not reject the connection due to untrusted certificates, maintaining the integrity of the proxy while still allowing full inspection or modification of the traffic.
Note: For security, it is essential to restrict access to the caKeySecret and ensure it is stored securely, as unauthorized access to the CA’s private key would compromise the integrity of the proxy and the intercepted traffic.
Proxy Server
The Proxy Server is the core of the Cloud Admission Controller, acting as the intermediary between clients and cloud providers. It intercepts requests, processes them, and evaluates compliance against relevant policies before forwarding or blocking the request. Key functions include:
- Request Interception: Captures API requests before they reach the cloud provider as per the defined URLs in the Proxy CR.
- Pre-processing: Extracts necessary details (e.g., provider, service, action, and region) for policy evaluation, creating a structured payload.
- Policy Evaluation: Applies policies from the Policy Cache by evaluating the structured payload against these policies using the Policy Engine.
Policy Action:
- Enforce Mode: Requests violating policies are blocked, and an error response is returned. An event is generated.
- Audit Mode: Non-compliant requests are allowed but logged, with an Event generated for further review.
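The interception-evaluation-action flow above can be sketched as a simplified model. This is an illustrative Python sketch, not the actual implementation; the function names, payload fields, and mode strings are assumptions.

```python
from dataclasses import dataclass

@dataclass
class PolicyResult:
    compliant: bool
    message: str = ""

def handle_request(payload: dict, policies: list, mode: str) -> tuple:
    """Evaluate a pre-processed request payload against cached policies.

    Returns (allowed, events): whether the request is forwarded to the
    cloud provider, plus the audit events generated along the way.
    """
    events = []
    for policy in policies:
        result = policy(payload)  # each policy is a callable returning PolicyResult
        if not result.compliant:
            events.append({"payload": payload, "violation": result.message})
            if mode == "Enforce":
                return False, events  # block: caller returns an error response
            # Audit mode: record the violation but keep going
    return True, events

# Hypothetical policy: require encryption on S3 CreateBucket calls
def require_encryption(payload: dict) -> PolicyResult:
    if payload.get("action") == "CreateBucket" and not payload.get("body", {}).get("encryption"):
        return PolicyResult(False, "bucket must be encrypted")
    return PolicyResult(True)

blocked_request = {"provider": "AWS", "service": "s3", "action": "CreateBucket", "body": {}}
allowed, events = handle_request(blocked_request, [require_encryption], mode="Enforce")
```

In Enforce mode the first violation short-circuits and blocks the request; in Audit mode the same violation is only recorded as an event and the request proceeds.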
Note: You need to set the spec.admission field to true in the Policy CR to enable the admission controller functionality.
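For example, the relevant fragment of a Policy CR might look like this (the kind and surrounding fields are illustrative; only spec.admission is stated by the note above):

```yaml
apiVersion: nirmata.io/v1alpha1
kind: Policy              # illustrative; surrounding fields are assumptions
metadata:
  name: sample-policy
spec:
  admission: true         # enable admission-time (blocking) evaluation
```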
Preprocessor Configuration
The Preprocessor is a critical component of the Proxy Server, responsible for extracting relevant details from intercepted requests and creating a structured payload for policy evaluation.
```yaml
apiVersion: nirmata.io/v1alpha1
kind: Preprocessor
metadata:
  name: aws
spec:
  match:
    provider: AWS
  admission:
    apiDefinitions:
    - https://raw.githubusercontent.com/aws/aws-sdk-go/refs/heads/main/models/apis/ecs/2014-11-13/api-2.json
    idRules:
    - match:
        provider: AWS
        service: ecs
        resource: CapacityProvider
        action: CreateCapacityProvider
      identifierPath: name
    - match:
        provider: AWS
        service: ecs
        resource: CapacityProvider
        action: DeleteCapacityProvider
      identifierPath: capacityProvider
    # More rules follow
```
Explanation of admission.apiDefinitions and idRules
admission.apiDefinitions
The apiDefinitions field provides URLs to API schemas published by AWS. These API definitions describe the structure of requests and responses for AWS services, allowing the Preprocessor to understand the metadata available for each resource.
By referencing these schemas, the Preprocessor dynamically retrieves the metadata structure for AWS ECS resources, ensuring that any resource type or action listed in the API definition can be inspected and identified according to the rules specified in idRules.
idRules
The idRules field provides a set of matching rules for specific AWS resources and actions. Each rule uses a combination of provider, service, resource, and action identifiers to precisely target certain API calls. The identifierPath field in each rule specifies a JMESPath expression that extracts the identifier or name of the resource from the metadata.
For example:
- When creating an ECS Cluster, the rule specifies an identifierPath of clusterName. This tells the Preprocessor to look for the clusterName field in the API request, where the name of the newly created cluster is located.
- For ECS TaskDefinition actions, such as DeregisterTaskDefinition, the identifierPath is set to taskDefinition. This extracts the taskDefinition name, ensuring accurate identification of the task definition being processed.
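The extraction step can be sketched as follows. This Python model is illustrative only: the real Preprocessor evaluates full JMESPath expressions, while the sketch handles just simple dotted paths, and the rule table here is a hypothetical fragment.

```python
def extract_identifier(payload: dict, identifier_path: str):
    """Resolve a dotted path (a tiny subset of JMESPath) against a request payload."""
    value = payload
    for key in identifier_path.split("."):
        if not isinstance(value, dict) or key not in value:
            return None  # path not present in this request
        value = value[key]
    return value

# Hypothetical idRules fragment, mirroring the examples above
id_rules = [
    {"action": "CreateCluster", "identifierPath": "clusterName"},
    {"action": "DeregisterTaskDefinition", "identifierPath": "taskDefinition"},
]

def identify(action: str, request_body: dict):
    """Find the matching rule for an action and extract the resource identifier."""
    for rule in id_rules:
        if rule["action"] == action:
            return extract_identifier(request_body, rule["identifierPath"])
    return None
```

Given a CreateCluster request body of `{"clusterName": "prod-cluster"}`, the matching rule's identifierPath resolves to "prod-cluster", which becomes the resource identifier used during policy evaluation.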
Note: JMESPath expressions are essential here as they allow flexible and powerful querying of JSON data, making it easy to pinpoint fields across different resource types and actions. The combination of apiDefinitions and idRules provides a scalable solution for extracting metadata for any AWS ECS resource while accommodating new actions and resources as AWS services evolve.