Capsule Proxy and Rancher Projects
This guide explains how to set up the integration between Capsule Proxy and Rancher Projects.
It then shows how access to Kubernetes cluster-wide resources becomes transparent for tenant users.
Rancher Shell and Capsule
In order to integrate the Rancher Shell with Capsule, the Kubernetes API requests made from the shell need to be routed through Capsule Proxy.
The capsule-rancher-addon provides this integration transparently.
Install the Capsule addon
Add the Clastix Helm repository
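Assuming Helm is already installed, the repository can be added as follows (the repository alias `clastix` is a convention used here):

```shell
# Add the Clastix Helm repository and refresh the local chart cache
helm repo add clastix https://clastix.github.io/charts
helm repo update
```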
After updating the cache with the Clastix Helm repository, a Helm chart named capsule-rancher-addon is available.
Install it, paying attention to the following Helm values:

- the Secret key that contains the CA certificate used to sign the Capsule Proxy TLS certificate (it should be `"ca.crt"` when Capsule Proxy has been configured with certificates generated with Cert Manager).
- `proxy.servicePort`: the port configured for the Capsule Proxy Kubernetes Service (`443` in this setup).
- `proxy.serviceURL`: the name of the Capsule Proxy Service (`"capsule-proxy.capsule-system.svc"` when installed in the `capsule-system` Namespace).
Rancher Cluster Agent
In both CLI and dashboard use cases, the Cluster Agent is responsible for the two-way communication between Rancher and the downstream cluster.
In a standard setup, the Cluster Agent communicates with the API server. In this setup, it will communicate with Capsule Proxy to ensure filtering of cluster-scoped resources for Tenants.
The Cluster Agent accepts the `KUBERNETES_SERVICE_HOST` and `KUBERNETES_SERVICE_PORT` environment variables, which will be set, at cluster import-time, to the values of the Capsule Proxy Service; for example, `KUBERNETES_SERVICE_PORT=9001`. You can skip overriding the port by installing Capsule Proxy with the Helm value that exposes its Service on the default port `443`.
The expected CA is the one whose certificate is stored in a ConfigMap in the same Namespace as the Cluster Agent (cattle-system).
Capsule Proxy needs to serve an x509 certificate whose root CA is trusted by the Cluster Agent. This can be achieved either by using the Kubernetes CA to sign its certificate, or by using a dedicated root CA.
With the Kubernetes root CA
Note: this can be achieved only when the Kubernetes root CA keypair is accessible. For example, it is likely to be possible with on-premises setups, but not with managed Kubernetes services.
With this approach, Cert Manager will sign certificates with the Kubernetes root CA, which needs to be provided as a Secret:

```shell
kubectl create secret tls -n capsule-system kubernetes-ca-key-pair --cert=/path/to/ca.crt --key=/path/to/ca.key
```
When installing Capsule Proxy with the Helm chart, you need to specify that the Capsule Proxy certificates are generated by Cert Manager with an external CA, and disable the job that generates the certificates without Cert Manager:
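A sketch of such an install; the exact value names used below (`certManager.generateCertificates`, `certManager.externalCA.enabled`, `certManager.externalCA.secretName`, `options.generateCertificates`) are assumptions and should be verified against the capsule-proxy chart's values for your version:

```shell
# Generate certificates with Cert Manager using the external CA Secret created above,
# and disable the certificate-generation job that runs without Cert Manager.
# NOTE: value names are assumptions; check the chart's values.yaml.
helm upgrade --install capsule-proxy clastix/capsule-proxy \
  -n capsule-system \
  --set certManager.generateCertificates=true \
  --set certManager.externalCA.enabled=true \
  --set certManager.externalCA.secretName=kubernetes-ca-key-pair \
  --set options.generateCertificates=false
```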
Enable tenant users to access cluster resources
In order to allow tenant users to list cluster-scoped resources, like Nodes, Tenants need to be configured with proper `proxySettings`, for example:

```yaml
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
    proxySettings:
    - kind: Nodes
      operations:
      - List
[...]
```
Also, in order to assign or filter nodes per Tenant, labels are needed on the nodes so they can be selected:

```shell
kubectl label node worker-01 capsule.clastix.io/tenant=oil
```
and a node selector at the Tenant level:

```yaml
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: oil
spec:
  nodeSelector:
    capsule.clastix.io/tenant: oil
[...]
```
The final manifest is:
```yaml
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
    proxySettings:
    - kind: Nodes
      operations:
      - List
  nodeSelector:
    capsule.clastix.io/tenant: oil
```
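To verify the result, a tenant owner can point kubectl at the Capsule Proxy endpoint; with the manifest above, listing nodes should return only the nodes labeled for the tenant. The URL below assumes the in-cluster Service name and the default port `9001` used earlier in this guide:

```shell
# As tenant owner "alice", with credentials for the cluster, list nodes through
# Capsule Proxy: only nodes labeled capsule.clastix.io/tenant=oil are returned.
kubectl get nodes --server=https://capsule-proxy.capsule-system.svc:9001
```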
The same applies to the other cluster-wide resources supported by Capsule Proxy, such as StorageClasses, IngressClasses, and PriorityClasses.
More on this in the official documentation.