---
title: Kubernetes API / Kubectl
lang: en-US
meta:
  - name: description
    content: >-
      This guide covers how to add authentication and authorization to the
      Kubernetes API server using single sign-on and Pomerium.
---
# Securing Kubernetes

The following guide covers how to secure Kubernetes using Pomerium.
## Kubernetes

This guide is written for two starting points:

- New users without a Kubernetes cluster running Pomerium. This track will use Kind to set up a local test environment, and assumes it is installed locally.
- Users who followed the Pomerium using Helm doc, and have a running Pomerium instance on a Kubernetes cluster.

The following section covers configuring a test cluster using Kind. Afterwards, use the appropriate tab where the steps diverge.
### Kind

- Create a config file (`kind-config.yaml`):

  ```yaml
  # kind-config.yaml
  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  nodes:
    - role: control-plane
      extraPortMappings:
        - containerPort: 30443
          hostPort: 30443
  ```

- Create the cluster:

  ```bash
  kind create cluster --config=./kind-config.yaml
  ```

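Once the cluster is created, you can optionally check that the control plane is reachable before continuing. This is a quick sanity check; the `kind-kind` context name assumes the default cluster name:

```bash
# List Kind clusters and query the control plane of the default one.
kind get clusters
kubectl cluster-info --context kind-kind
```
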
## Pomerium Service Account

Pomerium uses a single service account and user impersonation headers to authenticate and authorize users in Kubernetes.
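
To make the mechanism concrete, the sketch below shows roughly what an impersonated request to the API server looks like. It is illustrative only; `$APISERVER` and `$SA_TOKEN` are placeholders for your cluster endpoint and the Pomerium service account token.

```bash
# A request authenticated as the service account, impersonating an end user.
# The ClusterRole created below is what grants the "impersonate" verb.
curl -k "$APISERVER/api/v1/namespaces/default/pods" \
  -H "Authorization: Bearer $SA_TOKEN" \
  -H "Impersonate-User: someuser@example.com" \
  -H "Impersonate-Group: some-group"
```
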
:::: tabs
::: tab Kind
To create the Pomerium service account, use the following config (`pomerium-k8s.yaml`):

```yaml
# pomerium-k8s.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: default
  name: pomerium
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pomerium-impersonation
rules:
  - apiGroups:
      - ""
    resources:
      - users
      - groups
      - serviceaccounts
    verbs:
      - impersonate
  - apiGroups:
      - "authorization.k8s.io"
    resources:
      - selfsubjectaccessreviews
    verbs:
      - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pomerium
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pomerium-impersonation
subjects:
  - kind: ServiceAccount
    name: pomerium
    namespace: default
```

Apply it with:

```bash
kubectl apply -f ./pomerium-k8s.yaml
```

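Optionally, confirm the objects exist; the names below match the manifest above:

```bash
kubectl get serviceaccount pomerium --namespace default
kubectl get clusterrole pomerium-impersonation
kubectl get clusterrolebinding pomerium
```
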
:::
::: tab Helm
If you've installed Pomerium using Helm, you can enable the service account by setting `apiProxy.enabled` in `pomerium-values.yaml`:

```yaml
apiProxy:
  enabled: true
```

Upgrade with Helm to apply:

```bash
helm upgrade --install pomerium pomerium/pomerium --values=./pomerium-values.yaml
```

:::
::::
## User Permissions

To grant access to users within Kubernetes, you will need to configure RBAC permissions. For example:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: someuser@example.com
```

Permissions can also be granted to groups the Pomerium user is a member of.
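
For example, a minimal sketch of a group-scoped binding using `kubectl create` is shown below; the group name is a placeholder and must match a group claim provided by your identity provider:

```bash
# Grant the built-in, read-only "view" ClusterRole to a hypothetical group.
kubectl create clusterrolebinding engineering-view \
  --clusterrole=view \
  --group=engineering@example.com
```
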
## Certificates

Those who followed the Certificates section of Pomerium using Helm will already have a certificate solution, and can skip this section. If not, we will generate wildcard certificates for the `*.localhost.pomerium.io` domain using `mkcert`:

```bash
mkcert '*.localhost.pomerium.io'
```

This creates two files:

- `_wildcard.localhost.pomerium.io-key.pem`
- `_wildcard.localhost.pomerium.io.pem`

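If mkcert's local certificate authority has not been set up on this machine yet, installing it is a one-time step that makes the generated certificate trusted by your browser:

```bash
mkcert -install
```
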
## Pomerium

### Configuration

:::: tabs
::: tab Kind
Our Pomerium configuration will route requests from `k8s.localhost.pomerium.io:30443` to the kube-apiserver. Create a Kubernetes YAML configuration file (`pomerium.yaml`):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: pomerium
  labels:
    app: pomerium
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pomerium
  template:
    metadata:
      labels:
        app: pomerium
    spec:
      containers:
        - name: pomerium
          image: pomerium/pomerium:master
          ports:
            - containerPort: 30443
          env:
            - name: ADDRESS
              value: "0.0.0.0:30443"
            - name: AUTHENTICATE_SERVICE_URL
              value: "https://authenticate.localhost.pomerium.io:30443"
            - name: CERTIFICATE
              value: "..." # $(base64 -w 0 <./_wildcard.localhost.pomerium.io.pem)
            - name: CERTIFICATE_KEY
              value: "..." # $(base64 -w 0 <./_wildcard.localhost.pomerium.io-key.pem)
            - name: COOKIE_SECRET
              value: "..." # $(head -c32 /dev/urandom | base64 -w 0)
            - name: IDP_PROVIDER
              value: google
            - name: IDP_CLIENT_ID
              value: "..."
            - name: IDP_CLIENT_SECRET
              value: "..."
            - name: POLICY
              value: "..." # $(echo "$_policy" | base64 -w 0)
---
apiVersion: v1
kind: Service
metadata:
  namespace: default
  name: pomerium
spec:
  type: NodePort
  selector:
    app: pomerium
  ports:
    - port: 30443
      targetPort: 30443
      nodePort: 30443
```

Make sure to fill in the appropriate values as indicated.

The policy should be a base64-encoded block of YAML:

```yaml
- from: https://k8s.localhost.pomerium.io:30443
  to: https://kubernetes.default.svc
  tls_skip_verify: true
  allow_spdy: true
  policy:
    - allow:
        or:
          - domain:
              is: pomerium.com
  kubernetes_service_account_token: "/var/run/secrets/kubernetes.io/serviceaccount/token"
```

Applying this configuration will create a Pomerium deployment and service within Kubernetes that is accessible from `*.localhost.pomerium.io:30443`.
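
One way to produce the placeholder values and apply the manifest is sketched below; it assumes the mkcert files from the Certificates section and that the policy block above has been saved to a local `policy.yaml` (both file names are assumptions to adapt to your setup):

```bash
# Generate the base64 values referenced in pomerium.yaml, then paste them in.
base64 -w 0 <./_wildcard.localhost.pomerium.io.pem       # CERTIFICATE
base64 -w 0 <./_wildcard.localhost.pomerium.io-key.pem   # CERTIFICATE_KEY
head -c32 /dev/urandom | base64 -w 0                     # COOKIE_SECRET
base64 -w 0 <./policy.yaml                               # POLICY

# Create the deployment and service.
kubectl apply -f ./pomerium.yaml
```
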
:::
::: tab Helm
- Update `pomerium-values.yaml` to add a policy for access to the Kubernetes API server through Pomerium:

  ```yaml
  policy:
    - from: https://k8s.localhost.pomerium.io
      to: https://kubernetes.default.svc
      tls_skip_verify: true
      allow_spdy: true
      allowed_users:
        - user@companyDomain.com
      kubernetes_service_account_token: "/var/run/secrets/kubernetes.io/serviceaccount/token"
  ```

- Apply the new configuration:

  ```bash
  helm upgrade --install pomerium pomerium/pomerium --values=./pomerium-values.yaml
  ```

:::
::::
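
Whichever path you followed, it can help to confirm the Pomerium pods and service are running before configuring kubectl. The namespace below matches the Kind manifest above; a Helm install may place resources elsewhere:

```bash
kubectl get pods,svc --namespace default
```
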
## Kubectl

Pomerium uses a custom Kubernetes exec-credential provider for kubectl access. This provider will open up a browser window to the Pomerium authenticate service and generate an authorization token that will be used for Kubernetes API calls.
The Pomerium Kubernetes exec-credential provider can be installed via `go get`:

```bash
env GO111MODULE=on GOBIN=$HOME/bin go get github.com/pomerium/pomerium/cmd/pomerium-cli@master
```

Make sure `$HOME/bin` is in your `PATH`.
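
If it is not, one way to add it for the current shell session is:

```bash
export PATH="$HOME/bin:$PATH"
```
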
To use the Pomerium Kubernetes exec-credential provider, update your kubectl config. For a local environment with Kind, append `:30443` to each instance of `https://k8s.localhost.pomerium.io`:

```bash
# Add Cluster
kubectl config set-cluster via-pomerium --server=https://k8s.localhost.pomerium.io

# Add Context
kubectl config set-context via-pomerium --user=via-pomerium --cluster=via-pomerium

# Add credentials command
kubectl config set-credentials via-pomerium \
  --exec-command=pomerium-cli \
  --exec-arg=k8s,exec-credential,https://k8s.localhost.pomerium.io \
  --exec-api-version=client.authentication.k8s.io/v1beta1
```

Here's the resulting configuration:
- Cluster:

  ```yaml
  clusters:
    - cluster:
        server: https://k8s.localhost.pomerium.io
      name: via-pomerium
  ```

- Context:

  ```yaml
  contexts:
    - context:
        cluster: via-pomerium
        user: via-pomerium
      name: via-pomerium
  ```

- User:

  ```yaml
  - name: via-pomerium
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        args:
          - k8s
          - exec-credential
          - https://k8s.localhost.pomerium.io
        command: pomerium-cli
        env: null
  ```

With `kubectl` configured you can now query the Kubernetes API via Pomerium:

```bash
kubectl --context=via-pomerium cluster-info
```

You should be prompted to login and see the resulting cluster info.
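
From here, any kubectl command can be routed through Pomerium, and the context can be made the default so the `--context` flag is no longer needed:

```bash
# Optional: make the Pomerium-proxied context the default, then try a command.
kubectl config use-context via-pomerium
kubectl get namespaces
```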