WIP helm updates
alexfornuto 2021-07-22 18:43:11 -05:00
parent de9f627a35
commit d6ed5431c1
4 changed files with 261 additions and 55 deletions
@@ -12,20 +12,60 @@ This quick-start will show you how to deploy Pomerium with [Helm](https://helm.s
## Prerequisites
- A Kubernetes provider ([Google Cloud](https://console.cloud.google.com/), for example).
- A configured [identity provider].
- Install [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
- Install the [Google Cloud SDK](https://cloud.google.com/kubernetes-engine/docs/quickstart).
- Install [helm](https://helm.sh/docs/using_helm/).
- [TLS certificates].
Though there are [many ways](https://unofficial-kubernetes.readthedocs.io/en/latest/setup/pick-right-solution/) to work with Kubernetes, for the purpose of this guide, we will be using Google's [Kubernetes Engine](https://cloud.google.com/kubernetes-engine/). That said, most of the following steps should be very similar using any other provider.
In addition to sharing many of the same features as the Docker-based quickstart guide, the default Helm deployment script also includes a bootstrapped certificate authority, enabling mutually authenticated and encrypted communication between services without depending on the external LetsEncrypt certificates. Keeping the external domain certificate decoupled makes it easier to renew external certificates.
## Configure
1. In your Kubernetes provider, create a new cluster. If you are only installing open-source Pomerium, a single node will suffice. If you're preparing a configuration for [Pomerium Enterprise](/enterprise/install/helm.md), use at least 3 nodes.
If you're using Google Cloud, for example, and have the [Google Cloud SDK](https://cloud.google.com/kubernetes-engine/docs/quickstart) installed, you can use the following command. Substitute your preferred region and node count:
```bash
gcloud container clusters create pomerium --region us-west2 --num-nodes 1
```
1. Set the context for `kubectl` to your new cluster. The exact command depends on your Kubernetes provider.
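For the GKE cluster created in the previous step, for example, something like the following should work (the cluster name and region are assumed from that `gcloud` command):
```bash
# Fetch credentials for the new cluster and point the current kubectl context at it.
# Adjust the cluster name and region if you used different values above.
gcloud container clusters get-credentials pomerium --region us-west2
```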
1. Add Pomerium's Helm repo:
```bash
helm repo add pomerium https://helm.pomerium.io
```
1. So that we can create a valid test route later, add Bitnami's Helm repo, from which we'll pull the nginx chart:
```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
```
1. Update Helm's repository index:
```bash
helm repo update
```
1. Create the Pomerium namespace in your cluster, and set your `kubectl` context to it:
```bash
kubectl create namespace pomerium
kubectl config set-context --current --namespace=pomerium
```
1. Install nginx to the cluster as a test service:
```bash
helm upgrade --install nginx bitnami/nginx
```
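Optionally, you can confirm the test service is up before continuing. A quick check (assuming the `pomerium` namespace and `nginx` release name used above):
```bash
# List pods and services in the current (pomerium) namespace; the nginx release should appear here.
kubectl get pods,svc
```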
<<<@/examples/helm/helm_gke.sh
@@ -4,6 +4,8 @@ sidebarDepth: 1
description: Install Pomerium Enterprise in Kubernetes with Helm
---
# Install Pomerium Enterprise Console with Helm
This document covers installing Pomerium Enterprise Console into your existing helm-managed Kubernetes cluster.
## Before You Begin
@@ -12,9 +14,10 @@ The Pomerium Enterprise Console requires:
- An accessible RDBMS. We support PostgreSQL 9+.
- A database and user with full permissions for it.
- A certificate management solution. This page assumes certificates managed with [cert-manager](https://cert-manager.io/docs/). If you use another certificate solution, adjust the steps accordingly.
- An existing Pomerium installation. If you don't already have the open-source Pomerium installed in your cluster, see [Pomerium using Helm](/docs/quick-start/helm.md) before you continue.
## System Requirements
For a production deployment, Pomerium Enterprise requires:
@@ -45,67 +48,191 @@ For a production deployment, Pomerium Enterprise requires:
- Pomerium Enterprise Console must be able to reach a supported database instance
- Pomerium Proxy service must be able to forward traffic to the Pomerium Enterprise Console
## Certificates
This setup uses [mkcert](https://github.com/FiloSottile/mkcert) to generate certificates that are trusted by your local web browser for testing, and cert-manager to manage them. If you already have a certificate solution, you can skip the steps below and move on to [the next stage](#configure-kubernetes-for-pomerium).
### Configure mkcert
1. After [installing mkcert](https://github.com/FiloSottile/mkcert#installation), confirm the presence and names of your local CA files:
```bash
mkcert -install
The local CA is already installed in the system trust store! 👍
The local CA is already installed in the Firefox and/or Chrome/Chromium trust store! 👍
ls $(mkcert -CAROOT)
rootCA-key.pem rootCA.pem
```
### Install cert-manager
If you haven't already, install cert-manager and create a CA issuer. You can follow their docs listed below, or use the steps provided:
- [cert-manager: Installing with Helm](https://cert-manager.io/docs/installation/kubernetes/#installing-with-helm)
- [cert-manager: CA](https://cert-manager.io/docs/configuration/ca/)
1. Create a namespace for cert-manager:
```bash
kubectl create namespace cert-manager
```
1. Add the jetstack.io repository and update Helm:
```bash
helm repo add jetstack https://charts.jetstack.io
helm repo update
```
1. Install cert-manager to your cluster:
```bash
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace \
--version v1.4.0 --set installCRDs=true
```
1. Confirm deployment with `kubectl get pods --namespace cert-manager`:
```bash
kubectl get pods --namespace cert-manager
NAME READY STATUS RESTARTS AGE
cert-manager-5d7f97b46d-8g942 1/1 Running 0 33s
cert-manager-cainjector-69d885bf55-6x5v2 1/1 Running 1 33s
cert-manager-webhook-8d7495f4-s5s6p 1/1 Running 0 33s
```
1. In your Pomerium namespace, create a Kubernetes TLS secret from the root CA certificate and key in your local CA root:
```bash
kubectl create secret tls pomerium-tls-ca --namespace=pomerium \
--cert=$(mkcert -CAROOT)/rootCA.pem --key=$(mkcert -CAROOT)/rootCA-key.pem
```
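Optionally, verify that the secret was created before moving on:
```bash
# The secret should be listed with type kubernetes.io/tls
kubectl get secret pomerium-tls-ca --namespace pomerium
```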
1. Define an Issuer configuration in `issuer.yaml`:
```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: pomerium-issuer
  namespace: pomerium
spec:
  ca:
    secretName: pomerium-tls-ca
```
1. Apply and confirm:
```bash
kubectl apply -f issuer.yaml
issuer.cert-manager.io/pomerium-issuer created
kubectl get issuers.cert-manager.io
NAME READY AGE
pomerium-issuer True 5s
```
1. Create certificate configurations for Pomerium and Pomerium Enterprise, or just for Enterprise if your existing Pomerium configuration is already configured for TLS encryption:
- `pomerium-certificates.yaml`
<<< @/examples/kubernetes/pomerium-certificates.yaml
::: tip
If you already have a public domain configured for your cluster, you can use it in place of `localhost.pomerium.io`.
:::
- `pomerium-console-certificates.yaml`
<<< @/examples/kubernetes/pomerium-console-certificates.yaml
1. Apply the required certificate configurations, and confirm:
```bash
kubectl apply -f pomerium-certificates.yaml # If open-source Pomerium wasn't already configured for TLS
kubectl apply -f pomerium-console-certificates.yaml
```
```bash
kubectl get certificate
NAME READY SECRET AGE
pomerium-cert True pomerium-tls 10s
pomerium-console-cert True pomerium-console-tls 10s
pomerium-redis-cert True pomerium-redis-tls 10s
```
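If any certificate stays in a not-ready state, `kubectl describe` surfaces the cert-manager events for it. For example, for the `pomerium-cert` certificate created above:
```bash
# Shows the certificate's status conditions and related cert-manager events
kubectl describe certificate pomerium-cert --namespace pomerium
```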
## Configure Kubernetes for Pomerium
If open-source Pomerium was already configured in your Kubernetes cluster, you can skip to the [next step](#update-pomerium).
1. Create the Pomerium namespace, and set your local context to it:
```bash
kubectl create namespace pomerium
kubectl config set-context --current --namespace=pomerium
```
## Update Pomerium
1. Open your helm values file for Pomerium. This document will refer to this file as `pomerium-values.yaml`.
1. Confirm that the `authenticate` block is using the correct TLS secret:
```yaml
authenticate:
  existingTLSSecret: pomerium-tls
```
1. Add or modify the `ingress` block to set `enabled: false`:
```yaml
ingress:
  enabled: false
```
1. Add or modify the `proxy` block:
```yaml
proxy:
  existingTLSSecret: pomerium-tls
  service:
    type: LoadBalancer
```
1. In the `config` block, make sure to set a `sharedSecret`, `cookieSecret`, and `rootDomain`:
```yaml
config:
  existingTLSSecret: pomerium-tls
  sharedSecret: # Shared with the console. You can use "head -c32 /dev/urandom | base64" to create one
  cookieSecret: # Shared with the console. You can use "head -c32 /dev/urandom | base64" to create one
  rootDomain: appspace.companydomain.com
```
These values are generated by default when not set, but must be explicitly set when configuring Pomerium with the Enterprise Console.
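For example, both secrets can be generated with the command referenced in the comments above (run it once per value):
```bash
# Generates a random 256-bit value, base64-encoded; run once for sharedSecret and again for cookieSecret
head -c32 /dev/urandom | base64
```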
1. Also in `config`, set a `policy` block for the Enterprise Console:
```yaml
policy:
  - from: https://console.appspace.companydomain.com
    to: https://pomerium-console.pomerium.svc.cluster.local
    allowed_domains:
      - companydomain.com
    pass_identity_headers: true
```
Remember to adjust the `to` value to match your namespace.
1. Add the `redis` and `databroker` blocks:
```yaml
redis:
  enabled: true
  generateTLS: false
  tls:
    certificateSecret: pomerium-redis-tls
databroker:
  existingTLSSecret: pomerium-tls
  storage:
    type: redis
```
1. Use Helm to update your Pomerium installation:
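A typical invocation might look like the following sketch, which assumes the open-source chart was installed as a release named `pomerium` from the `pomerium/pomerium` chart added earlier:
```bash
# Apply the updated values to the existing release (creates it if it doesn't exist)
helm upgrade --install pomerium pomerium/pomerium --values pomerium-values.yaml
```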
@@ -120,23 +247,24 @@ For a production deployment, Pomerium Enterprise requires:
```yaml
database:
  type: pg
  username: pomeriumDbUser
  password: IAMASTRONGPASSWORDLOOKATME
  host: 198.51.100.53
  name: pomeriumDbName
  sslmode: require
config:
  sharedSecret: # Shared with Pomerium
  databaseEncryptionKey: # Generate with "head -c32 /dev/urandom | base64"
  administrators: "youruser@yourcompany.com" # Grants hard-coded access; remove once setup is complete
tls:
  existingCASecret: pomerium-tls
  caSecretKey: ca.crt
  existingSecret: pomerium-console-tls
  generate: false
image:
  pullUsername: pomerium/enterprise
  pullPassword: your-access-key
```
1. Add the Pomerium Enterprise repository to your Helm configuration:
@@ -152,21 +280,12 @@ For a production deployment, Pomerium Enterprise requires:
helm install pomerium-console pomerium-enterprise/pomerium-console --values=pomerium-console-values.yaml
```
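Once installed, an optional sanity check is to watch for the console pod to reach a `Running` state (pod names will differ in your cluster):
```bash
kubectl get pods --namespace pomerium
```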
## Troubleshooting
### Updating Service Types
If, while updating the open-source Pomerium values, you change any block's `service.type`, you may need to manually delete the corresponding service before applying the new configuration. For example:
```bash
kubectl delete svc pomerium-proxy
```
### Updating Redis
<!-- @travis I forget the context here, and it isn't in my history -->
proxy.existingTLSSecret=pomerium-tls. (config after)
@@ -0,0 +1,34 @@
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: pomerium-cert
  namespace: pomerium
spec:
  secretName: pomerium-tls
  issuerRef:
    name: pomerium-issuer
    kind: Issuer
  usages:
    - server auth
    - client auth
  dnsNames:
    - pomerium-proxy.pomerium.svc.cluster.local
    - pomerium-authorize.pomerium.svc.cluster.local
    - pomerium-databroker.pomerium.svc.cluster.local
    - pomerium-authenticate.pomerium.svc.cluster.local
    - "*.localhost.pomerium.io"
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: pomerium-redis-cert
  namespace: pomerium
spec:
  secretName: pomerium-redis-tls
  issuerRef:
    name: pomerium-issuer
    kind: Issuer
  dnsNames:
    - pomerium-redis-master.pomerium.svc.cluster.local
    - pomerium-redis-headless.pomerium.svc.cluster.local
    - pomerium-redis-replicas.pomerium.svc.cluster.local
@@ -0,0 +1,13 @@
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: pomerium-console-cert
  namespace: pomerium
spec:
  secretName: pomerium-console-tls
  issuerRef:
    name: pomerium-issuer
    kind: Issuer
  dnsNames:
    - pomerium-console.pomerium.svc.cluster.local