docs: refactor sections, consolidate examples (#1164)

This commit is contained in:
bobby 2020-07-30 11:02:14 -07:00 committed by GitHub
parent f41eeaf138
commit 8cae3f27bb
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
74 changed files with 85 additions and 194 deletions

docs/guides/ad-guard.md Normal file

@@ -0,0 +1,78 @@
---
title: AdGuard
lang: en-US
meta:
  - name: keywords
    content: pomerium identity-access-proxy adguard ad-guard pi-hole piehole
description: >-
  This guide covers how to add authentication and authorization to a hosted,
  fully online instance of AdGuard.
---
# Securing AdGuard Home
This guide covers how to add authentication and authorization to an instance of AdGuard while giving us a great excuse to demonstrate how to use Pomerium's [add headers](../docs/configuration/readme.md) functionality to **transparently pass along basic authentication credentials to a downstream app**.
## What is AdGuard?
[AdGuard](https://adguard.com/en/adguard-home/overview.html) Home operates as a DNS server that re-routes tracking domains to a "black hole", thus preventing your devices from connecting to those servers. Instead of browser plugins or other software on each computer, you can install AdGuard in one place and your entire network is protected. AdGuard is very similar to [Pi-hole](https://pi-hole.net) but has some [marked advantages](https://github.com/AdguardTeam/AdGuardHome#comparison).
## Where Pomerium fits
AdGuard is a great candidate for protecting with Pomerium, as it does not currently support any authentication or authorization capabilities beyond a single set of [HTTP Basic Access Authentication](https://en.wikipedia.org/wiki/Basic_access_authentication) credentials.
## Pre-requisites
This guide assumes you have already completed one of the [quick start] guides, and have a working instance of Pomerium up and running. For the purposes of this guide, I'm going to use docker-compose, though any other deployment method would work equally well.
## Configure
### Pomerium Config
```yaml
# config.yaml
- from: https://adguard.domain.example
  to: http://adguard
  allowed_users:
    - user@example.com
  set_request_headers:
    # https://www.blitter.se/utils/basic-authentication-header-generator/
    Authorization: Basic dXNlcjpwYXNzd29yZA==
  allow_websockets: true
```
Here's the important bit. If you don't add the `set_request_headers` line above, you will be prompted for a basic login on each visit.
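If you'd rather not paste credentials into a third-party generator, the same header value can be produced locally. A sketch, using placeholder `user:password` credentials:

```shell
# Base64-encode the AdGuard credentials for the Authorization header.
# "user:password" is a placeholder -- substitute your real credentials.
echo -n 'user:password' | base64
# → dXNlcjpwYXNzd29yZA==
```

The output becomes the value after `Basic` in `set_request_headers`.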
### Docker-compose
```yaml
# docker-compose.yaml
adguard:
  image: adguard/adguardhome:latest
  volumes:
    - ./adguard/workdir:/opt/adguardhome/work:rw
    - ./adguard/confdir:/opt/adguardhome/conf:rw
  ports:
    - 53:53/udp
  expose:
    - 67
    - 68
    - 80
    - 443
    - 853
    - 3000
  restart: always
```
### Router
![adguard router setup](./img/adguard-router-setup.png)
Set your router to use your new host as the primary DNS server.
### That's it!
Simply navigate to your new AdGuard instance (e.g. `https://adguard.domain.example`) and behold all of the malware you and your family are no longer subjected to.
![adguard dashboard](./img/adguard-dashboard.png)
[quick start]: ../docs/quick-start

docs/guides/argo.md Normal file

@@ -0,0 +1,107 @@
---
title: Argo
lang: en-US
meta:
  - name: keywords
    content: pomerium identity-access-proxy argo argo-cd
description: >-
  This guide covers how to add authentication and authorization to an instance
  of argo.
---
# Securing Argo
[Argo](https://argoproj.github.io/projects/argo) is an open-source container-native workflow engine for orchestrating parallel jobs on Kubernetes. This guide covers how to add authentication and authorization to Argo using Pomerium.
## Install Argo
To install Argo in Kubernetes you can either follow the instructions [here](https://github.com/argoproj/argo/blob/master/docs/getting-started.md), or use [Helm](https://github.com/argoproj/argo-helm/tree/master/charts/argo). This guide will use the Helm chart.
Run the following commands:
```bash
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
helm install \
  --namespace kube-system \
  --set minio.install=true \
  --set installCRD=false \
  argo argo/argo
kubectl apply \
  --namespace kube-system \
  --filename https://raw.githubusercontent.com/argoproj/argo/master/manifests/base/crds/workflow-crd.yaml
```
You should now have a working Argo installation using [Minio](https://min.io/) to store artifacts. Both Argo and Minio provide web-based GUIs. Confirm that Minio is working by running:
```bash
kubectl --namespace kube-system port-forward svc/argo-minio 9000:9000
```
You should now be able to reach the Minio UI by accessing <http://localhost:9000/minio>. If you're curious, the Access Key and Secret Key are generated by the Helm chart and stored in a Kubernetes secret:
```bash
kubectl --namespace=kube-system get secret argo-minio -o yaml
```
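To pull the keys out directly, you can decode the relevant fields (a sketch; `accesskey` and `secretkey` are the field names used by recent chart versions, so verify them against the secret output above):

```shell
kubectl --namespace kube-system get secret argo-minio \
  -o jsonpath='{.data.accesskey}' | base64 --decode && echo
kubectl --namespace kube-system get secret argo-minio \
  -o jsonpath='{.data.secretkey}' | base64 --decode && echo
```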
For now though, let's terminate the Minio `kubectl port-forward` and create one for the Argo UI:
```bash
kubectl --namespace kube-system port-forward svc/argo-server 2746:2746
```
Visiting <http://localhost:2746> should take you to the Argo Workflows dashboard.
## Install NGINX Ingress Controller
We will use [NGINX](https://kubernetes.github.io/ingress-nginx/deploy/#using-helm) as our ingress controller. To install it with Helm run the following commands:
```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install --namespace kube-system ingress-nginx ingress-nginx/ingress-nginx
```
## Install Pomerium
Like with Argo we will install Pomerium using the [Helm chart](https://github.com/pomerium/pomerium-helm). First create a `values.yaml` file (replacing the `allowed_users` and IDP `provider`/`clientID`/`clientSecret` with your own):
```yaml
config:
  rootDomain: localhost.pomerium.io
  policy:
    - from: https://argo.localhost.pomerium.io
      to: http://argo-server.kube-system.svc.cluster.local:2746
      allowed_users:
        - REPLACE_ME

authenticate:
  idp:
    provider: google
    clientID: REPLACE_ME
    clientSecret: REPLACE_ME

ingress:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: https
```
Next, run the following commands:
```bash
helm repo add pomerium https://helm.pomerium.io
helm repo update
helm install \
  --set config.sharedSecret="$(head -c32 /dev/urandom | base64)" \
  --set config.cookieSecret="$(head -c32 /dev/urandom | base64)" \
  --values values.yaml \
  pomerium pomerium/pomerium
```
You should now be able to reach Argo by using `kubectl port-forward` with the NGINX ingress controller (binding :443 may require using sudo with kubectl):
```bash
kubectl --namespace kube-system port-forward svc/ingress-nginx-controller 443:443
```
And visit: <https://argo.localhost.pomerium.io/>.

docs/guides/cloud-run.md Normal file

@@ -0,0 +1,117 @@
---
title: Cloud Run
lang: en-US
meta:
  - name: keywords
    content: pomerium identity-access-proxy gcp google iap serverless cloudrun
description: >-
  This guide covers how to deploy Pomerium to Cloud Run and use it to protect
  other endpoints via Authorization Headers.
---
# Securing Cloud Run endpoints
This recipe's sources can be found [on github](https://github.com/pomerium/pomerium/tree/master/examples/cloudrun)
## Background
Services on [Cloud Run](https://cloud.google.com/run) and other Google Cloud serverless products can be restricted to only permit access with a properly signed [bearer token](https://cloud.google.com/run/docs/authenticating/service-to-service). This allows requests from other services running in GCP or elsewhere to be securely authorized despite the endpoints being public.
These bearer tokens are not easily set in a browser session and must be refreshed on a regular basis, preventing them from being useful for end user authorization. Pomerium, however, can generate compatible tokens on behalf of end users and proxy the request to these services.
## How it works
- Add an IAM policy delegating `roles/run.invoker` permissions to a service account
- Run Pomerium with access to a key for the corresponding service account
- Publish DNS records for each protected application pointing to Pomerium
- Configure Pomerium with appropriate policy and `enable_google_cloud_serverless_authentication`

The protected application delegates trust to a GCP service account that Pomerium runs as, and Pomerium performs user-based authorization on a per-route basis. This turns Pomerium into a bridge between user-centric and service-centric authorization models.
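To see the mechanism Pomerium automates, you can mint an identity token yourself with `gcloud` and call a `roles/run.invoker`-protected service directly (a sketch; the URL is a placeholder for one of your Cloud Run services):

```shell
# Mint a short-lived OIDC identity token for the active gcloud account
TOKEN="$(gcloud auth print-identity-token)"

# Without the token this request is rejected; with it, the invoker IAM check passes
curl -H "Authorization: Bearer ${TOKEN}" https://hello-direct.cloudrun.pomerium.io
```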
## Pre-requisites
This guide assumes you have Editor access to a Google Cloud project which can be used for isolated testing, and a DNS zone which you are also able to control. DNS does not need to be inside Google Cloud for the example to work.
## Set Up
To deploy Pomerium to Cloud Run securely and easily, a special [image](https://console.cloud.google.com/gcr/images/pomerium-io/GLOBAL/pomerium) is available at `gcr.io/pomerium-io/pomerium-[version]-cloudrun`. It allows sourcing configuration from GCP Secrets Manager, and sets some defaults for Cloud Run to keep configuration minimal. We will be leveraging it in this example to store IdP credentials. Our policy contains no secrets so we can place it directly in an ENV var.
[Dockerfile](https://github.com/pomerium/pomerium/blob/master/.github/Dockerfile-cloudrun)
Based on [vals-entrypoint](https://github.com/pomerium/vals-entrypoint)
The image expects a config file at `/pomerium/config.yaml`. Set `VALS_FILES=[secretref]:/pomerium/config.yaml` and set any other Pomerium environment variables directly or with secret refs such as `ref+gcpsecrets://PROJECT/SECRET[#/key]`.
### Config
Set up a config.yaml to contain your IdP credentials and secrets (config.yaml):
<<< @/examples/cloudrun/config.yaml
Replace `cloudrun.pomerium.io` with your own subdomain, and adjust the e-mail domain if appropriate (policy.template.yaml):
<<< @/examples/cloudrun/policy.template.yaml
### DNS
Replace `cloudrun.pomerium.io` with your own subdomain (zonefile.txt):
<<< @/examples/cloudrun/zonefile.txt
Or set an equivalent CNAME in your DNS provider.
## Deploy
Ensure you have set a default project:
```shell
gcloud config set project MYTESTPROJECT
```
<<< @/examples/cloudrun/deploy.sh
## Results
### Overview
We should see two applications deployed. The `hello` app is our protected app, and pomerium is...Pomerium!
![Cloud Run Overview](./img/cloud-run/cloudrun-overview.png)
Notice that Pomerium allows unauthenticated access, but `hello` does not.
Here are the domain mappings set up:
![Cloud Run Domains](./img/cloud-run/cloudrun-domains.png)
### Direct Access
Let's verify we cannot access the main application directly by visiting [https://hello-direct.cloudrun.pomerium.io](https://hello-direct.cloudrun.pomerium.io)
![Hello Direct Access](./img/cloud-run/hello-direct.png)
You should see a 403 error because you do not have the proper credentials.
### Authenticated Access
Now let's access via [https://hello.cloudrun.pomerium.io](https://hello.cloudrun.pomerium.io)
We should get an auth flow through your IdP:
![Hello Sign In](./img/cloud-run/hello-signin.png)
And a hello page:
![Hello](./img/cloud-run/hello-success.png)
### Non-GCP Applications
If your target application is not running on GCP, you can also perform your own header validation.
Browse to [https://httpbin.cloudrun.pomerium.io](https://httpbin.cloudrun.pomerium.io/headers)
You should see your identity header set:
![Hello](./img/cloud-run/headers.png)
See [getting user's identity](/docs/reference/getting-users-identity.html) for more details on using this header.
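As a minimal illustration of what "your own header validation" involves, the sketch below shows how to pull claims out of a JWT-style header. Note that this only decodes the payload; a real deployment must also verify the token's signature against the issuer's public keys (e.g. with a JWT library) before trusting any claim. All names here are illustrative:

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Decode (NOT verify) the claims segment of a JWT."""
    payload_b64 = token.split(".")[1]
    # Restore base64url padding before decoding
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Example with a hand-built token (header.payload.signature)
claims = {"email": "user@example.com", "aud": "https://hello.example.com"}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
token = "eyJhbGciOiJSUzI1NiJ9." + payload + ".sig"
print(decode_jwt_claims(token)["email"])  # → user@example.com
```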

docs/guides/istio.md Normal file

@@ -0,0 +1,39 @@
## Istio
[istio]: https://github.com/istio/istio
[certmanager]: https://github.com/jetstack/cert-manager
[grafana]: https://github.com/grafana/grafana
- Istio provides mutual TLS via sidecars, and to make Istio play well with Pomerium we need to disable TLS on the Pomerium side.
- We need to provide Istio with information on how to route requests via Pomerium to their destinations.
- The following example shows how to make Grafana's [auth proxy](https://grafana.com/docs/grafana/latest/auth/auth-proxy) work with Pomerium inside of an Istio mesh.
#### Gateway
We are using the standard istio-ingressgateway that comes configured with Istio and attach a Gateway to it that deals with a subset of our ingress traffic based on the Host header (in this case `*.yourcompany.com`). This is the Gateway to which we will later attach VirtualServices for more granular routing decisions. Along with the Gateway, because we care about TLS, we are using Certmanager to provision a self-signed certificate (see Certmanager [docs](https://cert-manager.io/docs) for setup instructions).
<<< @/examples/kubernetes/istio/gateway.yml
#### Virtual Services
Here we are configuring two Virtual Services. One to route from the Gateway to the Authenticate service and one to route from the Gateway to the Pomerium Proxy, which will route the request to Grafana according to the configured Pomerium policy.
<<< @/examples/kubernetes/istio/virtual-services.yml
#### Service Entry
If you are enforcing mutual TLS in your service mesh you will need to add a ServiceEntry for your identity provider so that Istio knows not to expect a mutual TLS connection with, for example `https://yourcompany.okta.com`.
<<< @/examples/kubernetes/istio/service-entry.yml
#### Pomerium Configuration
For this example we're using the Pomerium Helm chart with the following `values.yaml` file. Things to note here are the `insecure` flag, where we disable TLS in Pomerium in favor of the Istio-provided TLS via sidecars. Also note the `extraEnv` arguments, where we ask Pomerium to extract the email property from the JWT and pass it on to Grafana in a header called `X-Pomerium-Claim-Email`. We need to do this because Grafana does not know how to read the Pomerium JWT, but its auth-proxy authentication method can be configured to read user information from headers. The policy document contains a single route that will send all requests with a host header of `https://grafana.yourcompany.com` to the Grafana instance running in the monitoring namespace. We disable ingress because we are using the Istio ingressgateway for ingress traffic and don't need the Pomerium helm chart to create ingress objects for us.
<<< @/examples/kubernetes/istio/pomerium-helm-values.yml
#### Grafana ini
On the Grafana side we are using the Grafana Helm chart and what follows is the relevant section of the `values.yml` file. The most important thing here is that we need to tell Grafana from which request header to grab the username. In this case that's `X-Pomerium-Claim-Email` because we will be using the user's email (provided by your identity provider) as their username in Grafana. For all the configuration options check out the Grafana documentation about its auth-proxy authentication method.
<<< @/examples/kubernetes/istio/grafana.ini.yml

@@ -0,0 +1,366 @@
---
title: Kubernetes Dashboard
lang: en-US
meta:
  - name: keywords
    content: pomerium identity-access-proxy kubernetes helm k8s oauth dashboard
description: >-
  This guide covers how to add authentication and authorization to Kubernetes
  Dashboard using single-sign-on, pomerium, helm, and letsencrypt certificates.
---
# Securing Kubernetes Dashboard
The following guide covers how to secure [Kubernetes Dashboard] using Pomerium. Kubernetes Dashboard is a powerful, web-based UI for managing Kubernetes clusters. Pomerium can act as a **forward-auth provider** _and_ as an independent **identity-aware access proxy** improving and adding single-sign-on to Kubernetes Dashboard's default access control. This guide aims to demonstrate a concrete example of those two methods of access control.
![fresh kubernetes dashboard install](./img/k8s-fresh-dashboard.png)
This tutorial covers:
- Installing [Helm], a package manager for Kubernetes
- Deploying [NGINX Ingress Controller]
- Installing and configuring [Cert-Manager] to issue [LetsEncrypt] certificates
- Deploying Pomerium
- Deploying [Kubernetes Dashboard]
- Securing Kubernetes Dashboard access:
  - _directly_, using Pomerium's proxy component
  - _indirectly_, using Pomerium as a [forward-auth] provider
:::warning
nginx-ingress [version 0.26.2](https://github.com/helm/charts/issues/20001) contains a regression that breaks external auth and results in an infinite loop.
:::
## Background
Though securing [kubernetes dashboard] as an example may seem contrived, the damage caused by an unsecured dashboard is a real threat vector. In 2018, Tesla [determined](https://redlock.io/blog/cryptojacking-tesla) that the hackers who were running [crypto-mining malware](https://arstechnica.com/information-technology/2018/02/tesla-cloud-resources-are-hacked-to-run-cryptocurrency-mining-malware/) on their cloud accounts came in through an unsecured [Kubernetes Dashboard] instance.
![tesla hacked from kubernetes dashboard](./img/k8s-tesla-hacked.png)
## Helm
First, we will install [Helm]. Helm is a package manager similar to `apt-get` or `brew` but for Kubernetes and it's what we'll use to install Pomerium, nginx-ingress, cert-manager, and the dashboard.
### Install
There are two parts to Helm: the client, and the server. This guide will cover the most common installation path. Please refer to the [Helm install] instructions for more details, and other options.
#### Client
We'll start by installing the Helm client on our local machine.
OSX via [homebrew].
```bash
brew install kubernetes-helm
```
Linux via [snap].
```bash
sudo snap install helm --classic
```
A script for the [trusting](https://sysdig.com/blog/friends-dont-let-friends-curl-bash/) 😉.
```bash
curl -L https://git.io/get_helm.sh | bash
```
Add the Pomerium repository:
```bash
helm repo add pomerium https://helm.pomerium.io
```
## NGINX Ingress
[NGINX ingress controller] is a [Kubernetes Ingress] controller based on [NGINX], a very popular, full-featured reverse proxy. We will use NGINX in two configurations: as a fronting proxy, and as a proxy that delegates every request's access-control decision to Pomerium using forward-auth.
Also, please note that while this guide uses [NGINX Ingress Controller], Pomerium can act as a forward auth-provider alongside other fronting ingresses like [Traefik](https://docs.traefik.io/middlewares/forwardauth/), [Ambassador](https://www.getambassador.io/reference/services/auth-service/), and [envoy](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/security/ext_authz_filter.html) in a similar fashion.
### Install
NGINX Ingress controller can be installed via [Helm] from the official charts repository. To install the chart with the release name `helm-nginx-ingress`:
```bash
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update # important to make sure we get >.30
helm install helm-nginx-ingress stable/nginx-ingress
```
```bash
NAME: helm-nginx-ingress
....
NAMESPACE: default
STATUS: DEPLOYED
```
Confirm the ingress has been installed, and that an external `LoadBalancer` IP has been set.
```sh
$ kubectl get svc
```
```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
helm-nginx-ingress-controller LoadBalancer 10.99.182.128 localhost 80:31059/TCP,443:32402/TCP 15m
helm-nginx-ingress-default-backend ClusterIP 10.108.251.51 <none> 80/TCP 15m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 93m
```
We now have a Kubernetes ingress controller that can either delegate access-control decisions to Pomerium or act as a front proxy for it.
## Certificates
[Cert-manager] is a Kubernetes plugin that helps automate the issuance of TLS certificates. In our case, we will use cert-manager to retrieve certs for each of our configured routes.
### Install
Like in previous steps, we will use [Helm] to install [Cert-manager].
```sh
# Install the CustomResourceDefinition resources separately
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.12.0/cert-manager.yaml

# Create the namespace for cert-manager
kubectl create namespace cert-manager

# Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io

# Update your local Helm chart repository cache
helm repo update

# Install the cert-manager Helm chart
helm install \
  --namespace cert-manager \
  --version v0.12.0 \
  cert-manager \
  jetstack/cert-manager
```
And we'll confirm cert-manager is up and running.
```
$ kubectl get pods --namespace cert-manager
```
```
NAME READY STATUS RESTARTS AGE
cert-manager-756d9f56d6-brv6z 1/1 Running 0 23s
cert-manager-cainjector-74bb68d67c-7jdw6 1/1 Running 0 23s
cert-manager-webhook-645b8bdb7-8kgc9 1/1 Running 0 23s
```
### Configure
Now that cert-manager is installed, we need to make one more configuration to be able to retrieve certificates. We need to add a [http-01 issuer](https://letsencrypt.org/docs/challenge-types/) for use with [LetsEncrypt].
```sh
$ kubectl apply -f docs/recipes/yml/letsencrypt-prod.yaml
```
<<< @/examples/yml/letsencrypt-prod.yaml
And confirm your issuer is set up correctly.
```bash
$ kubectl describe issuer
```
```bash
Name:         letsencrypt-prod
...
API Version:  cert-manager.io/v1alpha2
Kind:         Issuer
Metadata:
Spec:
  Acme:
    Private Key Secret Ref:
      Name:  letsencrypt-prod
    Server:  https://acme-v02.api.letsencrypt.org/directory
    Solvers:
      Http 01:
        Ingress:
          Class:  nginx
    Selector:
Status:
  Acme:
    Last Registered Email:  ....
    Uri:                    https://acme-v02.api.letsencrypt.org/acme/acct/69070883
  Conditions:
    Message:  The ACME account was registered with the ACME server
    Reason:   ACMEAccountRegistered
    Status:   True
    Type:     Ready
```
If you see something like the above, cert-manager should be all set to issue you new certificates when you create a new `https`-protected ingress. Note that if you need wildcard certificates, you may also need a [DNS-01](https://docs.cert-manager.io/en/latest/tasks/issuers/setup-acme/dns01/) type issuer.
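For reference, a DNS-01 issuer might look something like the sketch below. This assumes the Cloudflare solver and the same cert-manager v0.12 (`v1alpha2`) API used above; the e-mail and secret names are placeholders, and other DNS providers have analogous solver blocks:

```yaml
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: letsencrypt-dns
spec:
  acme:
    email: user@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-dns
    solvers:
      # DNS-01 proves domain control via a TXT record, enabling wildcards
      - dns01:
          cloudflare:
            email: user@example.com
            apiTokenSecretRef:
              name: cloudflare-api-token
              key: api-token
```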
## Dashboard
[Kubernetes Dashboard] is a general purpose, web-based UI for Kubernetes clusters. It allows users to manage applications running in the cluster and troubleshoot them, as well as manage the cluster itself.
![kubernetes dashboard login page](./img/k8s-dashboard-login.png)
### Install
As with the previous steps, we can use [Helm] to install our instance of [Kubernetes Dashboard].
```sh
helm install \
  helm-dashboard \
  stable/kubernetes-dashboard \
  --set ingress.enabled="false" \
  --set enableSkipLogin="true"
```
That's it. We've now configured Kubernetes Dashboard to use the default service account if none is provided. We've also explicitly told Helm that we are going to deploy our own custom nginx / Pomerium / cert-manager enabled ingress.
## Pomerium
Pomerium is an identity-aware access proxy that can serve as an identity-aware reverse proxy or as a forward-auth provider.
### Configure
Before installing, we will configure Pomerium's configuration settings in `values.yaml`. Other than the typical configuration settings covered in the quick-start guides, we will add a few settings that will make working with Kubernetes Dashboard easier.
We can retrieve the token to add to our proxied policy's authorization header as follows.
```sh
$ kubectl describe secret helm-dashboard
```
```
Name:         dashboard-kubernetes-dashboard-token-bv9jq
Namespace:    default
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-kubernetes-dashboard
              kubernetes.io/service-account.uid: 18ab35ee-eca1-11e9-8c75-025000000001

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  7 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.......
```
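Rather than copying the token out of the `describe` output, it can be extracted in one line (a sketch; the service-account and secret names vary per release, so substitute your own):

```shell
kubectl get secret \
  "$(kubectl get serviceaccount helm-dashboard-kubernetes-dashboard \
      -o jsonpath='{.secrets[0].name}')" \
  -o jsonpath='{.data.token}' | base64 --decode
```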
The above token then needs to be assigned to our route configuration and policy.
```yaml
# values.yaml
authenticate:
  idp:
    provider: "google"
    clientID: YOUR_CLIENT_ID
    clientSecret: YOUR_SECRET

forwardAuth:
  enabled: true

config:
  sharedSecret: YOUR_SHARED_SECRET
  cookieSecret: YOUR_COOKIE_SECRET
  rootDomain: domain.example
  policy:
    # this route is directly proxied by pomerium & injects the authorization header
    - from: https://dashboard-proxied.domain.example
      to: https://helm-dashboard-kubernetes-dashboard
      allowed_users:
        - user@domain.example
      tls_skip_verify: true # dashboard uses self-signed certificates in its default configuration
      set_request_headers:
        Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.....
    # this route is indirectly checked for access using forward-auth
    - from: https://dashboard-forwardauth.domain.example
      to: https://helm-dashboard-kubernetes-dashboard
      allowed_users:
        - user@domain.example

ingress:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/issuer: "letsencrypt-prod" # see `le.issuer.yaml`
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  secretName: pomerium-ingress-tls
```
### Install
Finally, we get to install Pomerium! 🎉 Once again, we will use Helm to deploy Pomerium.
```bash
helm install \
  "helm-pomerium" \
  pomerium/pomerium \
  --values values.yaml
```
## Putting it all together
Now we just need to tell external traffic how to route everything by deploying the following ingresses.
```sh
$ kubectl apply -f docs/recipes/yml/dashboard-forwardauth.ingress.yaml
```
<<< @/examples/yml/dashboard-forwardauth.ingress.yaml
```sh
$ kubectl apply -f docs/recipes/yml/dashboard-proxied.ingress.yaml
```
<<< @/examples/yml/dashboard-proxied.ingress.yaml
And finally, check that the ingresses are up and running.
```sh
$ kubectl get ingress
```
```sh
NAME HOSTS ADDRESS PORTS AGE
dashboard-forwardauth dashboard-forwardauth.domain.example 80, 443 42h
dashboard-proxied dashboard-proxied.domain.example 80, 443 42h
helm-pomerium *.domain.example,authenticate.domain.example 80, 443 42h
```
You'll notice this is the step where we put everything together. We've got [nginx] handling the initial requests, [cert-manager] handling our public certificates, and Pomerium handling access control.
## Conclusion
Though the net result will be similar between using forward-auth and direct proxying, there are a few differences:
- By having Pomerium **directly proxy the requests**, you as an administrator have full control over the underlying request. In this example, we are able to inject an authenticating bearer-token header into the downstream request, which arguably makes for a better user experience.
<video controls muted="" playsinline="" width="100%" height="600" control=""><source src="./img/k8s-proxied-example.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
* Conversely, going the **forward-auth** route potentially means using the ingress / reverse proxy you are already accustomed to, or have already modified to support your particular deployment.
<video controls muted="" playsinline="" width="100%" height="600" control=""><source src="./img/k8s-fwd-auth-example.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
In the end, you should choose whichever option makes the most sense for your use-case and environment.
Whichever option you choose to go with, 🎉🍾🎊 **congratulations** 🎉🍾🎊! You now have a single-sign-on enabled [Kubernetes Dashboard] protected by Pomerium and automatically renewing [LetsEncrypt] certificates.
[bearer token]: https://kubernetes.io/docs/admin/authentication/
[cert-manager]: https://github.com/jetstack/cert-manager
[command line proxy]: https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/#command-line-proxy
[creating sample users]: https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md
[dashboard ui]: https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/#deploying-the-dashboard-ui
[dns01 challenge provider]: https://docs.cert-manager.io/en/latest/tasks/issuers/setup-acme/dns01/index.html
[forward-auth]: ../docs/reference/reference.html#forward-auth
[helm install]: https://helm.sh/docs/using_helm/#installing-the-helm-client
[helm]: https://helm.sh
[homebrew]: https://brew.sh
[kubernetes dashboard]: https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
[kubernetes ingress]: https://kubernetes.io/docs/concepts/services-networking/ingress/
[kubernetes securing a cluster]: https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/
[letsencrypt]: https://letsencrypt.org
[nginx ingress controller]: https://github.com/kubernetes/ingress-nginx
[nginx]: https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-subrequest-authentication/
[securing your helm installation]: https://helm.sh/docs/using_helm/#securing-your-helm-installation
[snap]: https://github.com/snapcrafters/helm
[with pomerium]: ../docs/reference/reference.html#forward-auth
[your dashboard]: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login

docs/guides/kubernetes.md Normal file

@@ -0,0 +1,267 @@
---
title: Kubernetes
lang: en-US
meta:
  - name: keywords
    content: pomerium identity-access-proxy kubernetes helm k8s oauth
description: >-
  This guide covers how to add authentication and authorization to the
  Kubernetes apiserver using single-sign-on and pomerium.
---
# Securing Kubernetes
The following guide covers how to secure [Kubernetes] using Pomerium.
## Kubernetes
This tutorial uses an example Kubernetes cluster created with [`kind`](https://kind.sigs.k8s.io/docs/user/quick-start/). First create a config file (`kind-config.yaml`):
```yaml
# kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30443
        hostPort: 30443
```
Next create the cluster:
```bash
kind create cluster --config=./kind-config.yaml
```
### Pomerium Service Account
Pomerium uses a single service account and user impersonation headers to authenticate and authorize users in Kubernetes. To create the Pomerium service account, use the following config (`pomerium-k8s.yaml`):
```yaml
# pomerium-k8s.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: default
  name: pomerium
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pomerium-impersonation
rules:
  - apiGroups:
      - ""
    resources:
      - users
      - groups
      - serviceaccounts
    verbs:
      - impersonate
  - apiGroups:
      - "authorization.k8s.io"
    resources:
      - selfsubjectaccessreviews
    verbs:
      - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pomerium
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pomerium-impersonation
subjects:
  - kind: ServiceAccount
    name: pomerium
    namespace: default
```
Apply it with:
```bash
kubectl apply -f ./pomerium-k8s.yaml
```
### User Permissions
To grant access to users within Kubernetes, you will need to configure RBAC permissions. For example:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: someuser@example.com
```
Permissions can also be granted to groups the Pomerium user is a member of.
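For example, a binding against a group rather than a single user might look like the following sketch (the group name is illustrative; it must match a group your identity provider reports for the user):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin-group-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: engineering@example.com
```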
## Certificates
For this tutorial we will generate wildcard certificates for the `*.localhost.pomerium.io` domain using [`mkcert`](https://github.com/FiloSottile/mkcert):
```bash
mkcert '*.localhost.pomerium.io'
```
This creates two files:
- `_wildcard.localhost.pomerium.io-key.pem`
- `_wildcard.localhost.pomerium.io.pem`
## Pomerium
### Configuration
Our Pomerium configuration will route requests from `k8s.localhost.pomerium.io:30443` to the kube-apiserver. Create a Kubernetes YAML configuration file (`pomerium.yaml`):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: pomerium
  labels:
    app: pomerium
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pomerium
  template:
    metadata:
      labels:
        app: pomerium
    spec:
      containers:
      - name: pomerium
        image: pomerium/pomerium:master
        ports:
        - containerPort: 30443
        env:
        - name: ADDRESS
          value: "0.0.0.0:30443"
        - name: AUTHENTICATE_SERVICE_URL
          value: "https://authenticate.localhost.pomerium.io:30443"
        - name: CERTIFICATE
          value: "..." # $(base64 -w 0 <./_wildcard.localhost.pomerium.io.pem)
        - name: CERTIFICATE_KEY
          value: "..." # $(base64 -w 0 <./_wildcard.localhost.pomerium.io-key.pem)
        - name: COOKIE_SECRET
          value: "..." # $(head -c32 /dev/urandom | base64 -w 0)
        - name: IDP_PROVIDER
          value: google
        - name: IDP_CLIENT_ID
          value: "..."
        - name: IDP_CLIENT_SECRET
          value: "..."
        - name: POLICY
          value: "..." # $(echo "$_policy" | base64 -w 0)
---
apiVersion: v1
kind: Service
metadata:
  namespace: default
  name: pomerium
spec:
  type: NodePort
  selector:
    app: pomerium
  ports:
  - port: 30443
    targetPort: 30443
    nodePort: 30443
```
Make sure to fill in the appropriate values as indicated.
The policy should be a base64-encoded block of YAML:
```yaml
- from: https://k8s.localhost.pomerium.io:30443
  to: https://kubernetes.default.svc
  tls_skip_verify: true
  allowed_domains:
  - pomerium.com
  kubernetes_service_account_token: "..." # $(kubectl get secret/"$(kubectl get serviceaccount/pomerium -o json | jq -r '.secrets[0].name')" -o json | jq -r .data.token | base64 -d)
```
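As a sketch, you can round-trip the encoding locally to confirm nothing was mangled (the `policy.yaml` file name here is illustrative):

```shell
# Illustrative: write a trimmed-down policy to a file, then base64-encode it
# for the POLICY environment variable.
cat > policy.yaml <<'EOF'
- from: https://k8s.localhost.pomerium.io:30443
  to: https://kubernetes.default.svc
  tls_skip_verify: true
EOF
POLICY="$(base64 -w 0 < policy.yaml)"
# Decode again to confirm the round-trip is lossless
echo "$POLICY" | base64 -d
```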
Applying this configuration will create a Pomerium deployment and service within Kubernetes that is accessible from `*.localhost.pomerium.io:30443`.
## Kubectl
Pomerium uses a custom Kubernetes exec-credential provider for kubectl access. This provider will open up a browser window to the Pomerium authenticate service and generate an authorization token that will be used for Kubernetes API calls.
The Pomerium Kubernetes exec-credential provider can be installed via `go get`:
```bash
env GO111MODULE=on GOBIN=$HOME/bin go get github.com/pomerium/pomerium/cmd/pomerium-cli@master
```
Make sure `$HOME/bin` is on your path.
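For the current shell session that is just (append the same line to your shell rc file to make it permanent):

```shell
# Make the pomerium-cli binary installed to $HOME/bin discoverable
export PATH="$PATH:$HOME/bin"
```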
To use the Pomerium Kubernetes exec-credential provider, update your kubectl config:
```shell
# Add Cluster
kubectl config set-cluster via-pomerium --server=https://k8s.localhost.pomerium.io:30443
# Add Context
kubectl config set-context via-pomerium --user=via-pomerium --cluster=via-pomerium
# Add credentials command
kubectl config set-credentials via-pomerium --exec-command=pomerium-cli --exec-args=k8s,exec-credential,https://k8s.localhost.pomerium.io:30443
```
Here's the resulting configuration:
1. Cluster:
```yaml
clusters:
- cluster:
    server: https://k8s.localhost.pomerium.io:30443
  name: via-pomerium
```
2. Context:
```yaml
contexts:
- context:
    cluster: via-pomerium
    user: via-pomerium
  name: via-pomerium
```
3. User:
```yaml
- name: via-pomerium
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - k8s
      - exec-credential
      - https://k8s.localhost.pomerium.io:30443
      command: pomerium-cli
      env: null
```
With `kubectl` configured you can now query the Kubernetes API via Pomerium:
```bash
kubectl --context=via-pomerium cluster-info
```
You should be prompted to login and see the resulting cluster info.

docs/guides/local-oidc.md
---
title: Local OIDC
lang: en-US
meta:
  - name: keywords
    content: pomerium identity-access-proxy oidc
description: >-
  This guide covers how to use Pomerium with a local OIDC provider
  using [qlik/simple-oidc-provider].
---
# Local OIDC
This guide covers how to use Pomerium with a local OIDC provider using [qlik/simple-oidc-provider]. The configs below can also be used with any other supported [identity provider].
## Configure
### Docker-compose
```yaml
version: "3"
services:
  pomerium:
    image: pomerium/pomerium:latest
    environment:
      # Generate new secret keys. e.g. `head -c32 /dev/urandom | base64`
      - COOKIE_SECRET=<redacted>
    volumes:
      # Mount your domain's certificates: https://www.pomerium.io/docs/reference/certificates
      - ./_wildcard.localhost.pomerium.io-key.pem:/pomerium/privkey.pem:ro
      - ./_wildcard.localhost.pomerium.io.pem:/pomerium/cert.pem:ro
      # Mount your config file: https://www.pomerium.io/docs/reference/reference/
      - ./config.yaml:/pomerium/config.yaml
    ports:
      - 443:443
      - 5443:5443
      - 17946:7946
    depends_on:
      - identityprovider
  httpbin:
    image: kennethreitz/httpbin:latest
    expose:
      - 80
  identityprovider:
    image: qlik/simple-oidc-provider
    environment:
      - CONFIG_FILE=/etc/identityprovider.json
      - USERS_FILE=/etc/identityprovider-users.json
    volumes:
      - ./identityprovider.json:/etc/identityprovider.json:ro
      - ./identityprovider-users.json:/etc/identityprovider-users.json:ro
    ports:
      - 9000:9000
```
You can generate certificates for `*.localhost.pomerium.io` using [this instruction](https://www.pomerium.io/docs/reference/certificates.html#certificates-2)
### Pomerium config
```yaml
# config.yaml
# See detailed configuration settings: https://www.pomerium.io/docs/reference/reference/
authenticate_service_url: https://authenticate.localhost.pomerium.io
autocert: false
certificate_file: /pomerium/cert.pem
certificate_key_file: /pomerium/privkey.pem
idp_provider_url: http://identityprovider:9000
idp_provider: oidc
idp_client_id: foo
idp_client_secret: bar
# Generate 256 bit random keys e.g. `head -c32 /dev/urandom | base64`
cookie_secret: <redacted>
# https://www.pomerium.io/configuration/#policy
policy:
  - from: https://httpbin.localhost.pomerium.io
    to: http://httpbin
    allowed_domains:
      - example.org
```
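Note that `allowed_domains: example.org` admits any account on that domain. To admit only specific accounts, a sketch using `allowed_users` instead (the address matches one of the sample users defined later in this guide):

```yaml
policy:
  - from: https://httpbin.localhost.pomerium.io
    to: http://httpbin
    allowed_users:
      - alice@example.org
```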
### identityprovider.json
```json
{
  "idp_name": "http://identityprovider:9000",
  "port": 9000,
  "client_config": [
    {
      "client_id": "foo",
      "client_secret": "bar",
      "redirect_uris": [
        "https://authenticate.localhost.pomerium.io/oauth2/callback"
      ]
    }
  ],
  "claim_mapping": {
    "openid": ["sub"],
    "email": ["email", "email_verified"],
    "profile": ["name", "nickname"]
  }
}
```
### identityprovider-users.json
```json
[
  {
    "id": "SIMPLE_OIDC_USER_ALICE",
    "email": "alice@example.org",
    "email_verified": true,
    "name": "Alice Smith",
    "nickname": "al",
    "password": "abc",
    "groups": ["Everyone", "Engineering"]
  },
  {
    "id": "SIMPLE_OIDC_USER_BOB",
    "email": "bob@example.org",
    "email_verified": true,
    "name": "Bob Smith",
    "nickname": "bobby",
    "password": "abc",
    "groups": ["Everyone", "Sales"]
  }
]
```
## Run
### Edit hosts file
Add following entry to `/etc/hosts`:
```
127.0.0.1 identityprovider
```
### Start services
```shell
$ docker-compose up -d identityprovider
# wait for identityprovider to come up
$ docker-compose up -d
```
Now browse to `https://httpbin.localhost.pomerium.io` and you will be redirected to the OIDC server for authentication.
[identity provider]: ../docs/identity-providers/
[qlik/simple-oidc-provider]: https://hub.docker.com/r/qlik/simple-oidc-provider/

docs/guides/mtls.md
---
title: mTLS
lang: en-US
meta:
  - name: keywords
    content: pomerium identity-access-proxy mtls client-certificate
description: >-
  This guide covers how to use Pomerium to implement mutual authentication
  (mTLS) using client certificates with a custom certificate authority.
---
# Implementing mTLS With Pomerium
Secure communication on the web typically refers to using signed server certificates with the TLS protocol. TLS connections are both private and authenticated, preventing eavesdropping and impersonation of the server.
To authenticate clients (users), we typically use an identity provider (IDP). Clients must log in before they can access a protected endpoint. However, the TLS protocol also supports mutual authentication (mTLS) via signed client certificates.
As of version 0.9.0, Pomerium supports requiring signed client certificates with the `client_ca`/`client_ca_file` configuration options. This guide covers how to configure Pomerium to implement mutual authentication using client certificates with a custom certificate authority.
## Creating Certificates
We will use the `mkcert` application to create the certificates. To install `mkcert`, follow the instructions on [GitHub](https://github.com/FiloSottile/mkcert#installation).
For this guide the `localhost.pomerium.io` domain will be our root domain (all subdomains on `localhost.pomerium.io` point to `localhost`). First create a trusted root certificate authority:
```bash
mkcert -install
```
Next create a wildcard certificate for `*.localhost.pomerium.io`:
```bash
mkcert '*.localhost.pomerium.io'
```
This creates two files in the current working directory:
- `_wildcard.localhost.pomerium.io.pem`
- `_wildcard.localhost.pomerium.io-key.pem`
We will use these files for the server TLS certificate.
Finally create a client TLS certificate by running:
```bash
mkcert -client -pkcs12 '*.localhost.pomerium.io'
```
This creates a third file in the current working directory:
- `_wildcard.localhost.pomerium.io-client.p12`
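Under the hood, `mkcert` maintains a local CA and issues certificates signed by it. A rough conceptual equivalent using plain `openssl` (file names, subjects, and validity period are illustrative; `mkcert` additionally installs its CA into your system and browser trust stores, which this sketch does not):

```shell
# Create a throwaway CA (mkcert does this once, on `mkcert -install`)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca-key.pem -out ca.pem \
  -subj "/CN=Example Dev CA" -days 30
# Create a client key and certificate signing request
openssl req -newkey rsa:2048 -nodes -keyout client-key.pem -out client.csr \
  -subj "/CN=*.localhost.pomerium.io"
# Sign the client certificate with the CA
openssl x509 -req -in client.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out client.pem -days 30
# Bundle into PKCS#12, like mkcert's -pkcs12 flag (default password "changeit")
openssl pkcs12 -export -in client.pem -inkey client-key.pem \
  -certfile ca.pem -out client.p12 -passout pass:changeit
```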
## Configure Pomerium
Create a `config.yaml` file in the current directory. (You can replace `/YOUR/MKCERT/CAROOT` in this example with the value of `mkcert -CAROOT`)
```yaml
# config.yaml
address: ":8443"
authenticate_service_url: "https://authenticate.localhost.pomerium.io:8443"
certificate_file: "_wildcard.localhost.pomerium.io.pem"
certificate_key_file: "_wildcard.localhost.pomerium.io-key.pem"
# "$(mkcert -CAROOT)/rootCA.pem"
client_ca_file: "/YOUR/MKCERT/CAROOT/rootCA.pem"
# generate with "$(head -c32 /dev/urandom | base64)"
cookie_secret: "NvNqawPTQQelACkTovVcnfZQ3mP25Tv3DxeiUkRFyTA="
shared_secret: "NvNqawPTQQelACkTovVcnfZQ3mP25Tv3DxeiUkRFyTA="
# replace with your IDP provider
idp_provider: "google"
idp_client_id: YOUR_CLIENT_ID
idp_client_secret: YOUR_SECRET
policy:
  - from: "https://httpbin.localhost.pomerium.io:8443"
    to: "https://httpbin.org"
    allow_public_unauthenticated_access: true
```
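The secrets shown above are examples only; generate fresh values for your deployment, e.g.:

```shell
# Emit a random 256-bit secret, base64-encoded (44 characters, no newline)
head -c32 /dev/urandom | base64 -w 0
```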
Start Pomerium with:
```bash
pomerium -config config.yaml
```
Before visiting the page in your browser we have one final step.
## Install Client Certificate
Because `https://httpbin.localhost.pomerium.io:8443` now requires a client certificate to be accessed, we first need to install that client certificate in our browser. The following instructions are for Chrome, but client certificates are supported in all major browsers.
Go to <chrome://settings/certificates>:
![chrome settings](./img/mtls/01-chrome-settings-certificates.png)
Next click on Import and browse to the directory where you created the certificates above. Choose `_wildcard.localhost.pomerium.io-client.p12`:
![import client certificate](./img/mtls/02-import-client-certificate.png)
You will be prompted for the certificate password. The default password is **`changeit`**:
![enter certificate password](./img/mtls/03-enter-certificate-password.png)
You should see the `org-mkcert development certificate` in the list of your certificates:
![certificate list](./img/mtls/04-certificate-list.png)
## Using the Client Certificate
You can now visit **<https://httpbin.localhost.pomerium.io:8443>** and you should be prompted to choose a client certificate:
![choose client certificate](./img/mtls/05-select-client-certificate.png)

docs/guides/readme.md
# Overview
This section contains application- and scenario-specific guides for Pomerium.
- The [ad-guard](./ad-guard.md) recipe demonstrates how Pomerium can augment web applications that only support simplistic authorization mechanisms, like basic-auth, with single-sign-on driven access policy.
- The [Cloud Run](./cloud-run.md) recipe demonstrates deploying Pomerium to Google Cloud Run, as well as using it to authorize users to protected Cloud Run endpoints.
- The [kubernetes](./kubernetes.md) guide covers how to add authentication and authorization to the Kubernetes dashboard using Helm and Let's Encrypt certificates. This guide also shows how third-party reverse proxies like nginx/traefik can be used in conjunction with Pomerium using forward-auth.
- The [visual studio code](./vs-code-server.md) guide demonstrates how Pomerium can add access control to third-party applications that don't ship with [fine-grained access control](https://github.com/cdr/code-server/issues/905).
- The [argo](./argo.md) guide demonstrates how Pomerium can add access control to [Argo](https://argoproj.github.io/projects/argo).
- The [mTLS](./mtls.md) guide demonstrates how Pomerium can add mutual authentication using client certificates and a custom certificate authority.
- The [local OIDC](./local-oidc.md) guide demonstrates how Pomerium can be used with a local OIDC server for dev/testing.
- The [TiddlyWiki](./tiddlywiki.md) guide demonstrates how Pomerium can add authentication and authorization to a web application using an authenticated-user header.

docs/guides/tiddlywiki.md
---
title: TiddlyWiki
lang: en-US
meta:
  - name: keywords
    content: pomerium identity-access-proxy wiki tiddlywiki
description: >-
  This guide covers how to add authentication and authorization to a hosted,
  fully online instance of TiddlyWiki.
---
# Securing TiddlyWiki on Node.js
This guide covers using Pomerium to add authentication and authorization to an instance of [TiddlyWiki on NodeJS](https://tiddlywiki.com/static/TiddlyWiki%2520on%2520Node.js.html).
## What is TiddlyWiki on Node.js
TiddlyWiki is a personal wiki and a non-linear notebook for organizing and sharing complex information. It is available in two forms:
- a single HTML page
- [a Node.js application](https://www.npmjs.com/package/tiddlywiki)
We are using the Node.js application in this guide.
## Where Pomerium fits
TiddlyWiki allows a simple form of authentication by using the authenticated-user-header parameter of the [listen command](https://tiddlywiki.com/static/ListenCommand.html). Pomerium provides the ability to log in with well-known [identity providers](../docs/identity-providers/readme.md#identity-provider-configuration).
## Pre-requisites
This guide assumes you have already completed one of the [quick start] guides, and have a working instance of Pomerium up and running. For the purposes of this guide, we will use docker-compose, though any other deployment method would work equally well.
## Configure
### Pomerium Config
```yaml
jwt_claims_headers: email
policy:
  - from: https://wiki.example.local
    to: http://tiddlywiki:8080
    allowed_users:
      - reader1@example.com
      - writer1@example.com
```
### Docker-compose
<<< @/examples/tiddlywiki/docker-compose.yaml
### That's it
Navigate to your TiddlyWiki instance (e.g. `https://wiki.example.local`) and log in:
* as reader1@example.com: the user can read the wiki, but the button to create new tiddlers does not appear.
* as writer1@example.com: the user can read the wiki and create new tiddlers.
* as any other email: Pomerium displays a permission denied error.
[quick start]: ../docs/quick-start

docs/guides/vs-code-server.md
---
title: VS Code Server
lang: en-US
meta:
  - name: keywords
    content: pomerium identity-access-proxy visual-studio-code visual studio code authentication authorization
description: >-
  This guide covers how to add authentication and authorization to a hosted,
  fully online instance of Visual Studio Code.
---
# Securing Visual Studio Code Server
## Background
This guide covers using Pomerium to secure an instance of [Visual Studio Code Server]. Pomerium is an identity-aware access proxy that can add single-sign-on / access control to any service, including Visual Studio Code.
### Visual Studio Code
[Visual Studio Code] is an open source code editor from Microsoft that has become [incredibly popular](https://insights.stackoverflow.com/survey/2019#technology-_-most-popular-development-environments) in the last few years. For many developers, [Visual Studio Code] hits the sweet spot between no-frills editors like vim/emacs and full-featured IDEs like Eclipse and IntelliJ. VS Code offers creature comforts like IntelliSense, git integration, and plugins, while staying relatively lightweight.
One of the interesting attributes of [Visual Studio Code] is that it is built on the [Electron](<https://en.wikipedia.org/wiki/Electron_(software_framework)>) framework, which embeds an instance of Chromium rendered as a desktop application. It didn't take long for folks to realize that if we already had this great IDE written in JavaScript, it might be possible to make [Visual Studio Code] run remotely.
> "Any application that can be written in JavaScript, will eventually be written in JavaScript." — [Jeff Atwood](https://blog.codinghorror.com/the-principle-of-least-power/)
### VS Code Server
[Visual Studio Code Server] is an open-source project that allows you to run [Visual Studio Code] on a **remote** server, through the browser. For example, this is a screenshot taken at the end of this tutorial.
![visual studio code building pomerium](./img/vscode-pomerium.png)
## Pre-requisites
This guide assumes you have already completed one of the [quick start] guides, and have a working instance of Pomerium up and running. For the purposes of this guide, I'm going to use docker-compose, though any other deployment method would work equally well.
## Configure
### Pomerium Config
```yaml
# config.yaml
# See detailed configuration settings: https://www.pomerium.io/docs/reference/reference/
authenticate_service_url: https://authenticate.corp.domain.example
# identity provider settings: https://www.pomerium.io/docs/identity-providers.html
idp_provider: google
idp_client_id: REPLACE_ME
idp_client_secret: REPLACE_ME
policy:
  - from: https://code.corp.domain.example
    to: http://codeserver:8443
    allowed_users:
      - some.user@domain.example
    allow_websockets: true
```
### Docker-compose
```yaml
codeserver:
  image: codercom/code-server:latest
  restart: always
  ports:
    - 8443:8443
  volumes:
    - ./code-server:/home/coder/project
  command: --allow-http --no-auth --disable-telemetry
```
Note we are mounting a directory called `./code-server`. Be sure to give the default docker user write permissions to that folder by running `chown -R 1000:1000 code-server/`.
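A quick sketch of that setup step (run the `chown` with `sudo` if your user lacks permission):

```shell
# Create the project directory and hand it to code-server's in-container user (uid 1000)
mkdir -p code-server
chown -R 1000:1000 code-server/ 2>/dev/null || echo "run: sudo chown -R 1000:1000 code-server/"
```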
### That's it!
Simply navigate to your domain (e.g. `https://code.corp.domain.example`).
![visual studio code pomerium hello world](./img/vscode-helloworld.png)
### (Example) Develop Pomerium in Pomerium
As a final touch, now that we've done all this work we might as well use our new development environment to write some real, actual code. And what better project is there than Pomerium? 😉
To build Pomerium, we must [install go](https://golang.org/doc/install), which is as simple as running the following commands in the [integrated terminal].
```bash
wget https://dl.google.com/go/go1.12.7.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.12.7.linux-amd64.tar.gz
```
Then add Go to our [PATH].
```bash
# add the following to $HOME/.bashrc
export PATH=$PATH:/usr/local/go/bin
export PATH=$PATH:$(go env GOPATH)/bin
```
Reload [PATH] by opening the [integrated terminal] and sourcing the updated `.bashrc` file.
```bash
source $HOME/.bashrc
```
Finally, now that we've got Go, all we need to do is grab the latest source and build.
```bash
# get the latest source
$ git clone https://github.com/pomerium/pomerium.git
# grab make
$ sudo apt-get install make
# build pomerium
$ make build
# run pomerium!
$ ./bin/pomerium --version
v0.2.0+e1c00b1
```
Happy remote hacking!!!😁
[visual studio code server]: https://github.com/cdr/code-server
[visual studio code]: https://code.visualstudio.com/
[synology nas]: ../docs/quick-start/synology.md
[quick start]: ../docs/quick-start
[integrated terminal]: https://code.visualstudio.com/docs/editor/integrated-terminal
[path]: https://en.wikipedia.org/wiki/PATH_(variable)