OpenVPN and Minikube

Garun Vagidov
2 min read · Dec 18, 2017

Recently at work, we needed a VPN set up on our local Minikube cluster to connect to our development environment. This is the simplest way to make OpenVPN part of your Minikube setup, giving your pods access to the network over the VPN. I am using Windows 10 with Minikube on Hyper-V.

There is a great article by Niels Grewe about how to integrate a CoreOS host into an OpenVPN-based VPN. Since the Minikube VM is a minimal Linux distribution much like CoreOS, we followed those directions and got OpenVPN working. The setup was successful, but unfortunately, on reboot of the virtual machine, all of that configuration was cleared out and we were left with no VPN connection again.

We decided to create a simple Helm chart that deploys OpenVPN with minimal configuration and lives as part of Kubernetes instead of the virtual machine. You will first need to download or have access to your OpenVPN configuration file. The easiest way is to log in to your OpenVPN server's web interface and download the autologin profile for yourself.

OpenVPN download profile screen example.

Here is the Helm chart: https://github.com/garunski/open-vpn-connect-chart.

To use the chart, place your client.ovpn file at the root of the chart and helm install it into your cluster.
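As a rough sketch, the install might look like this (the local path and release name are assumptions; adjust them to your setup):

```shell
# Clone the chart and drop your downloaded profile into its root
git clone https://github.com/garunski/open-vpn-connect-chart.git
cp ~/Downloads/client.ovpn open-vpn-connect-chart/

# Install the chart into the cluster (Helm 2 syntax; release name is arbitrary)
helm install ./open-vpn-connect-chart --name openvpn-connect
```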

The chart loads the OpenVPN configuration file as a ConfigMap and mounts it as a file.

apiVersion: v1
kind: ConfigMap
metadata:
  name: openvpn-config
  namespace: kube-system
data:
{{ (.Files.Glob "*.ovpn").AsConfig | indent 2 }}
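For context, here is a sketch of how the DaemonSet can consume that ConfigMap as a file under /vpn (the names mirror the snippets in this article; the exact manifest in the chart may differ):

```yaml
# In the DaemonSet pod spec (sketch):
volumes:
  - name: openvpn-config
    configMap:
      name: openvpn-config    # the ConfigMap defined above
containers:
  - name: openvpn
    image: openvpn-image      # placeholder image name
    volumeMounts:
      - name: openvpn-config
        mountPath: /vpn       # makes /vpn/client.ovpn available
```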

Although the chart picks up every file ending in .ovpn, the container command references /vpn/client.ovpn, so you will have to name yours client.ovpn. The DaemonSet will use that configuration to start OpenVPN.

command: ["/bin/sh"]
args:
  - -c
  - openvpn --config /vpn/client.ovpn;
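Once the release is installed, you can confirm that the tunnel actually came up by reading the pod logs (the label selector below is an assumption; use whatever labels the chart applies):

```shell
# Find the OpenVPN pod created by the DaemonSet
kubectl get pods -n kube-system -l app=openvpn

# OpenVPN prints this line once the tunnel is established
kubectl logs -n kube-system <openvpn-pod-name> | grep "Initialization Sequence Completed"
```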

The chart installs a DaemonSet, which deploys an OpenVPN pod onto each node in the cluster. The pod needs to run in a privileged security context and will take over all network connections for the node.

The magic here is --cap-add=NET_ADMIN --device /dev/net/tun --net=host, which grants the container enough privileges to set up the tun devices and makes them visible in the host’s networking stack. (http://nie.gr/2016/04/04/coreos-openvpn/)

In Kubernetes that translates to:

securityContext:
  privileged: true
  capabilities:
    add:
      - NET_ADMIN
volumeMounts:
  - mountPath: /dev/net/tun
    name: dev-net-tun
    readOnly: true
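The dev-net-tun volume mount needs a matching hostPath volume, and --net=host corresponds to hostNetwork on the pod spec. A sketch of the remaining pieces of the pod spec (assumed to match the chart; verify against the actual templates):

```yaml
# Elsewhere in the DaemonSet pod spec (sketch):
hostNetwork: true           # equivalent of docker's --net=host
volumes:
  - name: dev-net-tun
    hostPath:
      path: /dev/net/tun    # expose the host's tun device to the container
```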

This is a simple and quick solution that can be improved upon. I think it could potentially become part of a more complex chart for hosting OpenVPN in a cluster.
