This is going to be a short blog post describing how I got minikube up and running on Windows 10 with Hyper-V. For those that don't know, minikube lets you run a single-node Kubernetes cluster locally for development purposes.
minikube's README is fairly self-explanatory and easy to follow; however, I found a few grey areas that I thought I'd call out explicitly in this post in case anyone else runs into the same issues.
Firstly, make sure you have all of the prerequisites for Windows:

- VT-x/AMD-V virtualization must be enabled in the BIOS
- kubectl (ensure it is in your PATH variable)
Once you've got the above ready, you can go and grab the latest Windows minikube bits from their releases page. When downloaded, make sure to move it to a suitable directory and add that directory to your PATH.
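As a rough sketch, the rename and PATH update might look like the following from PowerShell. The `C:\minikube` directory is just an assumption for illustration — substitute wherever you actually put the binary.

```powershell
# Assumed download location C:\minikube - adjust to suit.
# Rename the release binary to something convenient:
Rename-Item C:\minikube\minikube-windows-amd64.exe minikube.exe

# Persistently append the directory to the user-level PATH:
[Environment]::SetEnvironmentVariable(
    "Path",
    [Environment]::GetEnvironmentVariable("Path", "User") + ";C:\minikube",
    "User")
```

You'll need to open a fresh prompt afterwards for the new PATH to take effect.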
Now that you've got minikube installed, you need to create an external network switch in Hyper-V. This is called out at the bottom of the drivers page but isn't immediately obvious. To do this, open Hyper-V Manager and select Virtual Switch Manager from the Actions side panel.
Then create a new External Virtual Switch called something like primary-vswitch. It's probably best to reboot your machine once you have done this to reset everything.
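If you'd rather script this than click through the GUI, the same switch can be created from an elevated PowerShell prompt. The adapter name "Ethernet" below is an assumption — check what yours is actually called with `Get-NetAdapter` first.

```powershell
# List the physical network adapters so you can pick the right one:
Get-NetAdapter

# Create the external switch bound to that adapter
# ("Ethernet" is a placeholder - substitute your adapter's name):
New-VMSwitch -Name "primary-vswitch" -NetAdapterName "Ethernet" -AllowManagementOS $true
```

`-AllowManagementOS $true` keeps the host's own network connectivity working over the same adapter.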
After you've rebooted the machine, open an Administrator PowerShell prompt and execute the following command:
minikube start --vm-driver=hyperv --hyperv-virtual-switch=primary-vswitch --v=7. The
--v=7 argument is optional depending on the level of logging you want.
If you get a response similar to the following:
Starting local Kubernetes cluster... Starting VM... E0418 07:18:48.778911 12012 start.go:116] Error starting host: Error starting stopped host: exit status 1. Retrying. E0418 07:18:48.780914 12012 start.go:122] Error starting host: Error starting stopped host: exit status 1
It'll probably be because you have tried to run
minikube start before and left the virtual machine in an unstable state. If you stop the VM in Hyper-V, delete your local minikube files by running
Remove-Item -Recurse -Path ~/.minikube -Force, and then rerun the start command, it should start over with a clean deployment.
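The cleanup steps above might look like this end-to-end in an elevated PowerShell prompt. The VM name "minikube" is what minikube gives its VM by default, but it's worth confirming with `Get-VM` before stopping anything.

```powershell
# Confirm the VM's name first (it is usually "minikube"):
Get-VM

# Stop the broken VM, wipe minikube's local state, and start fresh:
Stop-VM -Name "minikube" -Force
Remove-Item -Recurse -Path ~/.minikube -Force
minikube start --vm-driver=hyperv --hyperv-virtual-switch=primary-vswitch --v=7
```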
Once the minikube start command has returned, you should have a new VM running in Hyper-V with
kubectl configured to talk to it. You can view your kubectl config using
cat ~/.kube/config unless you've manually moved it elsewhere. You can test your kubectl connection by running
kubectl cluster-info, which should show you the Kubernetes master, DNS and dashboard running with their respective URLs.
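For reference, the two verification commands from above together:

```powershell
# Show the kubectl config that minikube wrote out:
cat ~/.kube/config

# Confirm kubectl can actually reach the cluster:
kubectl cluster-info
```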
Now you can create a new deployment using the following command
kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080. At this point the official docs use the command
kubectl expose deployment hello-minikube --type=NodePort to expose the hello-minikube deployment as a public service. This didn't work for me; I kept getting the error
Error from server: deployments.extensions "hello-minikube" not found. I'm not sure if this is because my version of kubectl doesn't yet have support for deployments - either way, I got it working by using the older ReplicationController method
kubectl expose rc hello-minikube --type=NodePort. Once exposed, I could now query my service with curl like so
curl $(minikube service hello-minikube --url) or simply get the url using
minikube service hello-minikube --url and open it in a browser.
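To recap, the whole deploy-and-test sequence I ended up with was the following — using the ReplicationController fallback rather than the deployment expose that failed for me:

```powershell
# Run the echoserver image, listening on port 8080:
kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080

# Expose it via the ReplicationController (kubectl expose deployment failed for me):
kubectl expose rc hello-minikube --type=NodePort

# Get the service URL, then hit it:
minikube service hello-minikube --url
curl $(minikube service hello-minikube --url)
```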
Now your cluster is up and running and you've (hopefully) got a service working too. From here you can configure, manipulate and exploit your cluster to your heart's content. I'm sure there are better ways to get round some of these obstacles, but in the short term I thought this might help anybody who just wants to get up and running quickly.