Why?
- Want to test out different container runtimes.
- Want to test out tools such as IPVS, AppArmor, Falco, etc.
- Want clear, understandable and extensible Ansible playbooks.
- Vagrant installed on your local host. At the moment the Vagrant script requires VirtualBox to be installed; however, this can easily be changed in the script, Vagrantfile.
- Ansible version >= 2.10 installed on your local host.
- kubectl installed on your local host. This is optional.
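A quick way to confirm the prerequisites from a terminal, assuming the tools are already on your PATH;
vagrant --version
VBoxManage --version        # VirtualBox, the default provider
ansible --version           # should report 2.10 or newer
kubectl version --client    # optional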
Ansible is not supported on Windows; the 'best' solution is to have Vagrant run the Ansible playbooks from within the guest virtual machines.
To bring up the cluster, run;
vagrant up --provision --provider virtualbox
This should take a short while, but upon successful completion you should have a cluster running, reachable via the assigned private IP on port 6443.
In addition, a copy of the kubeconfig will have been downloaded into the cluster directory, cluster/. It can be used to authenticate against the cluster when running commands.
For example, to check node status;
kubectl --kubeconfig ./cluster/kubeconfig get nodes
or
export KUBECONFIG=$(pwd)/cluster/kubeconfig
kubectl get nodes
If kubectl is not installed on your local host, you can ssh into the control node and run commands;
vagrant ssh control01
kubectl get nodes
After successful provisioning of the cluster, you can manage the nodes as follows;
- stopping the nodes
vagrant halt
- restarting the nodes
vagrant up
- destroying the nodes
vagrant destroy
- re-provisioning a running node
vagrant provision [node name/virtual machine name]
For additional details on these commands and others, consult the Vagrant documentation.
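For example, to list the machine names defined for this cluster and re-provision the control node seen earlier;
vagrant status                 # list the machines defined in the Vagrantfile
vagrant provision control01    # re-run provisioning on a single running node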
- To facilitate dashboard access, the provisioner creates a dashboard HTML stub file in the cluster directory, cluster/, together with a corresponding login token. From a file browser you can double-click the stub file to open the dashboard.
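If you prefer the terminal, you can locate and open the stub from the cluster directory; the file name below is an assumption, so list the directory first to see the actual names;
ls ./cluster/                        # locate the dashboard stub and its login token
xdg-open ./cluster/dashboard.html    # hypothetical name; use 'open' on macOS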
- The gVisor runtime class name is gvisor.
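As a minimal sketch, a workload can opt into that runtime class by setting runtimeClassName in its pod spec; the pod name and image below are only illustrative;
kubectl --kubeconfig ./cluster/kubeconfig apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-gvisor          # illustrative name
spec:
  runtimeClassName: gvisor    # the runtime class provided by the cluster
  containers:
  - name: nginx
    image: nginx              # illustrative image
EOF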
- IPVS is installed by default but not enabled.
To enable and use IPVS:
- Edit the kube-proxy config-map and set its mode to ipvs:
kubectl -n kube-system edit cm kube-proxy
- Re-create all the kube-proxy pods:
kubectl -n kube-system delete po -l k8s-app=kube-proxy
- Edit