Provision a Kubernetes cluster with Ansible and Vagrant

Why?

  • Want to test out different container runtimes.

  • Want to test out tools such as IPVS, AppArmor, Falco, etc.

  • Want clear, understandable and extensible Ansible playbooks.

Requirements:

Linux

  • Vagrant installed on your local host.

    At the moment the Vagrant script requires VirtualBox to be installed. However, this can easily be changed in the Vagrantfile.

  • Ansible version >= 2.10 installed on your local host.

  • kubectl installed on your local host. This is optional.

Windows

Ansible cannot act as a control node on Windows. The simplest workaround is to let Vagrant run the Ansible playbooks on the guest virtual machine itself.
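
Running the playbooks on the guest is what Vagrant's `ansible_local` provisioner does. A minimal sketch of what that looks like in a Vagrantfile (the box and playbook path below are illustrative, not this repo's actual values):

```ruby
# Sketch: run Ansible inside the guest instead of on the host,
# so provisioning works from a Windows machine as well.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"          # illustrative box name
  config.vm.provision "ansible_local" do |ansible|
    ansible.playbook = "playbook.yml"       # adjust to this repo's playbook
  end
end
```

With `ansible_local`, Vagrant installs Ansible in the guest automatically if it is not already present.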

Getting started

  vagrant up --provision --provider virtualbox

This should take a short while; upon successful completion you should have a cluster running, reachable via the assigned private IP on port 6443.

In addition, a copy of the kubeconfig will have been downloaded into the cluster directory, cluster/. It can be used to authenticate against the cluster when running commands.

For example, to check node status:

  kubectl --kubeconfig ./cluster/kubeconfig get nodes

or

  export KUBECONFIG=$(pwd)/cluster/kubeconfig
  kubectl get nodes

If kubectl is not installed on your local host, you can SSH into the control node and run commands there:

  vagrant ssh control01
  kubectl get nodes

After successful provisioning of the cluster, you can manage the nodes as follows:

  • stopping the nodes: vagrant halt
  • restarting the nodes: vagrant up
  • destroying the nodes: vagrant destroy
  • re-provisioning a running node: vagrant provision [node name/virtual machine name]

For additional details on these commands and others, consult the Vagrant documentation.

Installed features

Kubernetes Dashboard

  • Kubernetes dashboard

    To facilitate dashboard access, the provisioner creates a dashboard HTML stub file in the cluster directory, cluster/, together with a corresponding login token. From a file browser, you can double-click the stub file to open the dashboard.

Metrics

Container runtimes

  • gVisor

    Its runtime class name is gvisor.
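
    A workload opts into gVisor by referencing that runtime class. A minimal Pod sketch (the pod and container names here are illustrative):

    ```yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-gvisor        # illustrative name
    spec:
      runtimeClassName: gvisor  # schedule onto the gVisor runtime
      containers:
      - name: nginx
        image: nginx
    ```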

Policy

IPVS

  • IPVS is installed by default but not enabled.

    To enable and use IPVS:

    1. Edit the kube-proxy ConfigMap and set its mode to ipvs:
      kubectl -n kube-system edit cm kube-proxy

    2. Re-create all the kube-proxy pods:
      kubectl -n kube-system delete po -l k8s-app=kube-proxy
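
    Once the pods have been re-created, you can verify the mode switch took effect, for example by grepping the kube-proxy logs (a sketch; this assumes a reachable cluster and the same label selector used above):

    ```shell
    # Look for kube-proxy reporting that it started in IPVS mode
    kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i ipvs
    ```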
    

Security

Alternatives