Create a Local Vagrant K3s cluster
Clone the infctl-cli repo so that you have local access to its scripts, files and manifests for the steps that follow. Where you put it is up to you, but we will work on the assumption that it lives in $HOME/projects:
mkdir -p $HOME/projects
cd $HOME/projects
git clone https://codeberg.org/headshed/infctl-cli.git
cd infctl-cli
Take a look at the script ./scripts/install_vagrant_nodes.sh to familiarize yourself with what it does: essentially it runs vagrant up to create the virtual machines that will become your local cluster nodes, as sketched below.
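A minimal sketch of the idea, assuming the node machines are defined in vagrant/dev/ubuntu (the script in the repo is the authoritative version):

cd vagrant/dev/ubuntu
vagrant up vm1 vm2 vm3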
Next, take a look at ./scripts/configure_vagrant_k3s.sh. This checks the Vagrant hosts and creates an Ansible inventory file for use in a later step.
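To give a feel for the end result, the kind of INI inventory such a script produces might look roughly like the snippet below; the group name and host variables here are illustrative assumptions rather than the repo's actual output, and the real script derives its values from the running Vagrant machines:

cat > inventory.ini <<'EOF'
[k3s_nodes]
vm1 ansible_user=vagrant
vm2 ansible_user=vagrant
vm3 ansible_user=vagrant
EOF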
Finally, check out ./scripts/install_vagrant_workstation.sh, which runs a vagrant up command to create a workstation VM from which Ansible will be run.
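The underlying idea is again a single Vagrant command; the machine name workstation matches the vagrant status output shown later, but treat this as a sketch rather than the script's exact contents:

cd vagrant/dev/ubuntu
vagrant up workstation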
The Vagrantfile at vagrant/dev/ubuntu/Vagrantfile is used by Vagrant in each of these steps to coordinate the builds.
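If you want to confirm that the Vagrantfile parses cleanly before building anything, Vagrant can validate it in place; this only reads the file and does not create any machines:

cd vagrant/dev/ubuntu
vagrant validate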
A final script, ./vagrant/dev/ubuntu/ansible/provision_workstation.sh, which is quite a bit longer, is used by Vagrant to provision our workstation and finalize the cluster.
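The broad shape of that provisioning is worth understanding before running it. The outline below is purely hypothetical; every package, playbook name and path in it is an assumption, so read the real script for the authoritative steps:

sudo apt-get update && sudo apt-get install -y ansible
ansible-playbook -i inventory.ini k3s.yml
# hypothetical: fetch the kubeconfig from a node and place it in ~/.kube/config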
If you are ready to run the pipeline, all of this can be done in a single command with infctl, which we configure to use the pipeline file at pipelines/dev/vagrant-k3s.json. This marshals each of the above tasks into a single, repeatable operation:
LOG_FORMAT=none infctl -f pipelines/dev/vagrant-k3s.json
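If you want to see exactly which tasks the pipeline marshals, and in what order, the file itself is plain JSON and can simply be inspected:

cat pipelines/dev/vagrant-k3s.json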
If all has gone well, a cluster comprising three nodes and a workstation will now be running on your local system. We can check its status by switching to the Vagrant dev folder and running a vagrant status command:
cd vagrant/dev/ubuntu/
vagrant status

Current machine states:

vm1                       running (virtualbox)
vm2                       running (virtualbox)
vm3                       running (virtualbox)
workstation               running (virtualbox)

This environment represents multiple VMs. The VMs are all listed above with their current state. For more information about a specific VM, run `vagrant status NAME`.
To work on our cluster we must first connect to the workstation and then use kubectl commands to interact with k3s:
❯ vagrant ssh workstation
Welcome to Ubuntu 22.04.5 LTS (GNU/Linux 5.15.0-144-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/pro

  System information as of Sat Aug 16 15:59:20 UTC 2025

  System load:  0.0                Processes:               94
  Usage of /:   6.4% of 38.70GB    Users logged in:         0
  Memory usage: 30%                IPv4 address for enp0s3: 10.0.2.15
  Swap usage:   0%

 * Strictly confined Kubernetes makes edge and IoT secure. Learn how MicroK8s
   just raised the bar for easy, resilient and secure K8s cluster deployment.

   https://ubuntu.com/engage/secure-kubernetes-at-the-edge

Expanded Security Maintenance for Applications is not enabled.

17 updates can be applied immediately.
17 of these updates are standard security updates.
To see these additional updates run: apt list --upgradable

Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status

New release '24.04.3 LTS' available.
Run 'do-release-upgrade' to upgrade to it.

Last login: Sat Aug 16 13:00:06 2025 from 10.0.2.2
Agent pid 6586
/home/vagrant/machines/*/virtualbox/private_key: No such file or directory
The agent has no identities.
vagrant@ansible-workstation:~$ kubectl get nodes
NAME   STATUS   ROLES                       AGE     VERSION
vm1    Ready    control-plane,etcd,master   4h11m   v1.33.3+k3s1
vm2    Ready    control-plane,etcd,master   4h11m   v1.33.3+k3s1
vm3    Ready    control-plane,etcd,master   4h10m   v1.33.3+k3s1
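From here, any kubectl command can be run against the cluster from the workstation; two standard checks, for example (the output will of course depend on your own environment):

kubectl get nodes -o wide
kubectl get pods --all-namespaces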
If you have got this far, congratulations: you have a locally hosted k3s cluster running in three virtual machines, plus a workstation that can be used to manage it with kubectl, and with Ansible too if you need it.
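As a quick check that Ansible can also reach the nodes from the workstation, an ad-hoc run of the ping module works well; the inventory path below is a placeholder, so point it at the inventory file generated earlier by configure_vagrant_k3s.sh:

ansible all -i /path/to/inventory.ini -m ping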