[Estimated Reading Time: 10 minutes]

Following the Vagrant experiment (reminding me of a Bill Bailey metaphor… “a long walk down a windy beach to a cafe that was closed”), I next set about automating my VM creation using PowerShell, with greater success. Though still not perfect, the final gap to full automation is something I could close if I wished. And we get to install Kubernetes itself!

The PowerShell Solution

Reasoning that anything the Hyper-V provider for Vagrant was able to do must be possible via some other means, my first thought was to explore the Hyper-V module for PowerShell. Sure enough, it had everything I needed.

Far more quickly than with Vagrant, I soon had a simple script to create all my VMs:

create-vms.ps1

NOTE: Unfortunately I have had to resort to posting the scripts as screen-shots due to an issue with WordPress. Something about these scripts made the post editor unable to save my post.

I tried with a variety of plugins as well as using a plain preformatted, unhighlighted block, without success. Trying to resolve this issue is what has delayed this post!

I can only hope whatever issue is behind this isn’t causing problems with code posted in historic posts! If you encounter any, please let me know.

This straightforward script declares a function (lines 1-39) that does all the heavy lifting, which is then called 4 times, once for each VM that I want.

With default values defined for most parameters to the function, each call only has to name the VM being created and provide the desired MAC address (again, the values in the script here are illustrative only).

The function prepares each VM. The initial New-VM command (line 18) creates the VM with a (virtual) HDD. An immediate benefit over Vagrant is that the required network switch is also specified in this command.

Further tweaks to the VM are then applied, including setting the RAM and CPU resources, disabling secure boot and applying the required properties to the network adapter of the VM (VLAN ID and MAC address).

Then follows the most complicated part of the script (lines 24-35). This section:

  1. Adds a virtual DVD drive with the Ubuntu Server installation iso mounted
  2. Removes the network from the boot device list
  3. Sets the virtual DVD as the first boot device

None of this is strictly necessary but is a convenience for what follows.

Finally, auto-checkpoints are disabled since I intend to create specific checkpoints myself at various points in the process that follows and don’t want these conflated and confused with automatic ones.
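
Since the script itself is only shown as a screenshot, here is a sketch of the shape of such a function, for reference only. The cmdlets are those provided by the Hyper-V module, but the names, sizes, paths and MAC addresses are all illustrative rather than the actual values from my script:

function New-KubeNodeVM {
    param (
        [Parameter(Mandatory)] [string] $Name,
        [Parameter(Mandatory)] [string] $MacAddress,
        [string] $SwitchName  = "Kubernetes",
        [int]    $VlanId      = 10,
        [Int64]  $MemoryBytes = 4GB,
        [int]    $CpuCount    = 2,
        [Int64]  $DiskBytes   = 60GB,
        [string] $IsoPath     = "D:\ISO\ubuntu-server.iso",
        [string] $VmPath      = "D:\Hyper-V"
    )

    # Create the VM with a new virtual HDD, attached to the required switch
    New-VM -Name $Name -Generation 2 -MemoryStartupBytes $MemoryBytes `
        -NewVHDPath "$VmPath\$Name\$Name.vhdx" -NewVHDSizeBytes $DiskBytes `
        -SwitchName $SwitchName -Path $VmPath

    # CPU, secure boot and network adapter tweaks
    Set-VMProcessor -VMName $Name -Count $CpuCount
    Set-VMFirmware  -VMName $Name -EnableSecureBoot Off
    Set-VMNetworkAdapterVlan -VMName $Name -Access -VlanId $VlanId
    Set-VMNetworkAdapter     -VMName $Name -StaticMacAddress $MacAddress

    # Mount the Ubuntu Server ISO and make the DVD the first boot device,
    # dropping the network from the boot order entirely
    Add-VMDvdDrive -VMName $Name -Path $IsoPath
    $dvd = Get-VMDvdDrive      -VMName $Name
    $hdd = Get-VMHardDiskDrive -VMName $Name
    Set-VMFirmware -VMName $Name -BootOrder $dvd, $hdd

    # No automatic checkpoints; specific checkpoints will be created later
    Set-VM -Name $Name -AutomaticCheckpointsEnabled $false
}

# One call per VM; the MAC addresses here are illustrative only
New-KubeNodeVM -Name "k8s-master" -MacAddress "00155D010001"
New-KubeNodeVM -Name "k8s-node1"  -MacAddress "00155D010002"
New-KubeNodeVM -Name "k8s-node2"  -MacAddress "00155D010003"
New-KubeNodeVM -Name "k8s-node3"  -MacAddress "00155D010004"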

OS Installation

Actual installation of the Ubuntu Server OS in each VM remains a manual step in this approach.

This is the automation gap I mentioned, which I could close in the future if I wished by creating fully unattended installs for my Ubuntu machines.

For now, I install the OS “manually” in each VM. Having the PowerShell script configure each new VM to boot straight into the installation process provides enough convenience for the time being.

Although it (currently) has to be repeated for each VM, installation is far more straightforward with these VMs. Installation for use as a Kubernetes node is almost “vanilla”: defaults are accepted for almost all options other than the network hostname, my preferred user account name, and formatting the full capacity of the virtual HDD since, for some reason, the installer intends to format only half the capacity by default. This may be related to the virtual and dynamic nature of the “disk”, though I’m not sure.
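
As far as I can tell, this is simply the Ubuntu installer’s default LVM layout, which allocates only part of the volume group to the root volume, rather than anything specific to the virtual disk. If you do accept the default during installation, the root volume can be grown afterwards with something like the following (the volume names are Ubuntu’s defaults and may differ):

sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
sudo resize2fs /dev/ubuntu-vg/ubuntu-lv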

A good sign that the diminutive Asus PC is more than capable of supporting these 4 VMs was that I was able to launch all 4 newly created VMs and go through the installation of each in parallel, without any sign of the platform straining under the load or even triggering the cooling fans on the host – an increasingly common measure of system load. 🙂

One slight concession to the scripted steps that were to follow was to copy my ssh key to each VM and to make my user account capable of password-less sudo, making it possible to run the subsequent bash scripts entirely remotely over ssh.
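
In practice that boils down to a couple of commands per VM, something like this (the hostname is illustrative, and the account is whatever user name was chosen during installation):

# Copy the workstation's public key to the node
ssh-copy-id k8s-node1

# Then, on the node itself, allow that account to sudo without a password
echo "$USER ALL=(ALL) NOPASSWD:ALL" | sudo tee "/etc/sudoers.d/$USER"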

Onward, To Kubernetes!

Apologies: the bash scripts in this section are, by necessity, all reproduced as screen-shots to avoid the WordPress issue mentioned above.

With the VMs ready to go, I then had just 3 scripts left to run to create my cluster. They had to be run in a particular order:

  1. main.sh
  2. master.sh
  3. workers.sh

Each of these is a script that runs another script on one, some or all of my 4 VMs. As an example, let’s look at main.sh, which runs a node-install.sh script on all 4 of the VMs:

main.sh

For anyone with experience of scripting over ssh, this should be familiar (if somewhat naive in places, as I am still an egg when it comes to this stuff).
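
In outline, a script like this boils down to something like the following (the hostnames are illustrative, and the exact mechanics here are indicative rather than a transcript of the screenshot):

#!/usr/bin/env bash
# Run node-install.sh on every VM in turn
for node in k8s-master k8s-node1 k8s-node2 k8s-node3; do
  echo "=== Installing on $node ==="
  scp node-install.sh "$node:~/"
  ssh "$node" 'bash ~/node-install.sh'
done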

The node-install.sh script itself is the largest of all of the scripts, and although it does a fair amount of work, it is all very straightforward:

node-install.sh

Tasks 1 thru 5 in this script (lines 1-42) are all about preparing the system for Kubernetes.

After some initial housekeeping to elevate the script to sudo and ensure all apt packages are up to date, the system swap file and firewall are turned off.

Tasks 3 thru 5 then configure Kubernetes to use the containerd runtime.

Kubernetes itself is then installed by first (Task 6, lines 44-46) adding the Kubernetes apt repo and its signing key, before finally (Task 7, lines 48-49) installing kubeadm, kubelet and kubectl.
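
For reference, that same sequence of tasks, as typically documented in the guides of the time, looks roughly like this. Treat it as a sketch rather than a copy of the script in the screenshot, and note that the Kubernetes apt repository location and key handling shown here have since been superseded:

#!/usr/bin/env bash
# Task 1: re-run under sudo if necessary, then bring apt up to date
[ "$EUID" -ne 0 ] && exec sudo bash "$0" "$@"
apt-get update && apt-get upgrade -y

# Task 2: turn off swap and the firewall (Kubernetes requires swap off)
swapoff -a
sed -i '/\sswap\s/ s/^/#/' /etc/fstab
ufw disable

# Tasks 3-5: kernel modules and sysctl settings needed by the container
# runtime, then containerd itself with a default configuration
cat <<EOF >/etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter

cat <<EOF >/etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system

apt-get install -y containerd
mkdir -p /etc/containerd
containerd config default >/etc/containerd/config.toml
systemctl restart containerd

# Task 6: add the Kubernetes apt repository and its signing key
# (this was the documented location at the time; it has since moved)
apt-get install -y apt-transport-https curl
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" >/etc/apt/sources.list.d/kubernetes.list
apt-get update

# Task 7: install the Kubernetes tools and hold them at this version
apt-get install -y kubeadm kubelet kubectl
apt-mark hold kubeadm kubelet kubectl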

To recap on those three packages:

kubeadm: Command-line utility for performing Kubernetes administrative operations, such as initialising a cluster or joining a node to an existing cluster.
kubelet: The agent that runs on every Kubernetes node, communicating with the cluster’s control plane and managing the containers on that node (this process is fundamentally what turns a machine into a node).
kubectl: Command-line utility that provides the primary means of interacting with a Kubernetes cluster. It is used to add, remove and query the status of objects within the cluster, among other things.

Now is a good time for a brief diversion, to talk about containerd.

Docker vs containerd

Some time ago, the Kubernetes developers announced their intention to remove support for Docker. This did not mean that Kubernetes would be unable to “run Docker containers”, only that it would no longer support “using Docker to run containers“, i.e. using Docker as the container runtime.

A subtle but important difference.

The alternative is to use a container runtime called containerd.

Containers themselves are a standardised commodity. Indeed, they were already a “thing” before Docker made them popular.

A “Docker container” is just “a container” that can be run on any standards-compliant container platform. Whilst Docker does indeed use a standards-compliant platform to run its containers, that platform lives inside a Docker-ish environment. For Kubernetes to work with the underlying platform, it has to use a compatibility layer to get through the Docker-ish layers in-between.

And the standards-compliant runtime that Docker employs is, in fact, … containerd.

You may already be thinking that this all sounds a bit… messy. The Kubernetes team agrees with you.

The prior support for this (the “dockershim” compatibility layer) was only ever intended as a temporary concession. It introduces additional layers between Kubernetes and the container runtime itself, and ideally Kubernetes would work directly with the underlying container platform (containerd). Arguably that’s the whole point of deprecating Docker support in Kubernetes… to wean people off the compatibility layer and switch over to running directly on containerd.

It is more efficient and involves one less thing (the Docker compatibility layer) to maintain and to go wrong.

When I first manually installed Kubernetes I went down the Docker route, since this is what the instructions I was following did. Although that didn’t require any configuration (beyond installing the docker.io package itself), it also didn’t initially work. The configuration tweak required to get it working was fairly small, but it had a bad “smell” about it, as if I was changing something that wasn’t intended to be changed (well, it was deprecated, after all).

Ultimately I decided that since Docker support was going away I should get used to not using it sooner rather than later and found a more current guide that provided the containerd configuration steps that I have now incorporated in my scripts.

What About Kubernetes in Docker?

If you use Docker Desktop, you may be familiar with the fact that you can create yourself a Kubernetes cluster with a single click, simply by enabling Kubernetes in Docker Desktop itself.

This is unrelated to the “Kubernetes support for Docker” issue and is purely a convenient means of standing up a single-node Kubernetes cluster on a machine that already has Docker Desktop installed.

Why didn’t I just use that to learn about Kubernetes?

As a way of learning to work with Kubernetes it is indeed a great place to start, but being a single-node cluster it doesn’t provide scope for learning about aspects of Kubernetes that arise in more complex – i.e. more realistic – scenarios. Besides, I suspected that toggling a check-box was perhaps not representative of what was involved in establishing a “proper” cluster. 🙂

Back to Installing Kubernetes

So, back to the business of installing Kubernetes.

With node-install.sh having been run on each of my machines, I now had 4 VMs capable of being Kubernetes nodes, but which were not yet actual nodes.

Three steps remain:

  1. Initialise (create) a cluster
  2. Configure the master node and install a container network
  3. Join the worker nodes to the cluster

This of course was the purpose of the remaining scripts, although the first step is manual.

Creating the Cluster

To initialise the cluster itself, all that is required is to ssh to the node that is to be the master node and run the following command:

sudo kubeadm init

If all goes well, this creates the cluster with a swathe of diagnostic output, before finally providing some information that is critical for the steps that follow. This is one reason it remains a manual step (that and the fact that it is trivially simple and not worth scripting).

The output you get after successfully initialising a cluster should be similar to this:

Successful output from kubeadm init

The three critical pieces of information are highlighted red, blue and green.

The red instruction is easily automated, as indeed is the blue instruction to install a pod network. This is what establishes the internal network within the cluster, over which the pods in that cluster communicate with each other. There are numerous options available in this space, all described at the URL provided in the output.

I did a little research and chose Weave, attracted by the observation that it provided a plug-and-play option with simple installation and no configuration required.

The two steps to configure the master and install the Weave network add-on were incorporated into a master-config.sh script:

master-config.sh

This is the first demonstration of installing something into Kubernetes. The command:

kubectl apply -f <filename>

asks Kubernetes to apply the file identified. Within the specified file is a description of services and other objects that Kubernetes should ensure are available. If those services need container images, then Kubernetes will pull those images as required.

In this case, the file describes the Weave add-on and is referenced by URL, with a bit of bash magic to ensure that the version of the file used is appropriate to the version of Kubernetes installed.
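
Based on the standard kubeadm output and the Weave documentation of the time, a master-config.sh along these lines looks roughly like this (again a sketch, not a transcript of the screenshot; the cloud.weave.works URL has since been retired):

#!/usr/bin/env bash
# The "red" instruction: set up kubectl configuration for the current user
mkdir -p "$HOME/.kube"
sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"

# The "blue" instruction: install the Weave pod network add-on; the
# k8s-version query string is the bash magic that matches the manifest
# to the installed Kubernetes version
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"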

I then created a master.sh script which runs that config script on the master node over ssh and then downloads the configuration required to enable kubectl running on my workstation to communicate with the cluster:

master.sh

The downloaded configuration file is simply placed in the current folder. On a “fresh” workstation it can then be moved and renamed as ~/.kube/config. This isn’t done automatically because, if a machine already has Kubernetes configuration for working with other clusters, the contents need to be merged into that existing configuration.
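
In outline, master.sh amounts to something like this (the hostname and the downloaded filename are illustrative):

#!/usr/bin/env bash
MASTER=k8s-master

# Run the configuration script on the master node
scp master-config.sh "$MASTER:~/"
ssh "$MASTER" 'bash ~/master-config.sh'

# Download the cluster configuration to the current folder so that kubectl
# on the workstation can (once moved or merged) talk to the cluster
scp "$MASTER:~/.kube/config" ./kube-config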

The final green instruction provided the command to run on each of the worker nodes that wish to join the cluster.

I created a script to do this (node-join.sh), containing the command as provided in the output from kubeadm above, and then created a further script (workers.sh) to run it on each of my worker nodes:

workers.sh
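
In outline, the pair look something like this (hostnames are illustrative, and the join parameters are placeholders for the values printed by kubeadm init, not real ones):

# node-join.sh: the join command exactly as printed by kubeadm init
sudo kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

# workers.sh: run node-join.sh on each worker node
for node in k8s-node1 k8s-node2 k8s-node3; do
  scp node-join.sh "$node:~/"
  ssh "$node" 'bash ~/node-join.sh'
done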

Testing the Cluster

All being well, I now had a Kubernetes cluster up and running!

To test that, all that remained was to install Kubernetes command-line tools on my workstation and configure them.

Since I already had Docker Desktop installed, the Kubernetes CLI tools (specifically, kubectl) were already available as part of that. Indeed, after installing the Kubernetes CLI with Homebrew, my system was still using the Docker Desktop tools, which was undesirable since they were older. brew doctor helps identify such problems and suggests fixes, which in this case meant removing a symlink that was pointing kubectl at the Docker Desktop version.
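
If you hit the same thing, the check and the fix look something like this (indicative commands only; the formula name here assumes kubectl was installed via the kubernetes-cli formula):

# See which kubectl is first on the PATH and where it points
which kubectl
ls -l "$(which kubectl)"

# Let brew doctor flag the conflicting link, then have Homebrew
# take over the kubectl symlink from the Docker Desktop copy
brew doctor
brew link --overwrite kubernetes-cli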

Since I had no other Kubernetes cluster configuration to worry about, I then simply moved and renamed the kube-config file downloaded from my master node to ~/.kube/config and I was all set!
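
On a fresh workstation that amounts to no more than the following (assuming the file was downloaded as kube-config, as in the sketch above):

mkdir -p ~/.kube
mv ./kube-config ~/.kube/config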

As a simple test, I could ask Kubernetes to list all of the nodes in my default cluster by issuing the command:

kubectl get nodes

If everything has gone smoothly, the result should resemble:

Nodes in the cluster

Success!

The “12d” age of my nodes reflects the fact that this screenshot was taken 12 days after having done all this.

But what are the nodes doing?

We can find out by asking Kubernetes to list all of the services that are running in the cluster, with this command:

kubectl get services -A

The -A option directs kubectl to list services in all namespaces, and we should get something resembling:

Services in the cluster

Well, actually, you won’t.

In a fresh cluster, you will only see services in the default and kube-system namespaces, and that’s it.

The Kubernetes Dashboard service is not installed by default. And a service of type LoadBalancer is also not something that a cluster like this one is capable of providing out of the box.

Next: Dashboard, LoadBalancers and Namespaces

Next time we’ll look at how we establish these things, why and what they do for us, as well as looking at the concept of namespaces.
