01 Oct 2022

But why?

Whatever you think about Kubernetes, it is here to stay. At work it's replaced all our custom orchestration and ECS usage. It solves infrastructure problems in a common way that most Cloud providers have agreed to, which is no small feat.

So why would you want to install it locally anyway? For work your company will most likely provide you access to a cluster and give you a set of credentials that somewhat limit what you can do. Depending on your interests that won't be enough to properly learn about what is going on under the hood. Getting familiar with how Kubernetes stores containers, container logs, how networking is handled, DNS resolution, load balancing etc will help remove a huge barrier.

The best thing I can say about Kubernetes is that when you dig into it, everything makes sense and turns out to be relatively simple. You will see that most tasks are actually performed by a container, one that has parameters, logs and a GitHub repo somewhere you can check. Your DNS resolution is not working? Well then take a look at "kubectl -n kube-system logs coredns-d489fb88-n7q9p", check the configuration with "kubectl -n kube-system get deployments coredns -o yaml", proxy or port-forward to it, check the metrics...
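As a sketch, that CoreDNS debugging session might look like this (the pod name is whatever your cluster assigned; find it first):

```shell
# Find the actual CoreDNS pod name (it varies per cluster)
kubectl -n kube-system get pods -l k8s-app=kube-dns

# Tail its logs (pod name below is an example)
kubectl -n kube-system logs coredns-d489fb88-n7q9p

# Inspect how CoreDNS is deployed and configured
kubectl -n kube-system get deployments coredns -o yaml
kubectl -n kube-system get configmap coredns -o yaml

# Try a DNS query from inside the cluster
kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- \
  nslookup kubernetes.default.svc.cluster.local
```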

In my particular case I also always have a bunch of code I need to run: scripts to aggregate and publish things, renew my Let's Encrypt certificates, update podcast and RSS services, installed tools like Grocy and others. Over the years I've tried various solutions to handle all of this. Currently I have a bunch of scripts on my main machine (Linux), which is almost always on, some of them spawning containers (to ease dependency management, since sometimes it's not completely under my control), while other tasks are handled by my Synology DS.

For me, having Kubernetes seems like a logical step, even if it's just to see whether this solution will age better over time. I like the declarative nature of the YAML configuration and the relative isolation of having Dockerfiles with their own minimal needs defined.

But let's be clear: for most non-developers this is a pretty silly thing to do!

Let's reuse some hardware

There are multiple ways to run a local Kubernetes cluster. One of the most popular is minikube, but this time we are going to try the Microk8s offering from Canonical (Ubuntu), running on a spare Linux box (rather than in a VM).

Microk8s is a developer-oriented Kubernetes distribution. You should most certainly not use it as your day-to-day driver, but it's a good option to play with Kubernetes without getting too lost in the actual setup.

Do you have a spare computer somewhere? I have an old Mac Mini I was using as my main OS X machine (for iOS builds) and as a secondary backup system. It has now been superseded by a much better Intel Mac Mini a kind friend gave me, leaving me with a machine that is too nice to just shelve but too cumbersome to maintain as another Mac OS X machine (it won't be plugged into a monitor most of the time).

  • I installed a minimal Ubuntu 22.04.1 LTS on it (Server edition)
  • CPU according to /proc/cpuinfo: Intel Core i5-4278U @ 2.60GHz
  • RAM according to /proc/meminfo: 16GB
  • And according to hdparm -I I have two SSDs, since some time ago I replaced the old HDD with a spare SSD: a Samsung SSD 860 EVO 500GB and the original APPLE SSD SM0128F
    • I decided to mount the smaller Apple SSD as ext4 on / and the bigger 500GB one as zfs on a separate mountpoint called /main, mostly because I have a tendency to reinstall the whole OS but like to keep the ZFS partition between installs.
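A sketch of that ZFS setup on Ubuntu Server; the device name is an assumption, so check lsblk before running anything like this:

```shell
# Identify the disks first; /dev/sdb below is an assumption
lsblk -o NAME,SIZE,MODEL

# Install the ZFS tooling
sudo apt install zfsutils-linux

# Create a single-disk pool named "main", mounted at /main
# (assuming the 500GB Samsung shows up as /dev/sdb)
sudo zpool create -o ashift=12 -m /main main /dev/sdb

# After a later OS reinstall, the pool can be picked up again with:
sudo zpool import main
```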

Setting up microk8s

Once the Mac Mini Linux installation was set up (nothing extra to do there), I enabled SSH; the remaining setup was done from inside SSH sessions.

Installation didn't start very promisingly, as it failed the first time, but re-running it finished successfully, following the official installation guide:

$ sudo snap install microk8s --classic
microk8s (1.25/stable) v1.25.2 from Canonical✓ installed

For ease of use I've given my own user access to it with:

sudo usermod -a -G microk8s osuka
sudo chown -f -R osuka ~/.kube
# exiting the ssh session and logging in again is needed at this point

You can check that you have the correct permissions by running microk8s status --wait-ready. As per the guide, we'll enable some services:

$ microk8s status
high-availability: no
  datastore master nodes: 127.0.0.1:19001
  datastore standby nodes: none
addons:
  enabled:
    ha-cluster           # (core) Configure high availability on the current node
    helm                 # (core) Helm - the package manager for Kubernetes
    helm3                # (core) Helm 3 - the package manager for Kubernetes
    metrics-server       # (core) K8s Metrics Server for API access to service metrics
  disabled:
    cert-manager         # (core) Cloud native certificate management
    community            # (core) The community addons repository
    dashboard            # (core) The Kubernetes dashboard
    ...

We activate the following services, by running microk8s enable XXXX for each one:

  • dashboard: Provides us with a web UI to check the status of the cluster. See more on GitHub
  • dns: installs coredns which will allow us to have cluster-local DNS resolution
  • registry: installs a private container registry where we can push our images
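Each addon above gets its own enable command; a quick sketch, with a check that the new pods come up afterwards:

```shell
microk8s enable dashboard
microk8s enable dns
microk8s enable registry

# Wait for everything to settle and verify the new pods
microk8s status --wait-ready
microk8s kubectl get pods --all-namespaces
```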

We also enable the "community" repository with microk8s enable community, which gives us additional services we can enable. From the community addons, we install:

  • istio: this creates a service mesh which provides load balancing, service-to-service authentication and monitoring on top of existing services, see their docs
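As a sketch, the community repository and istio are enabled the same way as the core addons:

```shell
microk8s enable community
microk8s enable istio

# The istio control plane runs in its own namespace
microk8s kubectl get pods -n istio-system
```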

Make microk8s kubectl the default command to run:

  • add alias mkctl="microk8s kubectl"
  • this is just the standard kubectl, provided via microk8s so we won't need to install it separately - we can still use kubectl from other machines
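To make the alias survive new sessions, it can go in the shell's startup file (assuming bash; adjust for zsh etc.):

```shell
# Persist the alias for future login shells
echo 'alias mkctl="microk8s kubectl"' >> ~/.bashrc
source ~/.bashrc

# From now on:
mkctl get nodes
```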

Let's use it remotely

Access the cluster itself with kubectl

The last thing I want is yet another computer I have to keep connecting to a display, so this installation will be used remotely. Also, I don't really want to be ssh'ing into it just to run kubectl commands.

From inside the ssh session, check stuff is running:

$ mkctl get pods --all-namespaces
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
kube-system          kubernetes-dashboard-74b66d7f9c-6ngts        1/1     Running   0          18m
kube-system          dashboard-metrics-scraper-64bcc67c9c-pth8h   1/1     Running   0          18m
kube-system          metrics-server-6b6844c455-5t85g              1/1     Running   0          19m
...

There's quite a bit of documentation on how to do most common tasks; check the how-to section of the Microk8s site. To configure access:

  • Expose configuration
$ microk8s config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRV... # (certificate)
    server: https://192.168.x.x:16443   # (ip in local network)
  name: microk8s-cluster
contexts:
- context:
    cluster: microk8s-cluster
    user: admin
  name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
  user:
    token: WDBySG1.......  # direct token authentication

At some point we will create users, but for initial development tasks all we need is to merge the contents of this configuration into our ~/.kube/config file (or create it if this is your first time using kubectl).
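Instead of pasting YAML by hand, kubectl can do the merge itself; a sketch, assuming the host is reachable at the IP from the config above and the temp file names are arbitrary:

```shell
# Grab the cluster's config over ssh
ssh osuka@192.168.x.x microk8s config > /tmp/microk8s.yaml

# Back up the existing kubeconfig, then merge and flatten both files
cp ~/.kube/config ~/.kube/config.bak
KUBECONFIG=~/.kube/config:/tmp/microk8s.yaml \
  kubectl config view --flatten > /tmp/merged-config
mv /tmp/merged-config ~/.kube/config
```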

After doing this, all kubectl commands will work as usual. I've renamed my context to 'minion':

❯ kubectl config use-context minion
❯ kubectl get pods --all-namespaces

More details for this specific step can be found in the official how-to.
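The context rename mentioned above is a single kubectl command:

```shell
kubectl config rename-context microk8s minion
kubectl config use-context minion
```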

Access the dashboard

Since we haven't enabled any other mechanism, to access the dashboard you will need a token. At this point you could use the same token you have in .kube/config, but the recommended way is to create a temporary token instead:

$ kubectl create token default
... a token is printed
$ kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443

Then connect to https://localhost:10443 and paste the output from the first command. See the next post for an explanation of the token used there.
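The token created above is short-lived by default; kubectl (1.24+) can mint one with a longer validity via the --duration flag:

```shell
# A token valid for 8 hours instead of the default
kubectl create token default --duration=8h
```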


(Follow me on Twitter, on Facebook or add this RSS feed)