Patch CDK #2: Multi arch support — s390x

This article originally appeared on Konstantinos’ blog

In our last post we discussed the steps required to build the Canonical Distribution of Kubernetes (CDK). That post should give you a good picture of the components coming together to form a release. Some of these components are architecture agnostic, some not. Here we will update CDK to support IBM’s s390x architecture.

The CDK bundle is made of charms written in Python running on Ubuntu. That means we are already in pretty good shape in terms of running on multiple architectures.

However, charms deploy binaries that are architecture specific. These binaries are of two types:

  1. snaps and
  2. juju resources

Snap packages are cross-distribution but unfortunately they are not cross-architecture. We need to build snaps for the architecture we are targeting. Snapped binaries include Kubernetes with its addons as well as etcd.

There are a few other architecture specific binaries that are not snapped yet. Flannel and CNI plugins are consumed by the charms as Juju resources.

Build snaps for s390x

CDK uses snaps to deploy a) the Kubernetes binaries, b) the add-on services and c) the etcd binaries.

Snaps with Kubernetes binaries

Kubernetes snaps are built using the branch at https://github.com/juju-solutions/release/tree/rye/snaps/snap. You will need to log in to your s390x machine, clone the repository and check out the right branch:

On your s390x machine:
> git clone https://github.com/juju-solutions/release.git
> cd release
> git checkout rye/snaps
> cd snap

At this point we can build the Kubernetes snaps:

On your s390x machine:
> make KUBE_ARCH=s390x KUBE_VERSION=v1.7.4 kubectl kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy kubeadm kubefed

The above will fetch the released Kubernetes binaries from upstream and package them as snaps. You should see a bunch of *_1.7.4_s390x.snap files in the snap directory.
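
A quick way to confirm the build, assuming you are still in the snap directory:

> ls -1 *_1.7.4_s390x.snap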

Snap with Kubernetes addons

Apart from the Kubernetes binaries, CDK also packages the Kubernetes addons as a snap. The process of building cdk-addons_1.7.4_s390x.snap is almost identical to that of the Kubernetes snaps. Clone the cdk-addons repository and make the addons for the matching Kubernetes version:

On your s390x machine:
> git clone https://github.com/juju-solutions/cdk-addons.git
> cd cdk-addons
> make KUBE_VERSION=v1.7.4 KUBE_ARCH=s390x
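
If the build succeeds you should find the addons snap in the repository root (a quick sanity check; the output location is an assumption here):

> ls -1 cdk-addons_1.7.4_s390x.snap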

Snap with etcd binaries

The last snap we will need is the snap for etcd. This time we do not have fancy Makefiles, but the build process is still very simple:

> git clone https://github.com/tvansteenburgh/etcd-snaps
> cd etcd-snaps/etcd-2.3
> snapcraft --target-arch s390x

At this point you should have etcd_2.3.8_s390x.snap. This snap, along with the addons snap and one snap for each of kubectl, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, kubeadm and kubefed, will be attached to the Kubernetes charms upon release.

Make sure you go through this great post on using Kubernetes snaps.

Build juju resources for s390x

CDK needs to deploy the binaries for CNI and Flannel. These two binaries are not packaged as snaps (they might be in the future); instead they are provided as Juju resources. As these binaries are architecture specific, we need to build them for s390x.

CNI resource

For CNI you need to clone the respective repository, check out the version you need and compile it using a Docker environment:

> git clone https://github.com/containernetworking/cni.git cni 
> cd cni
> git checkout -f v0.5.1
> docker run --rm -e "GOOS=linux" -e "GOARCH=s390x" -v ${PWD}:/cni golang /bin/bash -c "cd /cni && ./build"

Have a look at how CI creates the tarball needed with the CNI binaries: https://github.com/juju-solutions/kubernetes-jenkins/blob/master/resources/build-cni.sh
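
The gist of that script is to bundle the freshly built plugins into a tarball the charm can consume as a resource. A rough sketch of the idea from inside the cni directory, assuming the build placed the plugins under bin/ (the linked script is authoritative):

> cd bin
> tar -czf ../cni-s390x.tgz *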

Flannel resource

Support for s390x was added in version v0.8.0. Here is the usual routine of cloning and building the repository:

> git clone https://github.com/coreos/flannel.git flannel
> cd flannel 
> git checkout -f v0.8.0
> make dist/flanneld-s390x

If you look at the CI script for Flannel resource you will see that the tarball also contains the CNI binaries and the etcd client: https://github.com/juju-solutions/kubernetes-jenkins/blob/master/resources/build-flannel.sh

Building etcd, which provides the etcd client:

> git clone https://github.com/coreos/etcd.git etcd 
> cd etcd
> git checkout -f v2.3.7
> docker run --rm -e "GOOS=linux" -e "GOARCH=s390x" -v ${PWD}:/etcd golang /bin/bash -c "cd /etcd && ./build"
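
As with CNI, the CI script assembles flanneld, the etcd client and the CNI plugins into a single tarball for the charm. A rough sketch, assuming flanneld ended up in flannel/dist/ and etcdctl in etcd/bin/ (the linked build-flannel.sh is authoritative):

> cp flannel/dist/flanneld-s390x flanneld
> cp etcd/bin/etcdctl .
> tar -czf flannel-s390x.tar.gz flanneld etcdctl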

Update charms

As we already mentioned, charms are written in Python, so they run on any platform/architecture. We only need to make sure they are fed the right architecture-specific binaries to deploy and manage.

Kubernetes charms have a configuration option called channel. Channel points to a fallback snap channel from which snaps are fetched. It is a fallback because snaps are also attached to the charms when releasing them, and priority is given to those attached snaps. Let me explain how this mechanism works. When releasing a Kubernetes charm you need to provide a resource file for each of the snaps the charm will deploy. If you upload a zero-sized file (.snap), the charm treats it as a dummy and will try to snap-install the respective snap from the official snap store. Grabbing the snaps from the official store works towards a single multi-arch charm, since architecture-specific builds are always available there.
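
For example, a zero-sized dummy resource combined with the channel configuration makes the charm install kubectl from the store (the application name and channel value here are only illustrations; the actual charm attach commands appear further down):

> touch kubectl.snap
> juju config kubernetes-worker channel=1.7/stable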

For the non-snapped binaries we built above (cni and flannel) we need to patch the charms to make sure those binaries are available as Juju resources. In this pull request we add support for multi-arch non-snapped resources, adding an additional Juju resource for cni built for the s390x arch.

# In the metadata.yaml file of the kubernetes-worker charm
cni-s390x:
  type: file
  filename: cni.tgz
  description: CNI plugins for s390x

The charm concatenates "cni-" and the architecture of the system to form the name of the right cni resource: on amd64 the resource used is cni-amd64, while on s390x it is cni-s390x.
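
On an s390x machine this is roughly what the charm ends up computing (the exact lookup lives in the charm code; this one-liner just mirrors the idea):

> echo "cni-$(dpkg --print-architecture)"
cni-s390x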

We follow the same approach for the flannel charm. Have a look at the respective pull request.

Build and release

We now need to build the charms, push them to the store, attach the resources and release them. Let's trace these steps for kubernetes-worker:

> cd 
> charm build
> cd 
> charm push . cs:~kos.tsakalozos/kubernetes-worker-s390x

You will get a revision number for your charm. In my case it was 0. Let's list the resources we have for that charm:

> charm list-resources cs:~kos.tsakalozos/kubernetes-worker-s390x-0

And attach the resources to the uploaded charm:

> cd 
> charm attach ~kos.tsakalozos/kubernetes-worker-s390x-0 kube-proxy=./kube-proxy.snap
> charm attach ~kos.tsakalozos/kubernetes-worker-s390x-0 kubectl=./kubectl.snap
> charm attach ~kos.tsakalozos/kubernetes-worker-s390x-0 kubelet=./kubelet.snap
> charm attach ~kos.tsakalozos/kubernetes-worker-s390x-0 cni-s390x=./cni-s390x.tgz
> charm attach ~kos.tsakalozos/kubernetes-worker-s390x-0 cni-amd64=./cni-amd64.tgz

Notice how we have to attach one resource per architecture for the non-snapped cni resource. You could provide a valid amd64 build for cni, but since we are not building a multi-arch charm it would not matter, as it will never be used on the targeted s390x platform.

Now let’s release and grant read visibility to everyone:

> charm release cs:~kos.tsakalozos/kubernetes-worker-s390x-0 --channel edge -r cni-s390x-0 -r cni-amd64-0 -r kube-proxy-0 -r kubectl-0 -r kubelet-0
> charm grant cs:~kos.tsakalozos/kubernetes-worker-s390x-0 everyone
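
Assuming you have an s390x model at hand, a quick smoke test is to deploy the published charm straight from the edge channel (it still needs its usual relations to become fully functional):

> juju deploy cs:~kos.tsakalozos/kubernetes-worker-s390x --channel edge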

We omit building and releasing the rest of the charms for brevity.

Summing up

Things have lined up really nicely for CDK to support multiple hardware architectures. The architecture-specific binaries are well contained in Juju resources, and most of them are now shipped as snaps. Building each binary is probably the toughest part of the process outlined above. Each binary has its own dependencies and build process, but the scripts linked here reduce the complexity a lot.

In the future we plan to support other architectures. We should end up with a single bundle that deploys on any architecture via a simple juju deploy canonical-kubernetes. You can follow our progress on our Trello board. Do not hesitate to reach out to suggest improvements and tell us which architecture you think we should target next.
