Caleb Woodbine's blog
Automating Kubernetes Deployments with GitLab CI
Background
Over the past few years, I've been managing and building many sites. During this time, I have been slowly streamlining the way that I deploy software.
I've been gathering a collection of software that I use together in a cohesive manner.
GitLab
GitLab has been my CI of choice for a number of years now. I find that the system has very good configuration primitives that are highly reusable. The ability to choose to run your CI jobs anywhere, whilst not being dependent on a single site is also quite valuable to me.
Containers
Containers are the best. For me, they provide consistency and portability so that I can run my things anywhere.
Kaniko
Kaniko is a container building tool maintained by Google. It is daemonless and requires no special privileges to perform builds. This is perfect for my builds.
Bonus! It even runs on arm64.
Kubernetes
It is important for my operations that I manage all of my things generally the same way. Kubernetes provides:
- a standard way of managing things
- the same generic types of resources for web deployments
- scaling
  - if CPU or memory usage becomes too high, more replicas of Pods (or Nodes, if needed) are created (see the sketch after this list)
- reliability
  - if a server or Pod dies, another will be scheduled to take its place
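The Pod side of that scaling is usually handled by a HorizontalPodAutoscaler. Here's a minimal sketch of one; the name and thresholds are illustrative and not part of the pipeline described later. Node scaling is typically left to a cluster autoscaler provided by the hosting platform.
# Illustrative HorizontalPodAutoscaler: adds Pod replicas when average CPU
# utilisation across the Deployment exceeds 80%.
apiVersion: autoscaling/v2beta2   # autoscaling/v2 on newer clusters
kind: HorizontalPodAutoscaler
metadata:
  name: my-site
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-site
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80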
go-http-server
While working on projects at Safe Surfer, there was a need for a web server that could:
- serve sites well,
- run rootless,
- be minimal (only contain the binary),
- run at scale in Kubernetes,
- handle Vue.js history mode rewriting, and
- provide process metrics.
Before this, a rootless nginx container that didn't meet all the criteria was used. So I wrote go-http-server (gh) and I've been using it with pretty much anything and everything ever since.
It also supports arm64!
Building the pipeline
When I build things, it's important to me that I can both run them anywhere and bring them up fast.
Pipeline stages
The pipeline is composed of the following stages.
Pages
This stage is used for pushing to GitLab Pages. I find that GitLab Pages is a well-integrated and easy-to-use site hosting service, if you're used to a git+YAML based CI system.
The pages stage in the pipeline uses a small Hugo container image to build the site and place it in a folder named public, which is picked up by GitLab. The stage only runs if the KUBECONFIG_FROM_ENV environment variable is not set.
pages:
  image: klakegg/hugo:0.81.0-ext-alpine-ci
  stage: pages
  variables:
    HUGO_ENV: production
    HUGO_DESTINATION: public
  script:
    - hugo -b "$CI_PAGES_URL"
  artifacts:
    paths:
      - public
  only:
    - master
  except:
    variables:
      - $KUBECONFIG_FROM_ENV
This stage would be replaced with something else if the site were not built with Hugo.
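For example, here's a hedged sketch of what the job might look like for a Node-based static site; the image, build commands, and output directory are assumptions, not something from this pipeline.
# Illustrative only: build a Node-based static site and publish it to Pages.
pages:
  image: node:16-alpine
  stage: pages
  script:
    - npm ci
    - npm run build
    - mv dist public   # GitLab Pages expects the output in a folder named public
  artifacts:
    paths:
      - public
  only:
    - master
  except:
    variables:
      - $KUBECONFIG_FROM_ENV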
Build Container
There are several snippets that are used for running a container build in the pipeline, one of which is
.kaniko-build-taggable-name: &kaniko-build-taggable-name |
  export APP_VERSION=$(date --date="${CI_COMMIT_TIMESTAMP//[T+]/ }" '+%Y.%m.%d.%H%M')
  export KANIKO_RELEASE_DESTINATIONS="--destination $CI_REGISTRY_IMAGE:latest-$ARCH --destination $CI_REGISTRY_IMAGE:$APP_VERSION-$ARCH"
  export PROJECT_PATH=$(echo $CI_PROJECT_PATH | tr '[:upper:]' '[:lower:]')
It is important to note that in the KANIKO_RELEASE_DESTINATIONS variable, the container images are given destinations with their architecture appended for later use. Then, in the next snippet,
.kaniko-build: &kaniko-build |
  /kaniko/executor \
    --snapshotMode=redo \
    --context $CI_PROJECT_DIR \
    --dockerfile $CI_PROJECT_DIR/Dockerfile \
    --build-arg ARCH=$ARCH \
    $KANIKO_DESTINATIONS \
    $KANIKO_RELEASE_DESTINATIONS
the command to run Kaniko is generated to push to the correct destinations and build with the correct args.
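As an illustration only, with ARCH=amd64 the expanded command looks roughly like the following; the registry path and timestamp are made up, and $KANIKO_DESTINATIONS comes from the pre-script, which isn't shown here.
# Illustrative expansion of the snippets above for an amd64 build.
/kaniko/executor \
  --snapshotMode=redo \
  --context $CI_PROJECT_DIR \
  --dockerfile $CI_PROJECT_DIR/Dockerfile \
  --build-arg ARCH=amd64 \
  --destination registry.gitlab.com/someuser/somesite:latest-amd64 \
  --destination registry.gitlab.com/someuser/somesite:2021.05.01.1030-amd64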
The other snippet is an important part of configuring each container build
.build-container: &build-container
  stage: build-container
  variables:
    GIT_SUBMODULE_STRATEGY: recursive
  image:
    name: gcr.io/kaniko-project/executor:debug-v1.3.0
    entrypoint: [""]
  retry: 2
  only:
    refs:
      - master
    variables:
      - $KUBECONFIG_FROM_ENV
  before_script:
    - *kaniko-build-pre-script
    - *kaniko-build-taggable-name
  script:
    - *kaniko-build
It is responsible for cloning the submodules and running the previous snippets in the before_script and script blocks, and it only allows container builds if the environment variable KUBECONFIG_FROM_ENV is declared.
These snippets are used as follows
build-container-amd64:
  <<: *build-container
  variables:
    ARCH: "amd64"

build-container-arm64:
  <<: *build-container
  variables:
    ARCH: "arm64"
Implementing it this way makes it easy to add support for more architectures.
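For example, adding 32-bit Arm support would only need one more small job reusing the same anchor, assuming the Dockerfile and base images support that architecture, plus the new architecture added to the ARCHES variable described in the next stage.
# Hedged sketch: another architecture is just another job reusing the anchor.
build-container-arm:
  <<: *build-container
  variables:
    ARCH: "arm"
The manifest annotation in the post-build stage would likely also want a --variant set for it (v7 for 32-bit Arm), in the same way arm64 gets v8 below.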
Post Build
Given that the container build is multi-arch and the images are built with destinations that include their architecture, the next thing to do is to create the tag that joins them all together.
Here, some important variables are declared.
variables:
  GIT_SUBMODULE_STRATEGY: recursive
  ARCHES: amd64 arm64
  ARCH_DEFAULT: amd64
  REGISTRIES: $CI_REGISTRY
These variables allow multiple architectures and container registries to be added.
In this stage, there's iteration over each tag, registry, and architecture to produce a local manifest, which is then pushed.
tag-container-image:
  stage: post-build
  image: docker:20.10.6
  before_script:
    - *kaniko-build-pre-script
    - *kaniko-build-taggable-name
  script:
    - |
      set -x
      cp -rf /kaniko/.docker $HOME/.docker
      if [[ -z "$CI_COMMIT_TAG" ]]; then
        APP_BUILD_VERSION=
      fi
      for tag in $APP_VERSION latest; do
        for reg in $REGISTRIES; do
          export IMAGE_TAG_WITH_ARCH=""
          for arch in $ARCHES; do
            IMAGE="$reg/$PROJECT_PATH:$tag-$arch"
            export IMAGE_TAG_WITH_ARCH="$IMAGE_TAG_WITH_ARCH $IMAGE"
          done
          docker manifest create $reg/$PROJECT_PATH:$tag $IMAGE_TAG_WITH_ARCH
          for arch in $ARCHES; do
            EXTRA_ARGS=""
            if [ "$arch" = "arm64" ]; then
              EXTRA_ARGS="--variant v8"
            fi
            docker manifest annotate $reg/$PROJECT_PATH:$tag $reg/$PROJECT_PATH:$tag-$arch --os linux --arch $arch $EXTRA_ARGS
          done
          docker manifest push $reg/$PROJECT_PATH:$tag
        done
      done
  only:
    refs:
      - master
    variables:
      - $KUBECONFIG_FROM_ENV
This stage works very well, but could still be classed as experimental. When I tag images, I like to make sure that latest is always the last image built from master.
What's nice is that Kaniko picks up the same auth config.json format as Docker's CLI does.
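The kaniko-build-pre-script referenced in the jobs above isn't shown in this article; a hedged guess at its shape, based on the usual GitLab + Kaniko setup and GitLab's predefined CI_REGISTRY, CI_REGISTRY_USER, and CI_REGISTRY_PASSWORD variables, would be something like the following.
# Hedged guess (not from the original pipeline): write registry credentials in
# Docker's config.json format, which both Kaniko and the docker CLI read.
.kaniko-build-pre-script: &kaniko-build-pre-script |
  mkdir -p /kaniko/.docker
  cat << EOF > /kaniko/.docker/config.json
  {
    "auths": {
      "$CI_REGISTRY": {
        "auth": "$(printf '%s:%s' "$CI_REGISTRY_USER" "$CI_REGISTRY_PASSWORD" | base64 | tr -d '\n')"
      }
    }
  }
  EOF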
Deploy
Now that the container image has been tagged and supports multiple architectures, it can be applied in the cluster.
deploy:
  stage: deploy
  image:
    name: alpine:3.12
    entrypoint: [""]
  variables:
    KUBECTL_VERSION: v1.20.2
    KUBECTL_HASH: 2583b1c9fbfc5443a722fb04cf0cc83df18e45880a2cf1f6b52d9f595c5beb88
    MANIFEST_FOLDER: $CI_PROJECT_DIR/manifests
  before_script:
    - *kaniko-build-taggable-name
    - apk add --no-cache curl gettext
    - curl -LO https://storage.googleapis.com/kubernetes-release/release/$KUBECTL_VERSION/bin/linux/amd64/kubectl
    - echo "$KUBECTL_HASH kubectl" | sha256sum -c -
    - chmod +x kubectl
    - mv kubectl /usr/local/bin/kubectl
    - mkdir -p $HOME/.kube
    - echo ${KUBECONFIG_FROM_ENV} | base64 -d > $HOME/.kube/config
    - cd $MANIFEST_FOLDER
  script:
    - |
      for MANIFEST in $MANIFEST_FOLDER/*; do
        if cat $MANIFEST | grep -q ci-sa:noapply=true; then
          continue
        fi
        envsubst < $MANIFEST | kubectl apply -f -
      done
  only:
    refs:
      - master
    variables:
      - $KUBECONFIG_FROM_ENV
In the block above, kubectl is installed and verified, and the kubeconfig is put into place. In the script block, each manifest in the manifests folder is applied, unless it contains a comment with ci-sa:noapply=true in it.
This is not a particularly sophisticated set of instructions; it does not handle reconciling of resources, like kubernetes-sigs/boskos does, and will only apply them.
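For illustration, here is a hedged example of the kind of manifest that could live in that folder; it is not taken from the repository, and the names are made up. envsubst fills in the variables exported earlier in the pipeline before kubectl applies it.
# Illustrative only; envsubst replaces the ${...} references below with values
# exported by the kaniko-build-taggable-name snippet and GitLab's CI variables.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-site
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-site
  template:
    metadata:
      labels:
        app: my-site
    spec:
      containers:
        - name: site
          image: ${CI_REGISTRY}/${PROJECT_PATH}:${APP_VERSION}
          ports:
            - containerPort: 8080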
Creating a template
Once all of this was put together and I had built several Hugo-based sites, I created a template which utilises all of the things summarised in this article.
The template is available here. Please give it a star if you find it useful! Portions of this template are found in my site, other places such as flattrack.io, and pair.sharing.io.
Alternatives
Many of these components can be swapped out for others.
| Thing I used   | Replacement     |
|----------------|-----------------|
| GitLab CI      | GitHub Actions  |
| Kaniko         | Docker + buildx |
| go-http-server | nginx           |
| Kubernetes     | Docker Swarm    |
I'm currently unaware of how to use BuildKit in rootless mode for multi-arch container CI builds; perhaps I'll swap it in for Kaniko at some point if I can get it to work. Currently, Kaniko is especially good since it doesn't require privileged mode.
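For reference, a rough sketch of what the Docker + buildx alternative from the table might look like; it is untested in this pipeline, assumes a privileged Docker-in-Docker runner (exactly what Kaniko avoids), and assumes the Docker image in use ships the buildx plugin.
# Hedged sketch: multi-arch build with buildx instead of Kaniko.
build-container-buildx:
  stage: build-container
  image: docker:20.10.6
  services:
    - docker:20.10.6-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # Register QEMU handlers so arm64 layers can be built on an amd64 runner.
    - docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
    - docker buildx create --use
    - docker buildx build --platform linux/amd64,linux/arm64 --tag "$CI_REGISTRY_IMAGE:latest" --push .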
Final notes
You too can pick up such a template and automate your pipelines for publishing and deploying! Automation has come a long way, and I'm sure there are many more things that can be automated and managed.
Hope you've enjoyed this read. Thank you for checking it out.