Kubernetes — Article 1 — Publishing a private container using GitHub Container Registry

Hi, this is the first post on this new space, where I'm going to talk about different things from the software development world, starting with one of my favorite topics: Kubernetes.

I've been working actively with Kubernetes this year, and learning from my mistakes and discoveries, I'm going to write a few articles that may help you in this fantastic Kubernetes world.

Well, I'm mostly using Kubernetes to run the infrastructure of many microservices plus two web apps, all of them written in JavaScript, or being more specific, in Node.js and Vue.js.

So, I have multiple repositories on GitHub, each with its own NPM package (using the GitHub NPM package registry), and I have to run a single Kubernetes pod that uses a container for each one of them.

Well, if you are familiar with Kubernetes, Docker, and all this container world, you know I could just use an official NodeJS or NGINX container image, copy my files, and that's it. But often we need to install some extra libraries, customize config files, or even set specific permissions for certain directories or files.

There are different approaches to accomplish that. Off the top of my head, we could use the official Docker images with a start-up script that copies the files and sets up everything we need, but those actions can take minutes and delay the deployment. Or even worse, one of those steps can fail and the pod goes into an infinite error loop.

So for me, the ideal approach would be creating private container images from the official images, installing everything, setting up all permissions, compiling the project files, and then releasing a container image with everything ready to go.

Most companies use Docker Hub to publish their private container images, and it's a great service. But in my case, I already have a paid GitHub subscription, where I keep all my repositories and release my private NPM packages.

Here's where GitHub Container Registry comes in. We can also use GitHub to build and host our private container images, which in my case means having everything in one place (repository + NPM package + container image). How great is that?

I also recommend enabling the "Improved Container Support" feature. It's currently in beta, but it works great. Check the link for more info.

Once your container image is built and pushed to GitHub, you'll see it inside the repository, like this:

(Figure 1) NPM and container image linked to the repository

Okay, now it's time to have some action!

We first need to build our Docker container image. Using one of my microservices as an example: I need NodeJS with curl and git installed. Then I'll clone my private repository into this container, compile the project, and copy the startup, liveness, and readiness scripts; the last two are used by Kubernetes.

To make that happen, I've created a docker directory in my repository containing a file named Dockerfile along with the startup.sh, liveness.sh, and readiness.sh scripts.

My Dockerfile is something like this:

FROM node:15-alpine

# Path where we store our app
WORKDIR /home/node/

# Install curl and git (--no-cache keeps the image small)
RUN apk add --no-cache curl git

# Copy the startup, readiness and liveness scripts
COPY startup.sh /home/node/startup.sh
COPY liveness.sh /home/node/liveness.sh
COPY readiness.sh /home/node/readiness.sh

# Clone the repository
RUN git clone --branch master https://username:token@github.com/namespace/repository.git /home/node/app

# Install packages and compile
RUN cd /home/node/app && npm i && npm run build

# Linux user responsible for running the node app
USER node

# Execute the startup script on container start
ENTRYPOINT ["sh", "/home/node/startup.sh"]
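One caveat about the clone step: a token embedded in the URL is baked into the image layer history, so the image itself must stay private. A hypothetical variant passes the token as a build argument instead of hard-coding it (still visible via docker history, but at least out of the Dockerfile):

```dockerfile
# Hypothetical variant of the clone step: the token comes in as a build argument
# (docker build --build-arg GITHUB_TOKEN=... docker/) instead of being hard-coded.
ARG GITHUB_TOKEN
RUN git clone --branch master https://username:${GITHUB_TOKEN}@github.com/namespace/repository.git /home/node/app
```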

My startup.sh script has a few additional commands that I need to run before starting my main node script, but it's something like this:

#!/bin/sh

# Run the extra commands that I need
...

# Start the app
cd /home/node/app && node lib/index.js

That's it! Our script to build our docker container image is ready.
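For reference, the liveness.sh and readiness.sh scripts copied in the Dockerfile get wired into the pod spec as exec probes; Kubernetes runs each script and treats a non-zero exit code as a failed probe. A sketch of that part of the deployment (the timing values are my assumptions, not from this setup):

```yaml
livenessProbe:
  exec:
    command: ["sh", "/home/node/liveness.sh"]
  initialDelaySeconds: 15
  periodSeconds: 20
readinessProbe:
  exec:
    command: ["sh", "/home/node/readiness.sh"]
  initialDelaySeconds: 5
  periodSeconds: 10
```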

As a developer, I like to automate everything possible. For my projects, once code is merged into the master branch, I want GitHub to trigger an action that releases a new version of my NPM package, builds the Docker container image, and then deploys it to the production Kubernetes cluster. All of that can be automated using GitHub Actions, but I won't dig into the whole pipeline this time, as it's not our focus; I'll only explain how to add a step to build our container and how to use it in our Kubernetes cluster.
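For context, these steps live in a workflow file that triggers on pushes to master. A minimal skeleton (the file path and job name are my own choices, not from this project):

```yaml
# .github/workflows/deploy.yml (hypothetical path)
name: Build and deploy
on:
  push:
    branches: [master]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # the build and deploy steps discussed in this article go here
```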

So, in my GitHub actions script I've added this new step:

- name: Publish docker image
  uses: elgohr/Publish-Docker-Github-Action@master
  with:
    name: my-company/my-repo/my-container-name
    username: ${{ secrets.DeploymentUser }}
    password: ${{ secrets.DeploymentPassword }}
    registry: ghcr.io
    tags: "latest"
    workdir: docker

Using the name declaration you define the image name. GitHub requires it to start with the namespace (company or username, depending on your GitHub account/repository), then the repository name, and then the image name.

I usually set the company name, then the repository name, and then the repository name again as the image name. The tags field defines which tag is applied to this image build; I always use latest because it's going to be the latest image that has been built and the one used by my Kubernetes cluster.
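Relying on the latest tag works here because Kubernetes defaults imagePullPolicy to Always for latest-tagged images, so a restarted pod re-pulls the newest build. It's still worth stating explicitly, so nothing breaks if you ever pin a version tag (a sketch; names are placeholders):

```yaml
containers:
- name: my-name
  image: ghcr.io/my-company/my-repo/my-container-name:latest
  # Always re-pull so a rollout picks up the newest "latest" build
  imagePullPolicy: Always
```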

Username must be a user with permission to push this container image, and the password should be a GitHub token generated for that user, granting access to push container images.

elgohr/Publish-Docker-Github-Action@master is a GitHub Action that builds and pushes Docker container images.

workdir defines where the Dockerfile used to build our image lives; in my case, the docker directory I created.

Okay, our step to build our container image is ready; now I need another step to deploy this new image to my Kubernetes cluster.

Using the ameydev/gke-kubectl-action@master GitHub Action, I can simply restart a deployment on a Google Cloud (GKE) Kubernetes cluster, and that deployment will spin up a new pod using the latest container image. If you are running your cluster on AWS or Azure, you will need to find another GitHub Action. So check out our step:

- name: Deploy to Google Cloud Kubernetes (GKE)
  uses: ameydev/gke-kubectl-action@master
  env:
    PROJECT_ID: ${{ secrets.PROJECT_ID }}
    APPLICATION_CREDENTIALS: ${{ secrets.GOOGLE_APPLICATION_CREDENTIALS }}
    CLUSTER_NAME: ${{ secrets.GKE_CLUSTER_NAME }}
    ZONE_NAME: us-central1-c
  with:
    args: rollout restart deployment my-deployment-name -n=my-namespace

This step requires our GKE project ID, the application credentials, the cluster name, and the zone name; then you can add the Kubernetes command to be executed in the args field. Our command is rollout restart deployment my-deployment-name -n=my-namespace.

Okay, all the steps to build and publish our private container image are ready, but now it's time to configure our deployments to use these images. As they are private container images we need to find a way to make our deployments able to find and pull those images.

This Kubernetes doc explains how to pull a private container image. We basically need to create a Kubernetes secret whose value is the base64 encoding of a JSON document in this format:

{"auths":{"your.private.registry.example.com":{"username":"janedoe","password":"xxxxxxxxxxx","email":"jdoe@example.com","auth":"c3R...zE2"}}}

In our case, "your.private.registry.example.com" will be ghcr.io, the username will be the GitHub username, and the password will be our GitHub token. We don't need the auth field, so we can get rid of it. It becomes:

{"auths":{"ghcr.io":{"username":"janedoe","password":"xxxxxxxxxxx","email":"jdoe@example.com"}}}
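To get the base64 value for the secret, the JSON above can be encoded like this (the credentials are placeholders):

```shell
# Encode the docker config JSON for the Secret's .dockerconfigjson field.
# printf avoids a trailing newline; base64 -w 0 (GNU coreutils) disables line wrapping.
AUTH_JSON='{"auths":{"ghcr.io":{"username":"janedoe","password":"xxxxxxxxxxx","email":"jdoe@example.com"}}}'
ENCODED=$(printf '%s' "$AUTH_JSON" | base64 -w 0)
echo "$ENCODED"
```

The resulting string is what goes into the .dockerconfigjson field under data in the Secret.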

Putting this to a YAML config file we have:

apiVersion: v1
kind: Secret
metadata:
  name: my-secret-name
  namespace: my-namespace
data:
  .dockerconfigjson: mybase64value
type: kubernetes.io/dockerconfigjson

Or if you prefer to create this secret using the command line:

kubectl create secret docker-registry my-secret-name --docker-server=ghcr.io --docker-username=username --docker-password=password -n=my-namespace

You have your secret, now it's time to add this secret to your deployment. In your deployment file, you will have something like:

spec:
  containers:
  - name: my-name
    image: ghcr.io/namespace/repository-name/image-name:latest
    ...

And now you have to add your secret to be used when Kubernetes pulls the private container image:

spec:
  containers:
  - name: my-name
    image: ghcr.io/namespace/repository-name/image-name:latest
    ...
  imagePullSecrets:
  - name: my-secret-name

Aaaaaaaaand that's it! Your deployment is ready to pull your private container images from GitHub, and your repository is also ready to publish images every time you push code.

That's my first post. I want to write more about Kubernetes, Docker, Linux, PHP, NodeJS, and Magento 2, the technologies I use every day. All feedback is welcome.

See you in the next post and stay safe!
