Automating updates deployment in Docker and Kubernetes with Python
Introduction
Docker and Kubernetes are two key technologies in the field of containerization and container orchestration, revolutionizing how applications are developed, distributed, and managed.
Docker
Docker is an open-source platform that provides a solution for creating, distributing, and running applications in containers. Containers are lightweight and isolated environments that contain everything needed to run an application, including code, libraries, and dependencies. Docker simplifies the creation of consistent environments across development, testing, and production, eliminating issues related to configuration differences between systems.
With Docker, you can package an application and its dependencies into a single container, ensuring consistent execution across any environment with Docker installed. All these features have made Docker a fundamental tool for developers and system administrators.
Kubernetes
Kubernetes is an open-source system for automating and managing containers in production environments. It provides features for orchestrating containers, managing load balancing, automatically scaling applications, deploying updates, and handling application state.
The primary goal of Kubernetes is to simplify and automate the management of containerized applications on a large scale. With Kubernetes, you can declare the desired state of an application, and the system automatically ensures that the actual state matches the declared one. This allows efficient management of complex applications distributed across machine clusters.
In summary, Docker provides the technology for creating and managing containers, while Kubernetes offers tools for orchestrating and automating the deployment of these containers on a large scale. Together, they provide a powerful and flexible environment for developing and managing distributed applications.
Deployment
In summary, deployment with Docker is carried out in these steps:
Dockerfile Creation: Create a file called Dockerfile containing instructions for building a Docker image. This file defines the runtime environment for the application.
Image Building: Use the docker build command to build a Docker image based on the Dockerfile. The image contains the application and all its dependencies.
Image Push (Optional): If the image needs to be distributed to a remote container registry (such as Docker Hub), you can push it using the docker push command.
Run the Container: On each host where you want to run the application, use the docker run command to start a container based on the created image.
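The same flow can also be driven programmatically. Here is a minimal sketch using the Docker SDK for Python, which we will use later in this article; the image name, registry URL, and port mapping are placeholder values to adapt to your own project:
import docker

client = docker.from_env()
image_ref = "registry.example.com/mynamespace/myapp:v1.0.0"  # placeholder registry/name/tag

# docker build: build the image from the Dockerfile in the current directory
client.images.build(path="./", tag=image_ref)

# docker push: upload the image to the registry (assumes you are already logged in)
client.images.push("registry.example.com/mynamespace/myapp", tag="v1.0.0")

# docker run: start a container from the image in the background
client.containers.run(image_ref, name="myapp", detach=True, ports={"3000/tcp": 3000})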
To deploy an application in Kubernetes instead, the steps are:
Manifest Definition: Create YAML files called manifests that describe the desired configuration of the application, including services, deployments, volumes, etc.
Manifest Application: Use the kubectl apply command to apply the manifests to a Kubernetes cluster. This instructs Kubernetes on how to create and manage the resources described in the manifests.
State Monitoring: Use the kubectl get command or other monitoring tools to check the status of resources within the cluster. For example, monitor the deployment to ensure the desired number of replicas is running.
Scaling and Update Management: Use the kubectl scale or kubectl set image commands to scale the application or update the image version without downtime.
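Steps like scaling can also be automated with the Kubernetes Python Client used later in this article. As a hedged sketch, where the deployment name, namespace, and kubeconfig path are placeholders, the equivalent of kubectl scale might look like this:
from kubernetes import client, config

# Load credentials from a kubeconfig file (placeholder path)
config.load_kube_config(config_file="kube-config.yaml")

api = client.AppsV1Api()
# Rough equivalent of `kubectl scale deployment mynode --replicas=3`
api.patch_namespaced_deployment(
    name="mynode",        # placeholder deployment name
    namespace="default",  # placeholder namespace
    body={"spec": {"replicas": 3}},
)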
In both scenarios, the approaches diverge significantly. Docker is geared towards the creation and execution of individual containers, whereas Kubernetes excels in orchestrating and managing containers on a large scale, providing advanced features for resource management.
Update
The update process for containers can be managed differently between Docker and Kubernetes:
Updating Containers with Docker:
Image Update: Firstly, create a new version of the Docker image with the desired code modifications.
Stop and Remove the Existing Container: Stop and remove the running container based on the previous version of the image using the docker stop and docker rm commands.
Pull and Run the New Image: Download the new version of the Docker image using the docker pull command, then start a new container based on this image using the docker run command.
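As a rough sketch of the first part of this procedure with the Docker SDK for Python (the image name, tag, and registry URL are placeholders; the full replace-and-run step is shown in detail in Step 3a below):
import docker

client = docker.from_env()

# docker pull: fetch the new image version from the registry
client.images.pull("registry.example.com/mynamespace/mynode", tag="v1.0.1")

# docker stop / docker rm: remove any container running the previous version
for container in client.containers.list(filters={"name": "mynode"}):
    container.stop()
    container.remove()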
Updating Containers with Kubernetes:
Image Update: Similarly, create a new version of the Docker image, properly tagged, and upload it to a container registry such as Docker Hub, GitLab Container Registry, Google Container Registry, or others.
Manifest Modification: Update the manifest (usually a YAML file) describing the application's deployment. The modification typically involves the image field to specify the new version of the image.
Apply the Updated Manifest: Use the kubectl apply command to apply the manifest modification to the Kubernetes cluster. Kubernetes handles the update process without downtime.
Monitoring the Update: Utilize the kubectl rollout status command to monitor the update status and ensure all replicas of the new version are in the desired state.
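One way to reproduce kubectl rollout status with the Kubernetes Python Client is to poll the deployment status until all replicas are updated and available. A minimal sketch, assuming a placeholder deployment name, namespace, and kubeconfig path:
import time
from kubernetes import client, config

config.load_kube_config(config_file="kube-config.yaml")  # placeholder kubeconfig
api = client.AppsV1Api()

def wait_for_rollout(name="mynode", namespace="default", timeout=300):
    deadline = time.time() + timeout
    while time.time() < deadline:
        dep = api.read_namespaced_deployment_status(name, namespace)
        desired = dep.spec.replicas or 0
        updated = dep.status.updated_replicas or 0
        available = dep.status.available_replicas or 0
        if updated == desired and available == desired:
            return True  # rollout complete
        time.sleep(5)
    return False  # timed out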
Kubernetes manages container updates in a more advanced way compared to Docker, allowing the specification of rollout strategies, automatic rollback in case of errors, and the ability to update an application without service interruptions using techniques such as rolling updates.
In summary, Docker requires a more manual approach where containers need to be stopped and replaced, while Kubernetes automates the entire update process, providing greater control and flexibility.
How can we automate using Python?
As an example, let's consider a container based on Node.js and imagine we want to automatically publish the updated service, after a change to the source code, to either Docker or Kubernetes.
Prerequisites
Before proceeding, make sure Python is installed (see https://www.python.org/downloads/ for instructions). To interact with Docker, we will use the Docker SDK for Python package (see https://docker-py.readthedocs.io/en/stable). For Kubernetes, we will employ the Kubernetes Python Client package (https://github.com/kubernetes-client/python). The Kubernetes example also relies on the ruamel.yaml package to read and modify the deployment manifest.
Step 1: Update the Image
The first step is to build a new Docker image of the updated Node.js service. Make sure to update the version in the package.json file, then run the following Python code:
import json
import docker

# Retrieve the version from package.json
with open('package.json') as file:
    content = file.read()
package_json = json.loads(content)
version = package_json['version']

# Image info
image_name = "mynode"
image_tag = f"v{version}"
container_registry = "registry.gitlab.com/mynamespace/containers/mynode"

def createDockerImage():
    try:
        # Connect to the Docker client
        client = docker.from_env()
        # Build the image
        print(f"Building image {image_name}:{image_tag} ...\r", end="")
        dockerfile_path = "./"
        image, _ = client.images.build(
            path=dockerfile_path,
            tag=f"{image_name}:{image_tag}",
            forcerm=True,
        )
        print(f"New image {image_name}:{image_tag} built")
        if image:
            return True
        return False
    except docker.errors.APIError as err:
        print(err.explanation)
        return False
In this script, dedicated to creating a Docker image for a Node.js-based service, the initial lines extract the version from the package.json file and set variables that will be used throughout the example, including the name and tag of the image, and the URL of the container registry.
The createDockerImage() function, after establishing a connection with the Docker client, builds the container image by calling client.images.build(). It takes as arguments the path to the Dockerfile containing all the instructions to create the container, and the name of the image to be created in the format <image_name>:<image_tag>. Specifying the tag is crucial in the Docker environment; if omitted, the system will automatically assign the tag "latest."
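If you want to see what the build is doing, client.images.build() also returns the build log as a generator of decoded JSON chunks. A possible variant of the call above (reusing client, image_name, and image_tag from the script) that prints the log might look like this:
image, build_logs = client.images.build(
    path="./",
    tag=f"{image_name}:{image_tag}",
    forcerm=True,
)
# Each chunk is a dict; output lines are carried in the 'stream' key
for chunk in build_logs:
    if 'stream' in chunk:
        print(chunk['stream'], end="")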
Step 2: Push to the Container Registry
When we need to publish our container to a remote Docker or Kubernetes service, we must first "park" our image in a container registry that those services can access in order to download the image and subsequently create the containers.
def pushImageToRegistry():
    try:
        # Connect to the Docker client
        client = docker.from_env()
        # Tag the image with the container registry url
        if client.api.tag(
            f"{image_name}:{image_tag}",
            f"{container_registry}/{image_name}",
            image_tag,
            force=True,
        ) == False:
            return False
        # Delete the untagged image
        client.images.remove(f"{image_name}:{image_tag}")
        # Authenticate with the container registry
        client.login(
            username="myregistryusername",
            password="myregistrypassword",
            registry=container_registry,
        )
        # Push and show progress
        res = client.api.push(
            repository=f"{container_registry}/{image_name}",
            tag=image_tag,
            stream=True,
            decode=True,
        )
        for line in res:
            if line.get('status') == "Pushing" and 'progress' in line:
                print(f"{line['progress']}\r", end="")
        print()
        return True
    except docker.errors.APIError as err:
        print(err.explanation)
        return False
    except TimeoutError as err:
        print(err)
        return False
This function manages the process by initially applying the appropriate tag to the image, concatenating the container registry path, the image name, and its tag. It then calls client.api.push(). As emphasized earlier, the image tag is crucial. If you use a service other than GitLab, refer to the documentation of the chosen service to ensure the correct tag format.
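One caveat worth noting: when streaming the push, failures (for example, a rejected login or a missing repository) are reported inside the streamed lines rather than raised as exceptions. A possible drop-in replacement for the loop inside pushImageToRegistry() that also surfaces them:
for line in res:
    # The registry reports failures through an 'error' key in the stream
    if 'error' in line:
        print(f"Push failed: {line['error']}")
        return False
    if line.get('status') == "Pushing" and 'progress' in line:
        print(f"{line['progress']}\r", end="")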
Step 3a: Deploy in Docker
Now that we have our image, we can finally create the new container in Docker through a function similar to the following:
def updateDockerContainer():
    try:
        # Connect to the remote Docker client
        client = docker.DockerClient(base_url='docker_server_url')
        # Stop and remove the existing container
        containers = client.containers.list(filters={'name': image_name})
        if len(containers) == 1:
            containers[0].stop()
            containers[0].remove()
        # Create and run the container (fill in your own mappings)
        ports = {…}
        volumes = […]
        env = […]
        client.containers.run(
            image=f"{container_registry}/{image_name}:{image_tag}",
            detach=True,
            environment=env,
            name=f"{image_name}",
            ports=ports,
            volumes=volumes,
        )
        return True
    except docker.errors.APIError as err:
        print(err.explanation)
        return False
    except TimeoutError as err:
        print(err)
        return False
This function, which needs to be adapted and customized to meet specific requirements, is tasked with the following:
Connect to the Docker client
Unlike the previous functions, which connected to the local Docker daemon, here we typically need to reach a remote Docker server, so we create the client through an instance of the DockerClient class. The constructor's arguments depend on the connection configuration, which can be found in the documentation provided by the Docker service provider and on the page https://docker-py.readthedocs.io/en/stable/client.html in the Docker SDK documentation.
Stop and remove the running container
Before creating the new container, it is crucial to stop and remove the previous one (if it exists). Checking if the container already exists is important to prevent the stop() and remove() functions from generating an exception.
containers = client.containers.list(filters={'name': image_name})
if len(containers) == 1:
    containers[0].stop()
    containers[0].remove()
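An equivalent approach, if you prefer looking the container up by its exact name, is to use client.containers.get() and catch the NotFound exception. A brief sketch, reusing client and image_name from the function above:
import docker

try:
    old = client.containers.get(image_name)  # exact container name
    old.stop()
    old.remove()
except docker.errors.NotFound:
    pass  # no previous container to replace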
Start the new container
To start the new container automatically, we must first define variables specifying the ports to be mapped on the host, any volumes, environment variables, etc., which are then passed as arguments to the client.containers.run() function. In addition to these variables, for which I refer you to the documentation at https://docker-py.readthedocs.io/en/stable/containers.html in the SDK, it is important to specify the image argument with the full name and tag assigned to the image previously, name with the container's name, and detach. The latter is crucial for the container to run in the background.
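As a hedged illustration of what these variables might look like for a Node.js service (the values here are purely hypothetical and must be adapted to your own setup), the Docker SDK expects the following formats:
# Map container port 3000/tcp to port 3000 on the host
ports = {'3000/tcp': 3000}
# Mount a host directory inside the container
volumes = {'/srv/mynode/data': {'bind': '/usr/src/app/data', 'mode': 'rw'}}
# Environment variables as a list of KEY=value strings
env = ['NODE_ENV=production']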
Step 3b: Deploy in Kubernetes
Surprisingly, implementing an update in Kubernetes is much simpler, because all we need to do is update the manifest. The updateKubernetesManifest() function below first reads the deployment.yaml file, which defines the deployment manifest, modifies the version and image tag, and then updates the Kubernetes state using the api.patch_namespaced_deployment() function.
from kubernetes import client, config
import ruamel.yaml

def updateKubernetesManifest():
    try:
        # Update the deployment.yaml
        with open('deployment.yaml') as file:
            content = file.read()
        yaml = ruamel.yaml.YAML()
        deployment_yaml = yaml.load(content)
        deployment_yaml['metadata']['labels']['version'] = version
        deployment_yaml['spec']['template']['spec']['containers'][0]['image'] = f"{container_registry}/{image_name}:{image_tag}"
        # Load the cluster configuration
        config.load_kube_config(config_file='kube-config.yaml')
        # Apply the deployment
        api = client.AppsV1Api()
        api.patch_namespaced_deployment(
            name=deployment_yaml['metadata']['name'],
            namespace=deployment_yaml['metadata']['namespace'],
            body=deployment_yaml,
        )
        return True
    except Exception as err:
        print(err)
        return False
Thanks to its declarative nature, Kubernetes will autonomously handle all necessary operations such as downloading the new image from the container registry, deleting old containers, and creating new ones. Unlike Docker, there is no need to worry about defining volumes, ports, environment variables, or anything else here, because everything has already been defined in the manifest.
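Putting it all together, a small driver script could chain the functions defined above so that a single command rebuilds the image, pushes it, and rolls out the update. A minimal sketch, choosing Kubernetes as the target (swap in updateDockerContainer() for the Docker case):
if __name__ == "__main__":
    # Build, push, and deploy in sequence, stopping at the first failure
    if not createDockerImage():
        raise SystemExit("Image build failed")
    if not pushImageToRegistry():
        raise SystemExit("Push to the container registry failed")
    if not updateKubernetesManifest():
        raise SystemExit("Kubernetes deployment update failed")
    print(f"Version {version} deployed")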