A Beginner’s Guide to Kubernetes

Pranav T P
5 min read · Oct 18, 2021

If you are new to container orchestration and Kubernetes, do not worry. We will take a quick look at what a container is and what Kubernetes does for us, and then set up your first Kubernetes cluster together. Let’s get started.

1- What is a Container?

A container is a standard unit of software that packages executable code together with all its dependencies, so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable software package that includes everything needed to run an application: code, runtime, system tools, system libraries, and settings.

Docker is containerization software that performs operating-system-level virtualization. It was developed by Docker, Inc., first released in 2013, and is written in the Go programming language.

Container images: Container images become containers at runtime, and in the case of Docker containers — images become containers when they run on Docker Engine. They are available for both Linux and Windows-based applications. Containerized software will always run the same, regardless of the infrastructure. Containers isolate software from its environment and ensure that it works uniformly despite differences, for instance, between development and staging.

Docker Engine: Docker container technology was launched in 2013 as the open-source Docker Engine. Docker’s technology is unique because it focuses on the requirements of developers and systems operators to separate application dependencies from infrastructure. The technology is open source and has been integrated into the offerings of all major data centre vendors and cloud providers. Many of these providers leverage Docker for their container-native IaaS offerings, and the leading open-source serverless frameworks also build on Docker container technology.

2- What does Kubernetes do?

As enterprises move their applications to microservices and the cloud, the demand for container orchestration solutions keeps growing. While there are many solutions available, some are mere re-distributions of well-established container orchestration tools, enriched with features and, sometimes, with certain limitations in flexibility. There are a number of paid and free-to-use container orchestration tools and services available, and currently the most popular of them is Kubernetes.

Kubernetes is an open-source platform that automates the deployment, scaling, and management of containerized applications. Kubernetes was originally developed by Google and is now maintained by the Cloud Native Computing Foundation.

When you deploy Kubernetes, you get a cluster. A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node. A node may be a virtual or physical machine. Each node is managed by the control plane and contains the services necessary to run Pods. Each Pod is a logical host for one or more containers. The worker node(s) host the Pods that are the components of the application workload. The control plane manages the worker nodes and the Pods in the cluster.
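To make the Pod concept concrete, here is a minimal Pod manifest — a sketch with illustrative names; only the nginx image comes from a later example in this guide — describing a single container that the Pod hosts:

```yaml
# Minimal Pod manifest (names are illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
    - name: hello
      image: nginx          # the container this Pod hosts
      ports:
        - containerPort: 80 # port the container listens on
```

In practice you rarely create bare Pods; a Deployment (shown later) creates and replaces Pods for you.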

3- Why should organizations use Kubernetes?

Kubernetes can save organizations money because it takes less manpower to manage IT. Organizations can use Kubernetes to support versatile applications, which can cut down on hardware costs and lead to more efficient architecture. It is one of several choices among new container architectures for bringing a higher level of innovation to the design of a hardware and software environment.

4- Kubernetes Architecture

A Kubernetes architecture consists of a set of worker nodes that run containerized applications. The nodes host the Pods that make up the application. The cluster is controlled through the control plane (historically called the master), which contains an API server that receives commands (as JSON or YAML) from kubectl, a CLI on the local workstation.
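As a sketch of that flow, kubectl submits a manifest like the one below to the API server; the file name and object names here are illustrative. In MicroK8s (installed below) the same step would be `sudo microk8s.kubectl apply -f deployment.yaml`.

```yaml
# deployment.yaml — a sketch of a manifest kubectl sends to the API server
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                # illustrative name
spec:
  replicas: 2              # the control plane schedules 2 Pods onto worker nodes
  selector:
    matchLabels:
      app: web
  template:                # Pod template: what each of the 2 Pods looks like
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx
```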

Installation

Since MicroK8s is packaged as a snap, snapd is required on the host to install it. The latest Ubuntu releases come with snapd already built in.

Install the latest MicroK8s with the following command:

$ sudo snap install microk8s --classic

Now you are ready to use Kubernetes on the workstation!

Enable addons

Next, enable the addons you need. The first command below lists the available addons and their status; the second enables the dashboard and dns addons.

$ sudo microk8s.status
$ sudo microk8s.enable dashboard dns

Access kubernetes

Check the deployment status with the ‘kubectl get all’ command. This should list all resources, including the dashboard and dns addons enabled earlier.

$ sudo microk8s.kubectl get all --all-namespaces

Kubernetes dashboard

Notice the Cluster-IP and port of the kubernetes-dashboard service in the listing — 10.152.183.114 in this example. The dashboard UI is accessible from the workstation through this IP and port, for example https://10.152.183.114/

Use either kubeconfig, or token to sign in.

# set token
$ token=$(sudo microk8s.kubectl -n kube-system get secret | grep default-token | cut -d " " -f1)
# print token
$ sudo microk8s.kubectl -n kube-system describe secret $token
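The first command above is a small shell pipeline: `grep` keeps the line for the default service-account secret, and `cut -d " " -f1` takes the first space-separated field, which is the secret’s name. A self-contained sketch using a hypothetical sample line of `get secret` output:

```shell
# Hypothetical sample line from `kubectl -n kube-system get secret`
line="default-token-abc12   kubernetes.io/service-account-token   3      5m"

# Same extraction as in the command above: keep the matching line,
# then take the first field before the first space
token=$(echo "$line" | grep default-token | cut -d " " -f1)
echo "$token"    # prints: default-token-abc12
```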

A successful login takes you to the dashboard home page.

Cluster Information

Use the ‘sudo microk8s.kubectl cluster-info’ command to see the URL, then navigate to it to view the dashboard.

Create deployments

Now that the basic setup is complete, you can start deploying your apps to the MicroK8s environment.

Use the following command to deploy an Nginx application:

# Creates deployment
$ sudo microk8s.kubectl create deployment nginx-node --image="nginx"

Create a service to expose the deployment:

$ sudo microk8s.kubectl expose deployment nginx-node --type=LoadBalancer --port=80
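The `expose` command generates a Service object for you. Writing it out as a manifest — a sketch reusing the same names, with the selector assumed from the `app=nginx-node` label that `create deployment` puts on the Pods — makes it clearer what gets created:

```yaml
# Roughly what `kubectl expose deployment nginx-node --type=LoadBalancer --port=80` creates
apiVersion: v1
kind: Service
metadata:
  name: nginx-node
spec:
  type: LoadBalancer
  selector:
    app: nginx-node    # matches the label on the deployment's Pods
  ports:
    - port: 80         # port the Service listens on
      targetPort: 80   # port on the nginx containers
```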

Use the ‘sudo microk8s.kubectl get all --all-namespaces’ command to see the new service.

Since MicroK8s does not provision an external load balancer by default, the service’s external IP will stay pending and it behaves like a NodePort service; the application is still reachable from the workstation at the cluster IP shown for the service, e.g. http://10.1.107.36/


Pranav T P

I'm Pranav T P, pursuing my Master's (M.Tech) in Cloud Computing at PES University, Bangalore.