Managing Docker containers with the help of Kubernetes

Adam Marton Beke
Nov 20, 2020

Before we get started, let’s discuss the fancy word…
What is Kubernetes?

Let’s start with Kubernetes…
Kubernetes or K8s for short is an open source toolkit for building a fault-tolerant, scalable platform designed to automate and centrally manage containerized applications. The platform itself can be deployed within almost any infrastructure — in the local network, server cluster, data center, any kind of cloud — public (Google Cloud, Microsoft Azure, AWS, etc.), private, hybrid, or even over the combination of these methods. It is noteworthy that Kubernetes supports the automatic placement and replication of containers over a large number of hosts.

This brings us to the second question…
What is Docker?

Docker offers a new way of looking at code. Picture a whale (or a boat) shipping containers, and each container filled with code.
When the boat docks at a “node”, it can drop off the inbound code, load up outbound code, and ship it off to another node.
To add onto that, let’s say the boat is also automated, so you don’t need to worry about driving it; all you have to do is orchestrate what goes in and out of those containers.
This is where Kubernetes ties into Docker: we can go one level deeper and, instead of orchestrating what goes in and out of the containers ourselves, get Kubernetes to automate that as well.

Docker works with containers, Kubernetes works with the container groups.
It’s a lot deeper than that, but that’s the nutshell.

Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly without the concern of what type of operating system the host and end-user may use. With Docker, you can manage your infrastructure in the same ways you manage your applications.

For more information on Docker, visit my friend Bryant’s article:

Let’s get started with this exercise

Step 1:

Pick a Hypervisor.
Hyper-V can be installed in 4 steps:

  1. Right click on the Windows button and select ‘Apps and Features’.
  2. Select Programs and Features on the right under related settings.
  3. Select Turn Windows Features on or off.
  4. Select Hyper-V and click OK.

(Option 2)
Oracle VirtualBox can be installed in one step:

Or you can use VMware, KVM, or whatever your preference may be.

Next, install Kubernetes. There are two packages we need to follow along with this article.
The 1st package: Install and Set Up kubectl
The 2nd package: Install Minikube
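If you prefer a package manager, both tools can also be installed from an elevated CMD or PowerShell prompt via Chocolatey. This is just a sketch, and it assumes you already have Chocolatey set up on your machine:

```shell
# Install the Kubernetes CLI (kubectl) via Chocolatey
choco install kubernetes-cli -y

# Install Minikube via Chocolatey
choco install minikube -y
```

Installing this way also handles the Path for you, so you can skip the manual rename step below.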

Next: Install Docker
Please visit the official Docker page here to download Docker to your local machine:

Once you have downloaded and installed both packages, rename the downloaded minikube-windows-amd64.exe file to minikube.exe and place it in the same folder as kubectl.exe; that way you avoid adding another location to your Path. Below is an example of what I am talking about.

Verify that kubectl is properly installed:
In Cygwin (recommended over CMD — you can use CMD too, just don’t mix terminals), type kubectl
If there are no errors, then kubectl is correctly installed.
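A slightly more explicit check is to ask each tool for its version; any valid output means the binary was found (the exact version numbers will differ on your machine):

```shell
# Print the kubectl client version (no cluster needed for --client)
kubectl version --client

# Print the Minikube version as well
minikube version
```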

Step 2:

Start Minikube using the following command. For Hyper-V:
minikube start --vm-driver=hyperv --hyperv-virtual-switch=<name_of_your_switch>

For VirtualBox: minikube start
If the above doesn’t work, try this: minikube start --vm-driver=virtualbox
The download of the ISO image will take a while the first time.
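Once the command finishes, you can confirm the cluster actually came up before moving on:

```shell
# Report the state of the Minikube VM and the Kubernetes components inside it
minikube status

# List the cluster nodes; a single "minikube" node should report Ready
kubectl get nodes
```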

Troubleshooting: In case it doesn’t work, it will give you an error response specifying why, and a main reason could be that you do not have hardware virtualization enabled (AMD-V on AMD CPUs, VT-x on Intel).
If you do not have it enabled, you will have to enter the BIOS on your computer. Every new computer these days boots up way too fast to get into the BIOS, so I’m going to give a quick summary of how to get into it.
First thing, go to System Information and check your BIOS type.
It will either be UEFI, or Legacy. Mine is Legacy.

If you are using UEFI follow this article:
If you are using Legacy:
Open the start menu, and hold down Shift when you click Shut down. This bypasses fast-boot on your next startup, allowing you to spam Delete the second you press the button to turn your PC on.
From here you have to go under Advanced Options (F7 if it doesn’t show in the menu), go under CPU Configuration, and enable SVM Mode (this is the AMD-V switch; on Intel boards look for VT-x or Intel Virtualization Technology instead).
Upon restart, you should be good to go.
Open CMD back up and go back to the beginning of Step 2.

Did you get this error? Try running these commands:
minikube stop
minikube start

If that doesn’t work, go to C:\Users\<yourusername> and delete the .minikube folder.
Open CMD and run minikube start again.

It might clear up; if that doesn’t work, refer to this article

Step 3:

If you open up your hypervisor (I am using VirtualBox), we can verify that the minikube VM has been created.

In CMD, run minikube dashboard

If that doesn’t work, the same fix as before applies: delete the .minikube folder under C:\Users\<yourusername>, then open CMD and run minikube start again.

If you received a “This site can’t be reached” error, you might just have to wait a minute or two and then refresh the page.
If it did work, then yay! We are progressing.

Step 4:

As you can see from the Dashboard, we have a lot of side menus and a lot of information on a lot of stuff! It can be overwhelming to look at, but with time we will slowly make sense of the chaos.

In this walkthrough we will mostly be focusing on Workloads > Deployments and Service > Services

As we talked about earlier, Kubernetes is made to easily deploy docker images. We will see in the upcoming steps how simplified it can be to do something so complex.

Step 5:

In order to begin the deployment, all we need to do is click the Plus/Create symbol at the top right-hand corner of the dashboard.

When you see this screen, go ahead and click “Create from form” or “Create an app” depending on the version you are using, they both bring you to the same menu.

This is the menu you are probably seeing now; we will want to fill this form out.
Pick any app name that you would like.
Under Container Image I have chosen this one :
You can pick another one if you’d like, but I recommend using the same one so we can be on the same page.

Step 6:

Click on the little copy icon to copy that Container Image URL, then we will go back into the Kubernetes Resource Creation page and paste it under the Container Image Field.

Next we choose any number of Pods we want created across our cluster. For this exercise I’m just going to pick 3.

Next we will decide if the Service is None, Internal, or External.

Internal: the service won’t be exposed to external users; it will only be reachable from inside the cluster. This prevents anyone on the outside from accessing the service.

External: the service will be available to external users.

Let’s make this simple and go with the External Service option.

We can select a Port, Target Port, and the Protocol.
Let’s fill in 31000 for the Port, 80 for the Target Port, and TCP is great for the Protocol.

This is what the current configuration looks like. Let’s go ahead and Deploy it.
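For reference, everything we just did through the form can also be done from the command line. This is only a sketch: angular-app is the name we picked above, and <your-image> stands in for the container image URL you copied, since the exact link isn’t reproduced here:

```shell
# Create a deployment named angular-app running 3 replicas of our image
kubectl create deployment angular-app --image=<your-image> --replicas=3

# Expose it as a Service on port 31000, targeting port 80 in the containers
kubectl expose deployment angular-app --type=LoadBalancer --port=31000 --target-port=80
```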

So here we can see the dashboard showing our new deployment: all 3 pods, our replica set, and the load-balancing service.
They may take some time to load up, so refresh the page after a couple minutes.

You may notice that this service is taking a long time to load… don’t worry, it isn’t broken. If you click on the name, you can see that it is a Load Balancer.

The reason it hasn’t loaded yet is that you most likely don’t have a load balancer on your local machine; we usually get them through Azure, AWS, or Google Cloud.

Step 7:

So to fix this, we just need to change the Service type from LoadBalancer to NodePort.
We do this by:

Select the Fun + Easy edit icon and….

Whoa! We just got a block of code. Don’t panic, I will explain everything going on in here.

You can click on the YAML tab or the JSON tab. They are both showing the same information, but JSON uses a lot more brackets and might be harder to look at.

Let’s take a moment to look past all the symbols, numbers, and confusing stuff… the first thing it tells us is that it is a Service. Yes, okay.
Then it gives us the API version running the service: v1.

The metadata shows something familiar: angular-app. Remember we named it this? The code is just reflecting everything we typed into the form when creating the service. Scrolling down, we can see when it was created, the port, the target port, the protocol, the node port, the cluster IP, the type… so now that we’ve looked at it and realized it’s all familiar, the code isn’t so scary anymore, I hope!

Let’s come down to the bottom of the code and find type: LoadBalancer.
We are changing this value to NodePort.
By changing the value to NodePort, our Service will then be listening on node port 31996, as specified on line 44 of the manifest. (The dashboard auto-generates a number, but you can change it.)
It will be using port 31996 to redirect end users to your application.

If you are editing in YAML change it like it is above

If you are editing in JSON, remember to include the “quotes” and the comma ,
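Since the screenshots don’t reproduce here, this is roughly what the relevant part of the Service manifest looks like after the edit. The values match the ones we chose above; your nodePort and selector labels may differ from this sketch:

```yaml
# Relevant portion of the Service spec after switching to NodePort
apiVersion: v1
kind: Service
metadata:
  name: angular-app
spec:
  ports:
    - port: 31000        # the Service port we filled in
      targetPort: 80     # the container port
      protocol: TCP
      nodePort: 31996    # auto-generated; reachable on the node's IP
  selector:
    k8s-app: angular-app
  type: NodePort         # changed from LoadBalancer
```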

Then click Update at the bottom of the code box.

Click on the Kubernetes logo in the top left to go back to your Overview of the app.

Scroll down and now you see the green check mark beside angular-app, so now we are winning the fight.

We can now click on it to see more details —

At the bottom we see some IP addresses and the ports associated with them, and they say “Ready”.

Step 8:

This is the fun part, let’s check out what we made!

In your URL bar it probably shows 127.0.0.1:<portnumber>.
127.0.0.1 is your machine, and :<portnumber> is the port number of your Minikube dashboard. Now we want to head to our angular-app service.
So in a new tab, type <minikube_ip>:31996 — 31996 being our NodePort number (you can get the VM’s IP with minikube ip).

Bam! It works. Now we can view our angular app in the browser, no problem.
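If you’d rather not assemble the URL by hand, Minikube can do it for you. This assumes the service name angular-app we chose earlier:

```shell
# Print the Minikube VM's IP address
minikube ip

# Print the reachable URL for our NodePort service
minikube service angular-app --url

# Or open the service directly in your default browser
minikube service angular-app
```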

Did you get this error message?
I am working on a solution for this problem at the moment, some users can go through this entire process and it works, others will do the same and it won’t work. Please be patient while I find a solution, and I will update this as soon as possible.
If you know of possible solutions to this issue, please reach out.

Conclusion/Ending Points to talk about

Kubernetes also makes it very easy to update and scale applications. To update apps we can change the tag that the image uses, and to scale we can increase the number of replicas of our Docker image — a.k.a. the number of pods used by our service.

So how do we increase pods?

First step, go under Workloads > Deployments.
Second step, click the Scale icon in the top right corner.

You can change the number here, OR you can edit the YAML/JSON code like we did earlier and find line 20 where it says “replicas”: 3, and you can change it there when you have traffic exceeding your usage.
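The same scaling operation can also be done from the command line; a quick sketch, again assuming the deployment name angular-app from earlier:

```shell
# Scale the deployment from 3 replicas up to 5
kubectl scale deployment angular-app --replicas=5

# Watch the new pods come up
kubectl get pods
```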

That’s all I have for now, please reach out to me on LinkedIn if you have any issues and I will do my best to help you.