This is the last part of this Kubernetes tutorial series. In this article, I will guide you through deploying an application on Kubernetes from scratch. To do so, I have created a full-stack app; you can find the code here: https://github.com/egeaksoz/fullstackapp. Let’s get started!
Please follow the tutorial step by step so that you can grasp the entire concept behind Kubernetes.
The application is pretty straightforward: a text input with one button that adds the entered text to the database and another button that deletes all the words from the database. After adding a word, we can see the list of saved entries.
The application consists of a React frontend, a Flask backend, and a PostgreSQL database. If you are going to deploy this project to a Kubernetes cluster running in the cloud, you should use a load balancer, but this project is mainly intended for a local run.
If you have a docker-compose file where your application is already containerized and working, you can convert it to Kubernetes manifests with a tool called Kompose. For more information, please refer here: https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/. In this article, we will build everything in Kubernetes ourselves so that we can better understand the concepts behind it.
We will deploy the application to the default namespace because the Kubernetes documentation suggests: “For clusters with a few to tens of users, you should not need to create or think about namespaces at all.” If you want to know how to create and use namespaces, you may refer to the Kubernetes Objects article.
Let’s start our project with the database.
Database
For this application, we will be using PostgreSQL.
The application just needs a database with a single table where we can add a text and fetch all the stored texts.
This means we somehow need a database inside the Kubernetes pod that already has the table created. There are multiple ways of doing this; I’d like to show you two of them in this article.
One way is to build our own Docker image in which the table is already created and use that image in our Kubernetes manifest.
The Postgres Docker image documentation (see the “Initialization scripts” section) states that Postgres runs any scripts placed under “/docker-entrypoint-initdb.d” before the database starts accepting connections, as long as the data directory is empty on first start. This means we have to add our table creation script under that path.
The database table creation script is as simple as this:
CREATE TABLE text (
id serial PRIMARY KEY,
text VARCHAR ( 100 ) UNIQUE NOT NULL
);
And our Dockerfile looks like this:
FROM postgres:latest
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD postgres
ENV POSTGRES_DB postgres
ADD CreateDB.sql /docker-entrypoint-initdb.d/
Once we build this docker image, for instance with the name “fullstackappdb”:
docker build -t fullstackappdb .
This image can then be used in a Kubernetes Pod manifest, for instance:
apiVersion: v1
kind: Pod
metadata:
  name: postgres
spec:
  containers:
  - image: fullstackappdb
    imagePullPolicy: Never
    name: postgres
    ports:
    - containerPort: 5432
    env:
    - name: POSTGRES_USER
      value: "postgres"
    - name: POSTGRES_PASSWORD
      value: "postgres"
Please note that “imagePullPolicy” is set to “Never” so that Kubernetes only looks for the image on the local machine instead of trying to pull it from a registry.
IMPORTANT NOTE: Depending on your cluster setup, you may need additional steps to use local images. For instance, I am using kind to create my local Kubernetes cluster, and the Docker image has to be loaded into kind explicitly:

kind load docker-image fullstackappdb:latest

Otherwise you will end up with an ImagePullBackOff error. For other cluster setup methods, please refer to their documentation.
Now if I apply this YAML file and check inside the database, I will see that the table has been created.
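For example, assuming the Pod manifest above is saved as postgres-pod.yaml (the file name is my own choice for illustration), the following commands apply it and list the tables with psql:

kubectl apply -f postgres-pod.yaml
# wait until the pod reports Running
kubectl get pod postgres
# list the tables inside the default database
kubectl exec -it postgres -- psql -U postgres -c "\dt"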
The other way to make it work is to use a ConfigMap with a Postgres Kubernetes Deployment. This time we are not going to build our own Docker image; instead, we will use the official Postgres image and mount our script into it.
To do so, we first need to create a ConfigMap that contains our script. It looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: pg-init-script
data:
  CreateDB.sql: |-
    CREATE TABLE text (
      id serial PRIMARY KEY,
      text VARCHAR ( 100 ) UNIQUE NOT NULL
    );
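As a side note, if you prefer not to write this ConfigMap by hand, kubectl can generate it from the SQL file for you (assuming the script is saved as CreateDB.sql in the current directory and you want the output in a file named pg-init-configmap.yaml; both names are my own choice):

kubectl create configmap pg-init-script --from-file=CreateDB.sql --dry-run=client -o yaml > pg-init-configmap.yaml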
After defining the ConfigMap, we create a Postgres Deployment and mount the ConfigMap as a volume, as shown below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  labels:
    app: database
spec:
  replicas: 1
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
    spec:
      containers:
      - name: postgres
        image: postgres:latest
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: sqlscript
          mountPath: /docker-entrypoint-initdb.d
        env:
        - name: POSTGRES_USER
          value: "postgres"
        - name: POSTGRES_PASSWORD
          value: "postgres"
      volumes:
      - name: sqlscript
        configMap:
          name: pg-init-script
This will get us a working Postgres with the desired table created.
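Assuming the ConfigMap and Deployment above are saved as pg-init-configmap.yaml and postgres-deployment.yaml (hypothetical file names), we can apply them and verify the table the same way as before:

kubectl apply -f pg-init-configmap.yaml
kubectl apply -f postgres-deployment.yaml
# find the pod created by the Deployment
kubectl get pods -l app=database
# replace <pod-name> with the name printed by the previous command
kubectl exec -it <pod-name> -- psql -U postgres -c "\dt"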
We do not want users to communicate with the database directly; only the backend should talk to it. Therefore we define a Service of type ClusterIP for this Deployment:
apiVersion: v1
kind: Service
metadata:
  name: pg-service
  labels:
    app: database
spec:
  type: ClusterIP
  ports:
  - port: 5432
  selector:
    app: database
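Assuming the Service is saved as, say, pg-service.yaml (again a file name of my choosing), applying it is all that is needed; from then on, other pods in the cluster can reach the database at pg-service:5432 through the cluster DNS:

kubectl apply -f pg-service.yaml
kubectl get svc pg-service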
Our database is all set! Now let’s create the backend part of the application.
Backend
Our backend is a Flask (Python) application.
This time, the simplest approach is to build our own image of the application.
To do so, navigate inside the backend folder and build the image:
docker build -t backend .
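As with the database image, kind users have to load this local image into the cluster (other local cluster tools have their own equivalents; check their documentation):

kind load docker-image backend:latest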
Now it is time to create the Kubernetes manifests for the Flask backend. Simply put, we have to create a new Deployment and a Service for it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: flask-backend
        image: backend:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
        env:
        - name: DATABASE_URI
          value: pg-service
Please observe that the container expects an environment variable, DATABASE_URI, which is read in the Flask code. Its value should be the name of the database service, so that the backend can reach Postgres through the cluster DNS.
Since the Flask app only communicates inside the cluster, a Service of type ClusterIP that exposes port 5000 is required.
apiVersion: v1
kind: Service
metadata:
  name: flask-service
  labels:
    app: backend
spec:
  type: ClusterIP
  selector:
    app: backend
  ports:
  - port: 5000
    targetPort: 5000
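Assuming the Deployment and Service above are saved as backend-deployment.yaml and flask-service.yaml (hypothetical file names), they can be applied and checked like this:

kubectl apply -f backend-deployment.yaml
kubectl apply -f flask-service.yaml
# check that the Flask app started without errors
kubectl logs deployment/backend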
Now we have a backend that is communicating with a database. It is time to create the frontend.
Frontend
On the frontend, the application uses React. The React server needs to communicate with the backend server, as it sends the input taken from the user to it. There are different approaches to connecting frontend and backend servers, but since this example is intended for local use, I will not use a load balancer; instead, I will use the ‘proxy’ setting (available in react-scripts@0.2.3 and higher), as mentioned here.
For using it, simply add:
"proxy": "http://flask-service:5000"
to your package.json. Note that I have hardcoded the name of the backend service (flask-service). The development server will then proxy API requests from “localhost:3000” to “http://flask-service:5000”.
Let’s create the Dockerfile for our frontend:
FROM node:16-alpine
ADD . /frontend
WORKDIR /frontend
RUN npm install --silent
CMD ["npm", "start"]
And build it ( I will tag it as ‘frontend‘ ):
docker build -t frontend .
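Again, if you are using kind, the local image has to be loaded into the cluster before Kubernetes can use it:

kind load docker-image frontend:latest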
The Dockerfile simply adds the project folder, installs the dependencies inside the container, and starts the development server. To run it inside Kubernetes, we use this image in a Deployment for the frontend:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: react-frontend
        image: frontend:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 3000
Since this is the UI of the application, it should be exposed to the outside world. Therefore, the Service should be of type NodePort:
apiVersion: v1
kind: Service
metadata:
  name: react-service
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
  - port: 3000
    targetPort: 3000
    nodePort: 30000
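Assuming the frontend manifests are saved as frontend-deployment.yaml and react-service.yaml (file names of my choosing), apply them as well:

kubectl apply -f frontend-deployment.yaml
kubectl apply -f react-service.yaml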
Here I’ve hardcoded the NodePort to ‘30000’. If I hadn’t, Kubernetes would assign a random port from the NodePort range, and I would have to look it up with:
kubectl get svc
and read the port assigned to the react-service.
Now, let’s first check that all three applications are up and running.
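A quick way to check is to list the pods and services; every pod should be in the Running state:

kubectl get pods
kubectl get svc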
As they are up and running, I can visit the UI by navigating to ‘localhost:30000’ in any browser.
We did it!
I hope you enjoyed the Kubernetes Series!
Kubernetes Series
- Episode 1: Introduction to Kubernetes
- Episode 2: How to Create a Kubernetes Cluster
- Episode 3: Kubectl
- Episode 4.1: Kubernetes Objects
- Episode 4.2: Kubernetes Workloads
- Episode 4.3: Kubernetes Services
- Episode 4.4: Kubernetes Storage
- Episode 4.5: Kubernetes Configuration Objects
- Episode 5: Scheduling in Kubernetes
- Episode 6: Kubernetes Upgrade and Deployment Strategies
- Episode 7: Kubernetes Security
- Episode 8: Deploy a Full Stack Application in Kubernetes
Thanks for reading,
Ege Aksoz

Holding a BSc in Mechatronics, Ege loves to automate. He is now working as a Software Development Engineer In Test at XebiaLabs, Amsterdam.