Docker - The Complete Guide to Build and Deploy your Application
Mar 2, 2024
Docker has been the de facto standard for containerization and is widely used in the industry. Docker's ability to package and run applications in a loosely isolated environment called a container has made it popular among developers.
Docker solves the problem of inconsistent environments and dependencies by providing a standard way to package and distribute applications. Docker containers are lightweight and contain everything needed to run an application, including code, runtime, system tools, system libraries, and settings.
"It works on my machine" is a phrase we developers say all the time. Docker solves this problem by providing a consistent environment for development, testing, and production. Docker containers are portable and can run on any system that supports Docker.
Docker Key Concepts#
Image: An image is a lightweight, standalone, and executable software package that includes everything needed to run a piece of software, including the code, a runtime, libraries, environment variables, and config files.
Container: A container is a runtime instance of an image. A Docker container consists of an image, an execution environment, and a standard set of instructions.
Volume: Docker Volume is a directory that is accessible to the containers in a Docker environment. Volumes are used to persist data generated by and used by Docker containers.
Dockerfile: A Dockerfile is a text document that contains all the commands needed to build a Docker image for your application from scratch.
Docker Compose: Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services.
Images#
Docker images are the basis of containers. An Image is an ordered collection of root filesystem changes and the corresponding execution parameters for use within a container runtime.
We can build a custom image for our application, and if our app needs other services like a database or a runtime, we can pull those images from Docker Hub.
Create custom image#
We need to create a Dockerfile in the root of our application and define all the steps to build the image. For example, let's build an image for a Node.js application.
Since we are building a Node.js application, we need Node.js installed in our image. We can use the official node image from Docker Hub.
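The original Dockerfile from the post was lost in extraction; a minimal sketch matching the steps described below (the working directory `/app`, the Node version, and port `3000` are assumptions) could look like this:

```dockerfile
# Base image: official Node.js image from Docker Hub
FROM node:18

# Working directory inside the container
WORKDIR /app

# Copy dependency manifests first to leverage Docker's layer caching
COPY package.json package-lock.json ./

# Install dependencies
RUN npm install

# Copy the rest of the application source
COPY . .

# Document the port the app listens on
EXPOSE 3000

# Start the application
CMD ["npm", "start"]
```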
FROM: This is the base image for our application. We are using the official node image from Docker Hub. Depending on the application, you can use other images like `alpine`, `ubuntu`, `centos`, etc.
WORKDIR: This is the working directory inside the container.
COPY: This command copies the package.json file to the working directory. You can also copy other files like `yarn.lock` or `package-lock.json`. Copying these first takes advantage of Docker's layer-caching mechanism.
RUN: This command runs `npm install` to install all the dependencies. You can use other package managers like `yarn` or `pnpm` as well.
COPY: This command copies all the files from the current directory (the root of the application) to the working directory (inside the container).
EXPOSE: This command exposes port 3000. This is the port on which our application is running. Exposing the port is not mandatory, but it's good practice to do so.
CMD: This command runs `npm start` to start the application.
Now we can build the image using the following command.
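Assuming the Dockerfile lives in the project root, the build command would be something like this (the tag `node-app` is an arbitrary choice):

```sh
docker build -t node-app .
```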
Containers#
A container is a runtime instance of an image. A Docker container consists of an image, an execution environment, and a standard set of instructions.
We can run the new container using the following command after building the image.
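The exact command isn't preserved in the text; given the flags explained below, it would look something like this (the container and image names are assumptions):

```sh
docker run -d -p 3000:3000 --rm --name node-app node-app
```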
-d: This flag is used to run the container in the background.
-p: This flag is used to map the port of the host to the port of the container. In this case, we are mapping the port 3000 of the host to the port 3000 of the container.
--rm: This flag is used to remove the container when it is stopped.
--name: This flag is used to give a name to the container.
All these flags are optional and you can use them as per your requirement.
What if we want to run a container with a different environment variable or a different command? We can do that using the following command.
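A sketch of overriding both an environment variable and the startup command (the variable value and the `start:prod` script name are hypothetical):

```sh
docker run -d -p 3000:3000 -e NODE_ENV=production node-app npm run start:prod
```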
In the same way, we can run an official image from Docker Hub. Docker will check if the image is available locally; if not, it will pull the image from Docker Hub and then run the container.
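For example, running the official `mongo` image (the host port mapping is an assumption):

```sh
docker run -d -p 27017:27017 mongo
```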
What if you want to perform some operation inside the container? We can do that using the following command.
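The command itself was lost in extraction; with the flags described below, it would be something like this (the container name `node-app` is an assumption):

```sh
docker exec -it node-app sh
```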
-it: This flag is used to run the command in interactive mode; it combines -i (keep STDIN open) and -t (allocate a pseudo-TTY).
sh: This is the shell that we want to run inside the container. You can also use `bash` or `zsh` as per your requirement.
Now we can do any operation inside the container. For example, we can run the `ls` command to list all the files inside the container.
Volumes#
Docker volumes are used to persist data generated by and used by Docker containers. A volume is a directory that is accessible to the containers in a Docker environment.
Volumes are important since Docker containers are ephemeral: the data written inside a container's filesystem is lost when the container is removed.
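Done by hand, persisting data means creating a volume and attaching it on every `docker run`; a sketch with the `mongo` image (the volume name `mongo-data` is illustrative):

```sh
docker volume create mongo-data
docker run -d -v mongo-data:/data/db mongo
```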
Now, since it's tedious to create a volume and then attach it to the container manually, we can use `docker-compose` to do that for us. And trust me, `docker-compose` is a lifesaver; you will use it a lot in your development and production environments.
So, let's see how we can use `docker-compose` to create a volume and attach it to the container. We will use the `mongo` image as an example.
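The compose file itself is missing from the extracted post; a minimal sketch (the volume name `data` is an assumption) would be:

```yaml
services:
  mongodb:
    image: mongo
    volumes:
      - data:/data/db

volumes:
  data:
```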
This is a basic example of how we can use `docker-compose` to create a volume and attach it to the container. Now, even if the container is stopped or removed, the data will be persisted, and when we run the container again, the data will still be available.
Now we can run the container using the following command. Make sure you are in the same directory as the `docker-compose.yml` file.
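The command was lost in extraction; it would be (newer Docker versions also accept `docker compose up -d`):

```sh
docker-compose up -d
```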
Now, what if we want to use Docker for developing our Node.js application, and when we make changes in the code we don't want to rebuild the image and re-run the container again and again? We can use a bind mount to do that, and with `docker-compose` it's very easy.
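A sketch of such a compose file (the working directory `/app` and the port are assumptions; the anonymous `/app/node_modules` volume is a common trick to stop the bind mount from hiding the dependencies installed in the image):

```yaml
services:
  node-app:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      - /app/node_modules
```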
Since `node-app` is a custom image, we are using the `build` key to specify the path of the `Dockerfile`. We are also using the `volumes` key to specify the path of the application. In this case, we are using `.` to specify the current directory.
Now, with this configuration, when we make changes in the code, the changes are reflected in the container without rebuilding the image or re-running the container.
When we use volumes with services like `mongodb` or `mysql`, we can follow the official documentation of the image to find the path inside the container to bind. For example, for `mongodb` the path is `/data/db`.
Docker Compose#
Docker compose makes it easy to run multi-container applications. With Compose, you use a YAML file to configure your application's services.
It's a lifesaver, since we don't have to remember all the commands to run the containers, create a volume, and attach it to a container. We can do all of that with a single command.
Let's create a multi-container application using `docker-compose`. We will use the same example of the Node.js application and the `mongo` image.
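The compose file the explanations below refer to isn't preserved; reconstructed from those descriptions (the port mapping and the volume name `data` are assumptions), it would look roughly like this:

```yaml
version: "3.8"

services:
  mongodb:
    image: mongo
    restart: always
    volumes:
      - data:/data/db

  node-app:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - mongodb

volumes:
  data:
```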
version: This is the version of the docker-compose file. We are using version 3.8.
services: This is the section where we define all the services that we want to run. In this case, we are running a service called `mongodb`. You can name it anything you want.
image: This is the image that we want to run. In this case, we are running the official mongo image from Docker Hub.
restart: This is the restart policy for the container. In this case, if the container stops, it will be restarted automatically.
volumes: This is the section where we define all the volumes that we want to create and attach to the container. In this case, we are creating a volume called `data` and attaching it to the container. `/data/db` is the path inside the container where the data will be stored.
depends_on: This is the section where we define the dependencies between the services. In this case, we are saying that the `node-app` service depends on the `mongodb` service.
Now we can run the multi-container application using the following command.
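As before, one command starts everything, and its counterpart tears it all down again:

```sh
# build (if needed) and start all services in the background
docker-compose up -d

# stop and remove the containers and the default network
docker-compose down
```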
There are many other things that we can do with `docker-compose`, like networking, secrets, healthchecks, and many more. You can check the official docker-compose documentation to learn more about them.
But we are going to learn about networking, since it's very important and we will use it a lot in our development and production environments.
Networking#
Since we know docker containers are isolated from each other, we need to create a network to make them communicate with each other. Docker provides a default network called bridge
that allows the containers to communicate with each other.
When we use `docker-compose` to run a multi-container application, it creates a default network called `<project-name>_default` (the project name defaults to the name of the directory containing the compose file) and attaches all the services to that network.
In the case of our example, the `node-app` service and the `mongodb` service are attached to that default network, and if we want to make them communicate with each other, we can use the service name as the hostname.
For example, we can pass a `MONGO_URL` environment variable to the `node-app` service.
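A sketch of that service entry in the compose file (the database name `mydb` is hypothetical):

```yaml
services:
  node-app:
    build: .
    environment:
      - MONGO_URL=mongodb://mongodb:27017/mydb
    depends_on:
      - mongodb
```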
In this case, `mongodb` is the service name and `27017` is the port on which the `mongodb` service is running.
But for our client app to communicate with our server app in development, we can't use the service name as the hostname. This is because the client app runs in the browser on the host machine, not inside a container. We have to expose the ports of both services and then use `localhost` as the hostname.
But when we know that both services are running inside containers, we can use the service name as the hostname, and we don't have to expose the ports of the services.
Docker for Production#
Let's say we want to deploy a React application in production using Docker. There are a few things that we need to consider when deploying an application with Docker.
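The production Dockerfile discussed below was lost in extraction; a sketch based on the description (the Node version and the `build` output directory are assumptions; Vite projects output `dist` instead):

```dockerfile
# Stage 1: build the React app with Node
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: serve the static files with nginx
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```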
As you can see, we are using a multi-stage build to build the image for the production environment. We are using the official node image to build the application and the official nginx image to run the application.
Also notice that we are using the `alpine` variants of the images. The alpine versions are lightweight and contain only what is necessary to run the application.
Since building the app gives us static files, we can use the official nginx image to serve the application. We copy the static files to the `/usr/share/nginx/html` directory and expose port 80.
Now we can build the image and push it to Docker Hub or any other registry, and then use cloud services like AWS, GCP, or Azure to deploy the application.
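A sketch of that workflow (the `username/react-app` tag is a placeholder for your own registry namespace):

```sh
docker build -t username/react-app .
docker push username/react-app
```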
Conclusion#
There are many things that we can do with Docker, and we have just scratched the surface. Docker is a powerful tool and is widely used in the industry. The need for and importance of Docker really shine when we work with microservices and Kubernetes. Even when we work on old projects where we have to maintain an old version of an application, Docker is very useful.
I have covered only the most common things you will use when working with Docker. The official Docker documentation is very good, and you can check it to learn more about Docker.
I hope you liked the article and learned something new. If you have any questions or suggestions, feel free to comment below.