Hello friends 👋
Today I’m going to give you a complete beginner’s guide to Docker. Over the past few weeks, I had the opportunity to learn Docker, and I’m excited to share what I learned.
Let’s start with a small story that my instructor shared during the course.
📌 The Story
There were two friends, James and Frank, who attended a job interview together. They both got selected.
On their first day at work, they were assigned a project to build a simple To-Do application. They divided the tasks between them and started working. The deadline was Friday.
On Friday morning, they combined the parts they had developed individually.
But here’s the problem… 💥
The application didn’t work! Both James and Frank were confused because the application worked perfectly on their own machines. The tech lead came and asked what happened. After listening to them, he explained:
“This is called dependency hell.”
🤯 What Happened?
- James was using Node.js 16
- Frank was using Node.js 20
- James was working on Ubuntu
- Frank was working on Windows
They had different:
- Node.js versions
- Operating systems
- System configurations
- Package versions
Because of these differences, the application failed when combined.
This situation is what we call dependency hell: software fails to run consistently because of differences in environments, package versions, or operating systems.
So Where Does Docker Come In?
Docker solves this problem.
Instead of depending on each developer’s local machine setup, Docker allows us to package:
- The application
- The Node.js version
- All dependencies
- Required configurations
into a container.
This container runs the same way on:
- Your machine
- Your teammate’s machine
- The testing server
- Production
No more “It works on my machine” problems 😄
What About Virtual Machines?
Yes, we can also solve this using Virtual Machines (VMs). But there’s a difference:
- A Virtual Machine runs a full operating system on top of your current OS.
- It consumes more RAM and system resources.
Docker containers, however:
- Share the host OS kernel
- Are lightweight
- Start faster
- Use less memory
That’s why Docker is more efficient compared to traditional VMs in most cases.
Now that you understand why we use Docker and what problem it solves, let’s move to the practical side and start working with Docker step by step.
Before we start working with Docker, the first step is to install it on your machine. You can easily download Docker from the official Docker website. The installation process is simple, and there are plenty of helpful videos on YouTube that guide you step by step through downloading and installing it. If you need more detailed information or run into any issues, the official Docker documentation is also a great resource.
For now, I’ll assume that you have successfully installed Docker on your machine. To confirm whether Docker is installed correctly, open your Command Prompt (or terminal) and simply type docker, then press Enter. If Docker is installed properly, you will see an output similar to the one shown below.
```text
C:\Users\exampleuser>docker

Usage:  docker [OPTIONS] COMMAND

A self-sufficient runtime for containers

Common Commands:
  run         Create and run a new container from an image
  exec        Execute a command in a running container
  ps          List containers
  build       Build an image from a Dockerfile
  pull        Download an image from a registry
  push        Upload an image to a registry
  images      List images
  login       Authenticate to a registry
  logout      Log out from a registry
  search      Search Docker Hub for images
  version     Show the Docker version information
  info        Display system-wide information

Management Commands:
  ai*         Ask Gordon - Docker Agent
  builder     Manage builds
  buildx*     Docker Buildx
  checkpoint  Manage checkpoints
  compose*    Docker Compose
  container   Manage containers
  context     Manage contexts
  debug*      Get a shell into any image or container
  desktop*    Docker Desktop commands (Beta)
  dev*        Docker Dev Environments
  extension*  Manages Docker extensions
  feedback*   Provide feedback, right in your terminal!
  image       Manage images
  init*       Creates Docker-related starter files for your project
  manifest    Manage Docker image manifests and manifest lists
  network     Manage networks
  plugin      Manage plugins
  sbom*       View the packaged-based Software Bill Of Materials (SBOM) for an image
  scout*      Docker Scout
  system      Manage Docker
  trust       Manage trust on Docker images
  volume      Manage volumes

Swarm Commands:
  config      Manage Swarm configs
  node        Manage Swarm nodes
  secret      Manage Swarm secrets
  service     Manage Swarm services
  stack       Manage Swarm stacks
  swarm       Manage Swarm

Commands:
  attach      Attach local standard input, output, and error streams to a running container
  commit      Create a new image from a container's changes
  cp          Copy files/folders between a container and the local filesystem
  create      Create a new container
  diff        Inspect changes to files or directories on a container's filesystem
  events      Get real time events from the server
  export      Export a container's filesystem as a tar archive
  history     Show the history of an image
  import      Import the contents from a tarball to create a filesystem image
  inspect     Return low-level information on Docker objects
  kill        Kill one or more running containers
  load        Load an image from a tar archive or STDIN
  logs        Fetch the logs of a container
  pause       Pause all processes within one or more containers
  port        List port mappings or a specific mapping for the container
  rename      Rename a container
  restart     Restart one or more containers
  rm          Remove one or more containers
  rmi         Remove one or more images
  save        Save one or more images to a tar archive
  start       Start one or more stopped containers
  stats       Display a live stream of container(s) resource usage statistics
  stop        Stop one or more running containers
  tag         Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
  top         Display the running processes of a container
  unpause     Unpause all processes within one or more containers
  update      Update configuration of one or more containers
  wait        Block until one or more containers stop, then print their exit codes

Global Options:
      --config string      Location of client config files
  -c, --context string     Name of the context to use
  -D, --debug              Enable debug mode
  -H, --host list          Daemon socket to connect to
  -l, --log-level string   Set the logging level
      --tls                Use TLS
      --tlscacert string   Trust certs signed only by this CA
      --tlscert string     Path to TLS certificate file
      --tlskey string      Path to TLS key file
      --tlsverify          Use TLS and verify the remote
  -v, --version            Print version information and quit

Run 'docker COMMAND --help' for more information on a command.
```
When you open Docker Desktop, you will see sections like Images and Containers. So what exactly are these? What is a Docker image, and what is a Docker container?
A Docker image is basically a blueprint or template. It contains everything needed to run an application: the runtime (for example, Node.js), libraries, dependencies, environment variables, and the application code itself. In simple terms, a Docker image includes everything required to run your software. You can think of it like a Windows ISO file that you use to install Windows on a machine: it contains all the necessary files, but nothing is running yet.
The great thing is, you don’t always have to build images yourself. You can find and download ready-to-use Docker images from Docker Hub, the official repository of Docker images. Docker Hub hosts images for a wide range of programming languages, frameworks, databases, and popular applications, making it easy to get started quickly.
Then what is a container?
A Docker container is a running instance of that image. When you take a Docker image and execute it, it becomes a container. If the image is the blueprint, the container is the actual working application created from that blueprint. You can create multiple containers from the same image, and each one will run independently.
So in short, the image is the package with all the instructions, and the container is the live, running version of that package.
Now you have a clear idea of what Docker is, why we use it, and what Docker images and containers are. So let’s move on to some practical commands. Earlier, I mentioned that Docker has images and containers, but how can we actually check them?
To see the Docker images available on your machine, you can use the following command:
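In the terminal, that looks like this:

```shell
docker images
```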
This command will list all the images you have downloaded locally, along with their repository name, tag, image ID, creation time, and size.
Now, how do we create a container from an image? For that, we use the docker run command:
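In its general form (with `<image-name>` as a placeholder for the image you want to run):

```shell
docker run -d <image-name>
```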
Here, -d stands for detached mode, which means the container will run in the background.
Let’s take a real example. Suppose you want to run the Nginx image. You can start a container like this:
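The command looks like this:

```shell
docker run -d nginx:latest
```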
In this command, nginx is the image name and latest is the tag. A tag represents the version of the image. When you visit Docker Hub and look at an image, you will notice that it has multiple tags. Each tag corresponds to a specific version or variant of that image.
If you do not specify a tag, like in the example below:
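Here the tag is omitted:

```shell
docker run -d nginx
```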
Docker will automatically use the latest tag by default.
Earlier, we used a command to check the Docker images available on our machine. So naturally, there must be a command to check the containers as well, right? Yes, there is.
To see the containers, you can use:
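The command is:

```shell
docker container ls
```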
This command shows the running containers. It lists details such as the container ID, image name, status, ports, and container name.
You can also use:
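The shorter form:

```shell
docker ps
```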
This command does the same thing: it shows only the running containers. (In fact, docker ps is just a shorter version of docker container ls.)
If you want to see all containers, including stopped ones, you can use:
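Adding the `-a` (all) flag does this:

```shell
docker ps -a
```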
So far, we have covered how to:
- Check Docker images
- Run an image as a container
- View running containers
- View all containers
For this article, I would like to introduce one more important concept: Port Exposing.
What is port exposing?
Port exposing means making a port inside the container accessible from your host machine.
By default, when a container runs, it is isolated. Even if the application inside the container is running on a specific port (for example, port 80), you cannot access it directly from your browser unless you map that port to your host machine.
For example, suppose:
- Your container application runs on port 80
- You want to access it from your computer using port 8080
In that case, we map the host port 8080 to the container port 80.
This means:
Host Port → 8080
Container Port → 80
How Do We Do Port Mapping?
To do this mapping, we use the -p option with the docker run command:
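Using the Nginx example, the mapping looks like this (host port on the left, container port on the right):

```shell
docker run -d -p 8080:80 nginx
```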
This command starts an Nginx container in the background and maps your local port 8080 to port 80 inside the container.
Now, if you open your browser and go to:
http://localhost:8080
You will see the Nginx welcome page running from inside the container.
And that’s a wrap for today’s Docker guide! 🎉
By now, you should have a solid understanding of what Docker is, why it’s useful, and how to work with images, containers, and port mapping. You’ve learned how to check images and containers on your machine, run containers in detached mode, and expose container ports to access applications from your host computer.
This is just the beginning! In the next article, we’ll dive deeper into Docker, exploring volumes, Dockerfiles, and advanced container management. These concepts will help you take your Docker skills to the next level and make your applications even more robust and portable.
So stay tuned, and let’s continue our Docker journey together!


