Docker has become one of the most important tools in modern software development, especially for teams adopting DevOps, microservices, or cloud-native workflows. But if you’re just getting started, the idea of “containers” can feel confusing: how do they work? Why do developers use them? And why has Docker become the de facto standard for packaging and running applications?
In simple terms, Docker allows you to package an application and everything it needs - libraries, dependencies, and configuration - into a single, portable unit called a container. This container runs the same way on your laptop, on a server, or in the cloud, eliminating the classic problem of “it works on my machine.”
Before Docker came along, developers often relied on heavy virtual machines or manually configured environments. These were slow to start, consumed a lot of resources, and were hard to maintain. Docker changed this by introducing lightweight, fast, and consistent environments that can be created or destroyed in seconds.
This guide is designed for complete beginners. You’ll learn what Docker containers are, how to install Docker, how to run your first container, how to create your own container image, and how to manage containers effectively. By the end, you’ll have a solid foundation to start using Docker in real projects with confidence.
What Are Docker Containers?
Before you start creating and running containers, it’s important to understand what they actually are and why developers rely on them. A Docker container is essentially a lightweight, isolated environment that contains an application and everything required to run it: code, dependencies, runtimes, and configuration. Think of it as a neatly packaged box that behaves the same no matter where you open it.
Understanding Containerization
Traditional environments often rely on full virtual machines, each with its own operating system. Containers take a different approach. Instead of virtualizing hardware, they virtualize the operating system, which makes them far more efficient. Multiple containers can run on a single machine without consuming unnecessary resources.
Images vs Containers
This is a key beginner concept:
- A Docker image is like a blueprint or recipe. It defines what the container should contain.
- A Docker container is the running instance created from that image.
You can create many containers from one image, just like baking multiple cakes from the same recipe.
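The image-vs-container distinction is easy to see on the command line. The sketch below (assuming Docker is already installed, and using arbitrary container names web1 and web2) starts two independent containers from the same nginx image:

```shell
# Pull one image from Docker Hub.
docker pull nginx

# Start two separate containers from that single image.
docker run -d --name web1 nginx
docker run -d --name web2 nginx

# Both containers appear as distinct running instances of the same image.
docker ps --filter "name=web"
```

Each container has its own filesystem and process space, even though both were "baked" from the same recipe.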
How Docker Works at a High Level
Docker uses features built into the operating system - such as namespaces and cgroups - to keep processes separated and control resource usage. This ensures each container behaves independently while staying extremely lightweight.
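You can see cgroups at work through Docker's resource-limit flags. A small sketch (the container name "limited" is arbitrary):

```shell
# Cap the container at 256 MB of memory and half a CPU core.
# Under the hood, Docker configures cgroups to enforce these limits.
docker run -d --name limited --memory=256m --cpus=0.5 nginx

# Confirm the limits Docker recorded (memory in bytes, CPU in nano-CPUs).
docker inspect --format '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}' limited
```

The process inside the container cannot exceed these limits, yet no separate operating system was booted to achieve the isolation.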
Real-World Examples of Container Usage
Containers are commonly used for:
- Microservices
- Web servers and APIs
- Development environments
- Testing automation
- Cloud-native applications
In short, Docker containers simplify development, increase reliability, and speed up deployments.
Installing Docker
Before you can start creating and running containers, you’ll need Docker installed on your machine. The good news is that Docker supports all major operating systems and provides a simple installation process for each.
Platforms Supported by Docker
Docker Desktop is available for:
- Windows 10/11 (with WSL 2 support)
- macOS (Intel and Apple Silicon)
- Linux distributions such as Ubuntu, Debian, Fedora, and CentOS
Each platform has its own installation package, which you can download directly from the official Docker website.
Installing Docker Desktop (Windows & macOS)
For most beginners, Docker Desktop is the easiest way to get started. Simply download the installer and follow the setup wizard. Docker Desktop includes everything you need: the Docker Engine, a UI dashboard, and tools for managing images and containers.
Installing Docker Engine on Linux
Linux users can install Docker Engine using their package manager. The Docker website provides commands for each distribution, but the process usually involves:
- Adding Docker’s official repository
- Installing the Docker Engine
- Starting the Docker service
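As a concrete illustration, the steps above look roughly like this on Ubuntu. These commands follow Docker's official Ubuntu instructions at the time of writing; always check the documentation for your specific distribution before running them:

```shell
# 1. Add Docker's official repository and signing key.
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
  https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# 2. Install the Docker Engine packages.
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io

# 3. Start the Docker service and enable it at boot.
sudo systemctl enable --now docker
```

On Linux you may also want to add your user to the docker group so you can run docker without sudo.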
Verifying Your Installation
Once Docker is installed, you can confirm it's working by checking the version:
docker --version
If this command prints a version number, Docker is ready to use.
Your First Docker Container
Now that Docker is installed, it’s time to run your very first container. This is where the fun begins - Docker makes it possible to launch a complete, isolated environment with a single command.
Running a Simple Container
Docker provides an introductory image called hello-world that confirms your setup is working correctly. You can run it using:
docker run hello-world
When you execute this, Docker will:
- Check if the image exists locally
- Download it from Docker Hub (if missing)
- Create a container instance
- Run it and display a confirmation message
This simple process demonstrates Docker’s core workflow: pull → create → run.
Understanding What Just Happened
The docker run command performed several tasks automatically:
- Pulled the image from Docker Hub
- Set up a container from that image
- Executed the container’s default instructions
- Shut down the container once the task completed
Even though this example is small, it showcases how Docker isolates processes inside containers.
Pulling Images Manually
You can also download container images without running them:
docker pull ubuntu
This fetches the official Ubuntu image, which you can run later.
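Pulling without a tag fetches the default latest version. In practice it is safer to pin a specific tag, and you can then explore the image interactively:

```shell
# Pin a specific version instead of relying on the default "latest" tag.
docker pull ubuntu:22.04

# Start an interactive shell inside the image:
# -it attaches a terminal, --rm removes the container when you exit.
docker run --rm -it ubuntu:22.04 bash
```

Anything you do inside that shell stays in the container; exiting discards it, leaving the pulled image untouched for reuse.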
Viewing Running and Stopped Containers
To check which containers are currently running:
docker ps
To view all containers - including stopped ones:
docker ps -a
These commands help you understand what’s active in your Docker environment.
With this foundation, you're ready to create and run your own custom containers.
Creating Your First Dockerfile
Running prebuilt images is useful, but the real power of Docker comes from creating your own container images. This is done using a simple text file called a Dockerfile, which defines everything your application needs to run.
What Is a Dockerfile?
A Dockerfile is a set of instructions that tells Docker how to build an image. It specifies:
- The base image
- Files to copy
- Packages to install
- The command to run when the container starts
Once built, the resulting image can be shared, deployed, or run anywhere.
A Simple Dockerfile Example
Here’s a minimal example using a basic Python application:
FROM python:3.10
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]
This Dockerfile:
- Uses an official Python base image
- Sets /app as the working directory
- Copies your Python script into the image
- Defines the command to run the app
Building Your Custom Image
To build an image from your Dockerfile, run:
docker build -t my-first-app .
The -t flag tags your image with a name.
Running the Image
Once built, start your container:
docker run my-first-app
If your app.py prints “Hello, Docker!”, you’ll see the output in your terminal.
Creating your own images is the most important skill in Docker. It allows you to package any application - web apps, APIs, scripts, automation jobs - into standardized, portable environments.
Managing Docker Containers
Once you’ve created and run your own containers, you’ll need to know how to manage them. Docker provides an intuitive set of commands to start, stop, inspect, and remove containers and images. Learning these basics will help you stay in control of your local Docker environment.
Starting, Stopping, and Restarting Containers
If you want to stop a running container, use:
docker stop <container-id>
To start a stopped container:
docker start <container-id>
To restart it:
docker restart <container-id>
You can get the container ID by running docker ps.
Viewing Logs and Inspecting Containers
Logs help you debug and understand what’s happening inside your container:
docker logs <container-id>
To inspect detailed information, including networking and environment variables:
docker inspect <container-id>
These commands are essential when tracking issues or understanding container behavior.
Listing Images and Containers
To view all containers currently running:
docker ps
To see every container (including stopped ones):
docker ps -a
To list all images on your system:
docker images
Removing Images and Containers
Cleanup keeps your environment organized and saves disk space.
Remove a stopped container:
docker rm <container-id>
Remove an image:
docker rmi <image-name>
Be careful - removing an image that is still in use will cause errors until the associated containers are deleted.
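For bulk cleanup, Docker also provides a prune command that removes several kinds of unused resources at once:

```shell
# Remove all stopped containers, unused networks, and dangling images.
docker system prune

# Add -a to also remove images not referenced by any container (use with care).
docker system prune -a
```

Both variants prompt for confirmation before deleting anything, so they are safe to try on a development machine.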
With these management skills, you’re well-equipped to run, monitor, troubleshoot, and maintain your Docker environments confidently.
Using Docker Compose
As you grow more comfortable with Docker, you’ll quickly encounter scenarios where you need to run multiple containers together, such as a web server and a database. Managing them individually with separate docker run commands becomes inefficient. This is where Docker Compose comes in.
Docker Compose is a tool that lets you define and run multi-container applications using a single configuration file. It helps you manage complex setups easily, especially in development environments.
Why Docker Compose Is Useful
Docker Compose allows you to:
- Start multiple services with one command
- Automatically handle container networking
- Keep service configurations in a clean, reusable format
- Define environment variables, volumes, and ports in a single place
Instead of remembering long commands, everything lives in one simple YAML file.
Example docker-compose.yml File
Here’s a beginner-friendly example for a small web app and a Redis cache:
services:
  web:
    image: my-first-app
    ports:
      - "8080:8080"
  redis:
    image: redis
This defines two services - web and redis - that run together seamlessly.
Running Multi-Container Applications
To start everything defined in the file:
docker compose up -d
To stop and remove the containers:
docker compose down
Compose makes managing multi-service applications dramatically easier, especially when your projects grow in complexity.
Best Practices for Beginners
As you start building and running Docker containers, following best practices will help you avoid common pitfalls and ensure your images remain fast, secure, and easy to maintain. These guidelines are simple enough for beginners yet powerful enough to support real-world development.
Use Lightweight Base Images
Base images like alpine are much smaller than full OS images. Smaller images:
- Build faster
- Pull faster
- Use less storage
- Reduce attack surface
Whenever possible, choose minimal images unless your application requires a full distribution.
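For example, the Python Dockerfile from earlier could be rebuilt on a minimal base. One caveat worth knowing: alpine uses musl libc instead of glibc, so some compiled Python packages may need extra build steps.

```Dockerfile
# Same app as before, but on a much smaller alpine-based image.
FROM python:3.10-alpine
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]
```

For pure-Python scripts this usually works unchanged; for apps with native dependencies, a slim image (such as python:3.10-slim) is often a safer middle ground.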
Keep Dockerfiles Clean and Simple
Avoid unnecessary commands or installing large packages you don’t need. A clean Dockerfile is easier to maintain and reduces image size.
Good habits include:
- Combining related commands
- Keeping layers minimal
- Using .dockerignore to exclude unnecessary files
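A typical .dockerignore for a small project might look like this (adjust the entries to match what your project actually contains):

```
# .dockerignore - keep these out of the build context
.git
__pycache__/
*.pyc
node_modules/
.env
```

Excluding these files keeps builds fast and prevents secrets like .env files from accidentally ending up inside your image.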
Tag Images Properly
Avoid relying on the default latest tag. Use descriptive tags such as:
- v1, v1.1, prod, dev
This makes version control clearer and deployment safer.
Avoid Running Containers as Root
Running as root inside containers increases security risks. Use non-root users whenever possible - especially in production environments.
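One common pattern is to create an unprivileged user in the Dockerfile and switch to it before the app starts. A sketch based on the earlier Python example (the username appuser is arbitrary):

```Dockerfile
FROM python:3.10-slim
WORKDIR /app
COPY app.py .
# Create an unprivileged user and switch to it before running the app.
RUN useradd --create-home appuser
USER appuser
CMD ["python", "app.py"]
```

If the container is compromised, the attacker now holds an unprivileged account rather than root, which limits the damage they can do.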
Clean Up Unused Images and Containers
Over time, unused images and stopped containers consume disk space. Use cleanup commands regularly to keep your system healthy.
Following these best practices early on will help you build efficient, secure, and scalable containerized applications with confidence.
Common Beginner Mistakes to Avoid
When learning Docker for the first time, it’s normal to run into small issues that slow down development. Most of these problems come from misunderstanding how images, containers, ports, and layers work. By being aware of these common mistakes, you can avoid frustration and build smoother workflows from day one.
Confusing Images With Containers
A very frequent beginner mistake is treating images and containers as the same thing.
Remember:
- Images = blueprints
- Containers = running instances
If you update your code but don’t rebuild the image, your container will still use the old version.
Forgetting to Expose or Map Ports
Running an app inside a container doesn’t mean it’s automatically accessible on your machine. Beginners often forget to map ports using -p, resulting in apps that “don’t work.”
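The fix is the -p flag, which maps a port on your machine to a port inside the container. A quick sketch using the official nginx image:

```shell
# Map host port 8080 to container port 80 so the app is reachable locally.
docker run -d -p 8080:80 --name web nginx

# This request works only because of the -p mapping above;
# without it, nginx runs inside the container but nothing can reach it.
curl http://localhost:8080
```

The format is always -p host-port:container-port, so the same container port can be exposed on different host ports for different instances.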
Not Rebuilding the Image After Changes
Any time you modify files referenced by the Dockerfile, you need to rebuild the image. Running an old image leads to outdated behavior or missing features.
Overusing the latest Tag
Pulling or pushing images tagged as latest makes it difficult to know which version you're actually running. Always tag images clearly to avoid deployment confusion.
Creating Bloated Dockerfiles
Unnecessary packages, large base images, and too many layers can lead to slow builds and massive image sizes. Keeping it clean improves performance.
Avoiding these mistakes early makes the learning curve much smoother and helps you build reliable containerized applications.
Final Practical Demo - Build & Run a Complete Container
Now that you understand how Docker works, let’s walk through a simple end-to-end example. This quick demo will show you how to create an app, package it into a Docker image, and run it as a container. It’s one of the easiest ways to build confidence as a beginner.
Step 1: Create a Simple Application
Create a file named app.py:
print("Hello from inside a Docker container!")
This is the application you will containerize.
Step 2: Write a Dockerfile
Create a new file called Dockerfile in the same folder:
FROM python:3.10-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]
This tells Docker to use a lightweight Python image, copy your script, and run it.
Step 3: Build Your Docker Image
Run:
docker build -t hello-docker .
This creates a reusable image named hello-docker.
Step 4: Run Your Container
Now start your container:
docker run hello-docker
You should instantly see:
Hello from inside a Docker container!
This simple example demonstrates the full workflow: writing an app → creating a Dockerfile → building an image → running a container. Once you’ve mastered this process, you’re ready to build more powerful applications using Docker.
Learning Docker is one of the most valuable steps you can take as a developer or DevOps beginner. With containers, you can create consistent environments, simplify deployments, and run applications anywhere with ease. Once you understand how to build images, manage containers, and work with Docker Compose, you’re already on the path toward more advanced concepts like Kubernetes and cloud-native development.
But as your applications grow, managing containers manually or even handling Kubernetes clusters can quickly become complex. That’s where Atmosly helps teams streamline their workflow. Atmosly offers AI-powered Kubernetes troubleshooting, visual CI/CD pipelines, environment cloning, and cost insights that make scaling containerized applications much easier.
If you're ready to move beyond basic Docker usage and start deploying applications with confidence, Atmosly gives you the tools to build, launch, and manage modern infrastructure without the operational overload.
Start your journey today: sign up for Atmosly and simplify the way you run your containerized applications.