Simplifying Multi-Container Development with Docker Compose

Docker Compose | Odoo
2025-12-05

Docker Compose is one of the most important tools for working with multi-container Docker environments. Modern applications rarely run in a single container; most require a combination of services such as databases, backend APIs, workers, caching systems, and more. Managing each container manually quickly becomes complicated, which is where Docker Compose simplifies everything. With Docker Compose, you can define your entire application stack in a single docker-compose.yml file, including services, networks, volumes, environment variables, and container dependencies. Once defined, the entire environment can be started with just one command.

To understand Docker Compose better, we’ll use a real-world example: running Odoo (a popular ERP system) along with PostgreSQL, the database it depends on. This example clearly demonstrates how Docker Compose manages multiple containers, networking, data persistence, and service orchestration. This blog covers everything from Compose basics to writing and explaining each part of the Compose configuration for Odoo + PostgreSQL.

1. What is Docker Compose?

Docker Compose is a powerful command-line tool that simplifies running applications that require multiple containers. Instead of starting each container individually with long docker run commands, Compose lets you define the entire setup (services, networks, volumes, and environment variables) in one docker-compose.yml file.

Docker Compose allows you to:

    • Define multiple containers in one file
    • Manage them with simple commands (up, down, restart)
    • Handle networking automatically
    • Share environment variables
    • Mount volumes
    • Scale services

This makes Docker Compose an essential tool for developing, testing, and deploying multi-container applications efficiently.

2. Understanding docker-compose.yml

version: "3.9"

services:
  db:
    image: postgres:15
    container_name: odoo_db
    environment:
      POSTGRES_DB: postgres
      POSTGRES_USER: odoo
      POSTGRES_PASSWORD: odoo
    volumes:
      - db_data:/var/lib/postgresql/data
    restart: always

  web:
    image: odoo:17
    container_name: odoo_web
    depends_on:
      - db
    ports:
      - "8069:8069"
    volumes:
      - odoo_data:/var/lib/odoo
      - ./addons:/mnt/extra-addons
    environment:
      - HOST=db
      - USER=odoo
      - PASSWORD=odoo
    restart: always

volumes:
  db_data:
  odoo_data:


  • version – Declares which version of the Compose file syntax the file uses. Recent versions of Docker Compose (following the Compose Specification) treat this field as obsolete and ignore it, but it is still commonly included for backward compatibility with older tooling.
  • Services, Images, and Containers 

In Docker Compose, a service is essentially a container that runs a specific part of your application. Each service is linked to a Docker image, which is a pre-built package containing the application and its dependencies. When you start Docker Compose, it creates a container from the image for each service.

For example, a Compose file for Odoo might have two services:

services:
  web:
    image: odoo:17
  db:
    image: postgres:15

Here, the web service runs Odoo, and the db service runs PostgreSQL. Docker pulls these images and spins up containers automatically. This separation allows you to independently scale, update, or debug each part of your application without affecting the other.

  • depends_on – Ensures the PostgreSQL database container starts before Odoo. Note that depends_on only controls start order: it waits for the db container to start, not for PostgreSQL to be ready to accept connections. For true readiness, combine it with a healthcheck (covered later in this post).
  • ports – Exposes Odoo on port 8069 so you can access it through the browser. This maps the container port to the host machine, making the web interface reachable at http://localhost:8069/.
  • Environment Variables in Compose

Environment variables are key for configuration. They allow services to communicate and provide sensitive information such as database credentials without hardcoding them in the image.

For example:

services:
  web:
    environment:
      - HOST=db
      - USER=odoo
      - PASSWORD=odoo
  db:
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=odoo
      - POSTGRES_PASSWORD=odoo

The web service uses HOST, USER, and PASSWORD to connect to the db service. This ensures Odoo knows where and how to connect to the database. The db service defines its own environment variables so that PostgreSQL can initialize the database with the correct username and password. Using a .env file is even better, as it keeps credentials out of the Compose file and makes it easy to switch between development, staging, and production environments.
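As a sketch of the .env approach (the variable names here are illustrative), Compose automatically reads a .env file placed next to docker-compose.yml and substitutes its values wherever ${...} appears:

```yaml
# .env file contents (kept out of version control):
#   POSTGRES_USER=odoo
#   POSTGRES_PASSWORD=change-me

# docker-compose.yml fragment referencing those variables:
services:
  db:
    image: postgres:15
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
```

Switching environments then only requires swapping the .env file, not editing the Compose file itself.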

  • volumes 

Volumes allow containers to store data persistently. Without volumes, any data inside a container is lost when it stops or is removed.

volumes:
  odoo-web-data:
  odoo-db-data:

services:
  web:
    volumes:
      - odoo-web-data:/var/lib/odoo
      - ./addons:/mnt/extra-addons
  db:
    volumes:
      - odoo-db-data:/var/lib/postgresql/data

Here, odoo-web-data stores attachments, reports, and other files uploaded to Odoo, while odoo-db-data stores PostgreSQL’s database files. Mapping ./addons to /mnt/extra-addons allows developers to add or modify custom Odoo modules from their local machine without rebuilding the container.

    • odoo-db-data – A named volume specifically for storing the PostgreSQL database files. This keeps all database content safe and persistent outside of the container lifecycle.
    • odoo-web-data – A named volume for storing Odoo’s filestore and attachments. This includes uploaded files, reports, and other data generated by Odoo, ensuring it remains available even if the container is recreated.
  • Networks

Docker Compose automatically creates a default network for every project. All services defined in the Compose file are connected to this network unless you specify otherwise. In the default network, each service can communicate with other services using their service names as hostnames. For example, in an Odoo setup, the web service can connect to the db service simply by using the hostname db. Docker handles the DNS resolution internally, so you don’t need to manually configure IP addresses.

  • Running Compose
    • docker compose up -d – starts all services in the background (detached mode).
    • docker compose down – stops and removes all containers and networks (add -v to remove volumes as well).
    • docker compose restart web – stops and restarts only the Odoo service without affecting the database.
    • Scaling Services: Docker Compose allows you to scale services using the --scale flag. For example, you can run multiple instances of the Odoo web service to handle more requests:
docker compose up --scale web=3

   This starts three instances of Odoo. Note that scaling requires a load balancer to distribute traffic between instances effectively, and the scaled service must not set a fixed container_name or bind a fixed host port, since those values must be unique per container.

  • Logs and Debugging

The docker compose logs command allows you to view the output of a container’s processes, making it easier to understand what is happening inside each service. By default, it shows all the logs that have been generated since the container started, which is helpful for identifying startup errors or configuration issues.

docker compose logs web

This displays messages from Odoo, including database connection attempts, module loading, and any errors that occur during startup. You can also use the -f or --follow option to watch logs in real time:

docker compose logs -f web

This follows the Odoo logs as they are written. Running docker compose logs -f without a service name shows logs from all services at once, which helps when debugging interactions between containers, such as Odoo trying to connect to PostgreSQL.

3. Differences Between Dockerfile and Compose File

  • Dockerfile
    • Defines how to build a single container image.
    • Specifies a base image to start from (e.g., FROM odoo:17).
    • Installs dependencies and software required for the application.
    • Copies configuration files, addons, or custom modules into the image.
    • Defines the default command to run when the container starts (CMD or ENTRYPOINT).
    • Focuses on creating a reusable image that can be shared or deployed anywhere.
FROM odoo:17

ENV ODOO_DB_USER=odoo
ENV ODOO_DB_PASSWORD=odoo

USER root
RUN apt-get update && apt-get install -y \
    git \
    python3-pip \
    && rm -rf /var/lib/apt/lists/*

COPY ./custom_addons /mnt/extra-addons
COPY ./odoo.conf /etc/odoo/odoo.conf

RUN chown -R odoo:odoo /mnt/extra-addons /etc/odoo/odoo.conf

USER odoo

CMD ["odoo", "-c", "/etc/odoo/odoo.conf"]

    • FROM odoo:17 → Specifies the base image.
    • ENV → Sets environment variables like database credentials.
    • RUN → Runs shell commands, such as package installation, during the image build.
    • COPY → Adds custom Odoo modules and configuration files into the container.
    • RUN chown -R → Ensures proper permissions for Odoo to access the files.
    • USER odoo → Runs the container as the Odoo user instead of root.
    • CMD → Defines the default command to start Odoo using the configuration file.
  • Docker Compose File
    • Defines how to run multiple containers together as a complete application stack.
    • Manages services, networks, volumes, and environment variables.
    • Handles container dependencies with depends_on.
    • Controls data persistence through volumes for databases and application files.
    • Allows easy scaling of services (e.g., multiple web instances).
    • Provides commands to start, stop, restart, and view logs of containers.
    • Focuses on orchestrating containers at runtime, rather than building images.

4. Multi-environment Architecture

Docker Compose allows developers to manage different environments like development, staging, and production using override files or profiles.

    • Override files (docker-compose.override.yml) automatically extend the main Compose file. For example, in development, you might mount local volumes to edit code, while in production, you might skip this.
    • Profiles allow selective service activation. Services marked with a profile only start when that profile is enabled, reducing resource usage and simplifying testing.
services:
  web:
    image: odoo:17
    profiles: ["dev"]
  db:
    image: postgres:15

Running docker compose --profile dev up starts the services tagged with the dev profile in addition to services with no profile (like db above); without the flag, web would be skipped.
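An override file follows the same idea. As an illustrative sketch (the extra port and mount shown here are typical development conveniences, not part of the original setup), Compose merges docker-compose.override.yml into docker-compose.yml automatically on docker compose up:

```yaml
# docker-compose.override.yml – merged automatically with docker-compose.yml
services:
  web:
    # Expose Odoo's longpolling port only during development
    ports:
      - "8072:8072"
    # Mount local addons so code edits appear without rebuilding
    volumes:
      - ./addons:/mnt/extra-addons
```

In production you simply deploy without the override file (or point Compose at a different file with -f), so these development-only settings never reach the live stack.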

5. Advanced Networking

  • Custom Networks
    • Custom networks provide more control and isolation.
    • You can create multiple networks for different tiers of your application. For example, a frontend network for the web service and a backend network for worker services.
    • Only services connected to the same network can communicate, which improves security and prevents unintended traffic between services.
networks:
  frontend:
  backend:

services:
  web:
    image: odoo:17
    networks:
      - frontend
  db:
    image: postgres:15
    networks:
      - backend

In this setup, web cannot directly access db unless you explicitly connect them to a common network, isolating traffic and improving structure.
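For the Odoo example, web does need to reach db, so a sketch of a working layout attaches web to both networks while keeping db off the frontend network entirely:

```yaml
networks:
  frontend:
  backend:

services:
  web:
    image: odoo:17
    networks:
      - frontend
      - backend   # web can now resolve and reach db
  db:
    image: postgres:15
    networks:
      - backend   # db stays isolated from frontend traffic
```

This pattern keeps the database unreachable from anything on the frontend network while still letting the application tier talk to it.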

  • DNS and Service Aliases
    • Custom hostnames or aliases allow multiple ways to reach a container.
    • This is useful when different services expect different names or legacy configurations.
services:
  web:
    image: odoo:17
    networks:
      odoo-net:
        aliases:
          - odoo-app
  db:
    image: postgres:15
    networks:
      odoo-net:
        aliases:
          - postgres-db

networks:
  odoo-net:

The database container can now be reached by both hostnames: db and postgres-db. This is useful for several reasons:

    1. Compatibility – Some applications or configurations may expect a specific hostname, like postgres-db. The alias ensures those configurations work without changing the service name.
    2. Multiple aliases – You can assign multiple aliases to a container, allowing it to be referenced differently by multiple services.
    3. Network isolation – The alias only works within the network it is assigned to (odoo-net in this case), so it does not pollute the global DNS.
    4. Flexibility – If you change the service name in the Compose file later, the alias can maintain backward compatibility without affecting other containers.

6. Resource Limits & Container Scheduling Behaviors

Docker Compose allows you to control how many system resources each service can use. This prevents one container from dominating CPU or RAM, especially in multi-service applications, and helps maintain consistent, stable performance when multiple containers run on the same host. These configurations become even more critical in production environments to avoid resource contention and ensure predictable behavior. In Docker Swarm, resource constraints also influence container scheduling, as services are placed only on nodes that meet the specified resource requirements.

deploy:
  resources:
    limits:
      cpus: "1.0"
      memory: "1g"
    reservations:
      cpus: "0.5"
      memory: "512m"

Here, the container cannot exceed 1 CPU core and 1 GB of RAM, and it is guaranteed at least 0.5 CPU and 512 MB of RAM.

7. Healthchecks & dependency patterns

A running container does not always mean a healthy container, which is why healthchecks are so important in Docker Compose. Healthchecks allow you to define commands that test whether the service is actually ready to operate, such as checking a database connection or verifying an API endpoint. This avoids situations where one service tries to connect to another that isn’t fully initialized yet.

services:
  db:
    image: postgres
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "postgres"]
      interval: 10s
      retries: 5

This checks if Postgres is ready to accept connections. Using this, you can create dependency patterns such as delaying the startup of an app until the database is healthy. A plain depends_on does not wait for health, but depends_on with condition: service_healthy (supported in recent Compose versions) or external tools like wait-for-it.sh help orchestrate service startup. This leads to more resilient microservices that don’t crash when dependent services are not ready.
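Combining the healthcheck above with depends_on, a sketch of the gated startup looks like this; Compose delays starting web until db reports healthy:

```yaml
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: odoo
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "postgres"]
      interval: 10s
      retries: 5
  web:
    image: odoo:17
    depends_on:
      db:
        condition: service_healthy   # start web only once db passes its healthcheck
```

This replaces fragile retry loops in application code with a declarative startup order.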

8. Volume management

Docker uses volumes to store data outside containers, ensuring data is not lost when a container is removed or recreated. There are three main types of mounts we typically use, plus a backup pattern worth knowing:

1. Bind Mounts

Bind mounts allow you to map a specific folder from your local machine directly into a container. This means whatever you edit in your local directory updates instantly inside the container. This is very useful during development because you can modify files, refresh the page, and immediately see the results without rebuilding the image. Developers often use bind mounts for source code, configuration files, and log directories so that they can work faster and interact with files easily from the host system.

services:
  web:
    image: nginx
    volumes:
      - ./html:/usr/share/nginx/html

./html is your local folder, and /usr/share/nginx/html is the folder inside the container. Editing files locally updates the container in real time.

2. Named Volumes

Named volumes are fully managed by Docker and stored in Docker’s own volume directory. These volumes don’t depend on the structure of your local machine; instead, Docker handles their creation, storage, and persistence. Because they are stable and isolated from your system’s files, named volumes are ideal for storing database data or application data that must survive container restarts and updates. They are widely used in production environments because they provide reliable and persistent storage.

services:
  db:
    image: postgres:15
    volumes:
      - pg_data:/var/lib/postgresql/data

volumes:
  pg_data:

Docker creates a volume named pg_data, and all Postgres database files are stored inside it. Even if the container is removed, the database remains safe.

3. Tmpfs Volumes

Tmpfs mounts store data entirely in RAM instead of writing it to disk. Because of this, they offer extremely fast performance but are not persistent: data disappears as soon as the container stops. Tmpfs mounts are perfect for temporary files, caching, or storing sensitive information that should never be saved permanently. They are also useful in performance-heavy applications where quick read/write access is important. Note that tmpfs mounts are only available on Linux hosts.

services:
  app:
    image: node:18
    volumes:
      - type: tmpfs
        target: /tmp/cache

The folder /tmp/cache exists only in memory. It is very fast, but all data is lost when the container stops.

4. Backing Up Volumes

Sometimes you need to back up a Docker volume, especially before upgrading software, moving servers, or restoring critical data. Docker allows you to attach a volume to a temporary container and export it to a backup file. This process ensures you have a complete, restorable copy of your data. Volume backups are essential for databases, file uploads, and important application files like Odoo filestores.

docker run --rm \
  -v pg_data:/data \
  -v $(pwd):/backup \
  busybox \
  tar cvf /backup/db_backup.tar /data

Here, the pg_data volume is mounted inside a temporary container, and all its content is archived into db_backup.tar (add the z flag, tar czvf, if you also want gzip compression), creating a safe backup that can be restored later.
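Restoring follows the same pattern in reverse. This is a sketch that assumes the db_backup.tar archive created above sits in the current directory:

```
# Stop the database first so no files are written during the restore
docker compose stop db

# Unpack the archive back into the pg_data volume; the archive stores
# paths relative to /, so extracting at / restores /data inside the helper
docker run --rm \
  -v pg_data:/data \
  -v $(pwd):/backup \
  busybox \
  tar xvf /backup/db_backup.tar -C /

docker compose start db
```

Because the helper container is removed (--rm) after the extraction, the only lasting effect is the restored content of the volume.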

9. Converting Compose to Swarm

Docker Swarm extends Docker Compose files into full cluster orchestration using the deploy section. This enables scaling, rolling updates, and placement constraints.

deploy:
  replicas: 3
  update_config:
    parallelism: 1
    delay: 5s

This configuration ensures the service runs three instances and updates them one at a time with controlled delays. Placement rules help restrict services to certain nodes, for example running databases only on nodes with SSDs. The same Compose file can be deployed as a Swarm stack with docker stack deploy, making migration to clustering seamless.

10. Secrets & configs

Secrets allow you to pass sensitive data such as passwords securely into containers. Unlike environment variables, secrets do not show up in logs or docker inspect. Inside the container they appear as read-only files under /run/secrets/:

secrets:
  db_pass:
    file: ./db_pass.txt

Configs are similar but intended for non-sensitive configuration files. Both features keep application deployments secure by ensuring credentials are not hardcoded directly into Compose files or images. This is crucial for production-grade security and compliance.
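A sketch of wiring the secret into the Postgres service: the official postgres image supports reading the password from a file via the POSTGRES_PASSWORD_FILE variable, which pairs naturally with the mounted secret:

```yaml
services:
  db:
    image: postgres:15
    environment:
      # Tell the postgres image to read the password from the secret file
      POSTGRES_PASSWORD_FILE: /run/secrets/db_pass
    secrets:
      - db_pass   # mounted read-only at /run/secrets/db_pass

secrets:
  db_pass:
    file: ./db_pass.txt
```

The password never appears in the Compose file, the image, or the container’s environment listing.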

11. Multi-stage builds & auto-rebuild workflows

Multi-stage builds reduce the final image size by separating the build environment from the runtime environment. This lets you compile code in a large base image (such as Node or Go) and then copy only the final output into a smaller production image:

FROM node:18 as build
...
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html

When combined with tools like docker compose watch, changes in source code automatically re-trigger syncs, builds, and restarts. This dramatically speeds up development and reduces manual rebuild steps.
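docker compose watch is driven by a develop.watch section in the Compose file. As a sketch (the ./src and /app/src paths are illustrative), each rule pairs an action with a path to monitor:

```yaml
services:
  web:
    build: .
    develop:
      watch:
        # Copy source changes into the running container without a rebuild
        - action: sync
          path: ./src
          target: /app/src
        # Rebuild the image when the dependency manifest changes
        - action: rebuild
          path: package.json
```

Running docker compose watch then keeps the container in step with your working tree automatically.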

12. Compose file reuse

YAML anchors help eliminate repeated configuration across multiple services. You can define a reusable block:

x-common: &common
  restart: always
  logging:
    driver: json-file

Then extend it in any service:

services:
  api:
    <<: *common
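Putting the pieces together, a sketch with two services sharing the block (the api and worker service names are hypothetical); both inherit restart and logging from the anchor, so the policy lives in exactly one place:

```yaml
x-common: &common
  restart: always
  logging:
    driver: json-file

services:
  api:
    <<: *common        # inherits restart + logging
    image: odoo:17
  worker:
    <<: *common        # same defaults, no duplication
    image: odoo:17
    command: odoo --workers=4
```

Changing the logging driver or restart policy later means editing the anchor once instead of every service.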

13. Performance optimization for local + CI/CD

Performance in Docker-based workflows can be significantly improved by leveraging caching, efficient images, and smart volume strategies. BuildKit caching is particularly useful: it caches intermediate image layers so that unchanged steps don’t need to be rebuilt. For example, if your dependencies rarely change, Docker can reuse the cached layer rather than reinstalling them every time:

DOCKER_BUILDKIT=1 docker compose build

Using smaller base images like alpine instead of ubuntu reduces build size and improves startup times. For local development, bind mounts are invaluable: they allow the container to see code changes instantly without rebuilding, enabling rapid feedback.

In CI/CD pipelines, caching becomes even more crucial. Separating stable dependencies (like node_modules or Python packages) from frequently changing application code allows pipelines to reuse cached layers and drastically reduce build times. Similarly, optimizing the .dockerignore file ensures unnecessary files (logs, temporary files, etc.) are not included in builds, preventing wasted time and space.

# Dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]

In this setup, npm install is cached unless package.json or package-lock.json changes, speeding up repeated builds.

  • Using Docker-in-Docker (DinD) in CI/CD

Docker-in-Docker (DinD) allows Docker to run inside a container. This is useful for CI/CD pipelines that need to build images or run integration tests in isolation. For example, in GitLab CI or Jenkins, you might spin up a DinD service:

services:
  docker:
    image: docker:24-dind
    privileged: true

Other services in the pipeline can interact with this container via the Docker socket. DinD ensures reproducibility and sandboxed builds, but it comes with caveats: containers must run in privileged mode, which can be a security risk. Performance tuning may also be needed because DinD can have slower I/O compared to native Docker.

14. Debugging and Inspecting Container Lifecycle

Debugging containerized applications often requires a close look at how containers start, run, and interact. Docker provides multiple tools to help:

  • Real-time events:

docker events

This command streams events such as container start, stop, healthcheck failures, or crashes. Observing these events in real time helps identify dependency issues or unexpected restarts.

  • Inspecting containers:

docker inspect container_name

This provides detailed information about a container, including environment variables, mounted volumes, network settings, and healthcheck results.

  • Logs:

docker logs -f container_name

Streaming logs alongside events and inspection output gives a full picture of runtime behavior, making it easier to pinpoint failures or misconfigurations.

Conclusion

Docker Compose is an indispensable tool for modern multi-container application development. It simplifies orchestration by allowing developers to define, manage, and scale entire application stacks with minimal effort. From managing services, networks, volumes, and environment variables to handling healthchecks, resource limits, and debugging, Compose provides a complete framework for both development and production environments.

By combining Docker Compose with best practices such as efficient volume management, BuildKit caching, multi-stage builds, Docker-in-Docker setups for CI/CD, and careful resource planning, teams can achieve faster builds, more reliable deployments, and smoother workflows. Whether you are developing locally, running integration tests in CI pipelines, or deploying to production, Docker Compose ensures consistency, reproducibility, and scalability across the board.

In short, mastering Docker Compose not only reduces the complexity of managing multiple containers but also empowers developers and DevOps teams to build, test, and deploy applications more efficiently and confidently.