Month 5, Week 4

Full Stack Deployment with Docker Compose & CI/CD

Automating the Architect's Workflow

Module 1: Docker Compose - The Conductor

Orchestrating Multiple Containers

The Problem with `docker run`

Our NestJS application is now in a container. But a real application has multiple parts:

  • Our NestJS API container.
  • A PostgreSQL database container.
  • A Redis cache container.

Starting them manually with `docker run` is complex. You have to manage networks, volumes for data persistence, and environment variables, leading to long, error-prone commands.
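
For example, bringing up just the database and the API by hand might look something like this (the API image name is illustrative; the credentials match the Compose example later in this module):

                        # Create a shared network so the containers can reach each other by name
                        docker network create app-net

                        # Start PostgreSQL with a named volume for its data
                        docker run -d --name db --network app-net \
                          -e POSTGRES_USER=admin -e POSTGRES_PASSWORD=secret -e POSTGRES_DB=my_app \
                          -v pgdata:/var/lib/postgresql/data \
                          postgres:15-alpine

                        # Start the API, publishing port 3000 and pointing it at the database
                        docker run -d --name api --network app-net \
                          -p 3000:3000 -e DATABASE_HOST=db \
                          my-nestjs-api:latest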

The Solution: Docker Compose

Docker Compose is a tool for defining and running multi-container Docker applications. You use a single YAML file to configure all your application's services.

Analogy: Instead of telling each musician in an orchestra what to play individually, you give a single sheet of music (`docker-compose.yml`) to a conductor (Docker Compose) who orchestrates the entire performance.

Core Concepts of Docker Compose

  • Services: Each container is a "service." Your API, database, and cache are all services.
  • Networks: Compose creates a private network for your services, allowing them to communicate with each other easily using their service names as hostnames.
  • Volumes: Volumes allow you to persist data generated by your containers, which is essential for a database.

Anatomy of `docker-compose.yml`


                        version: '3.8' # Specifies the Compose file format version

                        services:
                          # Our NestJS API service
                          api:
                            build: . # Build the image from the Dockerfile in the current directory
                            ports:
                              - "3000:3000" # Map host port 3000 to container port 3000
                            environment:
                              - DATABASE_HOST=db
                            depends_on:
                              - db # Start the 'db' service before this one (note: this only orders container startup, it does not wait for Postgres to be ready)

                          # Our PostgreSQL database service
                          db:
                            image: postgres:15-alpine # Use a pre-built image from Docker Hub
                            environment:
                              - POSTGRES_USER=admin
                              - POSTGRES_PASSWORD=secret
                              - POSTGRES_DB=my_app
                            volumes:
                              - pgdata:/var/lib/postgresql/data # Persist the database data

                        volumes:
                          pgdata: # Defines the named volume
                    

Core Compose Commands

  • `docker-compose up`: Builds, (re)creates, starts, and attaches to the containers for your services. Add `-d` to run in detached mode.
  • `docker-compose down`: Stops and removes the containers and networks created by `up`. Named volumes are kept unless you add `-v`.
  • `docker-compose logs -f <service>`: Follows the log output for a specific service.
  • `docker-compose exec <service> <command>`: Executes a command inside a running container (e.g., `docker-compose exec api sh`).
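
Put together, a typical local development session looks like this (`api` is the service name from the example above):

                        docker-compose up -d            # build (if needed) and start the whole stack in the background
                        docker-compose logs -f api      # follow the API service's log output
                        docker-compose exec api sh      # open a shell inside the running API container
                        docker-compose down             # stop and remove the containers and network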

Module 2: Managing Configuration

Decoupling Code from Configuration

The Catastrophe of Hard-coded Secrets

You should NEVER commit secrets like database passwords, API keys, or JWT secrets directly into your source code or `docker-compose.yml` file.

Doing so is a massive security vulnerability. Your code is meant to be shared; your configuration is not.

The Solution: Environment Variables

We use environment variables to inject configuration into our application at runtime. This decouples the code from the configuration.

For local development, we create a `.env` file to store these variables.

This `.env` file MUST be added to your `.gitignore` and `.dockerignore`.


                        # .env file
                        DATABASE_USER=admin
                        DATABASE_PASSWORD=secret
                        JWT_SECRET=my_super_long_and_random_jwt_secret
                    
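
The matching ignore entry is a single line, added to both `.gitignore` and `.dockerignore`:

                        # .gitignore and .dockerignore
                        .env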

Using `.env` Files with Docker Compose

Docker Compose automatically looks for and loads a file named `.env` in the same directory.


                        # docker-compose.yml
                        services:
                          db:
                            image: postgres:15-alpine
                            environment:
                              # These variables are automatically substituted from the .env file
                              - POSTGRES_USER=${DATABASE_USER}
                              - POSTGRES_PASSWORD=${DATABASE_PASSWORD}
                              # ...
                          api:
                            build: .
                            environment:
                              - JWT_SECRET=${JWT_SECRET}
                              # ...
                     

In production, these variables would be set securely by your cloud provider or deployment system, not from a file.
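
For example, a step in a GitHub Actions deployment job (GitHub Actions is introduced in Module 3) can read a value from the repository's encrypted secrets store and expose it as an environment variable; the secret name and the script being run here are illustrative:

                        - name: Run database migrations
                          run: npm run migration:run      # assumed package.json script, shown only as an example
                          env:
                            DATABASE_PASSWORD: ${{ secrets.DATABASE_PASSWORD }}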

Mid-Lecture Knowledge Check

Module 3: CI/CD with GitHub Actions

The Automated Assembly Line

What is CI/CD?

  • Continuous Integration (CI): The practice of automatically building and testing your application every time a developer pushes a code change. The goal is to catch integration errors early.
  • Continuous Delivery/Deployment (CD): The practice of automatically preparing every change that passes the CI stage for release (Delivery), or going one step further and releasing it to a testing or production environment automatically (Deployment).

CI/CD is about creating a fast, reliable, and automated pipeline from code to production.

Introduction to GitHub Actions

GitHub Actions is a CI/CD platform built directly into GitHub. It allows you to automate your workflows in response to events (like a `push` or `pull_request`).

Workflows are defined in YAML files inside a special `.github/workflows` directory in your repository.

Core Concepts of GitHub Actions

  • Workflow: The entire automated process, defined in a `.yml` file.
  • Event: The trigger that starts a workflow (e.g., `on: [push, pull_request]`).
  • Job: A set of steps that execute on a virtual machine (a "runner").
  • Step: An individual task within a job. A step either runs a shell command (`run`) or invokes a reusable action (`uses`).
  • Action: A reusable, pre-packaged unit of code (e.g., `actions/checkout@v3`).

Our CI Pipeline Workflow

For our NestJS project, a typical CI workflow runs a single job that performs the following steps:

  1. Checkout the code.
  2. Set up the correct Node.js version.
  3. Install NPM dependencies.
  4. Run the linter to check for code style issues.
  5. Run all the automated tests (unit, integration, e2e).
  6. (Optional) Build the production Docker image (a sample step is sketched after the example workflow below).

If any of these steps fail, the entire workflow fails, and the developer is notified immediately.

Example GitHub Actions Workflow

`.github/workflows/ci.yml`


                        name: NestJS CI Pipeline

                        on: [push, pull_request]

                        jobs:
                          build-and-test:
                            runs-on: ubuntu-latest

                            steps:
                            - name: Checkout repository
                              uses: actions/checkout@v3

                            - name: Set up Node.js
                              uses: actions/setup-node@v3
                              with:
                                node-version: 18

                            - name: Install dependencies
                              run: npm install

                            - name: Run linter
                              run: npm run lint

                            - name: Run tests
                              run: npm run test
                     

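The workflow above stops after the tests. To cover the optional step 6 (building the production Docker image), a step along these lines could be appended to the same job; the image name is illustrative and pushing to a registry is not shown:

                        - name: Build Docker image
                          run: docker build -t my-nestjs-api:${{ github.sha }} .
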
In-Class Practical Exercise

Orchestrating the Full Stack

Your task is to create a complete `docker-compose.yml` file that orchestrates your NestJS API and a PostgreSQL database, using an `.env` file for configuration.

  1. Create a `.env` file and define variables for `POSTGRES_USER`, `POSTGRES_PASSWORD`, and `POSTGRES_DB`.
  2. Create a `docker-compose.yml` file.
  3. Define a `db` service using the `postgres:15-alpine` image.
    • Use the `environment` key to pass your `.env` variables to the container.
    • Define a named `volume` to persist the PostgreSQL data.
  4. Define an `api` service.
    • Use `build: .` to build it from your existing `Dockerfile`.
    • Map host port `3000` to container port `3000`.
    • Use `depends_on` to ensure the `db` service starts first.
    • Pass the necessary database connection variables to the API via the `environment` key.
  5. Run `docker-compose up` and verify that both your API and database start successfully.

Final Knowledge Check