Month 5, Week 3

Introduction to Docker & Containerization

Shipping Your Code with Confidence

Module 1: The Problem We Must Solve

"But it works on my machine!"

The Architect's Nightmare

Your application works perfectly on your laptop. But when you deploy it to a server, or when a new developer on the team tries to run it, it crashes.

Why? The environment is different.

  • Your laptop has Node.js v18.1, but the server has v16.9.
  • You have a system dependency (like a graphics library) installed that the server is missing.
  • Environment variables are different.

This inconsistency is a massive source of bugs, lost time, and frustration.

The Old Solution: Virtual Machines (VMs)

A VM emulates an entire computer, including a full guest operating system. This provides total isolation.

Analogy: If you need a place to live, you build a brand new, separate house with its own foundation, plumbing, and electrical systems.

Problem: VMs are huge (gigabytes), slow to boot, and resource-heavy. Running several VMs on one machine is very inefficient.

The Modern Solution: Containers (Docker)

A container packages up your application code along with all its dependencies, but it shares the host operating system's kernel.

Analogy: If you need a place to live, you move into an apartment. You have your own private space, but you share the building's foundation, plumbing, and electrical systems.

Benefit: Containers are tiny (megabytes), boot in seconds, and are incredibly efficient.

Module 2: Setting Up the Docker Environment

Installing the Engine

Docker Desktop (macOS & Windows)

For macOS and Windows, the easiest way to get started is with Docker Desktop.

  1. Go to docker.com/products/docker-desktop
  2. Download the installer for your operating system.
  3. Follow the installation wizard. It will set up the entire Docker environment for you.
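Once Docker Desktop is running, you can confirm the CLI is available from a terminal (exact version numbers will vary with your install):

```shell
# Check that the Docker CLI and the Compose plugin are on your PATH
docker --version
docker compose version
```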

Docker Engine (Linux)

On Linux, you install the Docker Engine directly.

For Ubuntu/Debian:


                        # Update package index and install prerequisites
                        sudo apt-get update
                        sudo apt-get install ca-certificates curl gnupg

                        # Add Docker's official GPG key
                        sudo install -m 0755 -d /etc/apt/keyrings
                        curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
                        sudo chmod a+r /etc/apt/keyrings/docker.gpg

                        # Set up the repository
                        echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

                        # Install Docker Engine
                        sudo apt-get update
                        sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
                    
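A common optional post-install step on Linux is adding your user to the `docker` group so you can run `docker` without `sudo`:

```shell
# Allow the current user to run docker without sudo
sudo usermod -aG docker $USER

# Apply the new group membership in this shell (or log out and back in)
newgrp docker
```

Note that membership in the `docker` group grants root-equivalent access to the host, so only add trusted users.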

Verification

Once installed, open a new terminal and run this command to verify everything is working:


                        docker run hello-world
                    

You should see a message starting with "Hello from Docker!" This confirms your installation is successful.
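That test run already produced both an image and a container, which you can inspect:

```shell
# The image Docker pulled to run the test
docker images hello-world

# The container it created (now exited); -a includes stopped containers
docker ps -a
```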

Mid-Lecture Knowledge Check

Module 3: The Dockerfile

The Blueprint for Your Container

What is a `Dockerfile`?

A `Dockerfile` is a simple text file that contains a list of instructions for how to build a Docker image.

  • An **Image** is a blueprint: a read-only template containing your application, its dependencies, and the instructions for what to run.
  • A **Container** is a running instance of an image. You can run many containers from the same image.
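To make the image/container distinction concrete, here is a sketch of starting two containers from one image (it assumes you have already built an image tagged `my-nest-app`, as we do later in this module):

```shell
# One image, many containers: each `docker run` starts a fresh instance,
# here mapped to different host ports.
docker run -d --name app-1 -p 3001:3000 my-nest-app
docker run -d --name app-2 -p 3002:3000 my-nest-app

# Both containers appear, each with its own ID and port mapping
docker ps
```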

Core `Dockerfile` Instructions

  • `FROM`: Specifies the base image to start from (e.g., an official Node.js image).
  • `WORKDIR`: Sets the working directory for subsequent commands.
  • `COPY`: Copies files from your host machine into the image.
  • `RUN`: Executes a command inside the image during the build process (e.g., `npm install`).
  • `EXPOSE`: Informs Docker that the container listens on a specific network port.
  • `CMD`: Provides the default command to run when a container is started from the image.

A Simple `Dockerfile` for a Node App


                        # Use an official Node.js runtime as a parent image
                        FROM node:18-alpine

                        # Set the working directory in the container
                        WORKDIR /usr/src/app

                        # Copy package.json and package-lock.json
                        COPY package*.json ./

                        # Install app dependencies
                        RUN npm install

                        # Copy the rest of your app's source code
                        COPY . .

                        # Document that the app listens on port 3000 (actually published at run time with -p)
                        EXPOSE 3000

                        # Define the command to run your app
                        CMD [ "node", "src/main.js" ]
                    

The `.dockerignore` File

Similar to `.gitignore`, a `.dockerignore` file tells Docker which files and folders to exclude when copying files into the image.

You should ALWAYS ignore `node_modules` and other local artifacts.


                        # .dockerignore
                        node_modules
                        npm-debug.log
                        dist
                        .git
                        .env
                     

Building and Running

Now we use the Docker CLI to build the image and run the container.


                        # Build the image from the Dockerfile in the current directory
                        # The -t flag "tags" (names) the image
                        docker build -t my-nest-app .

                        # Run a container from the image
                        # The -p flag "publishes" a port, mapping host port 3000 to container port 3000
                        docker run -p 3000:3000 my-nest-app
                    
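A few everyday lifecycle commands while you iterate, sketched against the `my-nest-app` image built above (the container name `my-api` is just an illustrative choice):

```shell
# Run detached (-d) with a friendly name
docker run -d -p 3000:3000 --name my-api my-nest-app

# Tail the application logs (Ctrl+C to stop following)
docker logs -f my-api

# Stop and remove the container when done
docker stop my-api
docker rm my-api
```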

Module 4: Production-Grade Dockerfiles

Multi-Stage Builds for Security & Size

The Problem with a Simple `Dockerfile`

Our simple `Dockerfile` has two major problems for production:

  1. Large Image Size: It includes all our `devDependencies`, TypeScript files, and the entire `node_modules` folder, making the final image huge.
  2. Poor Security: It contains our source code and build tools, which are not needed to run the application and increase the potential attack surface.

The Solution: Multi-Stage Builds

A multi-stage build uses multiple `FROM` instructions in a single `Dockerfile`. Each `FROM` begins a new "stage". You can selectively copy artifacts from one stage to another, leaving behind everything you don't need.

Multi-Stage `Dockerfile` for NestJS


                        # ---- Stage 1: Build ----
                        FROM node:18-alpine AS builder

                        WORKDIR /app
                        COPY package*.json ./
                        
                        # Install all dependencies, including devDependencies
                        RUN npm install

                        COPY . .
                        
                        # Build the production JavaScript files
                        RUN npm run build

                        # ---- Stage 2: Production ----
                        FROM node:18-alpine

                        WORKDIR /app
                        
                        # Only copy the production dependencies manifest
                        COPY package*.json ./

                        # Install ONLY production dependencies
                        RUN npm install --omit=dev

                        # Copy the compiled code from the 'builder' stage
                        COPY --from=builder /app/dist ./dist

                        EXPOSE 3000
                        
                        # Run the compiled JavaScript code
                        CMD [ "node", "dist/main.js" ]
                    
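To see the payoff, build the multi-stage version under a distinct tag and compare image sizes. Exact numbers depend on your app, but the multi-stage image is typically a fraction of the single-stage one:

```shell
# Build the multi-stage image with its own tag
docker build -t my-nest-app:multi .

# List all tags of the repository and compare the SIZE column
docker images my-nest-app
```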

In-Class Practical Exercise

Containerizing Your NestJS Application

Your task is to take the NestJS application you've been building and write a complete, multi-stage `Dockerfile` to containerize it.

  1. In the root of your NestJS project, create a file named `Dockerfile`.
  2. Create a `.dockerignore` file and add `node_modules`, `dist`, `.git`, and any other local artifacts.
  3. Write a multi-stage `Dockerfile`:
    • The first stage (named `builder`) should start `FROM node:18-alpine`.
    • In the builder, copy `package.json`, run `npm install`, copy the rest of the source code, and run `npm run build`.
    • The second, final stage should also start `FROM node:18-alpine`.
    • In the final stage, copy `package.json`, run `npm install --omit=dev` to get only production dependencies.
    • Copy the compiled `dist` folder from the `builder` stage using `COPY --from=builder ...`.
    • `EXPOSE` the correct port and set the `CMD` to run your application.
  4. Build your Docker image: `docker build -t my-nest-api .`
  5. Run your container: `docker run -p 3000:3000 my-nest-api`
  6. Test your running, containerized application using Postman.
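If you prefer the terminal to Postman, a quick smoke test with `curl` works too (assuming your app serves a route at `/`):

```shell
# -i prints the response headers along with the body
curl -i http://localhost:3000/
```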

Final Knowledge Check