Problem

A fine line separates our local machines from being the birthplace of innovation or the graveyard of well-intentioned projects. Given the stakes, it's strange that most developers and organizations don't treat local development as a real priority. To make my point, see how many of these struggles you relate to:

  1. You were diligent and documented your setup perfectly in your README... six months ago. Now half the steps are wrong, and you're debugging your own instructions on a new machine.
  2. Your code runs flawlessly locally. Yet you'll spend half your development cycle deploying and debugging in a production-like environment (or, let's be honest... in production).
  3. A new team member joins. Day one becomes week one as they navigate undocumented dependencies, missing environment variables, and the Sisyphean reality of onboarding, doled out as "oh yeah, you also need to install..."
  4. You feel it's time to return to that world-changing project you stopped working on for some unknown reason. But you spend the first hour trying to figure out how to get it running, and doom its already uncertain revival in favor of a new, better project that you definitely won't abandon.
  5. You enter negotiations with the devil himself to keep your outdated laptop alive rather than give in to that scheduled company upgrade, because migrating is a minefield you'd end up fruitlessly navigating. Besides, it's not like you've used your soul once since LLMs came around.
  6. Your team is now evenly split between Apple Silicon and Intel processors. Time to give your stakeholders the bad news: you're shutting down the service because on-call support is officially impossible. It was a good run.

Years of debugging convoluted local systems have given me a couple of core principles:

  • Automation is the preventative medicine against the disease that is technical debt.
  • Executable documentation is among the only documentation you can consistently rely on.

TL;DR: The 5-step process

Here's what we'll build to solve these development environment headaches:

  1. Create a personalized base image with your preferred tools, shell configurations, and dotfiles.
  2. Build project-specific containers that extend your base image with only the tools needed for that project.
  3. Mount your source code so changes sync instantly between container and host.
  4. Use helper scripts to automate the build and run process for one-command setup.
  5. Enjoy consistent environments across all projects, team members, and machines.

The approach: layered docker development

Most Docker development approaches fall into two camps: project-specific containers that rebuild everything from scratch (slow, inconsistent), or heavy pre-built containers that feel generic and impersonal.

This article demonstrates a third way: creating a personalized base image with your preferred tools and dotfiles, then extending it for specific projects. This gives you Docker's reproducibility without sacrificing the comfort of your customized environment; the speed of familiar tools with the portability of containers.

We won't dive into performance optimization, production deployment, or complex orchestration. Those are important but beyond our scope. Instead, we'll focus on building a template for consistent, reproducible development environments that feel as comfortable as your native setup.

Prerequisites

You'll need:

  • Docker installed and running.
  • Some Docker familiarity (comfortable with docker build and docker run).
  • Basic command line skills (running commands, navigating directories).

Helpful but not required: organized dotfiles (we'll briefly cover dotfile setup later).

The brass tacks of Docker-based development

Let's outline the advantages and challenges we'll face with this proposed methodology.

Advantages

  • Your local development container more closely mirrors production containers than your bespoke local development setup.
  • Docker gives us much better reproducibility. We should expect commands like docker build, docker run, and docker-compose up to work consistently across machines.
  • We can achieve a self-documenting workflow since our dependencies are explicitly declared in Dockerfiles.
  • Multiple projects with conflicting dependencies can coexist peacefully.
  • We have a version controlled development environment because the whole setup is now code.

Challenges

  • Volume mounts can be slower than native file access.
  • Container debugging requires basic Linux skills (but come on... you should have, or want, those).
  • Containers are ephemeral; customizations need persistence strategies.
  • We need to manage IDE integration complexity. Modern IDEs handle this better, but setup remains non-trivial.

Building your development foundation

The key to this approach is creating a carefully crafted base image with your preferred tools, then extending it for specific projects rather than starting fresh each time. We'll mount source code into containers while keeping it accessible to local IDEs—changes in either location sync instantly.

A quick aside on managing dotfiles

First, a suggestion for a prerequisite that lets our containers come pre-configured with the tooling and settings we already prefer locally.

That recommendation is the tool stow, which assists with dotfile management. Check my referenced sources at the end of this article if you want to learn more about stow. All we need to understand now is that stow keeps our dotfiles (i.e. .zshrc, .vim, etc.) in a dedicated "dotfiles" directory and symlinks them into our home directory.

This is useful to us because Docker can only access files in its "build context" (the directory where you run docker build). Having a common place for our dotfiles means we can easily copy them into a project's build context.
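Conceptually, what stow does when run from the dotfiles directory is create symlinks from $HOME back into that directory. The sketch below demonstrates the same effect by hand with `ln -s` in a throwaway temp directory (paths and file contents are illustrative):

```shell
# Simulate a home directory containing a dotfiles repo.
home="$(mktemp -d)"
mkdir -p "$home/dotfiles"
echo 'export EDITOR=vim' > "$home/dotfiles/.zshrc"

# stow would create a link like $HOME/.zshrc -> dotfiles/.zshrc;
# done manually, that's:
ln -s "dotfiles/.zshrc" "$home/.zshrc"

readlink "$home/.zshrc"   # -> dotfiles/.zshrc
cat "$home/.zshrc"        # the config is readable through the link
```

The payoff is exactly what the article relies on: one real directory (`dotfiles`) holds every config, so there is a single place to copy from when assembling a Docker build context.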

So after using stow, my home directory looks something like this:

/home/user
├── .bash_history
├── .bash_logout
├── .bashrc -> dotfiles/.bashrc # notice the symlink
├── .bashrc.bak
├── .config -> dotfiles/.config # notice the symlink
├── Desktop
├── Documents
├── dotfiles
│   ├── .bashrc
│   ├── .config
│   ├── .gitignore
│   ├── .oh-my-zsh
│   ├── README.md
│   └── .zshrc
├── .gitconfig
├── Music
├── .oh-my-zsh -> dotfiles/.oh-my-zsh # notice the symlink
├── Pictures
├── Projects
├── Public
├── .ssh # this is a dotfile, but we don't want to be copying .ssh keys
├── .zcompdump # Don't need to stow this either. Just for caching to improve startup time.
├── .zsh_history # No need to stow this unless keeping your command history saved is useful to you
├── .zshrc -> dotfiles/.zshrc # notice the symlink
└── .zshrc.pre-oh-my-zsh -> dotfiles/.zshrc

Notice that my dotfiles are in a shared directory called "dotfiles" and are being symlinked back to my $HOME.

Our dev-base container

Let's talk about our dev-base image that will bring our local preferred tools and settings into our project's containers.

First things first, we'll create a Dockerfile with a preferred image. I tend to work out of Debian, so I develop most of my projects in a Debian environment, but this base image can be set to match whatever your preferred or required environment is.

We do this by defining our FROM command like so:

FROM debian:bookworm-slim

For our base image to be useful, we need to install our preferred development tools so we don't have to manually replicate these preferences across every development container.

Many of these tools may already come pre-installed on your image, but it doesn't hurt to specify them for the sake of clarity. Here's an example:

RUN apt-get update && apt-get install -y \
    git \
    curl \
    wget \
    zsh \
    vim \
    nano \
    build-essential \
    unzip \
    ca-certificates \
    sudo \
    htop \
    tree \
    jq \
    openssh-client \
    gnupg2 \
    less \
    netcat-openbsd \
    dnsutils \
    ripgrep \
    fd-find \
    && rm -rf /var/lib/apt/lists/*

I am installing tools like tree, htop, zsh, and vim, which I use frequently and want available in all of my development environments.

Next, we should create a non-root user to avoid permission issues with mounted volumes:

ARG USERNAME=devuser
# UID 1000 typically matches the first user on Linux systems
ARG USER_UID=1000
ARG USER_GID=$USER_UID

RUN groupadd --gid $USER_GID $USERNAME \
    && useradd --uid $USER_UID --gid $USER_GID -m $USERNAME \
    && echo "$USERNAME ALL=(ALL) NOPASSWD: /usr/bin/apt-get, /usr/bin/apt" > /etc/sudoers.d/$USERNAME
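The UID 1000 default matters for bind mounts: if your host account uses a different UID, files the container writes into a mounted volume will show up owned by someone else on the host. A quick check of your host IDs, with a hypothetical rebuild command shown in the comments:

```shell
# Files created in a bind mount carry the container user's UID/GID, so
# they should match the host user's. Check what your host actually uses:
host_uid="$(id -u)"
host_gid="$(id -g)"
echo "host uid=$host_uid gid=$host_gid"

# Hypothetical: rebuild the base image so the container user matches.
#   docker build -f Dockerfile.base \
#     --build-arg USER_UID="$host_uid" \
#     --build-arg USER_GID="$host_gid" \
#     -t dev-base:latest .
```

Since the Dockerfile declares USER_UID and USER_GID as ARGs, overriding them at build time requires no edits to the file itself.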

We can switch to our user context in our Dockerfile for user-specific installations in their home directory.

For example, I prefer developing with an add-on to zsh called oh-my-zsh, along with other popular plugins and themes for my shell.

USER $USERNAME
WORKDIR /home/$USERNAME

# Install oh-my-zsh
RUN sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)" "" --unattended

# Install popular zsh plugins
RUN git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions \
    && git clone https://github.com/zsh-users/zsh-syntax-highlighting.git ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-syntax-highlighting

# Install Spaceship theme
RUN git clone https://github.com/spaceship-prompt/spaceship-prompt.git ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/themes/spaceship-prompt --depth=1 \
    && ln -s ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/themes/spaceship-prompt/spaceship.zsh-theme ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/themes/spaceship.zsh-theme

With this, we can expect our workspace to have our preferred terminal setup.

Next, let's copy in our dotfiles with our preferred settings and configurations for our tools. Note that in this Dockerfile we are assuming our "dotfiles" directory sits in the same directory as our Dockerfile. A reminder from earlier: Docker cannot copy anything outside of its build context.

In a shell script that we'll review later, we'll manage the copying and cleaning up of our dotfiles into the directory where our base Dockerfile is.


# These are just two examples of configurations I want copied in. 
# I don't need every dotfile, but as time passes and my tooling changes, 
# I can always add additional COPY lines for future dotfiles.
COPY --chown=$USERNAME:$USERNAME dotfiles/.zshrc /home/$USERNAME/.zshrc
COPY --chown=$USERNAME:$USERNAME dotfiles/.vimrc /home/$USERNAME/.vimrc

Lastly, we need to set up a workspace directory where our project will live when we mount it to this container. Since we've configured zsh with oh-my-zsh and our preferred plugins, let's also set it as our default shell.

RUN mkdir -p /home/$USERNAME/workspace

WORKDIR /home/$USERNAME/workspace

CMD ["/bin/zsh"]

The whole thing will look like this:

FROM debian:bookworm-slim

RUN apt-get update && apt-get install -y \
    git \
    curl \
    wget \
    zsh \
    vim \
    nano \
    build-essential \
    unzip \
    ca-certificates \
    sudo \
    htop \
    tree \
    jq \
    openssh-client \
    gnupg2 \
    less \
    netcat-openbsd \
    dnsutils \
    ripgrep \
    fd-find \
    && rm -rf /var/lib/apt/lists/*

ARG USERNAME=devuser
ARG USER_UID=1000
ARG USER_GID=$USER_UID

RUN groupadd --gid $USER_GID $USERNAME \
    && useradd --uid $USER_UID --gid $USER_GID -m $USERNAME \
    && echo "$USERNAME ALL=(ALL) NOPASSWD: /usr/bin/apt-get, /usr/bin/apt" > /etc/sudoers.d/$USERNAME

USER $USERNAME
WORKDIR /home/$USERNAME

# Install oh-my-zsh
RUN sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)" "" --unattended

# Install popular zsh plugins
RUN git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions \
    && git clone https://github.com/zsh-users/zsh-syntax-highlighting.git ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-syntax-highlighting

# Install Spaceship theme for terminal
RUN git clone https://github.com/spaceship-prompt/spaceship-prompt.git ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/themes/spaceship-prompt --depth=1 \
    && ln -s ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/themes/spaceship-prompt/spaceship.zsh-theme ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/themes/spaceship.zsh-theme

COPY --chown=$USERNAME:$USERNAME dotfiles/.zshrc /home/$USERNAME/.zshrc
COPY --chown=$USERNAME:$USERNAME dotfiles/.vimrc /home/$USERNAME/.vimrc

RUN mkdir -p /home/$USERNAME/workspace

WORKDIR /home/$USERNAME/workspace

CMD ["/bin/zsh"]

To be good stewards of this beautiful Dockerfile we've created, and to fulfill our mandate of "executable documentation", let's manage the building and configuring of the image with a bash script (rather than some instructions in a README).

We need our script to do the following:

  1. Copy our "dotfiles" directory into our local directory so our Dockerfile's COPY command will work as intended.
  2. Build our docker image so other containers can later pull from this local image.
  3. Clean up our temporary dotfiles directory.

Here's my version of the build.sh script:

#!/bin/bash

set -euo pipefail

# Check if DOTFILES_DIR env variable is set, if not prompt user
if [[ -z "${DOTFILES_DIR:-}" ]]; then
    read -p "Enter dotfiles directory path: " -r DOTFILES_DIR
    DOTFILES_DIR="${DOTFILES_DIR/#\~/$HOME}"
    export DOTFILES_DIR
fi

if [[ ! -d "$DOTFILES_DIR" ]]; then
    echo "Error: Dotfiles directory not found: $DOTFILES_DIR" >&2
    exit 1
fi

echo "Using dotfiles from: $DOTFILES_DIR"

trap "rm -rf ./dotfiles" EXIT

# -T copies the directory's contents rather than nesting it (GNU cp;
# on macOS/BSD, use: cp -R "$DOTFILES_DIR/." ./dotfiles)
cp -rT "$DOTFILES_DIR" "./dotfiles"

# Build the image, tagging it with both "latest" and a date-stamped tag
BUILD_DATE=$(date +%Y%m%d)
docker build -f Dockerfile.base \
  -t dev-base:latest \
  -t dev-base:$BUILD_DATE .

echo "✅ Built dev-base:latest and dev-base:$BUILD_DATE"

echo "Cleaning up temporary dotfiles copy (handled by the EXIT trap)..."
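One detail in the script worth calling out: `read` performs no tilde expansion, so a path typed as `~/dotfiles` would otherwise be taken literally. The `${DOTFILES_DIR/#\~/$HOME}` line handles this with a bash pattern substitution anchored at the start of the value:

```shell
# bash-only: ${var/#pattern/replacement} substitutes only at the start
# of the value, turning a literal "~/dotfiles" into an absolute path.
DOTFILES_DIR='~/dotfiles'        # what `read` hands us, tilde unexpanded
DOTFILES_DIR="${DOTFILES_DIR/#\~/$HOME}"
echo "$DOTFILES_DIR"             # e.g. /home/you/dotfiles
```

Without this substitution, the later `[[ ! -d "$DOTFILES_DIR" ]]` check would fail for tilde-prefixed input even when the directory exists.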

After running it, we have a dev-base image on our local machine that future project-specific images can extend.

➜ docker images
REPOSITORY   TAG        IMAGE ID       CREATED         SIZE
dev-base     20250802   6351a5d71ff7   3 minutes ago   612MB
dev-base     latest     6351a5d71ff7   3 minutes ago   612MB

Before and after: basic container vs. enhanced dev-base

Let's see the difference between a basic Debian container and our personalized dev-base:

[Screenshot: basic Debian container with a plain bash prompt. Screenshot: enhanced dev-base container with oh-my-zsh and a custom theme.]

Above: Basic debian:bookworm-slim container with minimal tools. Below: Our personalized dev-base with oh-my-zsh, custom theme, and familiar configurations. Notice the immediate difference in prompt, available tools, and overall feel.

Now let's go through a few example projects that can build on top of this base image.

Example: LaTeX development (why the base image matters)

Here's a real-world example where Docker converts a painful setup into a clean, self-documented, and version controlled onboarding process.

I chose LaTeX screenplay writing, not because it's common, but because it illustrates the kind of specialized development environment that would normally require:

  • Manual download and installation of obscure packages.
  • Knowledge of TeX directory structures.
  • System-specific configuration that varies between macOS/Linux/WSL.
  • Documentation that inevitably goes stale.

This is where containerization pays significant dividends. What would normally be an arduous setup becomes a simple matter of executing a docker run command.

FROM dev-base:latest

USER root

RUN apt-get update && apt-get install -y \
    pandoc \
    texlive-latex-base \
    texlive-latex-extra \
    texlive-fonts-recommended \
    texlive-latex-recommended \
    && rm -rf /var/lib/apt/lists/*

# Install screenplay package from the official source
RUN cd /tmp && \
    # Download the screenplay package
    wget http://dvc.org.uk/sacrific.txt/screenplay.zip && \
    echo "8ec5210bcd4d3c2a7d961f4a9a7472c9fea8a7b00907dc7601465a947413a265  screenplay.zip" | sha256sum -c - && \
    unzip screenplay.zip && \
    # Generate the class files using the provided installer
    latex screenplay.ins && \
    # Create the correct directory where TeX looks for local packages
    mkdir -p /usr/local/share/texmf/tex/latex/screenplay && \
    # Copy the generated files to the correct location
    cp screenplay.cls /usr/local/share/texmf/tex/latex/screenplay/ && \
    cp hardmarg.sty /usr/local/share/texmf/tex/latex/screenplay/ && \
    # Update the TeX filename database for the local tree
    mktexlsr /usr/local/share/texmf && \
    # Copy example files to permanent location
    mkdir -p /usr/local/share/screenplay-examples && \
    cp example.tex test.tex /usr/local/share/screenplay-examples/

# Verify installation works by testing if TeX can find the class
RUN kpsewhich screenplay.cls

# Create an entrypoint script that copies examples directory and starts bash
RUN echo '#!/bin/bash' > /usr/local/bin/entrypoint.sh && \
    echo 'cp -r /usr/local/share/screenplay-examples /home/devuser/workspace/ 2>/dev/null || true' >> /usr/local/bin/entrypoint.sh && \
    echo 'exec "$@"' >> /usr/local/bin/entrypoint.sh && \
    chmod +x /usr/local/bin/entrypoint.sh


USER devuser
WORKDIR /home/devuser/workspace

ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["/bin/zsh"]

Notice how this Dockerfile starts with FROM dev-base:latest. This means:

  • No reinstalling basic tools - git, vim, zsh, and all your dotfiles are already there.
  • We have a consistent environment where every project feels familiar because they all share the same foundation.
  • Because we're only adding LaTeX-specific dependencies rather than rebuilding an entire development environment, builds are faster.

For improved "executable documentation", we can then add a simple run-dev.sh script that will build and run our project for us.

#!/bin/bash

# Build the image
docker build -t screenplay-latex .

# Run container with interactive terminal and volume mounting
# Notice that we're mounting to the workspace directory we set 
# up in the base image
docker run -it --rm -v "$(pwd):/home/devuser/workspace" screenplay-latex

This complex LaTeX setup is now portable and reusable. What was once a fragile, machine-specific configuration is captured in code and builds consistently anywhere.

This same pattern works for any specialized environment, whether it's a legacy Python version, a specific Node.js setup, or even more unconventional requirements.
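As a sketch of one of those cases, here is what a hypothetical project needing a Python toolchain might look like when it extends the same base (the package list is illustrative; a pinned legacy version would instead install a specific interpreter):

```dockerfile
FROM dev-base:latest

USER root

# Only the project-specific toolchain is added here; git, zsh, vim,
# and our dotfiles are all inherited from dev-base.
RUN apt-get update && apt-get install -y \
    python3 \
    python3-pip \
    python3-venv \
    && rm -rf /var/lib/apt/lists/*

USER devuser
WORKDIR /home/devuser/workspace

CMD ["/bin/zsh"]
```

The shape is always the same: switch to root, add the project's dependencies, switch back to the unprivileged user, and keep the workspace convention from the base image.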

LaTeX environment ready to use

[Screenshot: LaTeX container showing successful screenplay compilation.]

Complex LaTeX setup with specialized packages working immediately. The compiled screenplay PDF shown here was generated from this containerized environment.

A more common example: web development

Let's see how this base image approach applies to a more typical scenario: a simple web development environment with Node.js tooling.

FROM dev-base:latest

USER root

# Note: Debian's packaged Node.js can lag well behind current releases;
# for a specific version, use the official node base image or NodeSource.
RUN apt-get update && apt-get install -y \
    nodejs \
    npm \
    && rm -rf /var/lib/apt/lists/*

# Install web development tools globally
RUN npm install -g \
    http-server \
    live-server \
    prettier

# Copy entrypoint script that starts web server and keeps container running
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh

# Copy helper script for common development tasks
COPY webdev-helper.sh /usr/local/bin/webdev
RUN chmod +x /usr/local/bin/webdev

USER devuser
WORKDIR /home/devuser/workspace

# Expose ports for web servers
EXPOSE 8000 8080

ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["tail", "-f", "/dev/null"]
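If you prefer declarative tooling over a run script, the same build, ports, and mount can be expressed as a hypothetical docker-compose file (service and file names here are illustrative, mirroring the scripts in this section):

```yaml
services:
  web-dev:
    build:
      context: .
      dockerfile: Dockerfile.dev
    container_name: web-dev-container
    ports:
      - "8000:8000"
      - "8080:8080"
    volumes:
      - .:/home/devuser/workspace
```

With this in place, `docker compose up -d` replaces the build-and-run portion of the script, and `docker compose exec web-dev /bin/zsh` drops you into the container shell.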

Here are the run script and the helper scripts referenced by the webdev Dockerfile:

run-dev.sh

#!/bin/bash

# Stop and remove any existing container first
echo "Cleaning up existing container..."
docker stop web-dev-container 2>/dev/null || true
docker rm web-dev-container 2>/dev/null || true

# Build the development image
echo "Building web development container..."
if ! docker build -f Dockerfile.dev -t web-dev:latest .; then
    echo "Build failed! Exiting..." >&2
    exit 1
fi

# Debug: Show what we're mounting
echo "================================================================"
echo "Current directory: $(pwd)"
echo "Files in current directory:"
ls -la
echo "================================================================"

echo "Starting web development container..."
echo "Access your site at:"
echo "  - http://localhost:8000 (basic server)"
echo "  - http://localhost:8080 (live-reload server)"
echo ""
echo "To access the container shell:"
echo "  docker exec -it web-dev-container /bin/zsh"
echo ""
echo "Inside the container, use:"
echo "  webdev live    # Start live-reload server"
echo "  webdev serve   # Start basic server"  
echo "  webdev format  # Format your code"
echo "================================================================"

# Run container with volume mounting
if ! docker run -d \
  --name web-dev-container \
  -p 8000:8000 \
  -p 8080:8080 \
  -v "$(pwd):/home/devuser/workspace" \
  web-dev:latest; then
    echo "Failed to start container! Exiting..." >&2
    exit 1
fi

echo "Container started! Basic server running on port 8000."
echo ""
echo "Verify the mount worked:"
echo "  docker exec web-dev-container ls -la /home/devuser/workspace"
echo ""
echo "Access the shell:"
echo "  docker exec -it web-dev-container /bin/zsh"

entrypoint.sh

#!/bin/bash
echo "=== Web Development Container ==="
echo "Starting Node.js HTTP server on port 8000..."
echo "Access your site at: http://localhost:8000"

cd /home/devuser/workspace
http-server -p 8000 --host 0.0.0.0 &
SERVER_PID=$!

cleanup() {
    kill $SERVER_PID 2>/dev/null || true
    exit 0
}
trap cleanup SIGINT SIGTERM

# Run the CMD as a child process rather than with `exec "$@"`:
# exec would replace this shell and silently discard the cleanup trap.
"$@" &
wait $!
cleanup

webdev-helper.sh

#!/bin/bash
case "$1" in
  "serve")
    echo "Starting http-server on port 8000..."
    http-server -p 8000 --host 0.0.0.0
    ;;
  "live")
    echo "Starting live-server with auto-reload..."
    live-server --port=8080 --host=0.0.0.0
    ;;
  "format")
    echo "Formatting HTML, CSS, and JS files..."
    prettier --write "**/*.{html,css,js,json}"
    ;;
  "validate")
    echo "Validating JSON files..."
    find . -name "*.json" -exec jq . {} \;
    ;;
  *)
    echo "Usage: webdev {serve|live|format|validate}"
    echo "  serve    - Start http-server on port 8000"
    echo "  live     - Start live-server with auto-reload on port 8080"
    echo "  format   - Format HTML/CSS/JS files with Prettier"
    echo "  validate - Validate JSON files"
    ;;
esac

Key takeaways

Remember those core principles from the beginning?

  • "Automation is the preventative medicine against the disease that is technical debt" — Your run-dev.sh scripts automate the entire environment setup, preventing the accumulation of undocumented steps.
  • "Executable documentation is among the only documentation you can consistently rely on" — Your Dockerfiles are your documentation. They can't go stale because they're executed every time someone builds the project.

This base image approach delivers on those principles while solving our original problems:

  1. No more stale READMEs — The Dockerfile is the single source of truth for dependencies.
  2. No more "works on my machine" — Everyone runs the exact same container.
  3. No more painful onboarding — New team members run ./run-dev.sh and they're ready.
  4. No more abandoned projects — Return to any project and it runs exactly as you left it.
  5. No more machine migration dread — Your entire environment rebuilds identically on any machine.

The magic isn't just in the containers; it's in treating your development environment as code. Your Dockerfiles and scripts aren't just configuration; they're living, executable documentation that proves itself correct every time it runs.

Additional material and references