LaTeX with Docker on Windows: a fast, reproducible WSL2 workflow to compile PDFs in minutes
Set up a reproducible LaTeX environment with Docker on Windows using WSL2 – a fast, clean, command-line guide from install to PDF build in about 10 minutes.
Docker-powered LaTeX: what this workflow delivers and why it matters
If you’ve ever wrestled with TeX Live path errors, mismatched package versions, or a half-day installer on Windows, running LaTeX inside a container changes the equation. This guide explains how to set up a reproducible LaTeX environment using Docker on Windows through WSL2, from installing the subsystem to pulling a TeX Live image, creating a minimal project, and producing a PDF with latexmk. The result is a portable, versioned build environment that avoids global TeX installations, keeps your Windows system clean, and makes it simple to share the exact toolchain with colleagues or CI pipelines.
Why use Docker to run LaTeX on Windows
Containerizing LaTeX eliminates many of the common pain points on Windows: environment variables, conflicting TeX distributions, and long installers. A container bundles the TeX Live binaries, fonts, and tools so that the same build commands produce identical output on any machine that can run Docker. That reproducibility is crucial for collaborative research, automated document builds (CI/CD), and teaching, where instructors need students to compile the same source reliably. Containers also let you keep your host uncluttered — TeX Live images can be gigabytes, but they live in Docker storage rather than scattered across system folders.
Using Docker also simplifies version management: when you need to upgrade or test a different TeX Live release, you switch images instead of juggling system-level installs. For power users and developers, the container approach integrates well with editor workflows (for example VS Code over WSL) and with automated pipelines that generate PDFs on push.
How WSL2 fits into the solution
On modern Windows, the practical bridge to Linux containers is WSL2 (Windows Subsystem for Linux 2). Unlike WSL1, which translated system calls, WSL2 runs a real Linux kernel in a lightweight virtual machine, delivering near-native Linux performance and compatibility with Docker. Docker on Windows historically relied on Hyper-V or Docker Desktop; by running Docker inside an Ubuntu instance on WSL2 you can avoid GUI dependencies, manage Docker with apt, and run Linux-native builds in a familiar shell.
Key distinctions that matter:
- WSL2 provides a full Linux kernel and is required for Docker compatibility in this setup.
- Containers run inside WSL2’s environment, which gives the same command-line interface and file semantics you expect on Linux.
- The workflow uses standard Linux tooling (apt, and service scripts in place of systemd) and integrates with Windows through the \\wsl$ network share for editing and viewing files.
Installing WSL2 and Ubuntu: practical steps and gotchas
Begin in Windows with an elevated PowerShell session (Run as Administrator). The simplest single-line installer is:
wsl --install
This command will enable WSL and install Ubuntu by default. On most modern builds of Windows 10 (version 2004 or later) and Windows 11, it automatically selects WSL2. Follow the on-screen prompts to reboot when requested.
Common hiccup: some users report a "WSL install may be corrupted" prompt during installation. If you see that, press a key when the prompt appears — the repair option times out after about 60 seconds — and then re-run wsl --install if needed. After installation, launch Ubuntu from the Start menu and set a Linux username and password; note that the password is typed invisibly (no asterisks), which is normal.
Requirements to keep in mind: at least 8 GB RAM is recommended for a comfortable experience, and ensure you have a few gigabytes of free disk space for images and temporary build artifacts.
Installing Docker inside Ubuntu and configuring it for convenience
Once Ubuntu is up, work from its terminal. Update packages and install Docker:
sudo apt update && sudo apt upgrade -y
sudo apt install -y docker.io
After installing, add your user to the docker group so you can run Docker commands without sudo:
sudo usermod -aG docker $USER
Important: you must close and reopen the Ubuntu window (or log out and back in) for that group change to take effect. Verify Docker is available with docker --version. If the daemon isn’t running, start it manually:
sudo service docker start
If you plan to run Docker commands automatically when your WSL session starts, you can add a tiny check to ~/.bashrc that starts the daemon if it isn’t running. Note that automated starts may require configuring passwordless sudo for the Docker start command to run silently, and that has security implications you should evaluate.
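One minimal sketch of such a check, appended to ~/.bashrc and assuming the same SysV-style service wrapper used above:

```shell
# Sketch for ~/.bashrc: start the Docker daemon at shell startup if it is
# not already running. "sudo" will prompt for a password here unless
# passwordless sudo is configured for this command (see the caveat above).
if ! service docker status > /dev/null 2>&1; then
    sudo service docker start
fi
```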
Choosing a TeX Live Docker image and pulling it
There are multiple TeX Live images on Docker Hub and other registries. An easy, ready-made choice is the k1z3/texlive image, which packages a comprehensive TeX Live installation suitable for many documents. Pulling the image is straightforward:
docker pull k1z3/texlive
Note: the first download will take a few minutes and can be multiple gigabytes depending on the image tag. If disk usage is a concern, investigate slimmer images or build a custom image that contains only the packages you need.
Confirm the image was downloaded with docker images and note its repository and size. Using a fixed image tag rather than latest can help ensure build reproducibility over time.
Project layout and build configuration for predictable output
Organize your LaTeX project in a dedicated folder inside WSL, for example ~/latex-project. This directory is easily accessible from Windows Explorer via the \\wsl$\Ubuntu path, allowing you to edit sources with Windows tools like VS Code or Notepad while compiling from the container.
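For example, from the Ubuntu terminal:

```shell
# Create the project folder inside the WSL filesystem (not /mnt/c);
# file I/O on the Linux side is considerably faster under WSL2.
mkdir -p ~/latex-project
cd ~/latex-project
```

From Windows, the same folder appears under \\wsl$\Ubuntu\home\<your-user>\latex-project.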
A minimal project for testing can include:
- main.tex — your TeX source
- .latexmkrc — a configuration file for latexmk that controls the engine and PDF pipeline
Example main.tex can be as simple as:
\documentclass{article}
\begin{document}
\Huge \LaTeX
\end{document}
A typical .latexmkrc to use uplatex and convert DVI to PDF might declare:
$latex = 'uplatex';
$dvipdf = 'dvipdfmx %O -o %D %S';
$pdf_mode = 3;
.latexmkrc automates the multi-step compile process latexmk performs (running latex/uplatex, bibtex/biber, dvipdf, and rerunning passes as needed) so you can use a single command to build a clean PDF.
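If your document targets pdflatex rather than the uplatex/dvipdfmx route, a minimal .latexmkrc sketch would instead select the direct-PDF mode:

```perl
# .latexmkrc sketch for a pdflatex pipeline (no DVI step).
$pdf_mode = 1;                  # 1 = build PDF with pdflatex
$pdflatex = 'pdflatex %O %S';   # %O = options, %S = source file
```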
Running the containerized build: command breakdown
From inside your project folder, run:
docker run --rm -v "$(pwd):/texsrc" -w /texsrc k1z3/texlive latexmk main.tex
What this does:
- docker run launches a container from the k1z3/texlive image.
- --rm removes the container automatically after the run finishes.
- -v "$(pwd):/texsrc" mounts your current folder into the container at /texsrc so build outputs appear on the host.
- -w /texsrc sets the working directory inside the container.
- latexmk main.tex invokes latexmk to build the document using the instructions in .latexmkrc.
On success you will find main.pdf in the project folder; you can open it from Windows Explorer. This single-command build pattern makes it easy to script document generation or plug the command into editor tasks.
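As a sketch of such scripting, a small shell helper could wrap the command; the function name, the DRY_RUN switch, and the defaults here are illustrative choices, not part of the image:

```shell
# build_pdf: assemble and run the containerized latexmk build.
# IMAGE, the /texsrc mount point, and DRY_RUN are illustrative.
build_pdf() {
    image="${IMAGE:-k1z3/texlive}"
    texfile="${1:-main.tex}"
    cmd="docker run --rm -v $(pwd):/texsrc -w /texsrc $image latexmk $texfile"
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "$cmd"   # print the command instead of running it
    else
        $cmd          # note: paths with spaces need extra quoting here
    fi
}
```

Running DRY_RUN=1 build_pdf main.tex prints the exact docker invocation, which is handy when wiring the same command into editor tasks or CI jobs.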
Common errors and straightforward fixes
- docker: command not found — Docker is not installed in WSL or you haven’t restarted the session after adding your user to the docker group. Reopen Ubuntu and verify docker --version.
- permission denied when running docker — you likely didn’t close and reopen Ubuntu after usermod -aG docker $USER; do that first. If it persists, ensure the Docker daemon is running with sudo service docker start.
- Cannot connect to the Docker daemon — start the daemon as above. If you continually need to start it manually, append a start check to ~/.bashrc, but consider the implications of enabling passwordless sudo for fully automated starts.
- Long download times or large image sizes — consider curated slim TeX Live images or building an image with only the packages you need to save disk and speed up CI runs.
Editor and CI integration: practical workflows
This Docker approach integrates cleanly with several development workflows:
- VS Code: use the Remote - WSL extension to edit files directly in the Ubuntu environment, then run build commands in the integrated terminal. You can also define VS Code tasks that run the docker run compile command.
- Continuous integration: run the same docker pull and docker run steps in pipeline jobs to generate PDFs on push. Using the same image tag in CI and on developer machines helps maintain deterministic builds.
- Automation: script repeated builds with Makefiles or npm scripts that call docker run, allowing larger projects to standardize build steps across teams.
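For the VS Code route, a tasks.json sketch along these lines would run the containerized build as the project's build task; the label and file names are illustrative:

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "Build LaTeX PDF (Docker)",
      "type": "shell",
      "command": "docker run --rm -v \"${workspaceFolder}:/texsrc\" -w /texsrc k1z3/texlive latexmk main.tex",
      "group": "build",
      "problemMatcher": []
    }
  ]
}
```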
Because the container encapsulates the toolchain, it’s straightforward to bake the document build into release processes, automated reporting, or scheduled documentation builds.
Security, updates, and maintaining images
Treat third-party Docker images like any binary dependency: verify the image source, inspect Dockerfiles where possible, and pin image tags rather than relying on mutable latest tags. If you require specific packages or want finer control, create your own Dockerfile based on a minimal base image and install only the TeX packages you need.
Keep images updated by pulling newer tags when appropriate, and rebuild custom images periodically to include security updates. For production or institutional use, consider hosting curated images in a private registry.
Implications for teams, reproducibility, and education
Containerized LaTeX workflows change how documentation and academic writing are managed. For research groups and publishers, the ability to re-run builds from archived source and a pinned image removes a common friction point in reproducibility. In courses that teach TeX, an instructor can provide a ready-to-run image and a single command for students to reproduce the instructor’s environment, avoiding mismatch issues on diverse student machines.
Organizations benefit from reduced support overhead: instead of troubleshooting dozens of local TeX setups, an operations or documentation team can maintain one image and one build command. This also enables easier migration between machines or recovery after hardware changes: install WSL2, pull the image, and you’re back to a known-good toolchain.
Performance and disk considerations
TeX Live images are not tiny; they often occupy several gigabytes. Plan storage accordingly and clean unused images with docker image prune when you need to reclaim space. For large projects with many builds, leverage Docker image caching and layered builds if you create a custom image. When performance matters, WSL2 typically offers near-native file I/O for files inside the Linux filesystem; store project files inside the WSL filesystem (~/) rather than on mounted Windows drives for faster builds, or be mindful that cross-filesystem mounts can impact speed.
Extending the setup: custom images and package management
If your document uses specific packages not included in an image, you can either install them each time (not recommended) or build a Docker image that installs the TeX Live packages you need via tlmgr during image build. That custom image becomes the single source of truth for your project’s dependencies and can be shared through a registry or version-controlled Dockerfile, making onboarding simpler.
For users who need reproducible research artifacts, commit a Dockerfile alongside your TeX sources that documents the image construction process. That serves both as documentation and as a build script for CI.
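A sketch of such a custom image, here using Debian's packaged TeX Live for simplicity (a tlmgr-based install from install-tl is the finer-grained alternative); the base image and package selection are illustrative:

```dockerfile
# Illustrative Dockerfile for a slimmer, project-specific TeX image.
# Adjust the package list to the needs of your document.
FROM debian:bookworm-slim

RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        latexmk \
        texlive-latex-base \
        texlive-latex-recommended \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /texsrc
CMD ["latexmk", "main.tex"]
```

Build it with docker build -t my-texlive . and substitute my-texlive for the image name in the docker run command shown earlier.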
Looking ahead: containerized toolchains are becoming a standard part of developer and research workflows. Expect improved tooling around WSL2 container integration, better official images for common tasks like LaTeX builds, and tighter editor extensions that can orchestrate containerized compile steps without leaving the editor. As container platforms and Windows evolve, the friction of running Linux-native toolchains on Windows will continue to decline, making reproducible, portable document builds even easier to adopt across teams and classrooms.