Speed Up Docker Builds 10x: A Practical Guide to Cache Optimization

1 AM. I’m staring at the terminal progress bar. Fixed a typo, ran docker build again, and then npm install started all over. 10 minutes later, after scrolling through 20 social media posts and two cups of coffee, the progress bar is still spinning.
Anyone who’s done containerized development knows this frustration, right?
But here’s the thing: there’s a solution. I spent a week digging into Docker’s caching mechanism and cut my build time from 10 minutes down to 30 seconds. Honestly, the first time I saw that happen, I was stunned. Docker can actually be this fast.
In this article, I’ll walk you through 3 immediately actionable techniques: configuring .dockerignore, understanding layer caching, and optimizing Dockerfile instruction order. Plus an advanced bonus: BuildKit cache mounts.
Are your builds slow? If so, keep reading.
Why Are Your Docker Builds So Slow?
Build Context Is Too Large
Let’s start with a common pitfall: build context.
You know what happens when you run docker build .? The first thing Docker does isn’t running your Dockerfile: it’s packaging all files in the . directory and sending them to the Docker daemon. Yes, all of them. Including your node_modules, your .git folder, and that hundreds-of-megabytes test dataset you downloaded.
The most extreme case I’ve seen: a frontend project with an 800MB build context. Just transferring those files took 2-3 minutes, when the image actually needed less than 10MB of source code.
It’s like shipping a book but packing the entire bookshelf with it.
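You can get a feel for this without Docker at all. The sketch below is a hypothetical throwaway project (all paths and sizes are made up for illustration); it compares the size of everything `docker build .` would ship against the size of the source the image actually needs:

```shell
# Hypothetical demo project; only src/ is what the image really needs.
dir=$(mktemp -d)
mkdir -p "$dir/node_modules" "$dir/src"
head -c 1048576 /dev/zero > "$dir/node_modules/big.bin"  # 1MB stand-in for dependencies
echo 'console.log("hi")' > "$dir/src/index.js"

echo "whole context (KB):"
du -sk "$dir" | cut -f1
echo "actual source (KB):"
du -sk "$dir/src" | cut -f1

rm -rf "$dir"
```

On a real project the gap is usually far larger, since node_modules and .git dwarf the source tree.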
The Domino Effect of Cache Invalidation
Second issue: not understanding Docker’s layer caching mechanism.
Docker images are layered. Each instruction in your Dockerfile (FROM, RUN, COPY) creates a layer. When building, Docker checks whether there’s a usable cache for each layer. If the instruction is identical and the files it depends on haven’t changed, it uses the cache without re-executing.
Sounds great, right?
But here’s the catch: once a layer’s cache is invalidated, all subsequent layers must be rebuilt. Like dominoes: knock down the first one, and the rest fall.
Many people write their Dockerfile like this:
```dockerfile
FROM node:18
COPY . /app
WORKDIR /app
RUN npm install
```

Looks fine? Actually, it’s problematic.
The COPY . /app line copies the entire project. Change any file, even just a typo in README.md, and this layer’s cache is invalidated. What happens next? The npm install after it has to run again too.
That’s why changing one line of code forces you to reinstall the entire dependency tree.
Illogical Instruction Order
Third pitfall: not knowing how to order instructions.
Docker’s caching strategy is simple: it checks from top to bottom, and stops using cache once invalidation occurs. This means you should put rarely-changing instructions first, frequently-changing ones last.
But in reality, many Dockerfiles do the opposite: copy code first (most frequently changed), then install dependencies (less frequently changed). Result: every code change invalidates the dependency cache.
Simply put, people don’t understand what changes frequently versus what stays stable.
Trick 1 - Configure .dockerignore to Reduce Build Context
Alright, problems identified. Let’s talk about the simplest, most immediate optimization: .dockerignore.
What Is It?
You’ve used .gitignore, right? .dockerignore works the same way: it tells Docker which files not to package into the build context.
Creating one is super simple: in your project root (same level as Dockerfile), create a .dockerignore file and write your rules.
How to Configure for Node.js Projects?
Here’s the configuration I use:
```
# Dependencies
**/node_modules/
**/npm-debug.log
**/.npm

# Git related
.git/
.gitignore
.gitattributes

# Tests and docs
**/test/
**/tests/
**/docs/
**/*.md
!README.md

# IDE and editors
.vscode/
.idea/
*.swp
*.swo
.DS_Store

# Environment and config
.env
.env.*
*.local

# Build artifacts
dist/
build/
coverage/
```

Key points:
- Always exclude `node_modules`. It can be hundreds of MB, and you’ll reinstall it in the image anyway, so there’s no need to copy it from local.
- Use the `**/` prefix to match all nested directories. For example, `**/node_modules/` matches both `./node_modules/` and `./packages/lib/node_modules/`, so nothing slips through.
- Add a trailing slash for directories. `node_modules/` means a directory, `node_modules` means a file. They look similar, but Docker is precise.
How Effective Is This?
I tested on a Next.js project:
- Before: build context 520MB, transfer time 2m15s
- After: build context 4.8MB, transfer time 3s
Yes, 3 seconds. Saved 2 minutes instantly.
This doesn’t even count the image size reduction: since .git and node_modules are no longer packaged, the final image went from 1.2GB to 680MB.
Common Pitfalls
Pitfall 1: .dockerignore only works at the build context root. If your build command is docker build -f subfolder/Dockerfile ., place .dockerignore in the project root, not in subfolder.
Pitfall 2: Writing node_modules (no slash) might not work. Add the slash: node_modules/.
Pitfall 3: Forgetting to exclude .git. The .git directory can be hundreds of MB yet is never used.
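If you want a rough, Docker-free way to sanity-check your exclusions, `tar` can approximate the effect. One assumption up front: tar's exclude syntax is not .dockerignore syntax (no `**` globs, no `!` negations), so this is only a ballpark check on a toy project:

```shell
# Build a toy project in a temp dir (hypothetical layout).
dir=$(mktemp -d); cd "$dir"
mkdir -p node_modules .git src
head -c 524288 /dev/zero > node_modules/dep.bin   # 512KB of "dependencies"
echo 'main' > src/index.js

# Simplified exclude list; real .dockerignore patterns are richer.
printf 'node_modules\n.git\n' > excludes.txt

echo "full context (bytes):"
tar -cf - . | wc -c
echo "with excludes (bytes):"
tar -cf - -X excludes.txt . | wc -c
```

The second number should drop sharply, which is exactly what Docker stops transferring once the .dockerignore is in place.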
Trick 2 - Understanding and Leveraging Docker Layer Caching
.dockerignore solves transfer speed, but the bigger wins come from understanding how caching works.
How Does Layer Caching Work?
Docker images are like layer cakes: each layer is the result of executing one Dockerfile instruction.
Take this Dockerfile:
```dockerfile
FROM node:18          # Layer 1
RUN apt-get update    # Layer 2
COPY package.json .   # Layer 3
RUN npm install       # Layer 4
COPY . .              # Layer 5
```

During build, Docker checks layer by layer:
- Layer 1: FROM instruction, checks if node:18 image exists locally. Yes? Use cache.
- Layer 2: RUN instruction, checks if instruction text is identical. Yes? Use cache.
- Layer 3: COPY instruction, calculates package.json checksum. File unchanged? Use cache.
- Layer 4: RUN instruction, continues checking.
- Layer 5: Same logic.
Key point: once a layer’s cache is invalidated, all subsequent layers must be rebuilt.
That’s the domino effect I mentioned. If you change package.json at layer 3, layer 4’s npm install and layer 5’s code copy must both re-run.
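The top-to-bottom check can be sketched in a few lines of shell. This is a toy model (assumption: real Docker also hashes file contents for COPY/ADD, not just instruction text), but it shows the domino effect: after the first mismatch, nothing below it is ever cached again:

```shell
# Cached build vs. new build, one instruction per comma-separated slot.
cached="FROM node:18,RUN apt-get update,COPY package.json,RUN npm install,COPY ."
new="FROM node:18,RUN apt-get update,COPY package.json CHANGED,RUN npm install,COPY ."

invalidated=0
i=1
while :; do
  a=$(echo "$cached" | cut -d',' -f"$i")
  b=$(echo "$new" | cut -d',' -f"$i")
  [ -z "$b" ] && break
  if [ "$invalidated" -eq 1 ] || [ "$a" != "$b" ]; then
    invalidated=1
    echo "layer $i: REBUILD ($b)"
  else
    echo "layer $i: CACHED  ($b)"
  fi
  i=$((i + 1))
done
```

Layers 1-2 print CACHED; layers 3-5 all print REBUILD, even though layers 4 and 5 are textually unchanged.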
How to Tell If Cache Is Used?
Check the build output:
```
Step 3/5 : COPY package.json .
 ---> Using cache
 ---> 3a8f29e7c5b1
```

See `Using cache`? Cache is being used. Don’t see it? It’s rebuilding.
You can also use docker history <image-ID> to view the layer history, showing each layer’s SIZE and creation time at a glance.
Why Are COPY Instructions Special?
RUN instructions only check the command text. Take RUN npm install: as long as the text doesn’t change, Docker assumes the cache can be used.
But COPY and ADD are different. Docker calculates content checksums of copied files. Even if the filename stays the same, if content changes, cache is invalidated.
This design is sensible: if file content changed, subsequent build steps might be affected, so the old cache can’t be used.
But because of this, COPY . . is particularly dangerous: as soon as any file in the project changes (even README.md), this layer’s cache is gone.
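You can see the content-based part of this with a plain checksum. A caveat: Docker's real cache key is its own hash over file content plus metadata, not literally `sha256sum`, so treat this as an illustration only:

```shell
# Same filename, one changed byte: the content hash differs,
# which is why a COPY layer's cache gets invalidated.
f=$(mktemp)
printf '{"name":"app","version":"1.0.0"}\n' > "$f"
before=$(sha256sum "$f" | cut -d' ' -f1)

printf '{"name":"app","version":"1.0.1"}\n' > "$f"
after=$(sha256sum "$f" | cut -d' ' -f1)

[ "$before" != "$after" ] && echo "content changed -> COPY cache would be invalidated"
rm -f "$f"
```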
Trick 3 - Optimizing Dockerfile Instruction Order
Now that you understand caching principles, let’s get practical: how do you write Dockerfiles to maximize cache utilization?
Golden Rule: From Stable to Volatile
The core is one sentence: put rarely-changing instructions first, frequently-changing ones last.
Why? Because Docker checks cache from top to bottom. If earlier layers are stable, changes to later layers won’t affect earlier cache.
Specifically:
- Base image - rarely changes
- System dependencies - occasionally change
- Project dependencies - sometimes change
- Source code - changes daily
Arrange in this order to maximize cache utilization.
Wrong Example: Copy Code Then Install Dependencies
Many people start by writing this:
```dockerfile
FROM node:18
WORKDIR /app

# Wrong: copy entire project directly
COPY . .

# Then install dependencies
RUN npm install

# Start command
CMD ["npm", "start"]
```

What’s the problem? You change source code, the `COPY . .` layer’s cache invalidates, and the npm install after it has to re-run.
Result: change one line of JS code, reinstall hundreds of npm packages. 10 minutes gone again.
Correct Example: Install Dependencies Then Copy Code
Optimized approach:
```dockerfile
FROM node:18
WORKDIR /app

# Step 1: Only copy dependency files
COPY package.json package-lock.json ./

# Step 2: Install dependencies (this layer will be cached)
RUN npm ci --only=production

# Step 3: Copy source code
COPY . .

# Start command
CMD ["npm", "start"]
```

Benefits:
- As long as package.json doesn’t change, the `npm ci` layer uses cache.
- When you change source code, only the `COPY . .` layer invalidates; the dependency installation cache remains.
- The second build skips npm install entirely, so it’s blazing fast.
I tested this: the adjustment cut subsequent build time from 7-8 minutes to about 30 seconds.
Same Principle for Other Languages
Python projects:
```dockerfile
FROM python:3.11
WORKDIR /app

# Copy requirements.txt first
COPY requirements.txt .

# Then pip install
RUN pip install --no-cache-dir -r requirements.txt

# Finally copy code
COPY . .
```

Go projects:
```dockerfile
FROM golang:1.21
WORKDIR /app

# Copy go.mod and go.sum first
COPY go.mod go.sum ./

# Download dependencies
RUN go mod download

# Then copy code
COPY . .

# Compile
RUN go build -o main .
```

The core idea is the same: separate dependency management files from source code copying, maximizing cache reuse for the dependency installation step.
Advanced Technique: Fine-Grained COPY
For complex project structures, you can be even more granular:
```dockerfile
# First copy rarely-changing config files
COPY .eslintrc.json .prettierrc ./

# Then copy dependency files
COPY package*.json ./
RUN npm install

# Then copy shared libraries (if any)
COPY ./lib ./lib

# Finally copy business code
COPY ./src ./src
```

This approach is less common but useful in certain scenarios (like monorepo projects).
Advanced Technique - BuildKit Cache Mounts
Everything above is basic optimization. Now let’s talk about something more advanced: BuildKit cache mounts.
What Is BuildKit?
BuildKit is the new build engine introduced in Docker 18.09, much faster than the old engine and supporting more powerful caching features.
Enabling it is super simple:
```shell
# Enable temporarily
export DOCKER_BUILDKIT=1
docker build .

# Or prefix a single command
DOCKER_BUILDKIT=1 docker build .
```

BuildKit has shipped with Docker since 18.09 as an opt-in, and it became the default builder in Docker Engine 23.0 (Docker Desktop switched earlier). Not sure which version you’re on? Run `docker version` to check.
What Are Cache Mounts?
The layer caching discussed earlier has an issue: once that layer invalidates, it must be completely re-executed.
For example, you changed package.json and added a new dependency: the npm install layer cache is gone. Result: all packages, including those already downloaded, must be re-downloaded.
Cache mounts solve this problem. The logic: even if layer cache invalidates, package manager download cache can be preserved.
Simply put, give the package manager a persistent cache directory that’s shared across builds.
How to Use?
Node.js project example:
```dockerfile
FROM node:18
WORKDIR /app
COPY package*.json ./

# Key is here: mount npm's cache directory
RUN --mount=type=cache,target=/root/.npm \
    npm ci --only=production

COPY . .
CMD ["npm", "start"]
```

The `--mount=type=cache,target=/root/.npm` part is key:
- `type=cache` indicates this is a cache mount
- `target=/root/.npm` is npm’s cache directory
With this, even if package.json changes and layer cache invalidates, npm doesn’t have to re-download all packages. It reads from existing cache in /root/.npm and only downloads new or updated packages.
How to Configure for Other Package Managers?
Yarn:
```dockerfile
RUN --mount=type=cache,target=/root/.yarn \
    yarn install --frozen-lockfile
```

pip (Python):

```dockerfile
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt
```

apt (system packages):

```dockerfile
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    apt-get update && apt-get install -y gcc
```

Note the `sharing=locked` parameter for apt. Because apt needs exclusive access to its cache, this parameter avoids conflicts between concurrent builds.
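One extra wrinkle for apt: Debian/Ubuntu base images ship an apt hook (`/etc/apt/apt.conf.d/docker-clean`) that deletes downloaded .deb files after install, which quietly defeats the cache mount. The sketch below follows the pattern in Docker’s BuildKit documentation; verify the exact paths against your base image:

```dockerfile
# Tell apt to keep downloaded packages so the cache mount actually fills up
RUN rm -f /etc/apt/apt.conf.d/docker-clean && \
    echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' \
      > /etc/apt/apt.conf.d/keep-cache

RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update && apt-get install -y gcc
```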
How Effective Is This?
I tested on a project with 200+ dependencies:
- Layer cache invalidated but cache mount active: dependency installation dropped from 8 minutes to 1m30s
- Complete cold start (no cache at all): still 8 minutes
In other words, cache mounts are the “second line of defense” behind layer caching. When the layer cache holds, builds are fastest (the step is skipped entirely). When it’s invalidated, the cache mount has your back: at least you don’t have to re-download everything.
Considerations
Default retention time isn’t long: BuildKit defaults to cleaning cache exceeding 512MB and older than 2 days. If using in CI/CD environments, you might need to adjust the strategy.
Not all scenarios need it: If your project has few dependencies (like just a dozen packages), cache mounts make little difference.
Get the path right: each package manager’s cache directory is different; check the docs to confirm.
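For CI or busy dev machines, you can inspect and tune the cache budget: `docker buildx du` shows what the build cache currently holds, `docker builder prune` clears it, and the daemon’s garbage-collection budget can be raised in `/etc/docker/daemon.json`. A sketch; the 20GB figure is an arbitrary example, not a recommendation:

```json
{
  "builder": {
    "gc": {
      "enabled": true,
      "defaultKeepStorage": "20GB"
    }
  }
}
```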
Conclusion
After all this, it comes down to three things:
First, do it now: Create a .dockerignore file in your project root and exclude node_modules, .git, test files. Takes less than 5 minutes, reduces build context by 90%+.
Second, change it today: Adjust your Dockerfile instruction order. COPY dependency files first, RUN install, finally COPY source code. This adjustment can reduce your subsequent build time from 10 minutes to 30 seconds.
Third, study when you have time: If your project has many dependencies and updates frequently, try BuildKit cache mounts. It can save you when layer cache invalidates.
After completing these three steps on my own project, build time went from 10 minutes to 30 seconds, and image size from 1.2GB to 680MB. Honestly, this is the highest ROI optimization I’ve ever seen.
Are your Docker builds slow? If so, try following this article. When you’re done, come back and drop a comment. I’m really curious how much you managed to speed things up.
FAQ
How can I speed up Docker builds?
1) Configure .dockerignore to reduce build context size
2) Optimize Dockerfile instruction order (COPY package.json before COPY .)
3) Use BuildKit cache mounts for dependency caching
Together, these can reduce build time from 10 minutes to 30 seconds.
What is Docker layer caching?
If instruction and dependent files haven't changed, Docker reuses cached layer instead of re-executing.
Place stable instructions (like COPY package.json) before changing ones (COPY .) to maximize cache hits.
What should I put in .dockerignore?
• node_modules
• .git
• Test files
• Build artifacts
• IDE configs
• Logs
• .env files
Anything not needed for the build. This reduces build context size significantly: I’ve seen an 800MB context reduced to under 10MB.
What are BuildKit cache mounts?
Even when layer cache invalidates, dependencies don't need full re-download.
Use: `RUN --mount=type=cache,target=/root/.npm npm install`.
Why does changing one file invalidate all cache?
If you COPY . before installing dependencies, any source code change invalidates dependency installation cache.
Solution: COPY dependency files first, install, then COPY source code.
How much can build optimization improve speed?
• Build time from 10 minutes to 30 seconds (20x improvement)
• Image size from 1.2GB to 680MB
• With cache mounts, dependency installation drops from 8 minutes to 1m30s even when layer cache invalidates
Do I need BuildKit for cache optimization?
BuildKit cache mounts require BuildKit (enabled by default in Docker 23+).
Enable with:
• `DOCKER_BUILDKIT=1 docker build`
• Or `docker buildx build`.
10 min read · Published on: Dec 17, 2025 · Modified on: Jan 22, 2026