Docker Container Security Best Practices in 2026

Learn Docker container security best practices for 2026: distroless images, rootless runtimes, Cosign v3 supply chain verification, and runtime monitoring. Checklist included.

TechSaaS Team · 8 min read

Containers are the backbone of modern infrastructure. At TechSaaS, we run 90+ Docker containers on a single host — and every one of them is a potential attack surface if left unchecked. Container security in 2026 isn't optional; it's table stakes.

This guide covers the practices we actually use in production, backed by real scan data and real incidents. If you're running containers in any environment — dev, staging, or production — these are the things that matter.

1. Start With Minimal Base Images

The single most impactful security decision happens at the top of your Dockerfile. Every package in your base image is a package that could have a CVE.

Use distroless, Alpine, or Docker Hardened Images:

# Bad: Full Debian with hundreds of packages
FROM node:22-bookworm

# Better: Alpine with minimal footprint
FROM node:22-alpine

# Best: Distroless — no shell, no package manager, nothing extra
FROM gcr.io/distroless/nodejs22-debian12

A major development in late 2025: Docker released over 1,000 Hardened Images (DHI) under the Apache 2.0 license — purpose-built for security, available on Docker Hub. These images are stripped of unnecessary packages, pre-scanned, and continuously updated. If distroless is too restrictive for your use case, DHI is the next best option.

Distroless images contain only your application and its runtime dependencies. No shell, no package manager, no curl, no wget. An attacker who gains code execution inside a distroless container has almost nothing to work with.

The debugging trade-off: Distroless has no shell, so you can't docker exec -it into a crashing production container at 3 AM. Solutions: use the :debug variant (gcr.io/distroless/base-nossl:debug) as a temporary sidecar, or use ephemeral containers (docker debug in Docker Engine 29+).
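When you do need a shell against a distroless container, one approach is a debug sidecar that joins the target's namespaces. A sketch — the container name `myapp` is an example, and we use the stock busybox image for the tooling:

```shell
# Attach a busybox shell to a running distroless container's process and
# network namespaces — the target image itself ships no shell at all.
docker run --rm -it \
  --pid=container:myapp \
  --network=container:myapp \
  busybox sh
```

From that shell you can inspect `/proc/<pid>/root/` of the target process and probe its network endpoints, without ever baking debug tools into the production image.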

Use multi-stage builds to strip build tools from the final image:

FROM node:22-alpine AS builder
WORKDIR /app
COPY package*.json ./
# Install all dependencies here — the build step below needs devDependencies;
# only the built output is copied into the final stage anyway
RUN npm ci
COPY . .
RUN npm run build

FROM gcr.io/distroless/nodejs22-debian12
COPY --from=builder /app/dist /app
CMD ["/app/server.js"]

This pattern keeps your final image minimal — build tools, dev dependencies, and source code never ship to production. For a deeper dive, see our guide on how multi-stage Docker builds reduce image size by 80%.

Watch out for Alpine's musl libc: Alpine uses musl instead of glibc. Some Python packages with C extensions (numpy, pandas, cryptography) fail to install or have subtle behavioral differences. Test thoroughly if switching from Debian to Alpine.

2. Never Run as Root

By default, Docker containers run as root. This means if an attacker exploits your application, they're root inside the container — and potentially one kernel exploit away from root on the host.

# Create a non-root user (Alpine/BusyBox syntax; use groupadd/useradd on Debian)
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

# Copy application files with correct ownership
COPY --chown=appuser:appgroup ./dist /app

# Switch to non-root user
USER appuser

CMD ["node", "server.js"]

This applies to every container. No exceptions. Even your "internal-only" services should run as non-root because lateral movement is how breaches escalate.

For an extra layer, enable rootless Docker on the host:

dockerd-rootless-setuptool.sh install

# Verify rootless mode
docker info --format '{{.SecurityOptions}}'
# Output includes: name=rootless

Rootless Docker maps the container's root user to an unprivileged user on the host, making container escapes significantly harder. Know the trade-offs: rootless mode doesn't support privileged containers, ports below 1024 (without setcap), ICMP (ping), AppArmor, or cgroup resource limits without systemd delegation. These are real constraints in production — evaluate whether they affect your workload.
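If the sub-1024 port limitation is the only blocker, the rootless docs describe two workarounds. A sketch — host paths, the sysctl file name, and the systemd unit are examples that may differ on your distro:

```shell
# Option 1: grant the rootlesskit binary the bind capability
sudo setcap cap_net_bind_service=ep "$(command -v rootlesskit)"
systemctl --user restart docker

# Option 2: lower the unprivileged port floor system-wide
echo "net.ipv4.ip_unprivileged_port_start=80" | sudo tee /etc/sysctl.d/99-rootless.conf
sudo sysctl --system
```

Option 2 affects every unprivileged process on the host, not just Docker, so prefer Option 1 when you only need it for containers.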

3. Scan Images in Your CI/CD Pipeline

Get more insights on Security

Join 2,000+ engineers who get our weekly deep-dives. No spam, unsubscribe anytime.

Building secure images means nothing if you don't verify them continuously. Vulnerabilities are discovered daily.

Integrate scanning into every build:

# Gitea Actions / GitHub Actions workflow
name: Container Security Scan
on: [push]

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build with SBOM and provenance attestations
        run: docker buildx build --sbom=true --provenance=true -t myapp:${{ github.sha }} .

      - name: Scan with Trivy
        uses: aquasecurity/trivy-action@master  # pin to a release tag in production, not a branch
        with:
          image-ref: myapp:${{ github.sha }}
          severity: CRITICAL,HIGH
          exit-code: 1

Key scanning tools (2026 versions):

  • Trivy v0.69.3 — Fast, comprehensive, supports SBOM generation, concurrent DB access for parallel CI pipelines
  • Grype (by Anchore) — Strong vulnerability database, actively maintained alternative
  • Docker Scout (GA) — Built into Docker CLI, docker scout cves myapp:latest for native scanning

Docker Engine 25+ automatically generates provenance attestations (mode=min) on every buildx build. Adding --sbom=true generates a full software bill of materials as a build attestation — this is critical for supply chain verification.

The critical rule: fail the build on HIGH and CRITICAL vulnerabilities. Don't just generate reports that nobody reads. We compare Falco, Trivy, and Snyk Container in detail in our container security tools guide.

Important caveat: Trivy and Grype scan OS packages and language dependencies, but they don't detect application-level misconfigurations, hardcoded secrets in code, or malicious custom binaries. A "zero CVE" scan result does not mean the image is secure.
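Catching hardcoded secrets needs a separate pass. A minimal grep sweep sketch — the pattern and example file are purely illustrative; a real pipeline would use a dedicated secret scanner (Trivy ships one as `--scanners secret`):

```shell
# Toy Dockerfile with a baked-in credential, for demonstration only
cat > Dockerfile.example <<'EOF'
FROM node:22-alpine
ENV DATABASE_URL=postgres://admin:password@db:5432/prod
ENV LOG_LEVEL=info
EOF

# Flag ENV/ARG lines whose variable name looks credential-shaped
grep -nE '^(ENV|ARG) +[A-Z_]*(PASSWORD|SECRET|TOKEN|_KEY|DATABASE_URL)' Dockerfile.example
```

Running this in CI alongside the CVE scan closes part of the gap — but only part; review your images' contents, not just their package lists.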

4. Verify the Supply Chain

Supply chain attacks are the defining threat of this era.

The real-world stakes: In August 2025, Binarly researchers discovered that dozens of official Debian-based Docker Hub images still contained the XZ Utils backdoor (CVE-2024-3094, CVSS 10.0) — months after public disclosure. Teams that pinned mutable tags like debian:bookworm unknowingly inherited the compromised library. Organizations using digest pinning and automated scanning caught it immediately; everyone else was silently shipping backdoored containers.

Around the same time, researchers found over 10,000 Docker Hub images leaking production credentials — API keys, database passwords, AI model tokens — from more than 100 organizations including a Fortune 500 company. These weren't obscure images; they were built by teams who hardcoded secrets in Dockerfiles.

Pin image digests, not just tags:

# Tags can be overwritten — this could point to anything tomorrow
FROM node:22-alpine

# Digests are immutable — this always points to the exact same image
FROM node:22-alpine@sha256:a1b2c3d4e5f6...
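To find the digest for a tag you already trust, the Docker CLI can resolve it. A sketch — the image name is an example, and the trailing hash demo just illustrates why digests are immutable:

```shell
# Resolve a tag to its current digest without pulling:
#   docker buildx imagetools inspect node:22-alpine
# Or, after a pull:
#   docker inspect --format '{{index .RepoDigests 0}}' node:22-alpine

# A digest is simply the SHA-256 of the image manifest bytes — same bytes,
# same digest, which is why a pinned digest can never silently change:
printf 'manifest-bytes' | sha256sum | cut -d' ' -f1
```

Renovate and similar bots can keep pinned digests fresh automatically, so immutability doesn't mean stale.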

Verify image signatures with Cosign v3 (keyless by default):

# Keyless verification via Sigstore Fulcio CA + Rekor transparency log
cosign verify \
  --certificate-identity=[email protected] \
  --certificate-oidc-issuer=https://accounts.google.com \
  myregistry/myapp:latest

# Key-based verification (alternative)
cosign verify --key cosign.pub myregistry/myapp:latest

Cosign v3 (current: v3.0.5) defaults to keyless verification through Sigstore's certificate authority and transparency log. This is simpler and more secure than managing signing keys yourself.

Generate and track SBOMs:

trivy image --format spdx-json -o sbom.json myapp:latest

An SBOM gives you a complete inventory of everything inside your container. When the next zero-day drops, you can instantly check whether you're affected. Note: The EU Cyber Resilience Act (September 2026) will mandate SBOM generation for all software sold in the EU market — this is no longer a nice-to-have.
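When the next CVE-2024-3094 lands, the check against your stored SBOMs is a one-liner. A toy sketch — the SPDX fragment below is hand-written for illustration (real trivy output is far larger, and jq gives cleaner queries than grep):

```shell
# Minimal SPDX-style fragment standing in for a real generated SBOM
cat > sbom.json <<'EOF'
{"spdxVersion": "SPDX-2.3", "packages": [
  {"name": "openssl", "versionInfo": "3.3.1"},
  {"name": "xz-utils", "versionInfo": "5.6.0"}
]}
EOF

# Is the vulnerable package present in this image?
grep -o '"name": "xz-utils"' sbom.json && echo "AFFECTED"
```

Store an SBOM per image tag in your registry or artifact store; the query cost is trivial compared to rescanning every image from scratch.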

5. Drop Capabilities, Read-Only Filesystem, and Resource Limits

Linux capabilities are fine-grained permissions that replace the old root/non-root binary. Docker gives containers a default set that most applications don't need.

# docker-compose.yml — standalone compose (NOT Swarm)
services:
  webapp:
    image: myapp:latest
    security_opt:
      - no-new-privileges:true
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
    read_only: true
    tmpfs:
      - /tmp
      - /var/run
    # Resource limits for standalone compose
    mem_limit: 512m
    cpus: 1.0
    pids_limit: 100

Compatibility note: legacy docker-compose v1 silently ignored the deploy.resources block outside Swarm unless run with --compatibility, and docker stack deploy still reads only deploy.resources. Modern Compose v2 does honor deploy.resources.limits, but the service-level mem_limit, cpus, and pids_limit shown above behave consistently across every version and mode — which is why we prefer them for standalone Compose.

What this configuration does:

  • cap_drop: ALL removes every Linux capability
  • cap_add grants back only what's strictly needed
  • no-new-privileges prevents privilege escalation via setuid binaries
  • read_only: true makes the filesystem immutable — malware can't write to disk
  • tmpfs provides writable scratch space in memory only
  • pids_limit prevents fork bombs

Docker ships a default seccomp profile that blocks approximately 44 dangerous syscalls. Custom seccomp profiles can tighten this further for specific workloads.
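A custom profile is just a JSON action list. A minimal sketch that denies one extra syscall on top of an allow-everything default — an untested scaffold for illustration; in practice, start from Docker's published default profile rather than from scratch:

```json
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
    { "names": ["keyctl"], "action": "SCMP_ACT_ERRNO" }
  ]
}
```

Apply it with `docker run --security-opt seccomp=deny-keyctl.json myapp:latest` (the filename here is an example).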

Gotchas to watch for:

  • read_only: true breaks more than you'd expect. Many apps write to /var/log, /var/cache, or generate temp files in unexpected locations. Redis needs tmpfs for RDB dumps; Nginx needs /var/cache/nginx and /var/run. Audit your app's write paths.
  • cap_drop: ALL also removes CAP_NET_RAW, which breaks ping and any image whose health checks or network helpers use raw sockets — the symptoms can look like DNS failure. If hostname resolution or connectivity checks fail, add cap_add: NET_RAW back.
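The same hardening applies outside Compose. Roughly the docker run equivalent of the service definition above — all standard CLI flags, with the image name as an example:

```shell
docker run -d \
  --security-opt no-new-privileges \
  --cap-drop ALL \
  --cap-add NET_BIND_SERVICE \
  --read-only \
  --tmpfs /tmp --tmpfs /var/run \
  --memory 512m --cpus 1.0 --pids-limit 100 \
  myapp:latest
```

Keeping the two in sync matters: a container that's hardened in Compose but launched ad hoc with a bare `docker run` loses every one of these protections.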

6. Manage Secrets Properly

Hardcoded secrets in images are the most common container security failure. Credentials baked into ENV instructions in a Dockerfile end up in image layers that anyone with docker history can read — as the 10,000+ leaked Docker Hub images demonstrate.

Never do this:

# Secret is baked into the image layer forever
ENV DATABASE_URL=postgres://admin:password@db:5432/prod

For Docker Compose (standalone):

services:
  webapp:
    image: myapp:latest
    secrets:
      - db_password
    environment:
      DB_HOST: postgres
      DB_USER: app

secrets:
  db_password:
    file: ./secrets/db_password.txt

Important distinction: Docker secrets with full lifecycle management — /run/secrets/ mounts, encryption at rest, access control — only work in Swarm mode. In standalone Compose, the secrets: directive simply bind-mounts the referenced files with none of those guarantees. For non-Swarm environments, use a dedicated secrets manager:

  • Infisical — Open-source, self-hostable (what we use at TechSaaS)
  • HashiCorp Vault — Industry standard with robust access policies

We compare these tools in depth in our secrets management comparison: Vault vs Infisical vs Doppler.

# Infisical CLI injection — works in any Docker environment
infisical run -- docker compose up

Permissions gotcha: secrets mounted at /run/secrets/ inherit ownership and mode from their source, and often end up root-owned with a restrictive mode. If your container runs as non-root AND uses mounted secrets, the process may not be able to read them. Fix the permissions on the source file, or have an entrypoint script load the secret before dropping into the app.
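One common pattern is a tiny entrypoint that reads the mounted secret into the environment and then execs the real process. A sketch — the file name, variable name, and SECRETS_DIR override are examples chosen so the script can be exercised outside a container:

```shell
# entrypoint.sh: load a mounted secret, then exec the app as PID 1
cat > entrypoint.sh <<'EOF'
#!/bin/sh
SECRETS_DIR="${SECRETS_DIR:-/run/secrets}"
if [ -r "$SECRETS_DIR/db_password" ]; then
  DB_PASSWORD="$(cat "$SECRETS_DIR/db_password")"
  export DB_PASSWORD
else
  echo "cannot read $SECRETS_DIR/db_password (check ownership/mode)" >&2
  exit 1
fi
exec "$@"
EOF
chmod +x entrypoint.sh

# Simulate a mounted secret and run a throwaway "app" that prints it
mkdir -p fake-secrets
printf 's3cret' > fake-secrets/db_password
SECRETS_DIR=./fake-secrets ./entrypoint.sh sh -c 'echo "got: $DB_PASSWORD"'
# → got: s3cret
```

The `exec "$@"` at the end matters: it replaces the shell so your app receives signals directly and remains PID 1, and the explicit readability check turns a silent permissions failure into a clear startup error.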

7. Network Segmentation

By default, all containers on the same Docker network can talk to each other. Your frontend does not need direct access to your database.

services:
  frontend:
    networks: [public]
  api:
    networks: [public, backend]
  postgres:
    networks: [backend]

networks:
  public:
    driver: bridge
  backend:
    driver: bridge
    internal: true

The internal: true flag means containers on that network have zero access to the outside internet. Your database can talk to the API, but it can't phone home to a C2 server.

Edge case: Containers on an internal: true network can still reach the Docker host's IP. If the host runs services (metadata endpoints, local databases), the "isolated" container can still reach them. Use iptables rules on the host to close this gap. Docker Engine 28 addressed a related issue: unpublished container ports are now blocked from LAN access by default.
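It's worth verifying the isolation actually holds rather than trusting the config. A sketch — the network name assumes a Compose project called `myproj`, and we borrow alpine's busybox tooling for the probes:

```shell
# Should FAIL (timeout): the internal network has no route to the internet
docker run --rm --network myproj_backend alpine \
  wget -q -T 3 -O- http://example.com || echo "egress blocked, as intended"

# Should SUCCEED: containers on the same network still resolve and reach
# each other by service name
docker run --rm --network myproj_backend alpine ping -c1 postgres
```

Run both checks after any network-topology change; a single misplaced `networks:` entry can silently reopen egress.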

8. Monitor and Audit at Runtime

Static security catches problems before deployment. Runtime security catches what happens after.

  • Falco 0.43.0 (January 2026) — Detects anomalous syscalls, file access, and network connections. The new drop-enter initiative removed syscall enter events from the pipeline, significantly improving performance. The legacy eBPF probe is deprecated in favor of the modern eBPF driver.
  • Docker events — Native audit stream for container lifecycle
  • Prometheus + cAdvisor — Resource usage tracking per container
  • Docker Engine 29 — Experimental nftables support as an iptables alternative

At TechSaaS, we pipe container logs through Promtail into Loki and visualize with Grafana. Unexpected process execution, network connections, or file modifications trigger immediate alerts.

9. Keep Everything Updated

This sounds obvious, but it's where most teams fail. Running docker pull once and forgetting about it means shipping images with months-old vulnerabilities.

Automate updates with Renovate Bot's Docker datasource — it opens a PR whenever a base image has an update, and paired with the CI scanning pipeline from section 3 it gives you near-automatic remediation.

For quick checks:

docker scout cves myapp:latest

The Security Checklist

  • Minimal base images (distroless, Alpine, or Docker Hardened Images)
  • Non-root USER in every Dockerfile
  • Image scanning in CI/CD (fail on HIGH/CRITICAL)
  • Pinned image digests, Cosign v3 keyless verification
  • Capabilities dropped, read-only filesystem, seccomp profiles
  • Secrets via mounted files or Infisical/Vault — never ENV in Dockerfiles
  • Network segmentation with internal networks
  • Resource limits (mem_limit, cpus, pids_limit — NOT deploy.resources)
  • Runtime monitoring (Falco, log aggregation)
  • Automated base image updates (Renovate, Docker Scout)
  • SBOM generation for EU CRA compliance

Conclusion

Container security isn't a single tool or a one-time audit. It's a set of practices layered across your entire pipeline — from the Dockerfile you write, to the CI that builds it, to the runtime that executes it.

The incidents of 2025 — backdoored base images shipping for months, thousands of production credentials leaked through Dockerfiles — prove that the basics still matter. Start with the high-impact items: minimal images, non-root users, CI scanning, and digest pinning. Layer in runtime monitoring, network segmentation, and secrets management. Every hardened container is one less thing an attacker can exploit.

Your containers are only as secure as the weakest link in your pipeline. Make every link count.


Need help securing your container infrastructure? Explore our cybersecurity and compliance services or cloud infrastructure and DevOps consulting.

Tags: Docker, Container Security, DevSecOps, Supply Chain Security, Runtime Security, Cloud Native, SBOM
