Containerization with Docker: Building and Deploying C# Applications

Docker for C# Developers: Practical Containerization, Security, and Production Patterns

I began containerizing .NET apps years ago and, believe me, it reshapes how you think about shipping software. Docker turns fragile, environment-specific deployments into reproducible artifacts you can test, run, and scale anywhere. In short: learn how to build small, secure images, how to run multi-container systems sensibly, and how to operate containers in production.

This guide is written like I’d explain it at a pairing session: lots of practical advice, some small code snippets you can copy, and the “why” behind each choice. You’ll see minimal but complete examples - not a kitchen sink of commands - and notes about trade-offs.

Why Docker matters for C# - the mental model

Docker gives you three things that matter most: consistency, isolation, and portability. Consistency because the same image runs in dev, CI, and production; isolation because containers keep runtime dependencies contained; portability because images run on any host that supports the container runtime. For .NET developers this means fewer “works on my machine” moments, and faster feedback loops.

Think of a Docker image as a self-describing environment. You don’t need the host to match the developer machine - the artifact carries everything required to run. That changes how you design builds, tests, and deployments.

Use Docker to define your runtime explicitly. An image plus configuration becomes the single source of truth for how your app runs.

Small first steps - a minimal ASP.NET Core app and Dockerfile

Start tiny. Build a minimal API, verify it runs locally, then add a Dockerfile. The first iteration should prioritize clarity over optimization: get an image that works, then make it smaller and more secure.

// Program.cs - tiny minimal API
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();
app.MapGet("/hello", () => "Hello from Docker!");
app.MapGet("/info", () => new { framework = System.Runtime.InteropServices.RuntimeInformation.FrameworkDescription });
app.Run();

Now a very straightforward Dockerfile that uses multi-stage builds. Multi-stage is essential: it separates the SDK used for building from the runtime you ship.

# Dockerfile - multi-stage (concise)
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o /app/publish

FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine AS runtime
WORKDIR /app
COPY --from=build /app/publish .
ENV ASPNETCORE_URLS=http://+:8080
EXPOSE 8080
ENTRYPOINT ["dotnet", "MyApp.dll"]

One quick aside: keep Dockerfile layers predictable. Copy project files and run dotnet restore before copying the source - that makes dependency caching effective during iterative development.
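
To try the image locally, build it and run it with the port mapped - assuming the project is named MyApp, matching the ENTRYPOINT above:

```shell
# Build the image and run it, mapping host port 8080 to the container's port
docker build -t myapp:dev .
docker run --rm -p 8080:8080 myapp:dev

# In another terminal, hit the endpoint
curl http://localhost:8080/hello
```

Rebuild after changing only source files and you'll see the restore layer come from cache - that's the payoff of the copy-csproj-first ordering.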

Make it production-ready - smaller images & security

After you have a working image, shrink and harden it. Use the smallest supported runtime image (alpine variants), run as non-root, remove build artifacts, and add a health endpoint so orchestrators can check liveness and readiness.

# Dockerfile - hardened runtime excerpt
FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine AS runtime
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
WORKDIR /app
COPY --from=build /app/publish .
RUN chown -R appuser:appgroup /app
USER appuser
HEALTHCHECK --interval=30s CMD wget -q -O- http://localhost:8080/health || exit 1

A few practical notes: running as non-root reduces attack surface; alpine images are smaller but sometimes need extra packages; and if you see surprising failures on alpine, remember it uses musl rather than glibc - some native dependencies need musl-compatible builds, so add only the packages you actually need.
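
The HEALTHCHECK above assumes the app exposes a /health endpoint. A minimal sketch using ASP.NET Core's built-in health checks middleware:

```csharp
var builder = WebApplication.CreateBuilder(args);

// Register the health checks service; add custom checks (DB, queue) as needed
builder.Services.AddHealthChecks();

var app = builder.Build();

// Expose the endpoint the container HEALTHCHECK (and k8s probes) will hit
app.MapHealthChecks("/health");

app.Run();
```

Start with the bare endpoint, then add dependency checks only for things the service genuinely cannot run without.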

When debugging, reproducibility matters: keep Dockerfile steps deterministic and pin base image tags for production.

Docker Compose for local multi-service stacks

Compose is your best friend in development. It lets you spin up databases, message brokers, and multiple services with one command. Keep compose files readable and use healthchecks and environment vars to mimic production behavior closely.

# docker-compose.yml - concise example
version: '3.8'
services:
  api:
    build: ./src/MyApi
    ports: ["5000:8080"]
    depends_on:
      - db
    environment:
      - ConnectionStrings__Db=Host=db;Username=postgres;Password=postgres
  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: mydb
    volumes:
      - postgres_data:/var/lib/postgresql/data
volumes:
  postgres_data:

Compose supports multiple override files. Keep docker-compose.yml as canonical and add a docker-compose.override.yml or a docker-compose.dev.yml with volume mounts for live code reload.
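
A sketch of what that dev override can look like - paths and the Dockerfile.dev name are illustrative:

```yaml
# docker-compose.override.yml - dev-only additions, merged automatically by `docker compose up`
services:
  api:
    build:
      context: ./src/MyApi
      dockerfile: Dockerfile.dev
    volumes:
      - ./src/MyApi:/src   # mount source so dotnet watch sees edits
```

Because Compose merges the override on top of the canonical file, production tooling that only reads docker-compose.yml never sees the dev-only mounts.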

Developer ergonomics - hot reload and debugging

For day-to-day work you want fast feedback. Use dotnet watch inside a development container and mount source code volumes. Expose a debugger port for VS/VS Code to attach. Keep this configuration out of production images.

# Dockerfile.dev (development only)
FROM mcr.microsoft.com/dotnet/sdk:8.0
WORKDIR /src
EXPOSE 8080 4020
CMD ["dotnet", "watch", "run", "--urls", "http://0.0.0.0:8080"]

Mount source directories into the container via Compose and enable the polling watcher if you use networked filesystems. It’s a small friction tweak that saves a lot of time.
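
If file-change events don't propagate through the mount (common on networked or virtualized filesystems), enable the polling watcher via an environment variable - a compose fragment:

```yaml
# compose fragment - force dotnet watch to poll for file changes
services:
  api:
    environment:
      - DOTNET_USE_POLLING_FILE_WATCHER=true
```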

Secrets & configuration - never bake secrets into images

Treat secrets as runtime configuration. Use Docker secrets for Swarm, Kubernetes secrets for k8s, or environment-backed vaults for clouds. At minimum, avoid hardcoding credentials in your Dockerfile or checked-in compose files.

// Read a Docker secret in .NET (concept)
var secretPath = "/run/secrets/DbPassword";
if (File.Exists(secretPath))
{
    var dbPassword = File.ReadAllText(secretPath).Trim();
    builder.Configuration.AddInMemoryCollection(new[]
    {
        new KeyValuePair<string, string?>("ConnectionStrings:DbPassword", dbPassword)
    });
}

If you must use environment variables in dev, make them explicit and load .env files locally only. For production, use managed secret stores.
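
On the Compose side, a sketch that mounts a secret at the /run/secrets path the snippet above reads - the file-backed form is for local dev only; in Swarm you'd use `docker secret create` instead:

```yaml
# compose fragment - mount a secret file at /run/secrets/DbPassword
services:
  api:
    secrets:
      - DbPassword
secrets:
  DbPassword:
    file: ./secrets/db_password.txt   # dev only; never commit this file
```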

CI/CD with Docker - build reproducibly and scan images

Your CI job should: build images with pinned base tags, run tests inside containers (unit and integration), scan images for vulnerabilities, and push signed artifacts to a registry. Reproducible builds and immutability make rollbacks and auditing far simpler.

# GitHub Actions - snippet to build and push (concept)
- name: Build and push
  uses: docker/build-push-action@v5
  with:
    push: true
    tags: ghcr.io/${{github.repository}}:${{ github.sha }}
    platforms: linux/amd64,linux/arm64

Add an image scan step (Trivy, Snyk) and fail builds on high-severity findings. Also automate multi-arch builds where needed.
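
A sketch of a Trivy scan step using its GitHub Action - the action name and inputs follow aquasecurity/trivy-action, but verify them against that action's docs before relying on this:

```yaml
- name: Scan image
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: ghcr.io/${{ github.repository }}:${{ github.sha }}
    severity: HIGH,CRITICAL
    exit-code: '1'   # non-zero exit fails the build on findings
```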

Runtime resilience - health checks, graceful shutdown, and signals

Containers are ephemeral. Implement health endpoints, respond to SIGTERM, and shut down gracefully so orchestrators can drain traffic. Handle background work cancellation via IHostedService that respects cancellation tokens.

// Graceful shutdown handler (pattern)
public class TimedHostedService : IHostedService
{
    public Task StartAsync(CancellationToken ct) => Task.CompletedTask;

    public async Task StopAsync(CancellationToken ct)
    {
        // stop accepting work, flush queues, respect ct
        await Task.Delay(100, ct);
    }
}
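
The host gives hosted services a limited grace period by default before forcing shutdown; if draining takes longer, raise it via HostOptions - a sketch:

```csharp
var builder = WebApplication.CreateBuilder(args);

// Give StopAsync implementations more time to drain before the host gives up
builder.Services.Configure<HostOptions>(o =>
    o.ShutdownTimeout = TimeSpan.FromSeconds(30));
```

Keep this in sync with your orchestrator's termination grace period, or the platform will kill the process before the host finishes draining.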

Kubernetes uses readiness and liveness probes - readiness means "I can accept traffic," liveness means "I'm still alive, don't restart me." A failing readiness probe removes the pod from load balancing; a failing liveness probe restarts it. Combined, they let orchestrators heal problematic pods automatically without impacting users.

Observability - logs, metrics, and traces

Structured logs, application metrics, and distributed traces are non-negotiable for production. Use Serilog or Microsoft.Extensions.Logging to emit JSON logs to stdout (or to a sidecar). Export Prometheus metrics and instrument traces with OpenTelemetry.

// Serilog setup (concise)
Log.Logger = new LoggerConfiguration()
  .Enrich.FromLogContext()
  .WriteTo.Console(outputTemplate: "{Timestamp:yyyy-MM-dd HH:mm:ss} [{Level}] {Message:lj} {Properties:j}{NewLine}{Exception}")
  .CreateLogger();

Streaming logs to a central system (ELK, Loki, Datadog) and shipping traces to a tracing backend (Jaeger, Zipkin, or vendor) makes incident analysis possible.
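
A sketch wiring OpenTelemetry tracing with the OTLP exporter - this assumes the OpenTelemetry.Extensions.Hosting, OpenTelemetry.Instrumentation.AspNetCore/Http, and OTLP exporter packages; check package and API versions against the OpenTelemetry .NET docs:

```csharp
builder.Services.AddOpenTelemetry()
    .ConfigureResource(r => r.AddService("my-api"))   // service name shown in the backend
    .WithTracing(t => t
        .AddAspNetCoreInstrumentation()   // spans for incoming HTTP requests
        .AddHttpClientInstrumentation()   // spans for outgoing HTTP calls
        .AddOtlpExporter());              // ship to a collector (Jaeger, vendor, etc.)
```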

Performance & image hygiene

Small images start faster and have smaller attack surfaces. Use .dockerignore to exclude build artifacts, order Dockerfile steps so cached layers are reused, and avoid copying dev toolchains into production images.

# .dockerignore essentials
**/bin/
**/obj/
**/.git/
**/.vs/

Warm-up considerations: services that initialize heavy caches at startup can cause cold-start latency. Keep startup work compact and populate caches asynchronously where possible.
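
One way to do that asynchronous population - warm the cache from a BackgroundService so the app starts serving immediately. CacheWarmupService and the cached key are illustrative names:

```csharp
using Microsoft.Extensions.Caching.Memory;
using Microsoft.Extensions.Hosting;

// Illustrative background warm-up: startup isn't blocked while the cache fills
public class CacheWarmupService : BackgroundService
{
    private readonly IMemoryCache _cache;
    public CacheWarmupService(IMemoryCache cache) => _cache = cache;

    protected override async Task ExecuteAsync(CancellationToken ct)
    {
        // stand-in for loading reference data from a DB or API
        await Task.Delay(TimeSpan.FromSeconds(1), ct);
        _cache.Set("reference-data", new[] { "a", "b" });
    }
}
```

Code paths that need the cache should tolerate a miss until warm-up completes, falling back to the source of truth.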

From Compose to Kubernetes - practical migration notes

Compose is great for local dev, but k8s is a different operational surface. Focus on the following when migrating: container images (same), health probes (translate to Kubernetes liveness/readiness), and configuration (move to k8s secrets/configmaps). Also add resource requests and limits so the scheduler can place pods sensibly.

# Kubernetes readiness probe example (manifest fragment)
readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
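
Resource requests and limits belong in the same container spec - a sketch with illustrative values you should tune per service:

```yaml
resources:
  requests:
    cpu: 250m        # what the scheduler reserves for placement
    memory: 256Mi
  limits:
    cpu: "1"         # throttled above this
    memory: 512Mi    # OOM-killed above this
```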

Start with a single-replica deployment and use canary or blue/green rollouts for safety. Observe and iterate.

Common mistakes I still see

A few recurring problems I keep bumping into: shipping development tools in production images, ignoring graceful shutdowns, hard-coding secrets, and not monitoring circuit-breaker behavior. These are small mistakes with big operational costs.

Containers are immutable by design. Instead of patching running containers, build a new image and redeploy - that improves reproducibility and makes rollbacks easy.

Checklist - practical rules to follow

Keep this simple checklist in your repo README so engineers can follow the same patterns: pin base images, multi-stage builds, non-root user, health checks, secrets managed at runtime, structured logs to stdout, CI image scanning, and minimal runtime images.

Summary

Docker changes the shape of your development and operations. It forces you to be explicit about the runtime, to automate builds, and to think about deployment as a repeatable process. Start small: containerize one service, add health checks and logging, then expand the pattern to other services. Over time you’ll get faster builds, fewer environment issues, and more reliable deployments.

If you take anything away, let it be this: make images small, secrets runtime-managed, and observability first-class. Those three give you the biggest win in terms of reliability and developer experience.