Kubernetes Orchestration: Deploying Microservices at Scale

I've deployed applications on everything from single servers to massive cloud infrastructures, and Kubernetes stands out as the most transformative technology I've worked with. Before Kubernetes, scaling applications was a complex dance of load balancers, deployment scripts, and manual monitoring. Kubernetes made it declarative: you describe what you want, and the system makes it happen.

For C# developers, Kubernetes represents the bridge between writing great code and running it reliably at scale. It takes the containerization we learned with Docker and adds orchestration, scaling, and self-healing capabilities that make production deployments feel almost effortless.

In this comprehensive guide, we'll explore Kubernetes orchestration for C# microservices from the ground up. You'll learn how to deploy, scale, and manage containerized applications with confidence. We'll cover everything from basic deployments to advanced patterns like service meshes and GitOps workflows.

By the end, you'll understand why Kubernetes has become the standard for cloud-native applications and how to leverage it effectively for your C# microservices.

Why Kubernetes Matters for C# Microservices

The first time I deployed a microservices architecture without Kubernetes, it was a disaster. Services crashed during deployments, load balancing was manual, and scaling required waking up the on-call engineer at 3 AM. Kubernetes solved all of these problems and more.

Kubernetes provides:

- Automatic scaling: services scale up and down based on demand
- Self-healing: failed containers are automatically restarted
- Load balancing: traffic is automatically distributed across instances
- Rolling updates: zero-downtime deployments with rollback capabilities
- Resource management: CPU and memory are allocated efficiently
- Service discovery: services find each other automatically

For C# developers, this means you can focus on writing business logic while Kubernetes handles the operational complexity. The same code that runs on your development machine scales to handle millions of requests in production.

Kubernetes doesn't just run containers; it provides the infrastructure for reliable, scalable, self-healing applications that can evolve with your business needs.

Your First Kubernetes Deployment: Hello World with C#

Let's start with the fundamentals. We'll deploy a simple ASP.NET Core API to Kubernetes and see how it all fits together. This will teach you the core concepts without overwhelming complexity.

// Program.cs - Simple C# API for Kubernetes
var builder = WebApplication.CreateBuilder(args);

// Add health checks for Kubernetes
builder.Services.AddHealthChecks();

var app = builder.Build();

app.MapGet("/", () => "Hello from Kubernetes!");
app.MapGet("/info", () =>
{
    return new
    {
        service = "k8s-demo-api",
        version = "1.0.0",
        timestamp = DateTime.UtcNow,
        framework = System.Runtime.InteropServices.RuntimeInformation.FrameworkDescription,
        pod = Environment.GetEnvironmentVariable("HOSTNAME") ?? "unknown"
    };
});

// Health check endpoint for Kubernetes
app.MapHealthChecks("/health");

app.Run();

This is a minimal API with health checks, which are essential for Kubernetes to know when your application is ready and healthy.
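
Before Kubernetes can run the API, it needs a container image to pull. A minimal multi-stage Dockerfile might look like this (a sketch; K8sDemoApi.dll is a placeholder for your own project name):

# Dockerfile - multi-stage build for the demo API
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app/publish

FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app/publish .
EXPOSE 8080
ENTRYPOINT ["dotnet", "K8sDemoApi.dll"]

Build and push it (docker build -t myregistry/k8s-demo-api:v1.0.0 . followed by docker push myregistry/k8s-demo-api:v1.0.0) so the cluster can pull the image referenced in the deployment below.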

Now let's create the Kubernetes deployment manifest.

# k8s/deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-demo-api
  labels:
    app: k8s-demo-api
spec:
  replicas: 3  # Run 3 instances for high availability
  selector:
    matchLabels:
      app: k8s-demo-api
  template:
    metadata:
      labels:
        app: k8s-demo-api
    spec:
      containers:
      - name: api
        image: myregistry/k8s-demo-api:v1.0.0
        ports:
        - containerPort: 8080
        env:
        - name: ASPNETCORE_URLS
          value: "http://+:8080"
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "100m"
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
          timeoutSeconds: 3
          failureThreshold: 3

This deployment creates 3 replicas of our API with health checks, resource limits, and proper labeling. The liveness probe restarts containers that become unresponsive, while the readiness probe ensures traffic only goes to healthy instances.
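
If you want the two probes to mean different things (for example, readiness should fail while dependencies warm up, without restarting the pod), you can split them into separate endpoints filtered by tags. A sketch, where the tag and check names are your own convention:

// Program.cs - split liveness and readiness endpoints (sketch)
using Microsoft.AspNetCore.Diagnostics.HealthChecks;
using Microsoft.Extensions.Diagnostics.HealthChecks;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddHealthChecks()
    // Liveness: is the process itself responsive?
    .AddCheck("self", () => HealthCheckResult.Healthy(), tags: new[] { "live" })
    // Readiness: are the dependencies we need to serve traffic available?
    .AddCheck("dependencies", () => HealthCheckResult.Healthy(), tags: new[] { "ready" });

var app = builder.Build();

// Liveness endpoint: only checks tagged "live"
app.MapHealthChecks("/health/live", new HealthCheckOptions
{
    Predicate = check => check.Tags.Contains("live")
});

// Readiness endpoint: only checks tagged "ready"
app.MapHealthChecks("/health/ready", new HealthCheckOptions
{
    Predicate = check => check.Tags.Contains("ready")
});

app.Run();

If you adopt this split, point the livenessProbe at /health/live and the readinessProbe at /health/ready; the single /health endpoint above works fine for simple services.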

Now let's expose the deployment with a service.

# k8s/service.yml
apiVersion: v1
kind: Service
metadata:
  name: k8s-demo-api-service
  labels:
    app: k8s-demo-api
spec:
  selector:
    app: k8s-demo-api  # Route traffic to pods with this label
  ports:
  - name: http
    port: 80          # External port
    targetPort: 8080  # Container port
  type: ClusterIP     # Internal service (default)

---
# k8s/ingress.yml - External access
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-demo-api-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: k8s-demo-api-service
            port:
              number: 80

The service provides stable networking for the pods, while the ingress exposes the application externally.

Let's deploy this to Kubernetes.

# Deploy to Kubernetes
kubectl apply -f k8s/

# Check deployment status
kubectl get deployments
kubectl get pods
kubectl get services

# View logs
kubectl logs -f deployment/k8s-demo-api

# Test the API
curl http://api.example.com/
curl http://api.example.com/info

# Scale the deployment
kubectl scale deployment k8s-demo-api --replicas=5

# Rolling update
kubectl set image deployment/k8s-demo-api api=myregistry/k8s-demo-api:v1.1.0

With these commands, you've deployed a scalable, self-healing API. Kubernetes ensures the desired number of replicas are always running, automatically restarting failed instances.
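
Because Kubernetes keeps the previous ReplicaSet around after a rolling update, backing out of a bad release is a single command:

# Watch a rollout and roll back if it misbehaves
kubectl rollout status deployment/k8s-demo-api
kubectl rollout history deployment/k8s-demo-api
kubectl rollout undo deployment/k8s-demo-api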

Configuration Management: ConfigMaps and Secrets

Hard-coding configuration is a recipe for problems. Kubernetes provides ConfigMaps for non-sensitive configuration and Secrets for sensitive data, allowing you to manage configuration separately from code.

# k8s/configmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ecommerce-api-config
  namespace: production
data:
  appsettings.json: |
    {
      "Logging": {
        "LogLevel": {
          "Default": "Information",
          "Microsoft.AspNetCore": "Warning"
        }
      },
      "Features": {
        "EnableSwagger": false,
        "EnableMetrics": true,
        "CacheEnabled": true
      },
      "Cache": {
        "RedisConnection": "redis-service:6379",
        "DefaultExpirationMinutes": 30
      },
      "ServiceDiscovery": {
        "ConsulUrl": "http://consul-service:8500"
      }
    }
  environment: "Production"
  log-level: "Information"

---
# k8s/secret.yml
apiVersion: v1
kind: Secret
metadata:
  name: ecommerce-api-secrets
  namespace: production
type: Opaque
stringData:
  database-connection: "Host=postgres;Database=ecommerce;Username=app;Password=MySecurePassword123!"
  redis-password: "RedisSecurePassword456!"
  jwt-secret: "MySuperSecretJwtKeyThatIsAtLeast32CharactersLong"
  api-key: "sk-1234567890abcdef"

ConfigMaps store configuration as key-value pairs or files, while Secrets hold sensitive data. Keep in mind that Secrets are only base64-encoded, not encrypted, by default; enable encryption at rest (or use an external secrets manager) for production clusters. Both can be mounted as environment variables or files.

Let's update our deployment to use these configurations.

# k8s/deployment.yml (updated)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ecommerce-api
spec:
  # selector and template metadata labels omitted for brevity;
  # they follow the same pattern as the first deployment example
  template:
    spec:
      containers:
      - name: api
        image: myregistry/ecommerce-api:v1.0.0
        env:
        # From ConfigMap
        - name: ASPNETCORE_ENVIRONMENT
          valueFrom:
            configMapKeyRef:
              name: ecommerce-api-config
              key: environment
        - name: Logging__LogLevel__Default
          valueFrom:
            configMapKeyRef:
              name: ecommerce-api-config
              key: log-level
        # From Secret
        - name: ConnectionStrings__Database
          valueFrom:
            secretKeyRef:
              name: ecommerce-api-secrets
              key: database-connection
        - name: Cache__RedisPassword
          valueFrom:
            secretKeyRef:
              name: ecommerce-api-secrets
              key: redis-password
        volumeMounts:
        - name: config-volume
          mountPath: /app/config
      volumes:
      - name: config-volume
        configMap:
          name: ecommerce-api-config
          items:
          - key: appsettings.json
            path: appsettings.json

This approach allows you to change configuration without rebuilding containers. You can update ConfigMaps and Secrets, then restart deployments to pick up the changes.
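
Note that environment variables are only read at container start, so pods must be restarted to see new values (files mounted from a ConfigMap are updated in place, but the application still has to reload them):

# Apply updated configuration, then restart pods to pick it up
kubectl apply -f k8s/configmap.yml
kubectl rollout restart deployment/ecommerce-api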

In your C# code, these configurations are available through the standard IConfiguration interface.

// Configuration usage in C#
public class OrderService
{
    private readonly string _connectionString;
    private readonly string _redisPassword;
    private readonly bool _cacheEnabled;

    public OrderService(IConfiguration configuration)
    {
        _connectionString = configuration.GetConnectionString("Database");
        _redisPassword = configuration["Cache:RedisPassword"];
        _cacheEnabled = configuration.GetValue<bool>("Features:CacheEnabled");
    }
}

This separation of concerns makes your applications more maintainable and environment-agnostic.
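
One wrinkle worth calling out: the ConfigMap file is mounted at /app/config/appsettings.json, which is outside the paths ASP.NET Core loads by default, so register it explicitly (a sketch; reloadOnChange lets in-place ConfigMap updates flow through without a restart):

// Program.cs - load the mounted ConfigMap file explicitly
var builder = WebApplication.CreateBuilder(args);

builder.Configuration.AddJsonFile("/app/config/appsettings.json",
    optional: true,        // tolerate local runs without the mount
    reloadOnChange: true); // pick up in-place ConfigMap updates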

Scaling: From Manual to Auto-Scaling

One of Kubernetes' most powerful features is its ability to scale applications automatically based on demand. Let's explore different scaling approaches.

# k8s/hpa.yml - Horizontal Pod Autoscaler
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ecommerce-api-hpa
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ecommerce-api
  minReplicas: 2
  maxReplicas: 20
  metrics:
  # Scale based on CPU utilization
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  # Scale based on memory utilization
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
  # Scale based on custom metrics (requires a custom metrics adapter, e.g. Prometheus Adapter)
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second
      target:
        type: AverageValue
        averageValue: "100"
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300  # Wait 5 minutes before scaling down
      policies:
      - type: Percent
        value: 50  # Scale down by 50% at most
        periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 60   # Quick scale up
      policies:
      - type: Percent
        value: 100  # Double the number of pods
        periodSeconds: 60

# Manual scaling commands (imperative alternatives to the HPA)
kubectl scale deployment ecommerce-api --replicas=10
kubectl autoscale deployment ecommerce-api --cpu-percent=70 --min=2 --max=10

The Horizontal Pod Autoscaler automatically adjusts the number of replicas based on resource utilization or custom metrics. This ensures your application can handle traffic spikes while minimizing costs during low usage.
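
You can watch the autoscaler's decisions as they happen:

# Observe autoscaling decisions
kubectl get hpa ecommerce-api-hpa --watch
kubectl describe hpa ecommerce-api-hpa   # current metrics and scaling events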

For applications with state or specific scaling needs, you might also use the Vertical Pod Autoscaler (a separate add-on that must be installed in the cluster).

# k8s/vpa.yml - Vertical Pod Autoscaler
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: ecommerce-api-vpa
  namespace: production
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ecommerce-api
  updatePolicy:
    updateMode: "Auto"  # Automatically update resource requests
  resourcePolicy:
    containerPolicies:
    - containerName: api
      minAllowed:
        cpu: 100m
        memory: 128Mi
      maxAllowed:
        cpu: 2
        memory: 4Gi
      controlledResources: ["cpu", "memory"]

Vertical Pod Autoscaling adjusts CPU and memory requests for individual containers, ensuring optimal resource allocation.

The combination of horizontal and vertical scaling gives you fine-grained control over application performance and costs. One caveat: avoid letting the HPA and VPA both act on CPU or memory for the same workload, or the two controllers will fight each other; drive the HPA with custom metrics, or run the VPA in recommendation mode, when combining them.

Service Mesh: Istio for Advanced Traffic Management

As your microservices architecture grows, you need more sophisticated traffic management, security, and observability. Service meshes like Istio provide these capabilities without changing your application code.

# k8s/istio/gateway.yml - API Gateway
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: ecommerce-gateway
  namespace: production
spec:
  selector:
    istio: ingressgateway  # Use Istio's ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "api.ecommerce.com"
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: ecommerce-tls
    hosts:
    - "api.ecommerce.com"

---
# k8s/istio/virtual-service.yml - Traffic Routing
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: ecommerce-api-routing
  namespace: production
spec:
  hosts:
  - api.ecommerce.com
  gateways:
  - ecommerce-gateway
  http:
  # Route mobile traffic to mobile-optimized version
  - match:
    - headers:
        user-agent:
          regex: ".*Mobile.*"
    route:
    - destination:
        host: ecommerce-api
        subset: mobile
  # A/B testing: 90% to stable, 10% to canary
  - route:
    - destination:
        host: ecommerce-api
        subset: stable
      weight: 90
    - destination:
        host: ecommerce-api
        subset: canary
      weight: 10
  # API versioning
  - match:
    - uri:
        prefix: "/v2"
    route:
    - destination:
        host: ecommerce-api-v2
  # Default route
  - route:
    - destination:
        host: ecommerce-api
        subset: stable

Virtual services allow sophisticated traffic routing based on headers, paths, and other criteria. This enables canary deployments, A/B testing, and API versioning.

Now let's define the destination subsets.

# k8s/istio/destination-rule.yml - Traffic Policies
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: ecommerce-api-subsets
  namespace: production
spec:
  host: ecommerce-api
  subsets:
  - name: stable
    labels:
      version: v1.0.0
  - name: canary
    labels:
      version: v1.1.0
  - name: mobile
    labels:
      version: mobile
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http1MaxPendingRequests: 10
        maxRequestsPerConnection: 10
    outlierDetection:
      consecutive5xxErrors: 3
      interval: 10s
      baseEjectionTime: 30s
      maxEjectionPercent: 50
    loadBalancer:
      simple: ROUND_ROBIN
    tls:
      mode: ISTIO_MUTUAL  # Mutual TLS between services

Destination rules define traffic policies including load balancing, circuit breaking, and TLS settings. The outlier detection automatically removes unhealthy instances from the load balancing pool.

Service meshes add observability, security, and reliability features without code changes.
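
One exception to "no code changes": for Istio's distributed tracing to stitch requests together, applications must forward the incoming trace headers (x-request-id plus the B3 or W3C traceparent headers) on outbound calls. A minimal sketch with a DelegatingHandler (the header list is illustrative):

// TraceHeaderPropagationHandler.cs - forward trace headers downstream (sketch)
using Microsoft.AspNetCore.Http;

public class TraceHeaderPropagationHandler : DelegatingHandler
{
    private static readonly string[] HeadersToPropagate =
    {
        "x-request-id", "traceparent", "tracestate",
        "x-b3-traceid", "x-b3-spanid", "x-b3-parentspanid", "x-b3-sampled"
    };

    private readonly IHttpContextAccessor _httpContextAccessor;

    public TraceHeaderPropagationHandler(IHttpContextAccessor httpContextAccessor)
    {
        _httpContextAccessor = httpContextAccessor;
    }

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Copy trace headers from the incoming request to the outgoing one
        var incoming = _httpContextAccessor.HttpContext?.Request.Headers;
        if (incoming is not null)
        {
            foreach (var header in HeadersToPropagate)
            {
                if (incoming.TryGetValue(header, out var value))
                {
                    request.Headers.TryAddWithoutValidation(header, value.ToString());
                }
            }
        }
        return base.SendAsync(request, cancellationToken);
    }
}

Register it with services.AddHttpContextAccessor() and AddHttpClient(...).AddHttpMessageHandler<TraceHeaderPropagationHandler>(); the Microsoft.AspNetCore.HeaderPropagation package is a declarative alternative.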

Stateful Applications: Databases and Persistent Storage

Not all applications are stateless. Databases and applications with persistent state need special handling in Kubernetes. StatefulSets provide stable network identities and persistent storage.

# k8s/postgres-statefulset.yml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
  namespace: production
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:15-alpine
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_DB
          value: ecommerce
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: postgres-secret
              key: username
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-secret
              key: password
        volumeMounts:
        - name: postgres-data
          mountPath: /var/lib/postgresql/data
        resources:
          requests:
            memory: "512Mi"
            cpu: "250m"
          limits:
            memory: "1Gi"
            cpu: "500m"
        livenessProbe:
          exec:
            command:
            - pg_isready
            - -U
            - $(POSTGRES_USER)
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          exec:
            command:
            - pg_isready
            - -U
            - $(POSTGRES_USER)
          initialDelaySeconds: 5
          periodSeconds: 5
  volumeClaimTemplates:
  - metadata:
      name: postgres-data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: fast-ssd
      resources:
        requests:
          storage: 50Gi

---
# k8s/postgres-service.yml
apiVersion: v1
kind: Service
metadata:
  name: postgres  # must match serviceName in the StatefulSet
  namespace: production
spec:
  selector:
    app: postgres
  ports:
  - port: 5432
    targetPort: 5432
  clusterIP: None  # Headless service for StatefulSet

StatefulSets ensure ordered deployment and stable network identities. The volume claim template creates persistent storage that survives pod restarts.

For high availability, you might use database clustering solutions or cloud-managed databases.
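
Because any pod, including this database, can be rescheduled at any time, clients should tolerate transient connection failures. With EF Core and the Npgsql provider, for example, you can enable retry-on-failure (a sketch; OrderDbContext is a placeholder for your own context):

// Program.cs - connection resiliency for a PostgreSQL-backed DbContext (sketch)
using Microsoft.EntityFrameworkCore;

builder.Services.AddDbContext<OrderDbContext>(options =>
    options.UseNpgsql(
        builder.Configuration.GetConnectionString("Database"),
        npgsql => npgsql.EnableRetryOnFailure(
            maxRetryCount: 5,
            maxRetryDelay: TimeSpan.FromSeconds(10),
            errorCodesToAdd: null))); // retry only transient PostgreSQL errors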

CI/CD Pipelines: GitOps with ArgoCD

Kubernetes enables GitOps workflows where your desired cluster state is stored in Git. Tools like ArgoCD automatically sync your cluster with the Git repository.

# .github/workflows/deploy.yml
name: Deploy to Kubernetes

on:
  push:
    branches: [ main, develop ]  # develop deploys to staging, main to production
    paths:
      - 'src/**'
      - 'k8s/**'

jobs:
  build-and-push:
    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v4

    - name: Setup .NET
      uses: actions/setup-dotnet@v3
      with:
        dotnet-version: 8.0.x

    - name: Build and Test
      run: |
        dotnet restore
        dotnet build --no-restore -c Release
        dotnet test --no-build --verbosity normal

    - name: Log in to Container Registry
      uses: docker/login-action@v3
      with:
        registry: myregistry  # assumes credentials stored as repository secrets
        username: ${{ secrets.REGISTRY_USERNAME }}
        password: ${{ secrets.REGISTRY_PASSWORD }}

    - name: Build and Push Docker Image
      uses: docker/build-push-action@v5
      with:
        context: .
        push: true
        tags: myregistry/ecommerce-api:${{ github.sha }},myregistry/ecommerce-api:latest
        cache-from: type=gha
        cache-to: type=gha,mode=max

    - name: Update Kubernetes Manifests
      run: |
        sed -i "s|image: myregistry/ecommerce-api:.*|image: myregistry/ecommerce-api:${{ github.sha }}|g" k8s/deployment.yml
        git config --global user.email "[email protected]"
        git config --global user.name "CI/CD"
        git add k8s/deployment.yml
        git commit -m "Update image to ${{ github.sha }}"
        git push

  deploy:
    needs: build-and-push
    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v4  # the manifests are needed for kubectl apply

    # Assumes kubeconfig contexts for staging and production are provisioned
    # on the runner (e.g., from a secret) before these steps run
    - name: Deploy to Staging
      if: github.ref == 'refs/heads/develop'
      run: |
        kubectl config use-context staging
        kubectl apply -f k8s/
        kubectl rollout status deployment/ecommerce-api

    - name: Deploy to Production
      if: github.ref == 'refs/heads/main'
      run: |
        kubectl config use-context production
        kubectl apply -f k8s/
        kubectl rollout status deployment/ecommerce-api

This pipeline builds, tests, and deploys your application automatically. Committing the updated image tag back to the repository keeps Git as the source of truth, providing audit trails and rollback capabilities.

For more sophisticated GitOps, ArgoCD provides automated synchronization.

# k8s/argocd/application.yml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: ecommerce-api
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/myorg/ecommerce-k8s
    targetRevision: HEAD
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
    - PrunePropagationPolicy=foreground
    - PruneLast=true

ArgoCD continuously monitors your Git repository and keeps the cluster synchronized. Changes to manifests in Git automatically deploy to Kubernetes.
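
Day to day, the argocd CLI is useful for inspecting and forcing syncs:

# Inspect and sync the application with the ArgoCD CLI
argocd app get ecommerce-api
argocd app sync ecommerce-api
argocd app history ecommerce-api      # list past syncs with their IDs
argocd app rollback ecommerce-api 1   # revert to history ID 1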

Monitoring and Observability: Prometheus and Grafana

Monitoring is critical for production Kubernetes deployments. Prometheus collects metrics, while Grafana provides visualization and alerting.

# k8s/monitoring/service-monitor.yml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: ecommerce-api-monitor
  namespace: monitoring
  labels:
    team: backend
spec:
  selector:
    matchLabels:
      app: ecommerce-api
  endpoints:
  - port: http
    path: /metrics
    interval: 30s
    scrapeTimeout: 10s
  namespaceSelector:
    matchNames:
    - production

On the application side, the code below defines the metrics that the ServiceMonitor scrapes. It is written against the prometheus-net API (an assumption, since the original did not name a metrics library): Counter, Histogram, Gauge, and the static Metrics factory come from the prometheus-net package.

// Application metrics in C# (prometheus-net)
using Prometheus;

public static class MetricsConfiguration
{
    public static void AddOrderMetrics(this IServiceCollection services)
    {
        // Register the metrics wrapper once for the whole app
        services.AddSingleton<OrderMetrics>();
    }
}

public class OrderMetrics
{
    private readonly Counter _ordersCreated;
    private readonly Histogram _orderProcessingTime;
    private readonly Gauge _activeOrders;

    public OrderMetrics()
    {
        _ordersCreated = Metrics.CreateCounter("ecommerce_orders_created_total",
            "Total number of orders created");
        _orderProcessingTime = Metrics.CreateHistogram("ecommerce_order_processing_duration_seconds",
            "Time taken to process orders");
        _activeOrders = Metrics.CreateGauge("ecommerce_orders_active",
            "Number of currently active orders");
    }

    public void OrderCreated() => _ordersCreated.Inc();

    public IDisposable StartOrderProcessingTimer() => _orderProcessingTime.NewTimer();

    public void SetActiveOrders(int count) => _activeOrders.Set(count);
}

// Usage in service
public class OrderService
{
    private readonly OrderMetrics _metrics;
    private readonly IOrderRepository _repository;

    public OrderService(OrderMetrics metrics, IOrderRepository repository)
    {
        _metrics = metrics;
        _repository = repository;
    }

    public async Task<Order> CreateOrderAsync(CreateOrderRequest request)
    {
        // The histogram timer records processing duration when disposed
        using var timer = _metrics.StartOrderProcessingTimer();

        var order = new Order(request.CustomerId);
        // Process order...

        await _repository.SaveAsync(order);

        _metrics.OrderCreated();

        return order;
    }
}

Application metrics combined with infrastructure metrics provide comprehensive observability. You can monitor application performance, resource usage, and business metrics.
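
For the ServiceMonitor's /metrics scrape to find anything, the application must expose that endpoint. With prometheus-net (the same assumption as the metrics classes above), that is two calls in Program.cs:

// Program.cs - expose the Prometheus scrape endpoint (prometheus-net.AspNetCore)
using Prometheus;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.UseHttpMetrics(); // record duration/status counters for every HTTP request
app.MapMetrics();     // serve Prometheus metrics at /metrics

app.Run();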

Grafana dashboards visualize these metrics, and Prometheus AlertManager handles alerting.

Security: Network Policies and RBAC

Security in Kubernetes requires multiple layers: network policies, RBAC, and secure configurations.

# k8s/security/network-policy.yml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ecommerce-api-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: ecommerce-api
  policyTypes:
  - Ingress
  - Egress
  ingress:
  # Allow traffic from API gateway
  - from:
    - podSelector:
        matchLabels:
          app: api-gateway
    ports:
    - protocol: TCP
      port: 8080
  # Allow traffic from other services in the same namespace
  - from:
    - podSelector:
        matchLabels:
          app: inventory-service
    ports:
    - protocol: TCP
      port: 8080
  egress:
  # Allow DNS resolution
  - to: []
    ports:
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
  # Allow access to database
  - to:
    - podSelector:
        matchLabels:
          app: postgres
    ports:
    - protocol: TCP
      port: 5432
  # Allow access to Redis
  - to:
    - podSelector:
        matchLabels:
          app: redis
    ports:
    - protocol: TCP
      port: 6379
  # Allow external API calls (if needed)
  - to: []
    ports:
    - protocol: TCP
      port: 443

---
# k8s/security/rbac.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ecommerce-api-role
  namespace: production
rules:
- apiGroups: [""]
  resources: ["configmaps", "secrets"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ecommerce-api-binding
  namespace: production
subjects:
- kind: ServiceAccount
  name: ecommerce-api-sa
  namespace: production
roleRef:
  kind: Role
  name: ecommerce-api-role
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ecommerce-api-sa
  namespace: production
automountServiceAccountToken: false

Network policies control traffic flow between pods, while RBAC ensures services only have the permissions they need; remember to reference the service account from the pod spec via serviceAccountName for the binding to take effect. The principle of least privilege applies to Kubernetes just as it does to application code.

Security Principle: Defense in depth is crucial. Combine network policies, RBAC, secure configurations, and regular security audits to protect your applications.
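
A hardened pod spec complements these policies. A typical securityContext looks like this (the specific values are sensible defaults, not requirements):

# Fragment of a pod spec with a restrictive security context
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: api
    image: myregistry/ecommerce-api:v1.0.0
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]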

Troubleshooting and Debugging in Kubernetes

Even with Kubernetes' automation, issues arise. Effective troubleshooting requires systematic approaches and the right tools.

# Essential troubleshooting commands
# Check pod status and events
kubectl get pods -o wide
kubectl describe pod ecommerce-api-12345-abcde
kubectl get events --sort-by=.metadata.creationTimestamp

# View logs
kubectl logs -f deployment/ecommerce-api
kubectl logs deployment/ecommerce-api --previous     # Previous (crashed) container

# Debug running containers
kubectl exec -it deployment/ecommerce-api -- /bin/bash
kubectl port-forward deployment/ecommerce-api 8080:8080

# Check resource usage
kubectl top pods
kubectl top nodes

# Network debugging
kubectl exec deployment/ecommerce-api -- curl -v inventory-service:8080/health
kubectl get endpoints ecommerce-api

# Check service mesh (Istio)
kubectl get virtualservices
kubectl get destinationrules
istioctl proxy-status

# Debug with temporary pod
kubectl run debug-pod --image=busybox --rm -it -- sh
# Inside pod: wget -qO- http://ecommerce-api:8080/health

# Check cluster events and logs
kubectl cluster-info dump
kubectl logs -n kube-system deployment/coredns

Systematic troubleshooting starts with checking pod status, then moves to logs, network connectivity, and resource usage. The key is to isolate the problem step by step.

For complex issues, distributed tracing with Jaeger or OpenTelemetry helps trace requests across services.
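
Wiring OpenTelemetry into an ASP.NET Core service is mostly configuration. A sketch using the official packages (OpenTelemetry.Extensions.Hosting, the ASP.NET Core and HttpClient instrumentation packages, and the OTLP exporter); the collector address is an assumption:

// Program.cs - distributed tracing with OpenTelemetry (sketch)
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

builder.Services.AddOpenTelemetry()
    .ConfigureResource(resource => resource.AddService("ecommerce-api"))
    .WithTracing(tracing => tracing
        .AddAspNetCoreInstrumentation()  // spans for incoming HTTP requests
        .AddHttpClientInstrumentation()  // spans for outgoing HTTP calls
        .AddOtlpExporter(otlp =>
            otlp.Endpoint = new Uri("http://otel-collector.monitoring:4317")));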

Common Kubernetes Mistakes and How to Avoid Them

I've made many Kubernetes mistakes over the years. Here are the most common ones and how to avoid them.

Not setting resource limits: Always specify CPU and memory requests and limits. Without them, one pod can starve others of resources.

Ignoring health checks: Implement proper liveness and readiness probes. Without them, Kubernetes can't detect unhealthy pods.

Using latest tags in production: Pin specific image versions for reproducible deployments.

Storing secrets in ConfigMaps: Use Secrets for sensitive data. ConfigMaps are not encrypted.

Not using namespaces: Organize resources with namespaces for better management and security.

Over-provisioning: Start with minimal resources and scale up. Over-provisioning wastes money.

Ignoring network policies: Implement network segmentation from the start. It's much harder to add later.

Not monitoring: Implement comprehensive monitoring and alerting. You can't fix what you can't see.

Start simple and add complexity gradually. Focus on reliability and observability before optimizing for performance.

Summary

Kubernetes orchestration transforms how we build and deploy C# microservices. It provides the infrastructure for reliable, scalable, self-healing applications that can evolve with your business needs.

We've explored the complete Kubernetes journey: from basic deployments and services to advanced patterns like service meshes, GitOps, and comprehensive monitoring. The key insights are that infrastructure as code enables reproducible environments, observability is crucial for maintaining complex systems, and automation reduces human error. Security must be built into every layer, and scaling should be designed from the start. Kubernetes integrates seamlessly with C# workflows, allowing the same applications to scale from local development to production handling millions of requests.