The Senior .NET Developer Interview Guide: Part 3 – Scalability, Observability, and Concurrency

Introduction

Building a .NET application that works for a few hundred users is straightforward, but designing one that survives massive traffic spikes while remaining observable is where senior expertise is truly tested. As systems grow, the challenges shift from simple logic to managing resource contention, distributed telemetry, and data consistency.

In this section, we move into the operational side of senior-level development. We will explore how to handle 100x traffic surges, the mechanics of distributed tracing, and the hidden dangers of shared state and connection pooling. Finally, we discuss when it is strategically correct to break the rules of database normalization to achieve elite performance.

Let's dive in.

What’s Your Approach to Handling 100x Traffic Spikes?

Technical Answer: The strategy is to move from a static architecture to an Elastic, Decoupled Architecture that fails gracefully.

  • Horizontal Scaling: Use the Kubernetes Horizontal Pod Autoscaler (HPA) or cloud auto-scaling rules driven by CPU and request-rate metrics.

  • Edge Offloading: Deploy a CDN (Cloudflare/Akamai) to cache static assets and common API responses before they hit your origin.

  • Asynchronous Processing: Move heavy writes to a Message Queue (RabbitMQ/Azure Service Bus).

  • Resilience Patterns: Use the Polly library to implement Circuit Breakers.

Simple terms: If a downstream service is drowning, stop sending it requests so your own app doesn't crash while waiting.

Code Snippet (Polly Circuit Breaker)

// Stops calling the service for 30 seconds after 5 consecutive failures
var circuitBreaker = Policy
    .Handle<HttpRequestException>()
    .CircuitBreakerAsync(5, TimeSpan.FromSeconds(30));

await circuitBreaker.ExecuteAsync(() => _httpClient.GetAsync("/api/data"));

Example: During a "Black Friday" sale, instead of saving every order directly to SQL (which would lock the DB), you enqueue the order into Azure Service Bus. A background worker then drains the queue at a steady pace the database can handle.
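The queue-draining pattern from the example can be sketched in-process with System.Threading.Channels. Here the channel is a hypothetical stand-in for Azure Service Bus, and the fixed delay simulates the steady write pace:

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

// Hypothetical in-process stand-in for a broker like Azure Service Bus
var orders = Channel.CreateUnbounded<string>();

// Producer (the API): enqueue instantly instead of writing to SQL inline
for (var i = 1; i <= 3; i++)
    await orders.Writer.WriteAsync($"Order-{i}");
orders.Writer.Complete();

// Consumer (background worker): drain at a pace the database can handle
await foreach (var order in orders.Reader.ReadAllAsync())
{
    Console.WriteLine($"Persisting {order}");
    await Task.Delay(100); // throttle to a steady write rate
}
```

In production the producer and consumer live in separate processes, and the throttle would come from the broker's prefetch/concurrency settings rather than a fixed delay.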

How Do You Structure Logging to Support Distributed Tracing?

Technical Answer: You must implement Structured Logging and Context Propagation to track a request across service boundaries.

  • Correlation IDs: Generate a unique ID at the entry point and pass it in the X-Correlation-ID header to all downstream services.

  • Structured Logging: Use Serilog to log objects, not just strings.

Simple terms: Logs should be "searchable data," not just a "text file."

  • OpenTelemetry: Use this standard to automatically link traces across different microservices.

Code Snippet (Serilog Structured Log)

// Good: Searchable by CustomerId in Seq/ELK
_logger.LogInformation("Processing order {OrderId} for customer {CustomerId}", orderId, customerId);

// Bad: Hard to query/filter
_logger.LogInformation("Processing order " + orderId + " for customer " + customerId);

Analogy: It’s like a Package Tracking Number. No matter how many trucks or planes the box moves through, that one ID tells you exactly where it has been.
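The Correlation ID bullet can be sketched as ASP.NET Core middleware (a minimal sketch using minimal hosting; the endpoint and host name are illustrative):

```csharp
using System;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;

var app = WebApplication.Create();

app.Use(async (context, next) =>
{
    // Reuse the caller's ID if present; otherwise this service is the entry point
    var correlationId = context.Request.Headers["X-Correlation-ID"].ToString();
    if (string.IsNullOrEmpty(correlationId))
        correlationId = Guid.NewGuid().ToString();

    // Echo it back and attach it to every log written during this request
    context.Response.Headers["X-Correlation-ID"] = correlationId;
    using (app.Logger.BeginScope("{CorrelationId}", correlationId))
    {
        await next();
    }
});

app.MapGet("/orders", () => "ok");
app.Run();
```

Because the ID is pushed into the logger scope, every log line inside the request carries it automatically — no need to pass it to each LogInformation call.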

How Do You Avoid Shared Mutable State in Background Services?

Technical Answer: You should favor Immutability and Stateless Workers to prevent race conditions.

  • Thread-Safe Collections: Use ConcurrentDictionary or ConcurrentQueue instead of standard Dictionary or List, which are not safe for concurrent writes.

  • SemaphoreSlim: Use this for async-friendly locking.

Simple terms: It's like a "one-at-a-time" turnstile for your code.

  • Channels: Use System.Threading.Channels for high-performance producer/consumer logic.

Code Snippet (Using Channels for Safety)

var channel = Channel.CreateUnbounded<string>();

// Producer: Safe to call from multiple threads
await channel.Writer.WriteAsync("New Task");

// Consumer: Reads tasks one by one safely
await foreach (var item in channel.Reader.ReadAllAsync())
{
    /* Process */
}

Analogy: Think of a shared notebook. If two people try to write on the same page at once, it becomes a mess. A Channel is like a suggestion box—everyone drops notes in, and one person reads them in order.
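The SemaphoreSlim bullet deserves its own snippet. A minimal sketch (the AddAsync helper is hypothetical) of an async-safe read-modify-write on shared state:

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

// One-at-a-time turnstile for an async read-modify-write on shared state
var gate = new SemaphoreSlim(1, 1);
var total = 0;

async Task AddAsync(int amount)
{
    await gate.WaitAsync(); // async wait: no thread is blocked while queued
    try
    {
        total += amount; // safe: only one caller is inside at a time
    }
    finally
    {
        gate.Release(); // always release, even if the work throws
    }
}

// 100 concurrent increments still produce a deterministic result
await Task.WhenAll(Enumerable.Range(0, 100).Select(_ => AddAsync(1)));
Console.WriteLine(total); // prints 100
```

Unlike the lock keyword, which cannot contain await in its body, SemaphoreSlim.WaitAsync works inside async methods — which is exactly why it is the go-to for background services.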

How Do You Manage Connection Pooling in High-Throughput Systems?

Technical Answer: You must tune Pool Sizing and Lifetime and prevent Connection Leaks.

  • Open Late, Close Early: Only open the connection right before the query and use a using block to return it to the pool immediately.

  • Sizing: Adjust Min Pool Size and Max Pool Size in the connection string.

  • External Poolers: Use PgBouncer for PostgreSQL to manage thousands of connections outside of the .NET process.

Example: In a high-traffic API, if you forget to Dispose your SqlConnection, the connection stays "active" but unusable. Eventually, your app hits the Max Pool Size (default 100) and every new request fails with a timeout because no pooled connections are free.
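The open-late/close-early and sizing advice can be sketched together (the server name, database, and query are hypothetical; assumes the Microsoft.Data.SqlClient package):

```csharp
using System.Threading.Tasks;
using Microsoft.Data.SqlClient;

// Pool bounds set explicitly in the connection string (values are illustrative)
var connectionString =
    "Server=db.example.local;Database=Orders;Integrated Security=true;" +
    "Min Pool Size=10;Max Pool Size=200";

// Open late, close early: the await using declarations return the connection
// to the pool the moment this scope ends, even if the query throws
await using var connection = new SqlConnection(connectionString);
await connection.OpenAsync();

await using var command = new SqlCommand("SELECT COUNT(*) FROM Orders", connection);
var orderCount = (int)(await command.ExecuteScalarAsync())!;
// connection is disposed here and immediately reusable by other requests
```

Note that Dispose does not close the physical socket — it just hands the connection back to the pool, which is why returning it promptly matters so much under load.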

When Do You Intentionally Denormalize Data in PostgreSQL?

Technical Answer: Denormalization is a trade-off where you accept more complex writes and the risk of stale or inconsistent data in exchange for Read Performance.

  • Read-Heavy Views: When a dashboard needs to join 10 tables, save a pre-joined version of that data in one table.

  • JSONB Columns: Use PostgreSQL's JSONB for semi-structured data to avoid 50+ nullable columns.

  • Materialized Views: Create a snapshot of a complex query that refreshes on a schedule.

Code Snippet (JSONB for Flexibility)

-- Storing variable attributes without changing schema
CREATE TABLE products (
    id serial PRIMARY KEY,
    metadata jsonb -- Stores {"color": "red", "size": "XL", "fabric": "cotton"}
);
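The Materialized View bullet can be sketched as well (table and column names are hypothetical; note the unique index, which REFRESH ... CONCURRENTLY requires):

```sql
-- Pre-computed dashboard snapshot: the expensive join runs on refresh, not per read
CREATE MATERIALIZED VIEW daily_sales_summary AS
SELECT o.order_date::date    AS sale_day,
       p.metadata->>'color'  AS color,
       COUNT(*)              AS orders,
       SUM(o.total)          AS revenue
FROM orders o
JOIN products p ON p.id = o.product_id
GROUP BY 1, 2;

-- Unique index required before a concurrent refresh is allowed
CREATE UNIQUE INDEX ON daily_sales_summary (sale_day, color);

-- Run on a schedule (e.g. nightly); CONCURRENTLY avoids blocking readers
REFRESH MATERIALIZED VIEW CONCURRENTLY daily_sales_summary;
```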

Analogy: Normalization is like keeping your clothes in separate drawers (socks, shirts, pants). Denormalization is like packing an "outfit bag" the night before. It’s faster to grab in the morning, but if you decide to change your socks, you have to remember to update the bag too.

Conclusion

In this article, we shifted our focus to the operational challenges of senior-level development—ensuring systems are elastic, observable, and thread-safe. Understanding these patterns ensures that your applications can handle the chaotic reality of production environments without losing data or visibility.