Last month I attended an internal session at Happiest Minds Technologies on modernising .NET applications for cloud-native microservices. These are the ideas that stuck with me — and why I think every .NET engineer should be thinking about them.
Why This Topic Hit Home
I have been working with .NET applications long enough to know how easy it is to keep building the way we always have — IIS-hosted, session-heavy, monolithic deployments that work until they really do not.
The session framed it well: the goal is not to chase microservices as a trend. It is to build software that can grow with your team — where services deploy independently, scale independently, and fail without taking everything else down with them.
Here are the seven pillars the session covered, along with what I took away from each one.
1. Break Free from IIS
This was the starting point, and honestly the most liberating idea. IIS is not a bad web server — but it is a Windows-only one. Tying your application to it means tying your deployment options to a single operating system.
.NET Core ships with Kestrel, a cross-platform web server that runs identically on Linux, macOS, and Windows. Pair it with self-contained deployment and you have a service that can run as a standalone executable or a container image — no host dependencies, no IIS configuration, no Windows-only infrastructure.
```
dotnet publish -r linux-x64 --self-contained true -p:PublishSingleFile=true
```
That one command produces something genuinely portable. I found this shift in thinking — from 'deploy to a server' to 'ship a container' — to be the most conceptually significant takeaway of the session.
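To make the idea concrete, here is a minimal sketch of a Kestrel-hosted service in modern .NET. The `/ping` endpoint is illustrative, not something from the session — the point is simply that nothing below depends on IIS or Windows:

```csharp
// Minimal ASP.NET Core service self-hosted on Kestrel (no IIS dependency).
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Illustrative endpoint; the same binary runs on Linux, macOS, or Windows.
app.MapGet("/ping", () => "pong");

app.Run();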
2. Observability Is Not Optional in Distributed Systems
When everything lives in one process, debugging is manageable. When a request hops across five services, you need proper tooling — or you are guessing.
The session covered four layers of observability that work together:
Structured logging — queryable, indexed log events rather than free-text strings
Distributed tracing via OpenTelemetry — follow a single request across every service it touches
Metrics via System.Diagnostics.Metrics — expose custom counters and gauges for dashboards and alerting
Health endpoints — /health/live and /health/ready tell your load balancer and Kubernetes whether this instance should be serving traffic
The most practical advice from the session: instrument with OpenTelemetry once, then choose your backend (Jaeger, Azure Monitor, Grafana Tempo) separately. Your instrumentation code never needs to change.
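A rough sketch of what that wiring looks like in ASP.NET Core follows. The service name `orders-api` is illustrative, and the snippet assumes the `OpenTelemetry.Extensions.Hosting`, instrumentation, and OTLP exporter NuGet packages are referenced:

```csharp
var builder = WebApplication.CreateBuilder(args);

// Instrument once; the OTLP exporter lets you point at Jaeger,
// Azure Monitor, or Grafana Tempo without touching this code again.
builder.Services.AddOpenTelemetry()
    .ConfigureResource(r => r.AddService("orders-api"))
    .WithTracing(t => t
        .AddAspNetCoreInstrumentation()   // spans for incoming requests
        .AddHttpClientInstrumentation()   // spans for outgoing calls
        .AddOtlpExporter())
    .WithMetrics(m => m
        .AddAspNetCoreInstrumentation()
        .AddOtlpExporter());

builder.Services.AddHealthChecks();

var app = builder.Build();
app.MapHealthChecks("/health/live");   // is the process up?
app.MapHealthChecks("/health/ready");  // should it receive traffic?
app.Run();
```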
3. Each Service Must Own Its Data
This one generated the most discussion in the session — and I understand why. Shared databases feel efficient. In practice, they create invisible coupling that makes independent deployments impossible.
The principle is straightforward: every microservice gets its own schema or database. No other service queries those tables directly. If service A needs data from service B, it calls service B's API or subscribes to its events. That buys you three things:
Schema changes in service B cannot break service A
Each team can evolve their data model at their own pace
Services can choose the right database type for their use case — relational, document, key-value
The session recommended starting with dedicated schemas within a shared database instance as a practical first step, then separating into independent databases as team and service boundaries mature.
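As a small sketch of what "call the API, not the tables" looks like in practice — the `CustomerClient` type, record shape, and `/api/customers` route are all hypothetical names of my own, and the code assumes `System.Net.Http.Json`:

```csharp
using System.Net.Http.Json;

public record Customer(Guid Id, string Name);

// Service A's only view of service B's data is this HTTP contract,
// so service B can reshape its schema freely behind it.
public class CustomerClient
{
    private readonly HttpClient _http;
    public CustomerClient(HttpClient http) => _http = http;

    public Task<Customer?> GetAsync(Guid id) =>
        _http.GetFromJsonAsync<Customer>($"/api/customers/{id}");
}
```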
4. Design for Failure, Not Against It
This was the most mindset-shifting section of the session. The assumption in a microservices architecture is not that services will stay up — it is that they will occasionally go down, slow down, or become unreachable. The question is whether your system handles that gracefully or collapses.
The four patterns covered:
Circuit Breakers: detect repeated failures to a downstream service and stop calling it temporarily, giving it space to recover
Retries with Exponential Backoff: retry transient failures automatically, with increasing delays so you do not overwhelm a recovering service
Timeouts: every outbound call must have one; a service without timeouts will eventually exhaust its thread pool waiting for a dependency that never responds
Bulkhead Isolation: cap concurrent calls to any single dependency so one slow service cannot consume all available resources
Microsoft.Extensions.Http.Resilience (Polly v8) in .NET 8 makes this straightforward to implement on HttpClient. The session's point was that these patterns are not advanced — they are baseline requirements.
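In code, the standard handler bundles all four patterns onto a named client. The `catalog` client name and base address below are illustrative:

```csharp
// Requires the Microsoft.Extensions.Http.Resilience package (.NET 8).
builder.Services.AddHttpClient("catalog", c =>
        c.BaseAddress = new Uri("https://catalog.internal"))
    // Adds a standard pipeline: rate limiter (bulkhead), total-request
    // timeout, retry with exponential backoff, circuit breaker, and a
    // per-attempt timeout.
    .AddStandardResilienceHandler();
```

The defaults are sensible, and each strategy in the pipeline can be tuned through the options overload if the defaults do not fit a given dependency.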
5. Stateful Services Cannot Scale
If your service stores anything in memory — session state, cached data, temporary files — then every request must go to the same instance. That is a hard ceiling on scalability and a single point of failure.
The fix is to externalise all state:
Session → Redis via IDistributedCache
Application cache → Redis or Azure Cache for Redis
File storage → Azure Blob Storage or equivalent object storage
Persistent data → SQL Server, PostgreSQL, Cosmos DB, etc.
Stateless services can be scaled horizontally — spin up ten instances, route requests to any of them, shut any of them down without data loss. This was one of those ideas that sounds obvious until you audit how much state is quietly living in your current services.
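For the session-to-Redis move specifically, here is a minimal sketch. The connection string is illustrative, and the snippet assumes the `Microsoft.Extensions.Caching.StackExchangeRedis` package:

```csharp
var builder = WebApplication.CreateBuilder(args);

// Back IDistributedCache with Redis so state lives outside the process.
builder.Services.AddStackExchangeRedisCache(o =>
    o.Configuration = "redis:6379");

// ASP.NET Core session rides on the registered IDistributedCache,
// so any instance can serve any request.
builder.Services.AddSession();

var app = builder.Build();
app.UseSession();
app.Run();
```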
6. Automation Is the Delivery Mechanism
The architecture improvements in pillars one through five deliver their full value only when paired with automated deployment pipelines. A manually deployed microservices system is more complicated than a manually deployed monolith — not less.
The session outlined what a mature pipeline looks like:
Automated build, unit test, and integration test on every pull request
Multi-stage Docker builds producing minimal, hardened images
Automated publishing to a container registry on merge to main
Blue/green or rolling deployments to production with zero downtime
Automated rollback triggered by health check failures or error rate thresholds
GitHub Actions and Azure DevOps Pipelines both support this end-to-end for .NET Core. The goal the session described stuck with me: merging a pull request should be sufficient to ship to production safely.
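A skeletal GitHub Actions workflow for the first three stages might look like this — job names, registry, and image name are all illustrative, not something the session prescribed:

```yaml
name: ci
on:
  pull_request:
  push:
    branches: [main]
jobs:
  build-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '8.0.x'
      # Runs build + unit/integration tests on every PR and push.
      - run: dotnet test --configuration Release
  publish-image:
    if: github.ref == 'refs/heads/main'
    needs: build-test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Publish the container image only on merge to main.
      - run: |
          docker build -t ghcr.io/example/orders-api:${{ github.sha }} .
          docker push ghcr.io/example/orders-api:${{ github.sha }}
```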
7. Containerisation Is the Glue
Every pillar above converges on containerisation. Kestrel makes services container-friendly. Statelessness makes containers disposable and replaceable. Observability makes containers debuggable. Resilience patterns make container orchestration reliable.
.NET Core, Docker, and Kubernetes are a first-class combination in 2026. The session's framing was useful: containerisation is not the goal — it is the mechanism that makes all the other goals achievable in practice.
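For completeness, a sketch of the multi-stage Dockerfile pattern the pipelines above would build — the project name `MyService` is a placeholder:

```dockerfile
# Build stage: the full SDK image compiles and publishes the app.
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish MyService.csproj -c Release -o /app

# Runtime stage: the slim ASP.NET image ships only the published output,
# keeping the final image small and free of build tooling.
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyService.dll"]
```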
Quick Reference Summary:
| Pillar | What to Do | Why It Matters |
|---|---|---|
| 1. Decouple from IIS | Kestrel + self-contained builds | Cross-platform, containerisable |
| 2. Observability | OpenTelemetry, structured logs, health checks | Visibility across distributed services |
| 3. Data Ownership | Database per service, no shared schemas | Independent deployment & evolution |
| 4. Resilience | Circuit breakers, retries, timeouts, bulkheads | Failures stay isolated |
| 5. Statelessness | Redis for session, blob for files | True horizontal scaling |
| 6. DevOps Automation | CI/CD pipelines, blue/green deployments | Safe, repeatable releases |
| 7. Containerisation | Docker + Kubernetes + .NET Core | Portability across all environments |
Final Thought
What I appreciated most about the session was the emphasis on incrementalism. You do not need to rewrite everything to start moving in the right direction. Pick the pillar that addresses your biggest current pain — whether that is observability, resilience, or deployment automation — and apply it. The others follow naturally.