Introduction
Many modern systems combine a relational SQL database (the system of record for domain state) with an event store (an append-only log of domain events) to enable analytics, integrations, CQRS read models, and auditability. Doing both well requires a clear consistency model so that application correctness, user experience, and operational procedures remain predictable.
This article explains how to design a robust consistency model for a hybrid architecture that uses both SQL and an event store. You will get:
Architectural patterns (Transactional Outbox, Event Sourcing, Change Data Capture)
Consistency strategies (strong, causal, eventual) and when to choose each
Implementation recipes (.NET + SQL + Kafka/EventStore style)
Projection and read-model best practices
UI patterns for Angular to handle eventual consistency gracefully
Testing, monitoring and operational playbook
Diagrams use block-style format for clarity.
Problem Statement
When you update a domain entity in SQL and also emit an event to an event store, you have two pieces of truth to keep in sync:
The state in SQL tables (current values).
The event stream of domain happenings (audit, integration feed).
Challenges
How do you guarantee that an event is emitted exactly when the SQL change happens?
How do you prevent duplicates or missing events?
How do you design read models that depend on events without showing inconsistent data to users?
How do you achieve the required latency and correctness for business-critical operations (payments, inventory) versus lower-criticality analytics?
Before building, define the consistency promises your system must make. These promises guide design.
Consistency Models (Quick Primer)
Strong Consistency: After a successful write, all readers see the new value immediately. Requires synchronous coordination.
Causal Consistency: If B happened after A, then observers see A before B. Good for preserving causal order without global sync.
Eventual Consistency: The system converges to a single state eventually; reads may show stale data for a while. Much more scalable.
For hybrid SQL + Event Store, most practical architectures use Eventual or Causal consistency with compensating patterns for strongly consistent flows where necessary.
Recommended High-Level Architecture
        ┌─────────────┐
        │  Client UI  │
        └──────┬──────┘
               │
       ┌───────▼───────┐
       │ API / Service │
       └───────┬───────┘
               │
┌──────────────┴─────────────────────┐
│       Transactional Boundary       │
│    (SQL + Outbox in same DB TX)    │
└──────┬──────────────────┬──────────┘
       │                  │
 ┌─────▼────┐       ┌─────▼─────┐
 │ SQL Read │       │  Outbox   │
 │  Models  │       │   Table   │
 └──────────┘       └─────┬─────┘
                          │  (background poll/publish)
                          ▼
                    ┌───────────┐
                    │ Event Bus │
                    └─────┬─────┘
                          │
                   ┌──────▼───────┐
                   │ Projections  │
                   └──────────────┘
Key principle: Make event emission atomic with the SQL change. The most reliable pattern to achieve this is the Transactional Outbox.
Pattern 1. Transactional Outbox (Recommended)
Idea
When code changes SQL state, it also inserts an event row into an Outbox table within the same database transaction. A separate background publisher reads the outbox, publishes the events to the event store/broker, and marks them as sent.
Benefits
Atomicity: either both the SQL state change and the outbox row are committed, or neither is.
Simplicity: you use the DB's ACID guarantees.
Decouples publish from user request latency.
Outbox Table Example (SQL Server)
CREATE TABLE Outbox (
    OutboxId     BIGINT IDENTITY PRIMARY KEY,
    AggregateId  UNIQUEIDENTIFIER NOT NULL,
    EventType    NVARCHAR(200) NOT NULL,
    Payload      NVARCHAR(MAX) NOT NULL,
    CreatedAt    DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
    Processed    BIT NOT NULL DEFAULT 0,
    ProcessedAt  DATETIME2 NULL,
    AttemptCount INT NOT NULL DEFAULT 0
);
CREATE INDEX IX_Outbox_Processed_CreatedAt ON Outbox (Processed, CreatedAt);
Write Flow
Begin DB transaction.
Update domain tables.
Insert JSON event into Outbox.
Commit transaction.
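As a concrete illustration, here is a minimal sketch of this flow using plain ADO.NET (Microsoft.Data.SqlClient). The Stock update statement and the variable names are illustrative assumptions; the Outbox columns follow the DDL above.
await using var conn = new SqlConnection(connectionString);
await conn.OpenAsync(ct);
await using var tx = (SqlTransaction)await conn.BeginTransactionAsync(ct);

// Steps 1-2: update domain tables inside the transaction.
await using (var update = new SqlCommand(
    "UPDATE Stock SET Quantity = @qty WHERE ItemId = @id", conn, tx)) {
    update.Parameters.AddWithValue("@qty", newQuantity);
    update.Parameters.AddWithValue("@id", itemId);
    await update.ExecuteNonQueryAsync(ct);
}

// Step 3: insert the event row in the same transaction.
await using (var insert = new SqlCommand(
    "INSERT INTO Outbox (AggregateId, EventType, Payload) VALUES (@agg, @type, @payload)",
    conn, tx)) {
    insert.Parameters.AddWithValue("@agg", itemId);
    insert.Parameters.AddWithValue("@type", "StockAdjusted");
    insert.Parameters.AddWithValue("@payload", payloadJson);
    await insert.ExecuteNonQueryAsync(ct);
}

// Step 4: commit. The domain change and the event become visible together.
await tx.CommitAsync(ct);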
Publisher (Background Worker)
Queries the Outbox for rows with Processed = 0 in batches, publishes the events to Kafka/EventStore, uses an idempotent publish or a dedupe token, then sets Processed = 1 and ProcessedAt.
.NET Publisher Pseudocode
while (!ct.IsCancellationRequested) {
    // Oldest unprocessed rows first, in small batches (Dapper-style access).
    var events = (await db.QueryAsync<OutboxRow>(
        "SELECT TOP 100 * FROM Outbox WHERE Processed = 0 ORDER BY CreatedAt")).ToList();
    foreach (var e in events) {
        try {
            // OutboxId doubles as a dedupe key so consumers can drop replays.
            await eventBus.PublishAsync(e.EventType, e.Payload, dedupeId: e.OutboxId);
            await db.ExecuteAsync(
                "UPDATE Outbox SET Processed = 1, ProcessedAt = SYSUTCDATETIME() WHERE OutboxId = @id",
                new { id = e.OutboxId });
        } catch (Exception) {
            // Leave the row unprocessed; it is retried on the next poll.
            await db.ExecuteAsync(
                "UPDATE Outbox SET AttemptCount = AttemptCount + 1 WHERE OutboxId = @id",
                new { id = e.OutboxId });
        }
    }
    await Task.Delay(pollInterval, ct);
}
Guarantees & Caveats
The outbox offers at-least-once delivery: if the publisher crashes after publishing but before marking a row processed, the event is published again. Design consumers to be idempotent, or rely on broker-side idempotent publish/dedupe where available.
The publisher should be resilient and support retries, exponential backoff, and dead-lettering, as sketched below.
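A minimal sketch of capped exponential backoff driven by the AttemptCount column; the cap, the MaxAttempts threshold, and the dead-letter handling are assumptions to tune per environment.
// Delay grows 1s, 2s, 4s, ... and is capped at 5 minutes (an assumed cap).
static TimeSpan Backoff(int attemptCount) =>
    TimeSpan.FromSeconds(Math.Min(Math.Pow(2, attemptCount), 300));

// In the publisher loop: skip rows whose next retry time has not arrived yet,
// and past an assumed MaxAttempts move the row to a dead-letter table (or flag it)
// so one poison event cannot block the rest of the outbox.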
Pattern 2. Event Sourcing (Full Event Store as Source of Truth)
Idea
Instead of keeping current state in SQL, store every state change as an event in an append-only stream. SQL read models become projections derived from those events.
Pros
Complete audit trail, with the ability to replay events to rebuild state or projections.
Temporal queries: reconstruct the state as of any point in time.
Events capture domain intent directly rather than just row changes.
Cons
Added complexity: event versioning, schema evolution, and snapshots for long streams.
All reads go through projections, which are eventually consistent.
Steeper learning curve; ad-hoc querying is harder than with plain SQL.
When to Use
Use Event Sourcing when domain invariants are naturally expressed as events, or when audit/replay capabilities are core to the domain (financial systems, ledgers). Otherwise prefer the hybrid approach: SQL as primary, with an outbox to the event store.
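To make the idea concrete, here is a minimal sketch of rehydrating current state by folding an aggregate's event stream; the StockAdjusted event and the apply logic are illustrative, not a specific event store API.
public record StockAdjusted(Guid ItemId, int Delta);

public class StockItem {
    public Guid ItemId { get; private set; }
    public int Quantity { get; private set; }

    // Current state is a left fold over the ordered event stream.
    public static StockItem Rehydrate(Guid id, IEnumerable<StockAdjusted> events) {
        var item = new StockItem { ItemId = id };
        foreach (var e in events)
            item.Quantity += e.Delta;   // apply each event in order
        return item;
    }
}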
Pattern 3. Change Data Capture (CDC)
Idea
Use DB-level CDC to stream table-level changes to an event bus (Debezium, SQL Server CDC). This can remove the need for explicit outbox inserts.
Pros
No application code changes: every committed change is captured, including writes from legacy paths.
Low capture latency and mature tooling (Debezium, SQL Server CDC).
Cons
Schema-level events may not express domain intent (a row update vs. domain event).
Mapping DB change to domain event can be lossy; application-level events are cleaner.
Ordering and transactional semantics across multiple tables can be complex.
Exactly-Once vs At-Least-Once Semantics
At-Least-Once: events may be delivered multiple times. Consumers must be idempotent. This is the pragmatic default for most brokers (Kafka is at-least-once by default unless you add transactions or additional dedupe).
Exactly-Once: expensive or complex; typically achieved via idempotent consumer logic + transactional writes or broker support (Kafka transactions). Use only when necessary.
Design consumers to be idempotent: store a ProcessedEventId in the projection DB with a unique constraint, and use upserts/merges, as in the sketch below.
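A minimal sketch of that guard. It assumes a ProcessedEvents table with a unique key on EventId; SQL Server reports a duplicate-key violation of a primary key or unique constraint as error 2627.
public async Task HandleOnceAsync(long eventId, SqlConnection conn, Func<Task> applyProjection) {
    try {
        // Record the event id first; a duplicate delivery violates the unique constraint.
        await using var cmd = new SqlCommand(
            "INSERT INTO ProcessedEvents (EventId) VALUES (@id)", conn);
        cmd.Parameters.AddWithValue("@id", eventId);
        await cmd.ExecuteNonQueryAsync();
    } catch (SqlException ex) when (ex.Number == 2627) {
        return; // already processed: drop the duplicate
    }

    // Ideally run this in the same DB transaction as the insert above,
    // so the marker and the projection update commit together.
    await applyProjection();
}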
Read Models and Projection Strategies
Projections read events and update SQL read tables:
Synchronous Read-After-Write (for strongly consistent needs): after writing SQL + outbox and publishing and confirming the event, update the read model synchronously before returning to the user. This increases latency and coupling; use it sparingly for critical flows.
Asynchronous Projections (common): projection workers process events and update read tables eventually. UI will need to handle staleness.
Projection Upsert Pattern (SQL)
MERGE INTO OrderProjection AS target
USING (VALUES (@OrderId, @Total, @Version)) AS src (OrderId, Total, Version)
    ON target.OrderId = src.OrderId
WHEN MATCHED AND target.Version < src.Version THEN
    UPDATE SET Total = src.Total, Version = src.Version
WHEN NOT MATCHED THEN
    INSERT (OrderId, Total, Version) VALUES (src.OrderId, src.Total, src.Version);
Keep a Version column to enforce ordering and idempotence.
Ordering Guarantees
Ordering matters for projections when events must be applied in sequence per aggregate. Partition events by aggregate id when publishing (the Kafka partition key) so a consumer processes events for the same aggregate in order, as shown below.
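A minimal Confluent.Kafka sketch; the topic name, broker address, and variables are assumptions. Keying the message by aggregate id routes all of an aggregate's events to one partition, which preserves their relative order for consumers.
using Confluent.Kafka;

var config = new ProducerConfig { BootstrapServers = "localhost:9092" };
using var producer = new ProducerBuilder<string, string>(config).Build();

await producer.ProduceAsync("stock-events", new Message<string, string> {
    Key = aggregateId.ToString(), // same key => same partition => per-aggregate ordering
    Value = payloadJson
});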
For cross-aggregate workflows, use causal ordering (sagas) and conflict resolution.
Sagas and Long-Running Consistency
For multi-step, multi-aggregate operations (booking + payment + notification), use a Saga or orchestration that:
Listens to domain events.
Coordinates actions by emitting commands/events.
Tracks saga state in SQL or as a saga event stream.
Supports compensation when steps fail.
Sagas embed eventual consistency logic with compensating actions, not immediate global transactions.
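A minimal orchestration sketch; the event and command types are illustrative assumptions, not from a particular framework. The saga consumes events, advances its persisted state, and returns the next command to dispatch (including compensations).
public record OrderPlaced(Guid OrderId);
public record StockReserved(Guid OrderId);
public record PaymentSucceeded(Guid OrderId);
public record PaymentFailed(Guid OrderId);
public record ReserveStock(Guid OrderId);
public record TakePayment(Guid OrderId);
public record SendConfirmation(Guid OrderId);
public record ReleaseStock(Guid OrderId);

public class FulfilmentSaga {
    public Guid OrderId { get; set; }
    public string State { get; set; } = "Started"; // persist in SQL or as a saga event stream

    // Each event advances the saga and yields the next command to dispatch.
    public object? Handle(object evt) => evt switch {
        OrderPlaced e      => Transition("ReservingStock", new ReserveStock(e.OrderId)),
        StockReserved e    => Transition("TakingPayment",  new TakePayment(e.OrderId)),
        PaymentSucceeded e => Transition("Completed",      new SendConfirmation(e.OrderId)),
        // Compensation: undo the reservation instead of rolling back a global transaction.
        PaymentFailed e    => Transition("Compensating",   new ReleaseStock(e.OrderId)),
        _ => null
    };

    private object Transition(string next, object command) {
        State = next;
        return command;
    }
}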
UI Patterns: Angular Handling of Eventual Consistency
Users expect immediate confirmation, but the system may be eventually consistent. Use UX patterns to make this reliable and understandable.
1. Optimistic UI + Pending State
Apply the change locally right away and mark it as "pending" until the server (or projection) confirms it; roll back with a clear message if the write fails.
2. Read-After-Write Strategy
After a write, read from the write-side SQL for the authoritative immediate state instead of from the projection. This requires the API to return the fresh state from the transaction, as sketched below.
Use the projection, once it has converged, for heavy queries and analytics.
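A minimal ASP.NET Core sketch of that contract; the service and DTO names are assumptions. The handler returns the state it just committed rather than querying a projection that may not have caught up.
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api")]
public class OrdersController : ControllerBase {
    private readonly IOrderService _orderService; // assumed application service
    public OrdersController(IOrderService orderService) => _orderService = orderService;

    [HttpPost("orders")]
    public async Task<ActionResult<OrderDto>> Create(CreateOrderRequest request) {
        // CreateAsync commits the domain change and the outbox row in one
        // transaction and returns the freshly committed state.
        OrderDto order = await _orderService.CreateAsync(request);
        return Ok(order); // authoritative write-side state; no projection read needed
    }
}

public interface IOrderService { Task<OrderDto> CreateAsync(CreateOrderRequest request); }
public record CreateOrderRequest(Guid ItemId, int Quantity);
public record OrderDto(Guid OrderId, string Status);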
3. Staleness Indicator
Show when the data was last refreshed (or a "syncing" badge) so users know a view may lag behind recent writes.
4. Explicit Refresh / Polling
Offer a refresh action, or poll the projection until it reflects the change, then clear the pending state.
Example Angular Flow (Pseudo)
// After saving:
this.api.post('/orders', order).subscribe(resp => {
  this.localState.update(resp);            // show the returned write-side state immediately
  this.startCheckingProjection(order.id);  // poll until the projection reflects the change
});
.NET Implementation Example: Transactional Outbox + Publisher
DbContext Save Helper (EF Core)
public async Task SaveChangesWithEventAsync(DbContext ctx, DomainEvent evt) {
    // One transaction covers both the domain change and the outbox row.
    await using var tx = await ctx.Database.BeginTransactionAsync();
    // Persist pending domain updates.
    await ctx.SaveChangesAsync();
    // Add the event to the outbox inside the same transaction.
    ctx.Set<Outbox>().Add(new Outbox {
        AggregateId = evt.AggregateId,
        EventType = evt.Type,
        Payload = JsonSerializer.Serialize(evt)
    });
    await ctx.SaveChangesAsync();
    await tx.CommitAsync();
}
Publisher Worker (Hosted Service)
Poll unprocessed outbox rows
Publish to broker with dedupe key (OutboxId)
On success, mark processed
Use a single producer, or a distributed publisher with claim semantics: UPDATE TOP (N) SET Processing = 1 OUTPUT ... WHERE Processed = 0 AND Processing = 0 claims a batch atomically (this assumes a Processing column added to the Outbox table), as sketched below.
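A Dapper-style sketch of that claim step; it assumes the extra Processing BIT column on the Outbox table and an OutboxRow type mirroring the selected columns.
// Atomically flip up to 100 unclaimed rows to Processing = 1 and return them;
// concurrent workers cannot claim the same row twice.
const string claimSql = @"
    UPDATE TOP (100) Outbox
    SET Processing = 1
    OUTPUT inserted.OutboxId, inserted.EventType, inserted.Payload
    WHERE Processed = 0 AND Processing = 0;";

var batch = (await db.QueryAsync<OutboxRow>(claimSql)).ToList();

// After publishing: set Processed = 1 (and Processing = 0); on failure, reset
// Processing = 0 so another worker can reclaim the row after a timeout.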
Testing Strategy
Unit Tests: validate outbox insert during transaction, transformation logic, idempotency logic.
Integration Tests: spin up a DB + broker (Kafka test container), write a change, and verify the publisher picks it up and the event is consumed by a projection that updates the read model.
Chaos Tests: kill publisher mid-batch, restart, ensure no duplicate projection side-effects; test DB failures during outbox insert.
Replay Tests: test event replay scenarios to rebuild projection and ensure consistency.
Monitoring And Observability
Track:
Outbox queue depth (unprocessed rows) and age of oldest outbox event.
Publisher errors and retries.
Projection lag: the event store's head offset versus the latest offset applied by the projection.
Duplicate handling logs (how many duplicates detected & skipped).
Saga health and in-flight saga instances.
Alert thresholds:
Outbox age > X minutes — critical
Projection lag > SLA window — warn
High duplicate events processed — investigate idempotency/timing
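A Dapper-style sketch of the two outbox health numbers those alerts need; the metric names are assumptions for whatever metrics system you use.
// Depth = unprocessed rows; age = seconds since the oldest unprocessed row was written.
var (depth, oldestAgeSeconds) = await db.QuerySingleAsync<(int, int?)>(@"
    SELECT COUNT(*),
           DATEDIFF(SECOND, MIN(CreatedAt), SYSUTCDATETIME())
    FROM Outbox
    WHERE Processed = 0;");

// Export as gauges, e.g. outbox_depth and outbox_oldest_age_seconds,
// and alert when outbox_oldest_age_seconds exceeds your threshold.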
Operational Playbook
Deploy Outbox Publisher First (if migrating): enable publisher to read new outbox rows before switching producers.
Backfill or Replay Projections: support replay to rebuild read models after code fix.
Schema Migrations: when changing domain schema, publish transformation events or handle old versions in projection handlers.
Compensations: design compensation workflows for irreversible external side effects (refunds, reversals).
Data Recovery: snapshot read models before mass replays.
Trade-Offs And Decision Guide
If you need strict transactional behavior across services → avoid distributed transactions; instead consider synchronous APIs with clear failure handling and compensations.
If you need auditability and replay → consider Event Sourcing or hybrid with outbox + event store.
For low-latency user flows where immediate read-after-write is necessary → return fresh state from write transaction and show pending status for projections.
For integration/emission guarantees → outbox with reliable publisher is the most practical.
Common Pitfalls And How To Avoid Them
Missing Outbox Writes: ensure every code path inserts to outbox (use helper methods / repository patterns).
Non-Idempotent Consumers: always persist processed event id or version, use upserts/merges.
Large Outbox Table Growth: purge or archive processed rows after a safe retention period, or move to a partitioned table (see the purge sketch after this list).
Assuming Ordering Across Partitions: partition by aggregate key when ordering is necessary.
Blind Replay Into Production: always replay to isolated staging projection first; use dry-run mode.
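A sketch of the purge mentioned above, run in small batches; the 30-day retention and the batch size are assumptions to tune.
// Delete processed rows older than the retention window, 5000 at a time,
// to keep lock duration and transaction log growth bounded.
int deleted;
do {
    deleted = await db.ExecuteAsync(@"
        DELETE TOP (5000) FROM Outbox
        WHERE Processed = 1
          AND ProcessedAt < DATEADD(DAY, -30, SYSUTCDATETIME());");
} while (deleted > 0);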
Example: End-to-End Scenario
Use Case: User updates item quantity → SQL stock updated → event emitted → inventory projection updates global availability → downstream reservation service consumes event.
Flow
API updates Stock row and inserts StockAdjusted event into Outbox in same DB TX.
Commit returns success to client; UI shows immediate change with “Confirmed”.
Publisher reads outbox, publishes StockAdjusted event to Kafka partitioned by ItemId.
Projection worker consumes event, upserts InventoryProjection with correct version.
Reservation service consumes the same event (separate consumer group) and adjusts holds.
If the publisher fails, the outbox retains the event and monitoring alerts engineers.
Summary
Designing a consistency model for a hybrid SQL + event store architecture is about making realistic promises and implementing patterns that deliver them reliably:
Use the Transactional Outbox to make SQL state changes and event emission atomic within a DB transaction.
Design consumers and projections to be idempotent (unique processed id, upsert merges, versioning).
Partition and order events by aggregate key when ordering matters.
Use sagas for multi-step workflows that require coordination and compensation.
Expose read-after-write UX patterns and pending indicators to handle eventual consistency gracefully in Angular.
Monitor outbox depth, publisher health, and projection lag and provide operational tools to replay and reconcile.