
Implementing Your Own Change Tracking Engine (without CDC) | Angular + .NET Core

Enterprises often need to capture what changed, who changed it, when, and why. SQL Server CDC is great, but sometimes you cannot use it (policy, licensing, cloud-tier restrictions), or you want finer control, richer metadata, or different retention and processing rules. In those cases, building your own Change Tracking Engine inside the application and database is a sensible approach.

This article gives a production-quality blueprint: data model, trigger patterns, lightweight transaction-safe logging, background processing, API surface, UI for audit/undo, retention, performance and testing. Examples are in SQL Server, ASP.NET Core (.NET 8), and Angular. The pattern works with other RDBMS with minor adjustments.

Goals and constraints

Your engine should:

  • Record INSERT / UPDATE / DELETE for target tables.

  • Capture minimal “before” and “after” states, user id, timestamp, source (API/app), and change set id.

  • Be fast and low-impact on OLTP transactions.

  • Allow replay, optional undo, audit reporting, and projection to downstream systems.

  • Support batching, watermark-based reads, idempotent processors.

  • Provide retention policies and legal-hold handling.

  • Avoid CDC, but be reliable and auditable.

Constraints we accept:

  • No CDC / Change Data Capture.

  • Prefer simple triggers + append-only change log (or application-level writes).

  • Avoid long-running work inside triggers; delegate heavy tasks to background worker.

High-level design

Application/API (Angular + .NET)
        |
        v
Write to Business Tables (INSERT/UPDATE/DELETE)
        |
   SQL TRIGGERS (lightweight)
        |
Append row(s) to ChangeLog tables (transactional or queued)
        |
Background Processor (Worker / Service)
 - read ChangeLog (watermark)
 - validate/enrich
 - project to read models / push to queues
        |
 Downstream: Audit UI, Search Index, ETL, Kafka, Undo API

Key idea: an append-only change log is the source of truth for changes. Triggers must stay lightweight; heavy enrichment and publishing happen in the worker.

Data model

Keep the change store normalized and compact. Example schema:

-- A change set groups related changes (single API call, multiple rows)
CREATE TABLE ChangeSet (
  ChangeSetId BIGINT IDENTITY PRIMARY KEY,
  SourceSystem VARCHAR(100),        -- "WebAPI", "ImportJob", "IntegrationX"
  CorrelationId UNIQUEIDENTIFIER,   -- request id / trace id
  CreatedBy VARCHAR(200),           -- user id or service account
  CreatedAt DATETIME2 DEFAULT SYSUTCDATETIME(),
  Processed TINYINT NOT NULL DEFAULT 0,   -- 0 = pending, 2 = claimed by a worker, 1 = processed
  ProcessedAt DATETIME2 NULL
);

-- Each row change
CREATE TABLE ChangeLog (
  ChangeId BIGINT IDENTITY PRIMARY KEY,
  ChangeSetId BIGINT NOT NULL REFERENCES ChangeSet(ChangeSetId),
  TableName SYSNAME NOT NULL,
  PrimaryKeyJson NVARCHAR(4000) NOT NULL,   -- {"Id":123}
  Operation CHAR(1) NOT NULL,               -- 'I','U','D'
  BeforeJson NVARCHAR(MAX) NULL,
  AfterJson NVARCHAR(MAX) NULL,
  ChangedBy VARCHAR(200) NULL,
  ChangedAt DATETIME2 DEFAULT SYSUTCDATETIME(),
  SequenceNo BIGINT NOT NULL DEFAULT 0      -- ordering within set
);

CREATE INDEX IX_ChangeSet_Processed ON ChangeSet(Processed, CreatedAt);
CREATE INDEX IX_ChangeLog_Table ON ChangeLog(TableName, ChangedAt);

Notes

  • ChangeSet groups changes belonging to the same API call or transaction. This is useful for atomic replay and undo.

  • PrimaryKeyJson is compact, easy to index, and language-agnostic.

  • BeforeJson / AfterJson store small JSON snapshots. Use NVARCHAR(MAX) but keep JSON small.

  • SequenceNo preserves order; you can use ROW_NUMBER() in trigger code to set sequence.
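
For reference, the worker examples later in this article assume C# DTOs shaped roughly like the following (a sketch; names and nullability are illustrative):

public sealed class ChangeSetDto
{
  public long ChangeSetId { get; set; }
  public string? SourceSystem { get; set; }
  public Guid CorrelationId { get; set; }
  public string? CreatedBy { get; set; }
  public DateTime CreatedAt { get; set; }
  public byte Processed { get; set; }        // 0 = pending, 2 = claimed, 1 = processed
  public DateTime? ProcessedAt { get; set; }
}

public sealed class ChangeLogDto
{
  public long ChangeId { get; set; }
  public long ChangeSetId { get; set; }
  public string TableName { get; set; } = "";
  public string PrimaryKeyJson { get; set; } = "";
  public char Operation { get; set; }        // 'I', 'U', 'D'
  public string? BeforeJson { get; set; }
  public string? AfterJson { get; set; }
  public string? ChangedBy { get; set; }
  public DateTime ChangedAt { get; set; }
  public long SequenceNo { get; set; }
}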

Trigger strategy (transaction-safe and lightweight)

Two common approaches:

  1. Synchronous trigger writes — the trigger writes ChangeSet and ChangeLog rows inside the same transaction. Pros: perfect atomicity. Cons: extra I/O inside the transaction, which may affect latency.

  2. Async queue from trigger — the trigger writes a minimal row into a small queue table or Service Broker; the worker reads and expands it. Pros: minimal transaction cost. Cons: a small window during which the change detail exists only in the queue.

Recommendation: use a synchronous minimal append, where each trigger inserts compact JSON into ChangeLog. Keep serialization small and avoid heavy computation. If write latency is critical, use a fast queue table instead and let the worker gather the details and write the main change tables.

Example trigger pattern (insert/update/delete)

Assume table Customer(CustomerId PK, Name, Email, Phone, ModifiedAt, ModifiedBy).

CREATE PROCEDURE dbo.AppendChangeSet
  @SourceSystem VARCHAR(100),
  @CorrelationId UNIQUEIDENTIFIER,
  @CreatedBy VARCHAR(200),
  @TableName SYSNAME,
  @PrimaryKeyJson NVARCHAR(4000),
  @Operation CHAR(1),
  @BeforeJson NVARCHAR(MAX),
  @AfterJson NVARCHAR(MAX)
AS
BEGIN
  SET NOCOUNT ON;
  DECLARE @ChangeSetId BIGINT;

  -- Option A: one ChangeSet per call. To reuse a single ChangeSet per transaction/request,
  -- cache the id in SESSION_CONTEXT and skip this INSERT when it is already set.
  INSERT INTO ChangeSet (SourceSystem, CorrelationId, CreatedBy)
  VALUES (@SourceSystem, @CorrelationId, @CreatedBy);

  SET @ChangeSetId = SCOPE_IDENTITY();

  INSERT INTO ChangeLog (ChangeSetId, TableName, PrimaryKeyJson, Operation, BeforeJson, AfterJson, ChangedBy)
  VALUES (@ChangeSetId, @TableName, @PrimaryKeyJson, @Operation, @BeforeJson, @AfterJson, @CreatedBy);
END

Trigger for UPDATE:

CREATE TRIGGER TR_Customer_Update
ON dbo.Customer
AFTER UPDATE
AS
BEGIN
  SET NOCOUNT ON;
  IF NOT EXISTS (SELECT 1 FROM inserted) RETURN;

  -- SESSION_CONTEXT returns sql_variant, so convert explicitly
  DECLARE @CorrelationId UNIQUEIDENTIFIER = CONVERT(UNIQUEIDENTIFIER, SESSION_CONTEXT(N'CorrelationId'));
  DECLARE @SourceSystem VARCHAR(100) = CONVERT(VARCHAR(100), SESSION_CONTEXT(N'SourceSystem'));
  DECLARE @User VARCHAR(200) = CONVERT(VARCHAR(200), SESSION_CONTEXT(N'User'));

  IF @CorrelationId IS NULL
    SET @CorrelationId = NEWID(); -- fallback when the app did not set context

  -- ChangeLog.ChangeSetId is NOT NULL, so create the ChangeSet here
  -- (or reuse one cached in SESSION_CONTEXT, as in dbo.AppendChangeSet)
  DECLARE @ChangeSetId BIGINT;
  INSERT INTO ChangeSet (SourceSystem, CorrelationId, CreatedBy)
  VALUES (ISNULL(@SourceSystem, 'Unknown'), @CorrelationId, ISNULL(@User, SUSER_SNAME()));
  SET @ChangeSetId = SCOPE_IDENTITY();

  INSERT INTO ChangeLog(ChangeSetId, TableName, PrimaryKeyJson, Operation, BeforeJson, AfterJson, ChangedBy, ChangedAt, SequenceNo)
  SELECT
    @ChangeSetId,
    'Customer',
    JSON_QUERY('{"CustomerId":' + CONVERT(NVARCHAR(50), d.CustomerId) + '}'),
    'U',
    (SELECT d.CustomerId, d.Name, d.Email, d.Phone FOR JSON PATH, WITHOUT_ARRAY_WRAPPER),
    (SELECT i.CustomerId, i.Name, i.Email, i.Phone FOR JSON PATH, WITHOUT_ARRAY_WRAPPER),
    ISNULL(@User, SUSER_SNAME()),
    SYSUTCDATETIME(),
    ROW_NUMBER() OVER (ORDER BY d.CustomerId)  -- sequence when an UPDATE touches multiple rows
  FROM deleted d
  JOIN inserted i ON d.CustomerId = i.CustomerId;
END;

Notes

  • Use SESSION_CONTEXT to pass request-specific info from the application to the DB (CorrelationId, User, SourceSystem). This avoids expensive lookups in the trigger. In .NET, after opening the connection, call EXEC sp_set_session_context for each key, as shown below.

  • FOR JSON PATH, WITHOUT_ARRAY_WRAPPER gives small JSON representation.

  • Keep ChangeLog inserts minimal; heavy enrichment and publishing is deferred.

Application pattern: set session context

In .NET (EF Core or Dapper), on opening connection call:

await connection.ExecuteAsync(
  "EXEC sp_set_session_context @key, @value",
  new { key = "CorrelationId", value = correlationId.ToString() });

await connection.ExecuteAsync(
  "EXEC sp_set_session_context @key, @value",
  new { key = "User", value = currentUserId });

Wrap this in a DB-context interceptor so every request carries context.
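
A minimal sketch of such an interceptor with EF Core's DbConnectionInterceptor (ICurrentUser is an assumed application service; in real code the correlation id would come from the request rather than a fresh GUID):

using System.Data.Common;
using Microsoft.EntityFrameworkCore.Diagnostics;

// Sketch: stamp session context onto every opened connection.
public sealed class SessionContextInterceptor : DbConnectionInterceptor
{
  private readonly ICurrentUser _currentUser; // hypothetical accessor for the authenticated user
  public SessionContextInterceptor(ICurrentUser currentUser) => _currentUser = currentUser;

  public override async Task ConnectionOpenedAsync(
      DbConnection connection, ConnectionEndEventData eventData, CancellationToken ct = default)
  {
    await SetAsync(connection, "CorrelationId", Guid.NewGuid().ToString(), ct); // use the request's id in practice
    await SetAsync(connection, "User", _currentUser.UserId, ct);
    await SetAsync(connection, "SourceSystem", "WebAPI", ct);
  }

  private static async Task SetAsync(DbConnection conn, string key, string value, CancellationToken ct)
  {
    using var cmd = conn.CreateCommand();
    cmd.CommandText = "EXEC sp_set_session_context @key, @value";
    var k = cmd.CreateParameter(); k.ParameterName = "@key"; k.Value = key; cmd.Parameters.Add(k);
    var v = cmd.CreateParameter(); v.ParameterName = "@value"; v.Value = value; cmd.Parameters.Add(v);
    await cmd.ExecuteNonQueryAsync(ct);
  }
}

Register it via AddInterceptors on your DbContextOptionsBuilder so every connection the context opens carries the context keys.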

Background processing — Change Processor

The worker reads unprocessed ChangeSets or ChangeLog rows and performs heavy tasks:

  • Enrich change data (lookup display names, resolve tenant)

  • Publish to message bus (Kafka, RabbitMQ, Service Bus)

  • Update read-models / materialized views

  • Push to search index (Elasticsearch)

  • Mark ChangeSet processed

Important patterns

  • Use a watermark (the last processed ChangeSetId) to resume after a crash. Never read with TOP without an ORDER BY.

  • Make processing idempotent: each change must produce the same result if processed twice. Use ChangeId as the idempotency key.

  • Batch reads to improve throughput (e.g., 1000 changes per batch).

  • Use READPAST/UPDLOCK hints (or a Processing status flag) to claim rows for processing.

Example worker (C# simplified)

public async Task ProcessChangesAsync(CancellationToken ct)
{
  while (!ct.IsCancellationRequested)
  {
    // Atomically claim a batch: Processed = 2 marks "in progress" (hence TINYINT, not BIT).
    // UPDLOCK/READPAST lets several workers claim disjoint batches.
    var sets = (await _db.QueryAsync<ChangeSetDto>(
      @"WITH cte AS (
           SELECT TOP (@batch) * FROM ChangeSet WITH (UPDLOCK, READPAST)
           WHERE Processed = 0 ORDER BY ChangeSetId
         )
         UPDATE cte SET Processed = 2 OUTPUT inserted.*;",
      new { batch = 100 })).ToList();

    foreach (var set in sets)
    {
       var changes = await _db.QueryAsync<ChangeLogDto>(
         "SELECT * FROM ChangeLog WHERE ChangeSetId = @cs ORDER BY SequenceNo",
         new { cs = set.ChangeSetId });

       // transform/enrich
       // publish to queue or index
       // then mark the set processed
       await _db.ExecuteAsync(
         "UPDATE ChangeSet SET Processed = 1, ProcessedAt = SYSUTCDATETIME() WHERE ChangeSetId = @id",
         new { id = set.ChangeSetId });
    }

    if (sets.Count == 0)
      await Task.Delay(TimeSpan.FromSeconds(1), ct); // back off only when idle
  }
}

Notes

  • Marking Processed = 2 claims the sets for this worker and prevents duplicate processing; the claim and the read happen in a single atomic statement. A small recovery job can reset stale Processed = 2 rows if a worker dies mid-batch.
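
One way to host this loop is a .NET BackgroundService (a sketch; ChangeProcessor is the class holding ProcessChangesAsync above):

using Microsoft.Extensions.Hosting;

// Sketch: run the processor for the lifetime of the application.
public sealed class ChangeProcessorService : BackgroundService
{
  private readonly ChangeProcessor _processor;
  public ChangeProcessorService(ChangeProcessor processor) => _processor = processor;

  protected override Task ExecuteAsync(CancellationToken stoppingToken)
      => _processor.ProcessChangesAsync(stoppingToken);
}

// Program.cs:
// builder.Services.AddSingleton<ChangeProcessor>();
// builder.Services.AddHostedService<ChangeProcessorService>();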

Downstream consumers & idempotency

Publish messages with metadata:

{
  "ChangeSetId": 123,
  "ChangeId": 456,
  "Table": "Customer",
  "Operation": "U",
  "PrimaryKey": {"CustomerId": 99},
  "Before": {...},
  "After": {...},
  "ChangedBy":"user@company",
  "ChangedAt": "2025-11-20T12:00:00Z"
}

Consumers should store the last processed ChangeId per source to avoid duplicate processing. Use a unique message key (ChangeId) with Kafka, or a dedupe store in a relational DB.
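
A minimal dedupe check on the consumer side might look like this (a sketch assuming a ProcessedChange table with a unique ChangeId column; ChangeMessage and ApplyChangeAsync are illustrative):

// Sketch: idempotent consume using ChangeId as the idempotency key.
public async Task<bool> TryConsumeAsync(ChangeMessage msg, CancellationToken ct)
{
  // The INSERT is skipped when this ChangeId was already recorded.
  var inserted = await _db.ExecuteAsync(
    @"INSERT INTO ProcessedChange (ChangeId, ProcessedAt)
      SELECT @id, SYSUTCDATETIME()
      WHERE NOT EXISTS (SELECT 1 FROM ProcessedChange WHERE ChangeId = @id);",
    new { id = msg.ChangeId });

  if (inserted == 0) return false;   // duplicate; skip

  await ApplyChangeAsync(msg, ct);   // downstream projection/index update
  return true;
}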

Undo & Reconciliation

Undo support can be implemented by applying inverse operations derived from BeforeJson. The undo of an UPDATE is a new UPDATE that writes BeforeJson back. For a DELETE, re-insert the row from BeforeJson. For an INSERT, delete the row.

Careful rules

  • Only allow undo for latest changes in a ChangeSet or when business rules allow.

  • Always create a new ChangeSet for the undo action so history is preserved.

  • Validate constraints: re-inserting a deleted PK may fail if constraints changed; notify user.

Implement an Undo API in .NET that (a sketch of step 3 follows the list):

  1. Takes ChangeSetId (or ChangeId)

  2. Loads related ChangeLog entries in reverse order (LIFO)

  3. For each change compute inverse operation and apply inside a transaction with appropriate concurrency checks

  4. Log the undo as a new ChangeSet with references to original ChangeSet.
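
A condensed sketch of step 3, assuming BeforeJson holds full row snapshots (the FromJson helpers are illustrative and would build parameterized SQL from the snapshot):

// Sketch: apply the inverse of one logged change inside an ambient transaction.
public async Task ApplyInverseAsync(ChangeLogDto change)
{
  switch (change.Operation)
  {
    case 'U':
      // Restore the "before" state of the row.
      await UpdateFromJsonAsync(change.TableName, change.PrimaryKeyJson, change.BeforeJson!);
      break;
    case 'D':
      // Re-insert the deleted row from its snapshot.
      await InsertFromJsonAsync(change.TableName, change.BeforeJson!);
      break;
    case 'I':
      // Remove the inserted row.
      await DeleteByKeyAsync(change.TableName, change.PrimaryKeyJson);
      break;
  }
  // The inverse writes fire the normal triggers, so the undo itself
  // is captured under the new undo ChangeSet.
}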

Retention & legal hold

Retention is essential

  • Keep full change history for entities flagged with a legal hold.

  • Soft-delete old ChangeLog rows after the retention period (e.g., 365 days) unless a legal hold applies.

  • For long-term archival, export change sets into compressed files and move to archival storage.

Add columns

ALTER TABLE ChangeSet ADD RetainUntil DATETIME2 NULL, LegalHold BIT DEFAULT 0;

Purge job

DECLARE @RetentionDays INT = 365;

DELETE FROM ChangeLog WHERE ChangeSetId IN (
  SELECT ChangeSetId FROM ChangeSet
  WHERE Processed = 1 AND (LegalHold = 0 OR LegalHold IS NULL)
    AND (RetainUntil IS NULL OR RetainUntil < SYSUTCDATETIME())
    AND CreatedAt < DATEADD(DAY, -@RetentionDays, SYSUTCDATETIME())
);
-- then delete the now-empty ChangeSet rows using the same predicate

Performance considerations

  • Keep JSON small: store only the changed fields rather than the full row when possible. For UPDATE, include only the fields that changed (a diff sketch follows this list).

  • Use compressed storage for long-term: compress JSON before storing if size matters.

  • Indexing: index ChangeSet.CreatedAt, ChangeLog.TableName and ChangeLog.ChangedAt. Avoid indexing JSON columns heavily.

  • Batch inserts: for high-volume operations use batching (bulk insert) rather than row-by-row trigger inserts — consider application-level batching for bulk imports.

  • Partitioning: partition ChangeLog by time (monthly) for easy purge and archival.

  • Offload enrichment: don't perform lookups inside triggers; do them in the worker.

  • Use minimal locking: triggers append only — avoid updates of large tables inside triggers.
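
A sketch of the changed-fields-only diff using System.Text.Json (.NET 8's JsonNode.DeepEquals; compares two flat JSON objects as produced by FOR JSON PATH — nested values would need deeper handling):

using System.Text.Json.Nodes;

// Sketch: keep only properties whose values differ between before and after.
public static (string BeforeDiff, string AfterDiff) DiffJson(string beforeJson, string afterJson)
{
  var before = JsonNode.Parse(beforeJson)!.AsObject();
  var after  = JsonNode.Parse(afterJson)!.AsObject();
  var beforeOut = new JsonObject();
  var afterOut  = new JsonObject();

  foreach (var (key, afterVal) in after)
  {
    var beforeVal = before[key];
    if (JsonNode.DeepEquals(beforeVal, afterVal)) continue; // unchanged field: drop it
    beforeOut[key] = beforeVal?.DeepClone();                // clone: a node cannot have two parents
    afterOut[key]  = afterVal?.DeepClone();
  }
  return (beforeOut.ToJsonString(), afterOut.ToJsonString());
}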

Alternative: Application-level change logging

Instead of triggers, application code can write change log entries. Benefits:

  • More control, richer metadata, easier versioning.

  • Avoids trigger-related surprises.

  • You can correlate user context without session-context calls.

Downside: every code path must call the logging API (including ETL/imports). But you can centralize logging in the repository layer or use interceptors (an EF Core SaveChanges override or SaveChangesInterceptor).

Example EF Core SaveChanges override (the same logic fits an ISaveChangesInterceptor)

// Inside your DbContext. CreateChangeSetAsync, AppendChangeLogAsync,
// Serialize and GetPrimaryKeyJson are your own helpers.
public override async Task<int> SaveChangesAsync(CancellationToken ct = default)
{
  var entries = ChangeTracker.Entries()
    .Where(e => e.State is EntityState.Added or EntityState.Modified or EntityState.Deleted)
    .ToList();

  var changeSetId = await CreateChangeSetAsync(ct);

  foreach (var e in entries)
  {
     var op = e.State switch
     {
       EntityState.Added   => 'I',
       EntityState.Deleted => 'D',
       _                   => 'U'
     };
     var before = e.State == EntityState.Added   ? null : Serialize(e.OriginalValues);
     var after  = e.State == EntityState.Deleted ? null : Serialize(e.CurrentValues);
     var pkJson = GetPrimaryKeyJson(e);
     await AppendChangeLogAsync(changeSetId, e.Metadata.ClrType.Name, pkJson, op, before, after, ct);
  }

  return await base.SaveChangesAsync(ct);
}

This approach often simplifies debugging and avoids DB triggers.

Angular UI: audit explorer & undo

Provide components:

  • ChangeSetListComponent: list change sets with filter (date, user, table, correlation id).

  • ChangeSetDetailComponent: show changes in a set with before/after diffs and a preview.

  • UndoDialogComponent: allow qualified users to perform undo (with confirmation and pre-undo snapshot).

  • Subscribe to processing status: show worker processing state (queued/processed).

Example Angular API call

// ChangeSetFilter and ChangeSetSummary are your own interfaces (illustrative names)
getChangeSets(filter: ChangeSetFilter): Observable<ChangeSetSummary[]> {
  return this.http.post<ChangeSetSummary[]>('/api/changesets/query', filter);
}

Display diff using a JSON diff library and highlight changed paths.

Testing strategy

  • Unit tests for serialization/deserialization of before/after JSON and for DB helper methods.

  • Integration tests that perform INSERT/UPDATE/DELETE and assert ChangeLog rows are created atomically. Use transaction rollback to keep the test DB clean (a sketch follows this list).

  • Load tests to measure overhead of triggers — simulate production load and measure latency.

  • Failure injection: simulate worker crash to ensure resumed processing works and no data loss.

  • Undo tests to ensure constraints and business rules are respected.
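
For example, an integration test for the UPDATE path might look like this (xUnit-style sketch; _db is a Dapper-style helper whose connections enlist in the ambient transaction):

[Fact]
public async Task Update_writes_a_changelog_row()
{
  using var tx = new System.Transactions.TransactionScope(
      System.Transactions.TransactionScopeAsyncFlowOption.Enabled);

  await _db.ExecuteAsync(
    "UPDATE dbo.Customer SET Email = @e WHERE CustomerId = @id",
    new { e = "new@example.com", id = 99 });

  var count = await _db.ExecuteScalarAsync<int>(
    @"SELECT COUNT(*) FROM ChangeLog
      WHERE TableName = 'Customer' AND Operation = 'U'
        AND JSON_VALUE(PrimaryKeyJson, '$.CustomerId') = '99'");

  Assert.Equal(1, count);
  // no tx.Complete() => everything rolls back, leaving the test DB clean
}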

Observability & monitoring

Track metrics:

  • Changes per minute (ingest rate)

  • ChangeSet processing latency (CreatedAt → ProcessedAt)

  • Number of unprocessed change sets

  • Size distribution of Before/After JSON

  • Worker failure counts and exceptions

Emit traces and correlation IDs from the application through session context into the change log so you can trace a UI action end-to-end.
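
The backlog metric, for instance, can be exposed as an observable gauge with System.Diagnostics.Metrics (a sketch; IDbQuery is a hypothetical synchronous query abstraction running a cheap scalar query):

using System.Diagnostics.Metrics;

// Sketch: expose the unprocessed backlog for dashboards/alerts.
public sealed class ChangeTrackingMetrics
{
  private static readonly Meter Meter = new("ChangeTracking", "1.0");

  public ChangeTrackingMetrics(IDbQuery db)
  {
    Meter.CreateObservableGauge(
      "changesets.unprocessed",
      () => db.ExecuteScalar<int>("SELECT COUNT(*) FROM ChangeSet WHERE Processed = 0"),
      unit: "{changesets}",
      description: "Change sets waiting for the background processor");
  }
}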

Security & compliance

  • Only authorized users can view or undo changes. Enforce RBAC in API.

  • Encrypt sensitive fields or mask them in BeforeJson / AfterJson (PII). Consider field-level encryption if required.

  • Audit access to change logs (who viewed what).

  • Use secure storage for archived change logs.

Real-world patterns & gotchas

  • Bulk imports: For large imports, temporarily disable triggers and create a single aggregated ChangeSet in application code to avoid millions of trigger inserts (a sketch follows this list).

  • Schema changes: When changing table structure, keep change log format compatible (store column name keys, not positional).

  • Multi-tenancy: include TenantId in ChangeSet and ChangeLog for filtering and retention.

  • Transactions spanning multiple DBs: you cannot capture cross-db atomicity with simple triggers — consider distributed transaction design or application-level orchestration.

  • Prevent trigger storms: avoid triggers whose writes fire further triggers, creating a loop.
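
A sketch of the bulk-import pattern from the first bullet (ImportCustomersAsync stands in for your bulk loader, e.g. SqlBulkCopy; run this in a maintenance window since the trigger is briefly disabled):

// Sketch: disable the trigger around a bulk import, then log one aggregated ChangeSet.
public async Task BulkImportAsync(Stream csv, string user, CancellationToken ct)
{
  await _db.ExecuteAsync("DISABLE TRIGGER TR_Customer_Update ON dbo.Customer;");
  try
  {
    var rowCount = await ImportCustomersAsync(csv, ct);
    var summary = System.Text.Json.JsonSerializer.Serialize(new { bulkImport = true, rows = rowCount });

    // One aggregated ChangeSet instead of millions of per-row entries.
    await _db.ExecuteAsync(
      @"INSERT INTO ChangeSet (SourceSystem, CorrelationId, CreatedBy)
        VALUES ('ImportJob', NEWID(), @user);
        INSERT INTO ChangeLog (ChangeSetId, TableName, PrimaryKeyJson, Operation, AfterJson, ChangedBy)
        VALUES (SCOPE_IDENTITY(), 'Customer', '{}', 'I', @summary, @user);",
      new { user, summary });
  }
  finally
  {
    await _db.ExecuteAsync("ENABLE TRIGGER TR_Customer_Update ON dbo.Customer;");
  }
}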

Quick checklist before rollout

  1. Implement session context propagation in all DB connections.

  2. Create compact JSON snapshots (only changed fields where possible).

  3. Ensure triggers are minimal (append-only).

  4. Create robust background processor with idempotency and batch processing.

  5. Add monitoring/alerts for processing lag.

  6. Implement retention and legal hold policies.

  7. Write tests: functional, load, failure injection.

  8. Secure the API and logs.

Conclusion

A well-designed change tracking engine without CDC gives you full control: atomic change capture, custom metadata, grouping of changes, undo support, and flexible downstream processing. The pattern combines lightweight triggers or application interceptors, an append-only change store, and a robust background processor that enriches and publishes changes.