A Data Versioning System keeps the complete history of changes to your records: who changed what, when, and how to recover or audit older versions. This is critical for audit, debugging, compliance (GDPR/financial), and features such as “time travel”, record revert, and change comparisons.
This article gives a practical, production-ready design with multiple implementation options (built-in temporal tables, manual history tables with triggers, and application-level capture), ER diagrams, flowcharts, SQL scripts, ASP.NET Core patterns, and best practices.
Goals
Capture every INSERT / UPDATE / DELETE for important tables.
Keep metadata: who, when, reason (optional), transaction id.
Efficient storage and query patterns.
Provide APIs to read history, diff versions and revert a record.
Retention policy and archive strategies.
Support both simple and complex schemas.
High-level architecture (Visio-style)
+------------------+    writes/reads     +-------------------+
|   Application    | <-----------------> |    Primary DB     |
|  (ASP.NET Core)  |                     |   (SQL Server)    |
+------------------+                     +-------------------+
         |                                         |
         |                                         v
         |                           +-----------------------+
         |                           |   Versioning Store    |
         |                           | - Temporal Tables OR  |
         |                           | - History Tables +    |
         |                           |   Triggers            |
         |                           +-----------------------+
         |                                         |
         v                                         v
+--------------------+                  +-------------------+
|   Audit / Admin    |                  |  Archive / Cold   |
|   UI (history /    |                  | storage (S3/Blob) |
|  compare / revert) |                  +-------------------+
+--------------------+
Flowchart (change -> capture -> query -> revert)
User updates record
↓
Application executes SQL
↓
Versioning mechanism captures change
(temporal OR trigger OR app-level)
↓
History stored with metadata (user, ts, txid)
↓
Admin queries history / diffs
↓
Admin reverts (optional) -> write new record (audit captured)
ER diagram (core objects)
+-------------------+            +------------------------+
|     MyEntity      | 1        * |   MyEntity_History     |
+-------------------+ <--------> +------------------------+
| Id (PK)           |            | HistoryId (PK)         |
| Name              |            | EntityId (FK)          |
| Value             |            | ValidFrom              |
| ...               |            | ValidTo                |
+-------------------+            | ChangedBy              |
                                 | ChangeType (I/U/D)     |
                                 | PayloadJson (full row) |
                                 | ChangeReason           |
                                 | TxId                   |
                                 +------------------------+
Two main approaches (summary)
System-versioned temporal tables (SQL Server feature, simple, performant, automatic).
Pros: automatic row-versioning, built-in time travel query, efficient.
Cons: less flexible metadata (who/why), harder to store JSON diffs, needs SQL Server 2016+.
Manual history tables + triggers / stored procedures.
Pros: full control (store user, reason, JSON diffs, tx ids), easier to extend.
Cons: more code, need to manage triggers and retention.
Additionally, application-level capture (EF Core interceptors, the change tracker) can supplement either approach with user/context information and business logic.
Option A — System-versioned temporal table (recommended when available)
1) Create temporal table
CREATE TABLE dbo.Customer
(
CustomerId UNIQUEIDENTIFIER NOT NULL PRIMARY KEY DEFAULT NEWID(),
Name NVARCHAR(200) NOT NULL,
Email NVARCHAR(200),
Balance DECIMAL(18,2) DEFAULT 0,
SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
SysEndTime DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.CustomerHistory));
SQL Server creates dbo.CustomerHistory automatically with the row versions. But CustomerHistory does not include ChangedBy or ChangeReason.
2) Add audit metadata (who/why)
You can store user info in a separate audit table, extend the approach with triggers that insert augmented history rows, or have the application write to an audit table alongside the DML.
Example: keep CustomerHistoryMeta where you store (HistoryRowPK, ChangedBy, ChangeReason, TxId) linked to history rows using SysStartTime and CustomerId as keys.
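A minimal sketch of such a metadata table (the table and column names are illustrative; SQL Server does not generate this for you):
-- Sketch: the application writes one row here per change and joins it back
-- to the matching temporal history row on (CustomerId, SysStartTime).
CREATE TABLE dbo.CustomerHistoryMeta
(
    MetaId       BIGINT IDENTITY PRIMARY KEY,
    CustomerId   UNIQUEIDENTIFIER NOT NULL,
    SysStartTime DATETIME2 NOT NULL,          -- matches the history row's period start
    ChangedBy    NVARCHAR(200) NULL,
    ChangeReason NVARCHAR(500) NULL,
    TxId         UNIQUEIDENTIFIER NULL
);
CREATE INDEX IX_CustomerHistoryMeta_Key ON dbo.CustomerHistoryMeta(CustomerId, SysStartTime);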
3) Query history (time travel)
-- Get record as of a point in time
SELECT *
FROM dbo.Customer
FOR SYSTEM_TIME AS OF '2025-11-01 10:00:00'
WHERE CustomerId = '...';
-- Get all versions
SELECT * FROM dbo.Customer
FOR SYSTEM_TIME ALL
WHERE CustomerId = '...'
ORDER BY SysStartTime;
4) Revert to historical version (pattern)
To revert, read the historical row and write it back as a new update (or insert a new current row) rather than "restoring" the history row directly; that way the revert itself is tracked as a change.
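For example, against the Customer table above (a sketch; the timestamp and id are placeholders):
-- Read the historical version and apply it as a new update, so the revert itself
-- becomes another tracked version.
UPDATE c
SET Name = h.Name,
    Email = h.Email,
    Balance = h.Balance
FROM dbo.Customer AS c
JOIN dbo.Customer FOR SYSTEM_TIME AS OF '2025-11-01 10:00:00' AS h
    ON h.CustomerId = c.CustomerId
WHERE c.CustomerId = '...';
Because this is an ordinary UPDATE, the temporal mechanism records the revert as just another version.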
Option B — Manual history table + trigger (flexible, full metadata)
1) Schema (example for table Customer)
CREATE TABLE dbo.Customer
(
CustomerId UNIQUEIDENTIFIER PRIMARY KEY DEFAULT NEWID(),
Name NVARCHAR(200),
Email NVARCHAR(200),
Balance DECIMAL(18,2)
);
CREATE TABLE dbo.CustomerHistory
(
HistoryId BIGINT IDENTITY PRIMARY KEY,
CustomerId UNIQUEIDENTIFIER NOT NULL,
ChangeType CHAR(1) NOT NULL, -- I/U/D
ChangedBy NVARCHAR(200) NULL,
ChangeReason NVARCHAR(500) NULL,
ChangedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
PayloadJson NVARCHAR(MAX) NOT NULL, -- full row snapshot as JSON
TxId UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID()
);
CREATE INDEX IX_CustomerHistory_CustomerId ON dbo.CustomerHistory(CustomerId);
CREATE INDEX IX_CustomerHistory_TxId ON dbo.CustomerHistory(TxId);
2) Trigger to capture changes
CREATE TRIGGER trg_Customer_Audit
ON dbo.Customer
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @txid UNIQUEIDENTIFIER = NEWID();
    DECLARE @changedBy NVARCHAR(200) = SUSER_SNAME(); -- replace via SESSION_CONTEXT if the app sets it (see below)
    -- Pure inserts: rows in inserted with no matching row in deleted
    INSERT INTO dbo.CustomerHistory (CustomerId, ChangeType, ChangedBy, ChangeReason, PayloadJson, TxId)
    SELECT i.CustomerId, 'I', @changedBy, NULL, (SELECT i.* FOR JSON PATH, WITHOUT_ARRAY_WRAPPER), @txid
    FROM inserted i
    WHERE NOT EXISTS (SELECT 1 FROM deleted d WHERE d.CustomerId = i.CustomerId);
    -- Updates: rows present in both inserted and deleted (capture new snapshot, or both old & new as needed)
    INSERT INTO dbo.CustomerHistory (CustomerId, ChangeType, ChangedBy, ChangeReason, PayloadJson, TxId)
    SELECT u.CustomerId, 'U', @changedBy, NULL, (SELECT u.* FOR JSON PATH, WITHOUT_ARRAY_WRAPPER), @txid
    FROM inserted u
    WHERE EXISTS (SELECT 1 FROM deleted d WHERE d.CustomerId = u.CustomerId);
    -- Pure deletes: rows in deleted with no matching row in inserted
    INSERT INTO dbo.CustomerHistory (CustomerId, ChangeType, ChangedBy, ChangeReason, PayloadJson, TxId)
    SELECT d.CustomerId, 'D', @changedBy, NULL, (SELECT d.* FOR JSON PATH, WITHOUT_ARRAY_WRAPPER), @txid
    FROM deleted d
    WHERE NOT EXISTS (SELECT 1 FROM inserted i WHERE i.CustomerId = d.CustomerId);
END
Notes
SUSER_SNAME() returns the SQL login; for the application user, prefer setting SESSION_CONTEXT or CONTEXT_INFO before the DML and reading it in the trigger (see below).
PayloadJson stores full state snapshot; you may instead store only changed columns (diff) if storage is a concern.
3) Passing application user into trigger
Before executing DML, set session context from application:
EXEC sp_set_session_context 'AppUser', 'user@example.com';
Then inside trigger:
DECLARE @changedBy NVARCHAR(200) = CONVERT(NVARCHAR(200), SESSION_CONTEXT(N'AppUser'));
This makes ChangedBy accurate.
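A sketch of doing this from the application with ADO.NET (Microsoft.Data.SqlClient; the connection string, user name and UPDATE statement are illustrative, and the same call can be issued from an EF Core interceptor):
// Set the application user on the session before running the DML on the same connection,
// so the trigger can read SESSION_CONTEXT(N'AppUser').
await using var conn = new SqlConnection(connectionString);
await conn.OpenAsync();

using (var setCtx = new SqlCommand("EXEC sp_set_session_context @key = N'AppUser', @value = @user;", conn))
{
    setCtx.Parameters.AddWithValue("@user", currentUserName);
    await setCtx.ExecuteNonQueryAsync();
}

using (var update = new SqlCommand("UPDATE dbo.Customer SET Email = @email WHERE CustomerId = @id;", conn))
{
    update.Parameters.AddWithValue("@email", newEmail);
    update.Parameters.AddWithValue("@id", customerId);
    await update.ExecuteNonQueryAsync();
}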
4) Stored procedures to query history
CREATE PROCEDURE usp_GetCustomerHistory
@CustomerId UNIQUEIDENTIFIER
AS
BEGIN
SELECT HistoryId, ChangeType, ChangedBy, ChangeReason, ChangedAt, PayloadJson
FROM dbo.CustomerHistory
WHERE CustomerId = @CustomerId
ORDER BY ChangedAt DESC;
END
5) Revert procedure (create new record state from history)
CREATE PROCEDURE usp_RevertCustomerToHistory
@HistoryId BIGINT,
@RevertedBy NVARCHAR(200)
AS
BEGIN
DECLARE @payload NVARCHAR(MAX);
SELECT @payload = PayloadJson FROM dbo.CustomerHistory WHERE HistoryId = @HistoryId;
-- parse JSON into columns and update current table
UPDATE dbo.Customer
SET Name = JSON_VALUE(@payload, '$.Name'),
Email = JSON_VALUE(@payload, '$.Email'),
Balance = TRY_CAST(JSON_VALUE(@payload, '$.Balance') AS DECIMAL(18,2))
WHERE CustomerId = JSON_VALUE(@payload, '$.CustomerId');
-- insert a history record marking revert
INSERT INTO dbo.CustomerHistory (CustomerId, ChangeType, ChangedBy, ChangeReason, PayloadJson, TxId)
VALUES (JSON_VALUE(@payload, '$.CustomerId'), 'U', @RevertedBy, 'Revert to HistoryId ' + CAST(@HistoryId AS NVARCHAR(20)), @payload, NEWID());
END
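Calling the procedure (the HistoryId value is illustrative):
EXEC dbo.usp_RevertCustomerToHistory @HistoryId = 42, @RevertedBy = N'admin';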
Application-level capture (EF Core interceptor) — add user / reason
If you use EF Core, intercept SaveChanges to write audit rows to the history table so you have full contextual data (user id, IP, reason).
Example (simplified)
public class AuditSaveChangesInterceptor : SaveChangesInterceptor
{
    private readonly IHttpContextAccessor _http;
    public AuditSaveChangesInterceptor(IHttpContextAccessor http) => _http = http;

    public override async ValueTask<InterceptionResult<int>> SavingChangesAsync(DbContextEventData eventData,
        InterceptionResult<int> result, CancellationToken cancellationToken = default)
    {
        var ctx = eventData.Context;
        if (ctx is null)
            return await base.SavingChangesAsync(eventData, result, cancellationToken);

        var user = _http.HttpContext?.User?.Identity?.Name ?? "system";

        // Restrict to the audited entity type and materialise the list first:
        // adding history rows below would otherwise modify the tracker while it is being enumerated.
        var entries = ctx.ChangeTracker.Entries<Customer>()
            .Where(e => e.State == EntityState.Added
                     || e.State == EntityState.Modified
                     || e.State == EntityState.Deleted)
            .ToList();

        foreach (var entry in entries)
        {
            // For deletes, CurrentValues still holds the last known values of the row.
            var payload = JsonSerializer.Serialize(entry.CurrentValues.ToObject());
            var history = new CustomerHistory
            {
                CustomerId = (Guid)entry.Property("CustomerId").CurrentValue!,
                ChangeType = entry.State == EntityState.Added ? "I" : entry.State == EntityState.Deleted ? "D" : "U",
                ChangedBy = user,
                PayloadJson = payload,
                ChangedAt = DateTime.UtcNow,
                TxId = Guid.NewGuid()
            };
            ctx.Set<CustomerHistory>().Add(history);
        }

        return await base.SavingChangesAsync(eventData, result, cancellationToken);
    }
}
Register interceptor in Program.cs for EF Core.
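A minimal registration sketch (AppDbContext and the "Default" connection string name are assumptions):
// Program.cs
builder.Services.AddHttpContextAccessor();
builder.Services.AddScoped<AuditSaveChangesInterceptor>();
builder.Services.AddDbContext<AppDbContext>((sp, options) =>
    options.UseSqlServer(builder.Configuration.GetConnectionString("Default"))
           .AddInterceptors(sp.GetRequiredService<AuditSaveChangesInterceptor>()));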
Benefits: you have direct access to user principal and request info.
Query patterns & useful API endpoints
GET /api/entity/{id}/history — list versions.
GET /api/entity/{id}/history/{historyId} — fetch specific version.
GET /api/entity/{id}/diff?from=histA&to=histB — return field-level diff.
POST /api/entity/{id}/revert — revert to historyId (authz required).
GET /api/entity/{id}/asOf?timestamp=... — time-travel view (temporal tables easy).
Example C# controller methods using Dapper/EF Core follow the same pattern as the stored procedures; a minimal controller sketch is shown below.
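A rough sketch of such a controller (the IHistoryService abstraction, RevertRequest DTO and route shapes are illustrative, not from a framework; wire them to the stored procedures or EF Core queries above):
[ApiController]
[Route("api/customer/{id:guid}")]
public class CustomerHistoryController : ControllerBase
{
    private readonly IHistoryService _history;
    public CustomerHistoryController(IHistoryService history) => _history = history;

    [HttpGet("history")]
    public async Task<IActionResult> GetHistory(Guid id)
        => Ok(await _history.GetHistoryAsync(id));

    [HttpGet("diff")]
    public async Task<IActionResult> Diff(Guid id, [FromQuery] long from, [FromQuery] long to)
        => Ok(await _history.DiffAsync(id, from, to));

    [HttpPost("revert")]
    [Authorize(Roles = "Admin")]
    public async Task<IActionResult> Revert(Guid id, [FromBody] RevertRequest request)
    {
        await _history.RevertAsync(id, request.HistoryId, User.Identity?.Name ?? "system");
        return NoContent();
    }
}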
Field-level diff (practical approach)
Store PayloadJson for each version (full row).
To compute a diff, fetch the two JSON payloads, compare their keys, and report the changed fields with old and new values. In server code (C#), parse each payload into a dictionary and compare the values.
Example diff (C#)
// Diff is a simple DTO with Field/Old/New string properties (definition assumed).
// System.Text.Json deserialises values as JsonElement, so compare raw JSON text rather than object references.
var left  = JsonSerializer.Deserialize<Dictionary<string, JsonElement>>(leftJson)  ?? new();
var right = JsonSerializer.Deserialize<Dictionary<string, JsonElement>>(rightJson) ?? new();
var diffs = new List<Diff>();
foreach (var key in left.Keys.Union(right.Keys))
{
    string? lv = left.TryGetValue(key, out var l) ? l.GetRawText() : null;
    string? rv = right.TryGetValue(key, out var r) ? r.GetRawText() : null;
    if (!string.Equals(lv, rv))
        diffs.Add(new Diff { Field = key, Old = lv, New = rv });
}
Return diffs to UI.
Retention, compression and archiving
Keep history for required retention window (e.g., 2–7 years) per compliance.
Archive older history to cheaper storage (Parquet/JSON files in Blob/S3). Provide a process to purge archived rows.
Consider compressing PayloadJson (e.g., gzip) if the history grows large. Store compressed payloads as VARBINARY or use table compression features.
Avoid storing huge BLOBs repeatedly; store references instead.
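For example, a purge step run after rows have been exported to cold storage might look like this (a sketch; the procedure name, batch size and cutoff parameter are illustrative):
-- Delete history older than the retention window in batches,
-- assuming those rows have already been archived to Blob/S3.
CREATE PROCEDURE usp_PurgeCustomerHistory
    @OlderThan DATETIME2
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @rows INT = 1;
    WHILE @rows > 0
    BEGIN
        DELETE TOP (10000) FROM dbo.CustomerHistory
        WHERE ChangedAt < @OlderThan;
        SET @rows = @@ROWCOUNT;
    END
END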
Concurrency, transaction and tx-id handling
To group every history row written by a single business transaction under one TxId, generate the id in the application and pass it via SESSION_CONTEXT (alongside AppUser), so the trigger reads it instead of generating its own.
Example in application
EXEC sp_set_session_context @key = 'TxId', @value = '...';
Trigger reads
DECLARE @txid UNIQUEIDENTIFIER = CONVERT(uniqueidentifier, SESSION_CONTEXT(N'TxId'));
Security & GDPR considerations
Protect personal data: encryption at rest and in transit.
Provide delete/forget workflows: redact PII in history if user requests (but preserve audit trail per law). Consider pseudonymisation.
Audit access to history itself (who viewed history). Log access events separately.
Restrict revert endpoints to authorized roles.
Performance considerations
Index history tables on (EntityId, ChangedAt).
Partition large history tables by date (monthly/yearly).
Use compression (ROW/PAGE) on history partitions (see the index sketch after this list).
For high write volumes, prefer temporal tables (which are highly optimised) or append-only history tables with minimal indexes to speed up inserts; create read-optimised projections for reporting.
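As a sketch of the index and compression suggestions above (the index name is illustrative):
-- Covering index for "history of one entity, newest first" queries, with page compression.
CREATE INDEX IX_CustomerHistory_Entity_ChangedAt
    ON dbo.CustomerHistory (CustomerId, ChangedAt DESC)
    WITH (DATA_COMPRESSION = PAGE);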
Testing checklist
Verify INSERT / UPDATE / DELETE produce history rows with correct payload.
Test user propagation via SESSION_CONTEXT.
Test revert and confirm a new history row is created.
Test time-travel queries for temporal tables.
Test diff outputs for correctness.
Test retention/archiving and reimport from archive.
Load-test writes and history insert rate.
Example migration & scripts (summary)
Add history table schema for each entity.
Add triggers or enable temporal versioning.
Add SyncWatermark table if you need cross-system sync.
Add stored procedures to fetch history, revert, cleanup/archive.
Add application interceptor to set SESSION_CONTEXT('AppUser') and optional TxId.
Build APIs for UI and admin tasks.
Final recommendations
If using SQL Server 2016+ and you only need row-version history and time-travel queries: use system-versioned temporal tables (easier, performant).
If you need richer metadata (who/why), JSON diffs, or complex revert/grouping: use manual history tables + triggers together with application-level session context.
Combine both approaches: use temporal tables for automatic versioning and maintain a separate AuditMeta table with user/tx metadata, written by the application or a trigger, that links history rows to the metadata.
Automate retention, archiving and monitoring.