Concept Of Change Data Capture (CDC) In SQL Server - Part Two

SQL Server 2008 – Change Data Capture (CDC) example

One of the new features of SQL Server 2008 is Change Data Capture (CDC). CDC is used to track changes (inserts, updates and deletes) made on SQL Server tables.


When CDC has been configured for a database and table, the capture process will monitor the transaction log.

For each source table an associated change table will be created. This is populated by the capture process as the source table changes. Two table-valued functions are used to query the change tables and provide the required result set. The rest of this blog will provide an example of how to set up and use the CDC functions.

Configure CDC

Firstly we need to configure the database and required table for CDC. I’ve created a test database for this (called CDCTest).

To enable CDC for a database run:
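The original snippet isn't shown here; a minimal sketch, using the CDCTest database name from the article and the standard system procedure:

```sql
USE CDCTest;
GO
-- enable CDC at the database level; creates the cdc schema, tables and roles
EXEC sys.sp_cdc_enable_db;
GO
```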

To avoid SQL Server error 15404, ensure that the SQL Server service is running as a domain user when the CDCTest database owner is a domain account.

To check if cdc is enabled for the database run the query:
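The check query was not reproduced here; a sketch using the standard `sys.databases` catalog view:

```sql
-- is_cdc_enabled = 1 once sys.sp_cdc_enable_db has run
SELECT name, is_cdc_enabled
FROM sys.databases
WHERE name = 'CDCTest';
```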

The sp_cdc_enable_db procedure will create system tables in the cdc schema. They are:

  • captured_columns – defines which columns will be monitored for changes
  • change_table – defines the change table which will be populated by the CDC process
  • index_columns – defines the index columns for each change table
  • ddl_history – records DDL changes for source tables enabled for CDC
  • lsn_time_mapping – maps the log sequence number (LSN) to the commit time for each transaction on the source table

We don’t need to work with these tables directly for this example. They are used by the CDC stored procedures and jobs to manage the capture and cleanup process.

Next we must configure which table we will monitor for CDC. First let’s create a source table in the CDCTest database called tbl1 with a primary key constraint:
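The CREATE TABLE statement was elided; a sketch consistent with the rest of the article (the article later refers to an `id` column, three columns in total and a third column called `col2` — the name `col1` and the data types are assumptions):

```sql
-- source table with a primary key (required later for the net changes function)
CREATE TABLE dbo.tbl1 (
    id   INT         NOT NULL CONSTRAINT PK_tbl1 PRIMARY KEY,
    col1 VARCHAR(50) NULL,
    col2 VARCHAR(50) NULL
);
```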

We enable CDC on this table by running,
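The original call isn't shown; a sketch using the standard procedure, with net changes support switched on since the article queries the net changes function later:

```sql
EXEC sys.sp_cdc_enable_table
    @source_schema        = N'dbo',
    @source_name          = N'tbl1',
    @role_name            = NULL,   -- no gating role for this test
    @supports_net_changes = 1;      -- requires a PK or unique index
```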

Check that CDC is enabled for the table with this query,
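Again the query itself was elided; a sketch using the `sys.tables` catalog view:

```sql
-- is_tracked_by_cdc = 1 once sys.sp_cdc_enable_table has run
SELECT name, is_tracked_by_cdc
FROM sys.tables
WHERE name = 'tbl1';
```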

The sp_cdc_enable_table procedure will create the following,

  • cdc.dbo_tbl1_CT – the change table (a system table); it includes the same columns as the source table plus a number of control columns
  • Two table-valued functions (cdc.fn_cdc_get_all_changes_dbo_tbl1 and cdc.fn_cdc_get_net_changes_dbo_tbl1) – used to return the required result set from the change table
  • Two SQL Server Agent jobs (CDCTest_capture and CDCTest_cleanup) – capture runs to monitor the transaction log and populate the change table; cleanup runs to purge old rows from the change tables so they don’t grow too big

Insert, update and delete records

Now we’ll produce some insert, update and delete statements to populate our source table with data. The comment is the time when the transaction was run:
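The statements themselves were not reproduced; a sketch matching the sequence the article describes (insert id 1, insert id 2, update id 1, delete id 2). The `col1` values are illustrative, `col2 = 'update'` and the times come from the article:

```sql
INSERT INTO dbo.tbl1 (id, col1, col2) VALUES (1, 'a', 'insert');  -- 16:22:20
INSERT INTO dbo.tbl1 (id, col1, col2) VALUES (2, 'b', 'insert');  -- 16:22:30
UPDATE dbo.tbl1 SET col2 = 'update' WHERE id = 1;                 -- 16:22:40
DELETE FROM dbo.tbl1 WHERE id = 2;                                -- 16:23:00
```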

The source table now looks like,
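A simple select shows the end state:

```sql
-- per the article, a single row remains: the row with id = 1
SELECT * FROM dbo.tbl1;
```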

We inserted rows with ids 1 and 2, updated row 1 and deleted row 2. This leaves the single row above.

Taking a look at the change table we see a different story,
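The change table can be inspected directly (useful for learning, though the table-valued functions are the supported query path):

```sql
-- one row per insert/delete, two rows per update (before and after images)
SELECT * FROM cdc.dbo_tbl1_CT;
```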

Here we can see the change table has a number of control columns and also shows the history of the changes to the table. The __$operation column defines which operation took place (1: delete, 2: insert, 3: update (before change), 4: update (after change)). The __$update_mask column holds a hexadecimal value. This is a bit mask showing which columns were updated. For example, 0x07 = 0b111, read from right to left, means all 3 columns changed; 0x04 = 0b100 means only the third column (col2) changed.

From this change table we can see that id row 1 was inserted first, then id row 2 inserted, id row 1 was updated, and finally id row 2 was deleted.

The __$start_lsn column is the log sequence number (LSN) associated with the committed transaction. This is used by the table-valued functions to select the data when queried. These functions take a start and end LSN (or an associated time) to determine what to return.

__$end_lsn is not used in SQL Server 2008, and the __$seqval column can be used to order the row changes within a transaction.

Querying the change table using the table-valued functions

The table-valued functions are used to extract the changes from the change table based on a start and end LSN, which define the range you want changes returned for. There are a few ways to get your start and end LSNs; helper functions are available for this. Examples are,

  • Get min LSN for a capture instance (associated with a source table) – sys.fn_cdc_get_min_lsn (‘dbo_tbl1’)
  • Get max LSN available – sys.fn_cdc_get_max_lsn ()
  • Convert a time to the nearest LSN (‘smallest LSN, greater than a time’ or ‘greatest LSN, less than or equal to a time’, etc) – sys.fn_cdc_map_time_to_lsn(‘smallest greater than or equal’, ‘2009-09-03 16:22:20.000’)
  • Get next incremental LSN, used when you may save previous upper bound limit and now want to use the next LSN for the lower bound limit – sys.fn_cdc_increment_lsn (<previous upper bound limit>)
  • etc

Here’s an example of how to use the table-valued functions,
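The query was elided from this copy; a sketch combining the helper functions listed above with the two table-valued functions created for the dbo_tbl1 capture instance (N'all' is the standard row filter option):

```sql
DECLARE @from_lsn BINARY(10), @to_lsn BINARY(10);

-- full range: from the first change captured for this instance to the latest
SET @from_lsn = sys.fn_cdc_get_min_lsn('dbo_tbl1');
SET @to_lsn   = sys.fn_cdc_get_max_lsn();

-- every individual change in the range
SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_tbl1(@from_lsn, @to_lsn, N'all');

-- one net row per primary key value
SELECT * FROM cdc.fn_cdc_get_net_changes_dbo_tbl1(@from_lsn, @to_lsn, N'all');
```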

Results are,

With the query and results above we set the start and end LSNs to cover the full range of our insert, update and delete statements. Remember that the row with id 1 had an insert then an update, and the row with id 2 had an insert then a delete. The all_changes table-valued function shows each operation performed during this period: an insert (operation = 2), a second insert (2), an update (4) and a delete (1). The net_changes function shows just the net change: a single insert (operation = 2) with the col2 column showing ‘update’.

The net_changes function gives the net changes with respect to the primary key. Note that using the net changes function requires a primary key or unique index on the source table (as we set up above).

The next example uses a time to set the start LSN; we chose 16:22:50. Only a delete occurred after this point:
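A sketch of the time-based variant, mapping the chosen time (date taken from the article's earlier example) to its nearest LSN:

```sql
DECLARE @from_lsn BINARY(10), @to_lsn BINARY(10);

-- smallest LSN at or after the chosen time
SET @from_lsn = sys.fn_cdc_map_time_to_lsn('smallest greater than or equal',
                                           '2009-09-03 16:22:50.000');
SET @to_lsn   = sys.fn_cdc_get_max_lsn();

SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_tbl1(@from_lsn, @to_lsn, N'all');
SELECT * FROM cdc.fn_cdc_get_net_changes_dbo_tbl1(@from_lsn, @to_lsn, N'all');
```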

Here the all and net changes functions return the same result: a delete operation (1) on the row with id 2.

There are more options available with both the all and net changes functions, such as “all update old”, “all with mask”, etc. Take a look at Books Online for more details.

Clean up change table and set lowest watermark

Over time the change table will grow bigger and bigger and may eventually affect CDC performance. To prevent this, a cleanup job removes redundant changes. The cleanup job (CDCTest_cleanup) runs on a configured schedule (daily by default) or can be run manually via a stored procedure. Based on the retention value configured for the job (see the table msdb.dbo.cdc_jobs), a new low watermark LSN is defined for each capture instance, and records below this watermark are removed from the associated change table.
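A sketch of running the cleanup manually; the one-day watermark is an illustrative choice, the procedure and table names are standard:

```sql
-- inspect the configured retention (in minutes) for the capture and cleanup jobs
SELECT * FROM msdb.dbo.cdc_jobs;

-- remove change rows older than one day for the dbo_tbl1 capture instance
DECLARE @low_lsn BINARY(10);
SET @low_lsn = sys.fn_cdc_map_time_to_lsn('largest less than or equal',
                                          GETDATE() - 1);
EXEC sys.sp_cdc_cleanup_change_table
    @capture_instance = 'dbo_tbl1',
    @low_water_mark   = @low_lsn;
```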

SQL 2008 Change Data Capture – The beginning


Data auditing is a requirement of most enterprise systems and, until now, was a pain to implement with SQL Server. When data auditing was required in pre-2008 databases you had to rely on solutions like triggers or custom change tracking in the application layer. With SQL Server 2008, Microsoft has provided a change data capture mechanism that is an integral part of the database engine. Change data capture is a powerful mechanism for gathering the changes made to data on the SQL Server platform, and it provides its functionality with little overhead.

How this works

Well, in short, it's simple. Change data capture is built on an already existing element of the database engine, the transaction log, and does not require you to alter your database, schema or tables. When you enable change data capture on a database, SQL Server creates a new schema called “cdc” with several tables, stored procedures and functions. The enabling process also creates two SQL Server Agent jobs for your database, one for capture and one for cleanup. These two jobs are the basis of change data capture's asynchronous behaviour. The capture job runs by default every five seconds; it reads the transaction log for changes since the last execution and stores them in the change data capture tables. The cleanup job runs by default once a day and purges the stored data from the capture tables, meaning that you have to gather that data and transfer it somewhere for permanent storage before the cleanup job runs.
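Both job schedules mentioned above are configurable; a sketch using the standard job-management procedure (the interval and retention values here are illustrative):

```sql
-- slow the capture job down from the default 5-second polling interval
EXEC sys.sp_cdc_change_job @job_type = N'capture', @pollinginterval = 10;

-- keep change data for 3 days (retention is expressed in minutes)
EXEC sys.sp_cdc_change_job @job_type = N'cleanup', @retention = 4320;

-- restart the capture job so the new polling interval takes effect
EXEC sys.sp_cdc_stop_job  @job_type = N'capture';
EXEC sys.sp_cdc_start_job @job_type = N'capture';
```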


Microsoft published a SQL Server Best Practices article on MSDN called “Tuning the Performance of Change Data Capture in SQL Server 2008”, going on at length about why, how and where to tune the performance of change data capture. The general conclusion of that article, and of many others on the web, is that change data capture has very little overhead and can be used comfortably on an OLTP database with no fear of grinding your database server to a standstill. You will see a slight decrease in performance through the increased disk writes from the capture job, but if the job is properly configured the decrease should be no more than 5% in my tests. You will see an increase in disk queue length for the log file, but only a slight one for the data disk.


There are, of course, problems with SQL Server change data capture. The three main problems I have faced in working with it are storing contextual information, gathering change data in one place, and schema altering.

Contextual information

Storing contextual information about a change is a pain with change data capture. In most cases when auditing you want to store some contextual data about the operation that changed the data, such as the user id, the date and the reason for the change (form, command, etc.). Since change data capture relies on an asynchronous read of the transaction log, at the time of the read SQL Server no longer has any information about the context under which the change occurred, so that data cannot be gathered.

Gathering change data in one place

Change data capture, as it is designed and implemented (as far as I can tell), is intended to capture data for temporary storage before you transfer it to a data warehouse for permanent storage. The data warehouse is intended to have the same schema as the OLTP database (minus primary and foreign keys), so you simply copy the rows into the data warehouse. There is no built-in way to gather all the data in one place for review, as the feature was not intended for such use.

Schema altering

This brings us to the last problem: altering the schema. If you alter the schema of your OLTP database, you have to change the schema of your data warehouse database too. That is no problem when adding columns, but what about when you delete columns, and how do you then display the data for the user? There is one more thing I think is pure laziness on Microsoft's part. When you change the schema of a table under change data capture, capture continues to track the same columns (and their data types) as before, with no indication that you should do something to start capturing the new columns. It would have sufficed to raise an error saying you cannot alter the table until you drop change data capture for it, but it seems that was too much for Microsoft.

There are workarounds for all of these problems and I will show them in the next article on change data capture.


SQL Server 2008 change data capture is a welcome addition to the SQL Server feature list, helping you create audits for your databases easily and with little overhead. Despite some missing features, which have simple workarounds, it can be used very effectively for auditing your database.

Check back for more articles on change data capture, with the details of implementation and the workarounds needed to make the whole thing functional.