Database migration is a critical task for organizations transitioning between cloud platforms, such as moving from Amazon Web Services (AWS) to Microsoft Azure. In this blog post, we'll explore a C# program that leverages the Microsoft.Data.SqlClient library to transfer database tables and their data from an AWS SQL Server to an Azure SQL Database. The code automates the process of creating databases, copying table schemas, and transferring data, making it a robust solution for small to medium-sized migrations. Let’s dive into the code and break down how it works.
Overview of the Migration Process
The program is designed to:
- Connect to Source and Destination Databases: It establishes connections to an AWS SQL Server (source) and an Azure SQL Database (destination).
- Create Databases if Needed: It checks if the target database exists on Azure and creates it if it doesn’t.
- Transfer Table Schemas and Data: For each database, it retrieves the list of tables, copies their schemas (structure), and transfers their data to the Azure database.
The code uses the Microsoft.Data.SqlClient library, which provides a modern and efficient way to interact with SQL Server databases, including Azure SQL. Let’s walk through the key components of the code.
Setting Up the Environment
The program starts by defining connection details for both the AWS SQL Server and the Azure SQL Database. These include server addresses, usernames, passwords, and a list of databases to transfer.
string awsServer = "your-aws-server";
string awsUsername = "your-username";
string awsPassword = "your-password";
string azureServer = "your-azure-server.database.windows.net";
string azureUsername = "your-username";
string azurePassword = "your-password";
List<string> databasesToTransfer = new List<string>
{
"pass1",
"pass2",
"pass3"
};
Note: In a production environment, sensitive information like credentials should be stored securely (e.g., using environment variables or Azure Key Vault) rather than hardcoded.
The program iterates through the list of databases (pass1, pass2, pass3) and performs the migration for each one. It constructs connection strings for both the source (AWS) and destination (Azure) databases, using the TrustServerCertificate=True option to bypass certificate validation (useful for testing but should be reviewed for production).
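As a minimal sketch of that suggestion (the environment variable names here are hypothetical), the credentials could be read at startup and the connection string assembled with SqlConnectionStringBuilder instead of string concatenation:

// Hypothetical environment variable names - adapt to your own configuration.
string awsServer = Environment.GetEnvironmentVariable("AWS_SQL_SERVER")
    ?? throw new InvalidOperationException("AWS_SQL_SERVER is not set");
string awsUsername = Environment.GetEnvironmentVariable("AWS_SQL_USER")
    ?? throw new InvalidOperationException("AWS_SQL_USER is not set");
string awsPassword = Environment.GetEnvironmentVariable("AWS_SQL_PASSWORD")
    ?? throw new InvalidOperationException("AWS_SQL_PASSWORD is not set");

// SqlConnectionStringBuilder makes the security-related options explicit.
var builder = new SqlConnectionStringBuilder
{
    DataSource = awsServer,
    InitialCatalog = "pass1",
    UserID = awsUsername,
    Password = awsPassword,
    Encrypt = true,                  // prefer encrypted connections in production
    TrustServerCertificate = false   // only relax this for testing
};
string awsConnStr = builder.ConnectionString;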
Creating Databases on Azure
The CreateDatabaseIfNotExists method ensures that the target database exists on the Azure SQL Server before transferring data. Here’s how it works:
- Connect to the Master Database: It connects to the master database on the Azure SQL Server using a connection string.
- Check for Database Existence: It queries the sys.databases system view to check if the target database already exists.
string checkDbSql = $@"
SELECT COUNT(*)
FROM sys.databases
WHERE name = '{dbName}'
";
- Create Database with Retry Logic: If the database doesn’t exist, the method attempts to create it with the following SQL command:
string createDbSql = $@"
CREATE DATABASE [{dbName}]
(
EDITION = 'Basic',
SERVICE_OBJECTIVE = 'Basic'
)";
The database is created in the Basic tier, which is suitable for small workloads. The method includes retry logic (up to 3 attempts) to handle transient errors, such as network issues, with an exponentially increasing delay between retries (10 seconds, then 20, then 40).
This approach ensures reliability when creating databases, as cloud operations can occasionally fail due to temporary issues.
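Condensed from the full listing at the end of the post, the retry loop looks like this:

int maxRetries = 3;
int retryDelaySeconds = 10;
for (int attempt = 1; attempt <= maxRetries; attempt++)
{
    try
    {
        using (SqlCommand cmd = new SqlCommand(createDbSql, conn))
        {
            cmd.CommandTimeout = 120; // database creation can take a while
            cmd.ExecuteNonQuery();
        }
        return; // success
    }
    catch (SqlException ex) when (attempt < maxRetries)
    {
        // Transient failure: wait, then retry with a growing delay.
        Console.WriteLine($"Attempt {attempt} failed: {ex.Message}");
        System.Threading.Thread.Sleep(retryDelaySeconds * 1000);
        retryDelaySeconds *= 2; // 10s, 20s, 40s
    }
}
throw new Exception($"Failed to create database {dbName} after {maxRetries} attempts");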
Retrieving Table List
The GetAllTables method retrieves the list of tables from the source (AWS) database using the INFORMATION_SCHEMA.TABLES view.
string sql = "SELECT TABLE_NAME " +
"FROM INFORMATION_SCHEMA.TABLES " +
"WHERE TABLE_TYPE = 'BASE TABLE'";
It returns a list of table names, which the program iterates over to transfer each table’s schema and data.
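Condensed from the full listing below, the method executes the query with a SqlDataReader and collects each table name:

List<string> tables = new List<string>();
using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (SqlCommand cmd = new SqlCommand(sql, conn))
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            tables.Add(reader.GetString(0)); // first column is TABLE_NAME
        }
    }
}
return tables;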
Transferring Table Schemas
The TransferTableSchema method copies the schema (structure) of a table from the source to the destination database.
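Condensed (with comments) from the full listing at the end of the post, it builds a CREATE TABLE script from the source metadata, drops any existing copy of the table on the destination, and recreates it:

static void TransferTableSchema(string sourceConnStr, string destConnStr, string tableName)
{
    // Build a CREATE TABLE script from the source table's metadata.
    string createTableScript = GetTableCreationScript(sourceConnStr, tableName);
    using (SqlConnection destConn = new SqlConnection(destConnStr))
    {
        destConn.Open();
        // Drop any existing copy of the table on the destination, then recreate it.
        string dropSql = $"IF OBJECT_ID('{tableName}', 'U') IS NOT NULL DROP TABLE [{tableName}];";
        using (SqlCommand dropCmd = new SqlCommand(dropSql, destConn))
        {
            dropCmd.ExecuteNonQuery();
        }
        using (SqlCommand createCmd = new SqlCommand(createTableScript, destConn))
        {
            createCmd.ExecuteNonQuery();
        }
    }
}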
This ensures that the destination table matches the source table’s schema, including data types, constraints, and identity columns.
Transferring Table Data
The TransferTableData method uses the SqlBulkCopy class, a powerful feature of Microsoft.Data.SqlClient, to efficiently transfer data from the source to the destination table.
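Condensed from the full listing below, the method streams the source rows through a SqlDataReader straight into SqlBulkCopy:

static void TransferTableData(string sourceConnStr, string destConnStr, string tableName)
{
    using (SqlConnection sourceConn = new SqlConnection(sourceConnStr))
    using (SqlConnection destConn = new SqlConnection(destConnStr))
    {
        sourceConn.Open();
        destConn.Open();
        // Stream all rows from the source table...
        using (SqlCommand sourceCmd = new SqlCommand($"SELECT * FROM [{tableName}]", sourceConn))
        using (SqlDataReader reader = sourceCmd.ExecuteReader())
        // ...and bulk-insert them into the destination table.
        using (SqlBulkCopy bulkCopy = new SqlBulkCopy(destConn))
        {
            bulkCopy.DestinationTableName = tableName;
            bulkCopy.BatchSize = 1000;       // commit in batches of 1,000 rows
            bulkCopy.BulkCopyTimeout = 600;  // allow up to 10 minutes per table
            bulkCopy.WriteToServer(reader);
        }
    }
}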
This bulk copy approach is significantly faster than row-by-row inserts, making it ideal for migrating large tables.
Error Handling and Logging
The program includes robust error handling and logging.
The Console.WriteLine statements provide real-time feedback on the migration process, such as which database or table is being transferred and any errors encountered.
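In the main loop (shown in full below), each database is wrapped in its own try/catch, so a failure in one database is logged without aborting the remaining transfers:

foreach (string dbName in databasesToTransfer)
{
    Console.WriteLine($"Starting transfer for database: {dbName}");
    try
    {
        // ... create the database, enumerate tables, transfer schema and data ...
        Console.WriteLine($"Completed transfer for database: {dbName}");
    }
    catch (Exception ex)
    {
        // Log the failure and continue with the next database.
        Console.WriteLine($"Error transferring database {dbName}: {ex.Message}");
    }
}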
Why Use Microsoft.Data.SqlClient?
The Microsoft.Data.SqlClient library is a modern replacement for the older System.Data.SqlClient, offering improved performance, support for Azure SQL, and compatibility with .NET Core and .NET 5+. Key benefits include:
- Efficient Data Access: It provides classes like SqlConnection, SqlCommand, and SqlDataReader for executing SQL queries and retrieving results.
- Bulk Copy Support: The SqlBulkCopy class enables high-performance data transfers, critical for large datasets.
- Azure Compatibility: It supports Azure SQL Database features, such as connection retries and secure authentication.
Best Practices and Improvements
While the code is functional, here are some suggestions for production use:
- Secure Credentials: Store connection strings and credentials in a secure configuration (e.g., Azure Key Vault or environment variables).
- Parameterized Queries: Replace string concatenation in SQL queries (e.g., WHERE name = '{dbName}') with parameterized queries to prevent SQL injection; see the sketch after this list.
- Logging: Replace Console.WriteLine with a proper logging framework like Serilog or Microsoft.Extensions.Logging for better traceability.
- Parallel Processing: For large migrations, consider parallelizing table transfers using Parallel.ForEach to improve performance, but ensure proper connection management to avoid overloading the database.
- Schema Validation: Add validation to ensure the schema transfer was successful before copying data.
- Transaction Support: Use transactions for data transfers to ensure consistency in case of partial failures.
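As a minimal sketch of the parameterized-query suggestion, the database-existence check in CreateDatabaseIfNotExists could be rewritten like this:

// Parameterized version of the existence check - no string concatenation.
string checkDbSql = "SELECT COUNT(*) FROM sys.databases WHERE name = @dbName";
using (SqlCommand checkCmd = new SqlCommand(checkDbSql, conn))
{
    checkCmd.Parameters.AddWithValue("@dbName", dbName);
    int exists = (int)checkCmd.ExecuteScalar();
    if (exists > 0)
    {
        Console.WriteLine($"Database {dbName} already exists - skipping creation");
        return;
    }
}

Note that the CREATE DATABASE statement itself cannot take the database name as a parameter, so that value still needs to be validated or escaped (for example with QUOTENAME) before being embedded in the SQL text.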
The full source code is shown below:
using System;
using System.Collections.Generic;
using Microsoft.Data.SqlClient;
internal class Program
{
private static void Main(string[] args)
{
string awsServer = "your-aws-server";
string awsUsername = "your-username";
string awsPassword = "your-password";
string azureServer = "your-azure-server.database.windows.net";
string azureUsername = "your-username";
string azurePassword = "your-password";
List<string> databasesToTransfer = new List<string> { "pass1", "pass2", "pass3" };
foreach (string dbName in databasesToTransfer)
{
Console.WriteLine($"Starting transfer for database: {dbName}");
try
{
string awsConnStr = $"Server={awsServer};Database={dbName};User Id={awsUsername};Password={awsPassword};TrustServerCertificate=True;";
string azureConnStr = $"Server={azureServer};Database={dbName};User Id={azureUsername};Password={azurePassword};TrustServerCertificate=True;";
CreateDatabaseIfNotExists(azureServer, dbName, azureUsername, azurePassword);
List<string> tables = GetAllTables(awsConnStr);
foreach (string table in tables)
{
Console.WriteLine($"Transferring table: {table}");
TransferTable(awsConnStr, azureConnStr, table);
}
Console.WriteLine($"Completed transfer for database: {dbName}");
}
catch (Exception ex)
{
Console.WriteLine($"Error transferring database {dbName}: {ex.Message}");
}
}
}
static void CreateDatabaseIfNotExists(string server, string dbName, string user, string pwd)
{
string masterConnStr = $"Server={server};Database=master;User Id={user};Password={pwd};";
using (SqlConnection conn = new SqlConnection(masterConnStr))
{
conn.Open();
string checkDbSql = $"SELECT COUNT(*) FROM sys.databases WHERE name = '{dbName}'";
using (SqlCommand checkCmd = new SqlCommand(checkDbSql, conn))
{
int exists = (int)checkCmd.ExecuteScalar();
if (exists > 0)
{
Console.WriteLine($"Database {dbName} already exists - skipping creation");
return;
}
}
int maxRetries = 3;
int retryDelaySeconds = 10;
for (int attempt = 1; attempt <= maxRetries; attempt++)
{
try
{
Console.WriteLine($"Creating database {dbName} (attempt {attempt} of {maxRetries})");
string createDbSql = $@"CREATE DATABASE [{dbName}] (EDITION = 'Basic', SERVICE_OBJECTIVE = 'Basic')";
using (SqlCommand cmd = new SqlCommand(createDbSql, conn))
{
cmd.CommandTimeout = 120;
cmd.ExecuteNonQuery();
}
Console.WriteLine($"Successfully created database {dbName}");
return;
}
catch (SqlException ex) when (attempt < maxRetries)
{
Console.WriteLine($"Attempt {attempt} failed: {ex.Message}");
Console.WriteLine($"Waiting {retryDelaySeconds} seconds before retry...");
System.Threading.Thread.Sleep(retryDelaySeconds * 1000);
retryDelaySeconds *= 2;
}
}
throw new Exception($"Failed to create database {dbName} after {maxRetries} attempts");
}
}
static List<string> GetAllTables(string connectionString)
{
List<string> tables = new List<string>();
using (SqlConnection conn = new SqlConnection(connectionString))
{
conn.Open();
string sql = "SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = 'BASE TABLE'";
using (SqlCommand cmd = new SqlCommand(sql, conn))
using (SqlDataReader reader = cmd.ExecuteReader())
{
while (reader.Read())
{
tables.Add(reader.GetString(0));
}
}
}
return tables;
}
static void TransferTable(string sourceConnStr, string destConnStr, string tableName)
{
TransferTableSchema(sourceConnStr, destConnStr, tableName);
TransferTableData(sourceConnStr, destConnStr, tableName);
}
static void TransferTableSchema(string sourceConnStr, string destConnStr, string tableName)
{
string createTableScript = GetTableCreationScript(sourceConnStr, tableName);
using (SqlConnection destConn = new SqlConnection(destConnStr))
{
destConn.Open();
string dropSql = $"IF OBJECT_ID('{tableName}', 'U') IS NOT NULL DROP TABLE [{tableName}];";
using (SqlCommand dropCmd = new SqlCommand(dropSql, destConn))
{
dropCmd.ExecuteNonQuery();
}
using (SqlCommand createCmd = new SqlCommand(createTableScript, destConn))
{
createCmd.ExecuteNonQuery();
}
}
}
static string GetTableCreationScript(string connectionString, string tableName)
{
string script = $"CREATE TABLE [{tableName}] (\n";
List<string> columns = new List<string>();
using (SqlConnection conn = new SqlConnection(connectionString))
{
conn.Open();
string columnSql = $@"
SELECT
c.name AS ColumnName,
t.name AS DataType,
c.max_length,
c.precision,
c.scale,
c.is_nullable,
c.is_identity
FROM sys.columns c
JOIN sys.types t ON c.user_type_id = t.user_type_id
WHERE c.object_id = OBJECT_ID('{tableName}')
ORDER BY c.column_id";
using (SqlCommand cmd = new SqlCommand(columnSql, conn))
using (SqlDataReader reader = cmd.ExecuteReader())
{
while (reader.Read())
{
string columnName = reader["ColumnName"].ToString();
string dataType = reader["DataType"].ToString();
int maxLength = Convert.ToInt32(reader["max_length"]);
int precision = Convert.ToInt32(reader["precision"]);
int scale = Convert.ToInt32(reader["scale"]);
bool isNullable = Convert.ToBoolean(reader["is_nullable"]);
bool isIdentity = Convert.ToBoolean(reader["is_identity"]);
string columnDef = $"[{columnName}] {dataType}";
if (dataType == "varchar" || dataType == "nvarchar" || dataType == "char" || dataType == "nchar")
{
columnDef += maxLength == -1 ? "(MAX)" : $"({maxLength})";
}
else if (dataType == "decimal" || dataType == "numeric")
{
columnDef += $"({precision},{scale})";
}
if (isIdentity)
columnDef += " IDENTITY(1,1)";
if (!isNullable)
columnDef += " NOT NULL";
columns.Add(columnDef);
}
}
string pkSql = $@"
SELECT COL_NAME(ic.object_id, ic.column_id) AS ColumnName
FROM sys.indexes i
JOIN sys.index_columns ic ON i.object_id = ic.object_id AND i.index_id = ic.index_id
WHERE i.is_primary_key = 1 AND ic.object_id = OBJECT_ID('{tableName}')";
List<string> pkColumns = new List<string>();
using (SqlCommand cmd = new SqlCommand(pkSql, conn))
using (SqlDataReader reader = cmd.ExecuteReader())
{
while (reader.Read())
{
pkColumns.Add(reader["ColumnName"].ToString());
}
}
script += string.Join(",\n", columns);
if (pkColumns.Count > 0)
{
script += $",\nCONSTRAINT [PK_{tableName}] PRIMARY KEY ([{string.Join("], [", pkColumns)}])";
}
script += "\n)";
}
return script;
}
static void TransferTableData(string sourceConnStr, string destConnStr, string tableName)
{
using (SqlConnection sourceConn = new SqlConnection(sourceConnStr))
using (SqlConnection destConn = new SqlConnection(destConnStr))
{
sourceConn.Open();
destConn.Open();
string selectSql = $"SELECT * FROM [{tableName}]";
using (SqlCommand sourceCmd = new SqlCommand(selectSql, sourceConn))
using (SqlDataReader reader = sourceCmd.ExecuteReader())
{
using (SqlBulkCopy bulkCopy = new SqlBulkCopy(destConn))
{
bulkCopy.DestinationTableName = tableName;
bulkCopy.BatchSize = 1000;
bulkCopy.BulkCopyTimeout = 600;
try
{
bulkCopy.WriteToServer(reader);
}
catch (Exception ex)
{
Console.WriteLine($"Error copying data for table {tableName}: {ex.Message}");
throw;
}
}
}
}
}
}