
N-Tier Development with Microsoft .NET : Part I

Posted by Timothy A. Vanover | Articles | Enterprise Development | February 15, 2002

This is the first installment of a multi-part article on N-Tier development with Microsoft Visual Studio .NET. Windows DNA will be examined in retrospect and used as a starting point to expand upon. Comparisons with the .NET world will be made using the code examples contained in the projects in the TVanover.zip file, which demonstrate the proof of concept for this series of articles.

The purpose of this article is to examine a proof of concept for an architecture that follows the DNA pattern in concept only. It will cover each of the layers, some in more depth than others, to open up possibilities for .NET developers.

DNA Architecture

If an alternative definition of DNA were brought down to layman's terms, it would be scalability. Microsoft introduced the concept that by developing applications in function-specific layers, or tiers, an application can be scaled almost endlessly and maintained with a low total cost of ownership. These tiers are Presentation, Business, Data Access, and Data Storage. Splitting the tiers creates boundaries and encapsulates specific functionality away from tiers that should have no knowledge of it. For example, the UI in the Presentation tier should never have direct database access or access to the Data Access layer; it should only have access to the Business logic.

Scaling Up
This describes an application in terms of where it resides on hardware. An application with logically separated tiers can have multiple tiers on the same server. This type of application design is limited by the amount of memory and the number of processors that can be added to the hardware. This is scaling up. Figure 1 shows all tiers located on a single application server; it is an example of logically separated N-Tier development.

Figure 1: NTierDevNetPaImg1.gif

Scaling Out
Splitting the layers onto different, physically separated tiers allows scaling out. In this paradigm the tiers reside on different servers, and more web servers and application servers can be added so that load balancing and clustering can handle a larger number of simultaneous users. Figure 2 shows a somewhat more complex example of this concept.

Figure 2: NTierDevNetPaImg2.gif

Figure 3 shows a Visual Studio .NET solution for physically separated middle-tier components. Notice the difference: no proxies are brought to the web server from the business tier, as was the case in the DNA COM architecture shown in Figure 1. Instead, a contract is exposed to the web application through the Simple Object Access Protocol (SOAP), which essentially takes the place of DCOM and the proxies generated by middle-tier COM components. A proxy class that allows asynchronous access to the web service can be generated using Wsdl.exe.


Figure 3: NTierDevInNetP1Img11.gif

This article will focus on the physical separation of tiers, which enables the scalability required in enterprise web-based applications. This installment focuses on the Data Access layer; the following articles will continue walking up the tiers until the entire proof-of-concept application is constructed.

COM+ Data Components

MTS and COM+ allowed for the creation of rich middle-tier components that abstracted the data and business logic from the user interface and centralized the maintenance of that encapsulated logic. Developers had several options for connecting to data through the command objects created in the data tier to execute stored procedures on SQL Server databases. One of the drawbacks in creating data access components has been the maintainability of the components as stored procedures change. When the parameters of a stored procedure change, the data access components must change and be recompiled to accommodate the new parameter, the removed parameter, the changed data type, or the new parameter order. Thus there exists an implied dependency between the data access components and the actual stored procedures that reside in the DBMS. Such practices in object design and creation do not support object reuse, and recoding of basic logic frequently occurs as a result. The focus here will be on these issues, and a more flexible architecture and design utilizing the features offered in the Microsoft .NET Framework will be illustrated.

Hard Coded Parameters

In general, hard coding the parameters was the most efficient approach in terms of performance when creating the command object's Parameters collection in the old world of ADODB and COM. It was also a maintenance issue throughout the development cycle, as fields and parameters changed in the database to accommodate ongoing project development. Large applications with several data access libraries, each containing multiple classes defining different entity-based methods, could end up with thousands of lines of code dedicated to creating the parameters and setting their proper types, sizes, directions, and precision. Each time parameters were added, removed, or changed, compatibility was broken between the business components and the data access components, as well as between the data access components and the stored procedures. In some situations, adding a parameter or changing the order of a parameter could result in a logic error: the compilation was successful, but at runtime you would get data corruption, type mismatches, overflows, any number of unexpected errors, or difficult-to-find bugs. Listing 1 shows typical Visual Basic 6.0 code using hard-coded parameters of the ADODB Command object to communicate with the database.

Listing 1 Hard coded parameters

With cmProduct
    .ActiveConnection = cnProduct
    .CommandText = "usp_ProductUpd"
    .CommandType = adCmdStoredProc
    '@ProductID parameter
    Set prmProduct = .CreateParameter("@ProductID", adInteger, adParamInput, , ProductID)
    .Parameters.Append prmProduct
    '@ShelfID parameter
    Set prmProduct = .CreateParameter("@ShelfID", adInteger, adParamInput, , ShelfID)
    .Parameters.Append prmProduct
    '@BinID parameter
    Set prmProduct = .CreateParameter("@BinID", adInteger, adParamInput, , BinID)
    .Parameters.Append prmProduct
    .Execute
End With

Parameters Refresh

The ADODB Command object's Parameters collection contained a Refresh method that allowed developers to create the parameters dynamically. This helped overcome the maintainability issues involved with hard-coded parameters, but at a measurable performance cost due to multiple calls to the database. When the parameters were refreshed, a connection was made to the database to query for them, and then a second trip to the database was made when the command was executed.

In the case of physically separated tiers, this created a measurable performance hit on the application and therefore a tradeoff of development time versus application performance. Compatibility issues were reduced when adding or removing parameters because the method parameters were passed inside a variant array. The only work needed was to add the new fields or parameters to the UI, the business object, and the stored procedure, and to ensure that the ordinal position of each value passed in all layers corresponded to the intended parameter in the stored procedure. Listing 2 illustrates the use of the Refresh method.

Listing 2 Parameters Refresh

With oCmd
    .ActiveConnection = oCnn
    .CommandText = "usp_ProductUpd"
    .CommandType = adCmdStoredProc
    .Parameters.Refresh
    'zeroth parameter is @RETURN_VALUE
    For intValues = 1 To .Parameters.Count - 1
        .Parameters(intValues).Value = varValues(intValues - 1)
    Next intValues
End With

XML Template Adapter

There was a little stir in 2000 when several faster alternatives evolved. One of them was encapsulating the parameter definitions in XML-based template files and populating the Parameters collection of the command object from the attributes, or schema, of the XML file. Performance was gained over the Refresh method and there was less maintenance associated with changes to the data model. XML templates can be generated automatically into the appropriate read directory required for the data component. A caveat is that some file IO is required for each command object being created, unless the XML files are cached in IIS. Someone also had to write a new template file whenever a stored procedure changed. Even so, this eased maintenance somewhat, and the performance hit was a fair trade-off for reduced project development time and easier maintenance. When parameters changed in a stored procedure there was no compatibility issue, as the parameters were passed to the method encapsulated inside a variant array. The business and UI tiers were the only other places where changes had to be propagated in order to have proper communication across all tiers.

Listing 3 XML Template Adapter

'load the attribute data from the xml file back into the parameters collection
For Each objElement In objNodeList
    If lngElement = -1 Then 'write @RETURN_VALUE parameter
        Set prmUpdate = .CreateParameter(, _
            CLng(objElement.getAttribute(ATTRIBUTE_DATA_TYPE)), _
            CLng(objElement.getAttribute(ATTRIBUTE_DIRECTION)), _
            CLng(objElement.getAttribute(ATTRIBUTE_SIZE)), Null)
        'used to keep the parameter in line with the variant array
        lngElement = 0
    Else 'write the rest of the parameters
        Set prmUpdate = .CreateParameter(, _
            CLng(objElement.getAttribute(ATTRIBUTE_DATA_TYPE)), _
            CLng(objElement.getAttribute(ATTRIBUTE_DIRECTION)), _
            CLng(objElement.getAttribute(ATTRIBUTE_SIZE)), _
            Parameters(lngElement))
        'used to keep the parameter in line with the variant array
        lngElement = lngElement + 1
    End If
    'assign precision to the parameter in case of a decimal
    prmUpdate.Precision = CLng(objElement.getAttribute(ATTRIBUTE_PRECISION))
    .Parameters.Append prmUpdate
Next 'objElement
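The same template idea carries over to .NET, where System.Xml replaces the MSXML calls above. The following is a minimal C# sketch written for this article rather than taken from the TVanover.zip projects; the Parameter element name and the DataType, Direction, Size, and Precision attribute names are assumptions standing in for whatever schema a real template file would use.

```csharp
using System;
using System.Collections.Generic;
using System.Xml;

// Minimal C# sketch of the XML template adapter idea from Listing 3.
// The element and attribute names used here are assumptions, not the
// schema shipped with the original project.
public class XmlTemplateSketch
{
    public struct ParamDef
    {
        public int DataType;
        public int Direction;
        public int Size;
        public int Precision;
    }

    // read each Parameter element's attributes into a definition that a
    // data component could turn into a command parameter
    public static List<ParamDef> LoadTemplate(string xml)
    {
        XmlDocument doc = new XmlDocument();
        doc.LoadXml(xml);
        List<ParamDef> defs = new List<ParamDef>();
        foreach (XmlElement el in doc.SelectNodes("//Parameter"))
        {
            ParamDef def;
            def.DataType  = Convert.ToInt32(el.GetAttribute("DataType"));
            def.Direction = Convert.ToInt32(el.GetAttribute("Direction"));
            def.Size      = Convert.ToInt32(el.GetAttribute("Size"));
            def.Precision = Convert.ToInt32(el.GetAttribute("Precision"));
            defs.Add(def);
        }
        return defs;
    }
}
```

A template such as `<Template><Parameter DataType='3' Direction='1' Size='4' Precision='10'/></Template>` would yield one definition; in practice the file would be read from disk once and cached, for the IO reasons discussed above.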

TVanover.DDL

Figure 4 illustrates the architecture associated with the rest of the article installments. The top center rectangle is TVanover.DDL, or the Dynamic Data Layer. This is a class library that contains both transactional and non-transactional methods that are commonly used in an application.

Figure 4: NTierDevWithNetP1Img6.gif

In the Human Resource Web Service examine the SearchEmployees method shown in Listing 4.

Listing 4

[WebMethod(TransactionOption=TransactionOption.NotSupported)]
public DataSet SearchEmployees(string firstName, string lastName)
{
    ExecDataSet oExec = new ExecDataSet();
    DataSet dsReturn = new DataSet("Employees");
    object[] oParameters = new Object[] {firstName, lastName};
    dsReturn = oExec.ReturnDataSet("usp_EmployeeSearch", oParameters);
    return dsReturn;
}

This method has an attribute set so that it takes part in no transaction, as selecting records where data is not altered should not be contained in a transaction. Two parameters are passed to the method for the first name and last name of the individual being sought. An instance of the ExecDataSet class is created, the two parameters are wrapped inside an object array, the stored procedure name and object array are passed to the ReturnDataSet method, and it is executed to return a DataSet.

Listing 5 goes inside the ExecDataSet.ReturnDataSet method. 

Listing 5

public DataSet ReturnDataSet(string storedProc, params object[] oParameters)
{
    DataSet dsReturn = new DataSet();
    SqlDataAdapter oAdapter = DataAdapter.GetSelectParameters(storedProc, oParameters);
    try
    {
        oAdapter.Fill(dsReturn);
    }
    catch(Exception oException)
    {
        if (!EventLog.SourceExists(APPLICATION_NAME))
        {
            EventLog.CreateEventSource(APPLICATION_NAME, "Application");
        }
        EventLog.WriteEntry(oException.Source, oException.Message, EventLogEntryType.Error);
    }
    finally
    {
        oAdapter.SelectCommand.Connection.Close();
    }
    return dsReturn;
}

The params keyword in the ReturnDataSet method allows a varying number of parameters to be sent to the method and is dynamic in nature. It must always be the last parameter in the method signature, and there can be only one params parameter per signature. A DataSet is created, a SqlDataAdapter is created from the return of the static class method DataAdapter.GetSelectParameters, and the newly created DataSet is filled from that SqlDataAdapter. If an error occurs, it is recorded in the application event log. The finally block closes the connection whether or not an error occurred, and the DataSet is returned to the calling method.
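The behavior of params can be seen in isolation. CountArguments below is a throwaway method written for this example, not part of the TVanover.DDL library; it simply reports how many values the params array received.

```csharp
using System;

// Demonstrates the params keyword used by ReturnDataSet. CountArguments is a
// stand-in written for this example only.
public class ParamsDemo
{
    // params must be the last parameter in the signature, and only one
    // params parameter is allowed per method
    public static int CountArguments(string storedProc, params object[] oParameters)
    {
        return oParameters == null ? 0 : oParameters.Length;
    }
}
```

Callers can list the arguments individually, as in `ParamsDemo.CountArguments("usp_EmployeeSearch", "Tim", "Vanover")`, or wrap them in an object[] first, as SearchEmployees does before calling ReturnDataSet; both calls deliver a two-element array to the method.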

In examining the DataAdapter.GetSelectParameters method take a closer look at Listing 6. 

Listing 6

public static SqlDataAdapter GetSelectParameters(string storedProc, params object[] oParameters)
{
    SqlDataAdapter adapter = new SqlDataAdapter();
    try
    {
        //this method requires parameters and will
        //throw an exception if called without them
        if(oParameters == null)
        {
            throw new ArgumentNullException(NULL_PARAMETER_ERROR);
        }
        else
        {
            DataTable oTable = ParameterCache.dsParameters.Tables[storedProc];
            int iParameters = 0;
            SqlCommand oCmd = new SqlCommand(storedProc);
            oCmd.CommandType = CommandType.StoredProcedure;
            //write the parameters collection
            //based on the cache and values sent in
            foreach(DataRow oDr in oTable.Rows)
            {
                oCmd.Parameters.Add(CreateParameter(Convert.ToString(oDr[PARAMETER_NAME]),
                    Convert.ToInt32(oDr[PARAMETER_DIRECTION]), oParameters[iParameters]));
                iParameters++;
            }
            //Add a return parameter
            oCmd.Parameters.Add(CreateParameter());
            oCmd.Connection = Connections.Connection(false);
            adapter.SelectCommand = oCmd;
        }
    }
    catch(Exception oException)
    {
        if (!EventLog.SourceExists(APPLICATION_NAME))
        {
            EventLog.CreateEventSource(APPLICATION_NAME, "Application");
        }
        EventLog.WriteEntry(oException.Source, oException.Message, EventLogEntryType.Error);
    }
    return adapter;
}

If no parameters are passed to this method, an exception is thrown back to the caller. Otherwise a DataTable reference is obtained from the static member ParameterCache.dsParameters, a public static multi-table DataSet that is populated on first access and remains in memory while the web application is running. The table being used is named after the stored procedure that was sent to this method. The foreach iteration builds the command object, populating each parameter from the data in the oTable object and the oParameters values passed to this method. A connection is returned through another static member call that maintains a connection pool and determines whether the connection is enlisted in the root transaction. The SqlDataAdapter is then returned to the calling method to be executed.
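The pairing of cached parameter rows with the incoming params values can be sketched without a database. The class below is a simplified stand-in written for this article: a plain DataTable plays the role of one table in ParameterCache.dsParameters, name=value strings stand in for the SqlParameter objects that CreateParameter actually builds, and the ParameterName column name is an assumption.

```csharp
using System;
using System.Data;

// Sketch of the lookup GetSelectParameters performs against the parameter
// cache. A DataTable stands in for one table of ParameterCache.dsParameters
// (one row per stored procedure parameter); real code builds SqlParameter
// objects rather than the "name=value" strings used here.
public class ParameterCacheSketch
{
    // build a small cache table like one entry of ParameterCache.dsParameters
    public static DataTable MakeCache(params string[] parameterNames)
    {
        DataTable oTable = new DataTable();
        oTable.Columns.Add("ParameterName", typeof(string));
        foreach (string name in parameterNames)
            oTable.Rows.Add(name);
        return oTable;
    }

    // pair each cached parameter name with the positional value sent in,
    // mirroring the foreach loop in Listing 6
    public static string[] BuildParameters(DataTable oTable, params object[] oParameters)
    {
        if (oParameters == null)
            throw new ArgumentNullException("oParameters");
        string[] result = new string[oTable.Rows.Count];
        int iParameters = 0;
        foreach (DataRow oDr in oTable.Rows)
        {
            result[iParameters] = Convert.ToString(oDr["ParameterName"]) + "=" + oParameters[iParameters];
            iParameters++;
        }
        return result;
    }
}
```

Because the pairing is positional, the values sent in must match the order of the rows in the cached table, just as the variant-array approaches earlier required matching ordinal positions.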

In looking at the ParameterCache.dsParameters shown in Listing 7 notice that the class has a static constructor. 

Listing 7

public class ParameterCache
{
    public static DataSet dsParameters;
    /// <summary>
    /// Uses a static constructor so that instantiation of
    /// this class is not needed
    /// </summary>
    static ParameterCache()
    {
        //instantiate and fill the dataset
        ParametersLookup oParameters = new ParametersLookup();
        dsParameters = oParameters.FillCache();
    }
}

The static modifier causes the constructor to run the first time the type is used, before anything else, even though the class is never instantiated with the new keyword. Inside the constructor an instance of the ParametersLookup class is created and its FillCache method is called to fill the public static dsParameters DataSet.
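A toy class makes that guarantee easy to see. CacheDemo below is a stand-in for ParameterCache written for this example; its counter verifies that the static constructor body runs exactly once, on first use of the type.

```csharp
using System;

// Demonstrates the static constructor behavior described above. CacheDemo is
// a toy stand-in for ParameterCache: the constructor body runs exactly once,
// before the type is first used, without any call to new.
public class CacheDemo
{
    public static int ConstructorRuns;
    public static string[] ProcedureNames;

    static CacheDemo()
    {
        // the expensive work (FillCache in ParameterCache) happens here, once
        ConstructorRuns = ConstructorRuns + 1;
        ProcedureNames = new string[] { "usp_EmployeeSearch", "usp_ProductUpd" };
    }
}
```

However many times `CacheDemo.ProcedureNames` is read afterward, `ConstructorRuns` stays at 1; this is why only the first call into the parameter cache is expensive.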

In Listing 8 a new DataSet is created along with a SqlCommand object and a SqlDataAdapter. A connection is returned from the Connections class, which is explained in the following section. The usp_ParameterCache stored procedure is called to populate the new DataSet, which then contains a table for each user-defined stored procedure in the database specified in the connection. The zeroth table contains the names of the stored procedures, and each remaining table in the DataSet is named according to its relevant position in the DataSet's Tables collection. The connection is closed in the finally block and the parameters DataSet is returned to the caller.

Listing 8

public DataSet FillCache()
{
    DataSet dsParameters = new DataSet();
    int iTables = 1;
    SqlCommand oCmd = new SqlCommand();
    SqlDataAdapter oDa = new SqlDataAdapter();
    try
    {
        oCmd.CommandText = "usp_ParameterCache";
        oCmd.CommandType = CommandType.StoredProcedure;
        oDa.SelectCommand = oCmd;
        oCmd.Connection = Connections.Connection(false);
        //grab the tables
        oDa.Fill(dsParameters);
        //name the tables based on the names from the first table
        foreach(DataRow oDr in dsParameters.Tables[0].Rows)
        {
            dsParameters.Tables[iTables].TableName = Convert.ToString(oDr["ProcName"]);
            iTables++;
        }
    }
    catch(Exception oException)
    {
        if (!EventLog.SourceExists(APPLICATION_NAME))
        {
            EventLog.CreateEventSource(APPLICATION_NAME, "Application");
        }
        EventLog.WriteEntry(oException.Source, oException.Message, EventLogEntryType.Error);
        throw new Exception(oException.Message, oException);
    }
    finally
    {
        oCmd.Connection.Close();
    }
    return dsParameters;
}

In examining the Connections class notice Listing 9. 

Listing 9

public static SqlConnection Connection(bool enlist)
{
    string Connection = string.Format("User ID=DataLayer;Password=amiinyet;" +
        "Persist Security Info=False;Initial Catalog=Datalayer;" +
        "Data Source=coder;Max Pool Size=15;" +
        "Enlist={0};Connection Reset=true;Application Name=DataAccess;" +
        "Pooling=true;Connection Lifetime=0;", enlist);
    SqlConnection oConn = new SqlConnection(Connection);
    try
    {
        oConn.Open();
    }
    catch(Exception oException)
    {
        if (!EventLog.SourceExists(APPLICATION_NAME))
        {
            EventLog.CreateEventSource(APPLICATION_NAME, EVENT_LOG);
        }
        EventLog.WriteEntry(oException.Source, oException.Message, EventLogEntryType.Error);
    }
    return oConn;
}

The static method Connection takes a single parameter and returns an open SqlConnection that is either enlisted in a transaction or not, depending on the value of the parameter sent in. The new connection is part of a pool; in this instance the pool has a maximum size of 15 connections. The table in Listing 10 has more information on the connection string properties available to the connection object.

Listing 10

User ID: Logon setup in SQL Server
Password: Logon password
Persist Security Info: If a connection has been opened, the password is not sent when it is reused from the pool.
Initial Catalog: Database
Data Source: Database server
Max Pool Size: Number of connections to allow in the pool for reuse. Care must be taken with Max Pool Size, as under loaded conditions it will allow that many connections to the database.
Enlist: Add the connection to the current thread's transaction context.
Connection Reset: Determines whether the database connection is reset when being removed from the pool.
Application Name: Identity of the connection, visible in Perfmon.
Pooling: Draw from the pool if a connection is available, or create a new one and add it to the pool.
Connection Lifetime: If there is no longer an immediate need, remove the connection. A comparison is done in seconds to determine the immediate need.
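Putting the properties above together, the string that Connections.Connection hands to SqlConnection can be built and inspected on its own. This sketch reuses the placeholder credentials from Listing 9 (not a working login); note that a C# bool formats as True or False, and SQL Server accepts either casing for the Enlist keyword.

```csharp
using System;

// Shows how Connections.Connection assembles its connection string with
// string.Format; the Enlist flag is the only part that varies per call.
// The credentials and server name are the placeholder values from Listing 9.
public class ConnectionStringDemo
{
    public static string Build(bool enlist)
    {
        return string.Format(
            "User ID=DataLayer;Password=amiinyet;Persist Security Info=False;" +
            "Initial Catalog=Datalayer;Data Source=coder;Max Pool Size=15;" +
            "Enlist={0};Connection Reset=true;Application Name=DataAccess;" +
            "Pooling=true;Connection Lifetime=0;", enlist);
    }
}
```

Because only the Enlist value changes between calls, a production version might cache the two finished strings instead of reformatting on every connection request.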

In conclusion, it should be noted that the ParameterCache.dsParameters DataSet is populated only one time, and that accessing the parameters is nearly instantaneous on subsequent calls; only the first call is expensive. Some experimenting should also be done with the DDL to determine whether it is better suited to run in the context of a library or a server application, as defined in the AssemblyInfo.cs of the project. The library will also register itself in COM+ Services the first time it is run, under the ApplicationName attribute defined there as well.

The next installment of this article will be geared toward Web Services and controlling the transactions, contained in the business Façade layer, that the other tiers are enlisted in.
