
Multi-threaded Asynchronous Programming in C#... Getting started.

Posted by Matthew Cochran in Articles | Design & Architecture, May 12, 2007
Trying to build better solutions and growing as a developer has always been fun for me, and I'm always looking for ways to build more scalability and robustness into the software I write. I recently had a revelation: I've been doing a lot of C# tweaking without taking advantage of the core performance enhancements cooked right into the platform, and it all comes down to one thing: asynchronous multi-threaded programming.

Part I. Overview

If you ever get a chance to hear Jeffrey Richter speak, it is definitely worth it. I can sum up my learning for today from Devscovery in a couple of points.

1) Don't create new threads! Borrow them from the ThreadPool. (Lease threads, don't buy 'em.)

Jeffrey pointed out that running more threads than there are processors on the machine will cause context switches. Context switches are expensive and will degrade the performance of any application, so if we want to create a new Thread we need a really (really) good reason to do so, and we should be able to justify it, as with a Windows GUI where we are willing to sacrifice overall performance for a slicker user experience. For a middle-tier DLL, we should be able to keep it down to around one thread per processor on the machine (as long as no other threads are sleeping) to avoid context switches, and if we use the baked-in ThreadPool, it will be optimized for us.
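To make the "lease, don't buy" point concrete, here's a minimal standalone sketch (not from the talk) of borrowing a pool thread with ThreadPool.QueueUserWorkItem; the ManualResetEvent is only there so the demo doesn't exit before the work item runs:

```csharp
using System;
using System.Threading;

class PoolDemo
{
    static void Main()
    {
        ManualResetEvent done = new ManualResetEvent(false);

        // Borrow a thread from the pool instead of new Thread(...).Start().
        ThreadPool.QueueUserWorkItem(delegate(object state)
        {
            Console.WriteLine("On a pool thread: " + Thread.CurrentThread.IsThreadPoolThread);
            done.Set();
        });

        done.WaitOne(); // demo only: wait for the work item before exiting
    }
}
```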

2) Architecting a system with asynchronous calls will automatically take advantage of the ThreadPool which will manage the threads for us.

If we use the asynchronous model to develop our applications, the runtime will take care of scaling our apps to however many processors are running on the box. So we can build software that takes advantage of quad-core processors and, in the (near) future, 16-core, 32-core, 64-core or even more processors. If we build apps with the asynchronous model in mind, they will scale to whatever box we install them on.
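A quick way to see that the pool sizes itself to the box (a side demo, not part of the article's sample code) is to ask it directly:

```csharp
using System;
using System.Threading;

class PoolInfo
{
    static void Main()
    {
        int workers, io;
        ThreadPool.GetMaxThreads(out workers, out io);

        // The pool's limits are derived from the machine it's running on.
        Console.WriteLine("Processors: " + Environment.ProcessorCount);
        Console.WriteLine("Max pool worker threads: " + workers);
    }
}
```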

Part II. The Paradigm Shift

Async programming is a big paradigm shift from procedural coding like in C or VB6, and also a bit of a paradigm shift from straight OOP programming with C#. Basically we have to break our standard methods that handle I/O (input/output) operations into separate "pieces": one that issues the request and one that handles the results. The farther we can carry this through the call stack, the fewer blocks we'll have on running threads, and thus the fewer "stalls" where the processor is waiting on something else (like a disk read).

To give an example of splitting a method, I'll show you some basic code I'm working on to demonstrate to my colleagues what I took away from Jeffrey's presentation.

Let's say we have an object that contains a long-running process. Normally this would be an I/O operation like a database query or a file read or write, but for demonstration purposes, we'll use a "fake" long-running process.

In the example below we have a slow-running IntFetcher.Fetch() which is meant to simulate either a database call, a web service call, or some kind of disk I/O.

    public static class IntFetcher
    {
        public static int Fetch()
        {
            // Simulate a long-running I/O process
            Thread.Sleep(150); // DON'T EVER DO THIS! FOR DEMO PURPOSES ONLY!
            return 5;
        }
    }

If we have a method that calls IntFetcher.Fetch(), the main thread in our app is basically locked up until the end of the method call. This can be a complete waste of time if the main thread is just waiting for file I/O being handled by the hardware, a web service call waiting on a remote server, or a db call (imagine if we have thousands of users making this call from a web form… it could be a pretty ugly situation). Our main thread will get hung up on the slow method call waiting for a response when we could be putting it to work doing other things – like handling other requests.

    public class BadPerf
    {
        public int FetchInt()
        {
            Console.WriteLine("Start Fetchin..");
            int i = IntFetcher.Fetch();
            Console.WriteLine("Got " + i);
            Console.WriteLine("End Fetchin...");
            return i;
        }
    }

To take advantage of async calls, we have to break up our BadPerf.FetchInt() method (above) into two parts, one of which is responsible for the request to IntFetcher.Fetch() and the other which is responsible for handling the response.

A recipe I found handy to do this with delegates is as follows:

1) Create a delegate to handle the method call. I'll create a generic one that returns some type of object we can use and that matches the signature of the method we're calling.

public delegate TOutput SomeMethod<TOutput>();
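Just to sanity-check the delegate on its own (a standalone demo, separate from the article's classes; FortyTwo is a made-up method): any parameterless method returning T can be assigned to it via method group conversion and invoked through it.

```csharp
using System;

public delegate TOutput SomeMethod<TOutput>();

class DelegateDemo
{
    static int FortyTwo() { return 42; }

    static void Main()
    {
        SomeMethod<int> m = FortyTwo; // method group conversion: no parentheses
        Console.WriteLine(m());       // invokes FortyTwo through the delegate
    }
}
```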

2) Split the method into two separate parts and kick off the request using the delegate's BeginInvoke() method. BeginInvoke() takes two arguments: an AsyncCallback object and an object representing the state of the async call. The AsyncCallback parameter is a delegate that points to a method that takes one argument (IAsyncResult) and has no return value. The state object is what we'll use to transfer the result of our long-running function.

To wire up the AsyncCallback, we'll need a method to handle the response we get from the long-running IntFetcher.Fetch() method with the following signature:

    public void EndFetchInt(IAsyncResult result)

It's a good idea to start our new split function names with "Begin" and "End" for consistency with other classes in the framework. Here's the class that splits our method call into two parts:

    public class GoodPerf
    {
        public void BeginFetchInt()
        {
            Console.WriteLine("Start Async Fetchin..");
            SomeMethod<int> method = IntFetcher.Fetch; // method group, not a call
            method.BeginInvoke(EndFetchInt, method);
        }

        public void EndFetchInt(IAsyncResult result)
        {
            SomeMethod<int> method = result.AsyncState as SomeMethod<int>;
            int i = method.EndInvoke(result);
            Console.WriteLine("Got " + i);
            Console.WriteLine("End Async Fetchin...");
        }
    }

After we call method.BeginInvoke(), the main thread is free to continue executing other code, and we'll get another thread from the ThreadPool to execute EndFetchInt() once the result is ready.

Notice how we passed the delegate itself as the state parameter to BeginInvoke(), which allows us to retrieve it inside EndFetchInt(). This enables us to get the result of the long-running call using method.EndInvoke().

Our results may be a bit confusing at first, but they make sense if you think about it. The main thread executes through to the Console.ReadLine() call while the long-running method is still in progress, so the async response doesn't fire until later.

    Start Fetchin..
    Got 5
    End Fetchin...
    Start Async Fetchin..
    Done...
    Got 5
    End Async Fetchin...
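The article doesn't show the Main that drives this, so here is a compressed, self-contained sketch of one that produces that ordering (it folds the fetcher and delegate into one file, and waits with Thread.Sleep instead of Console.ReadLine so the demo ends on its own; async delegates require the .NET Framework runtime the article targets):

```csharp
using System;
using System.Threading;

class Program
{
    delegate int SomeMethod();
    static int Fetch() { Thread.Sleep(150); return 5; } // fake long-running I/O

    static void Main()
    {
        // Synchronous version: blocks the main thread for the full 150 ms.
        Console.WriteLine("Start Fetchin..");
        Console.WriteLine("Got " + Fetch());
        Console.WriteLine("End Fetchin...");

        // Asynchronous version: BeginInvoke returns immediately.
        Console.WriteLine("Start Async Fetchin..");
        SomeMethod method = Fetch;
        method.BeginInvoke(delegate(IAsyncResult ar)
        {
            Console.WriteLine("Got " + method.EndInvoke(ar));
            Console.WriteLine("End Async Fetchin...");
        }, null);

        Console.WriteLine("Done...");  // prints before the async result arrives
        Thread.Sleep(300);             // demo only: keep the main thread alive
    }
}
```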

One important thing to keep in mind: if the main thread terminates (is allowed to leave the Main method), the CLR will begin shutdown and finalization, and the async response method may never fire.
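One defensive sketch (my own assumption, not from the article): hold on to the IAsyncResult and block on its wait handle before Main returns, so the process can't exit out from under the async call. Again, this uses async delegates and so assumes the .NET Framework runtime.

```csharp
using System;
using System.Threading;

class WaitDemo
{
    delegate int SomeMethod();                           // mirrors the article's delegate
    static int Fetch() { Thread.Sleep(150); return 5; }  // fake long-running I/O

    static void Main()
    {
        SomeMethod method = Fetch;
        IAsyncResult result = method.BeginInvoke(null, null);

        // Block until the async call has finished before Main can return.
        // (EndInvoke would also block; WaitOne just makes the intent explicit.)
        result.AsyncWaitHandle.WaitOne();
        Console.WriteLine("Got " + method.EndInvoke(result));
    }
}
```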

I'm currently working on more articles to dig into this subject further. I'll get into some neat tricks and show how to make async calls all the way from a Windows Form down to the db calls, or from a web service or web form down to a file read, so we don't waste any CPU cycles and can build extremely scalable solutions.

Until next time,

Happy coding
