
Multithreading Part I: Multithreading and Multitasking

Posted by Manisha Mehta | Articles | Multithreading in C# | April 08, 2002
In this article and the series that follows, we will learn about threads and how to write multithreaded programs in C#.


In this article we will learn what threads are and why they are needed. In Part 2 we will learn about threading with respect to the .NET Framework. By the end of this article series you will have learned how to create and manage threads, set thread priorities, determine thread states, and use thread pooling and thread synchronization.

Introduction

When computers were first invented, they could execute only one program at a time: once one program had finished executing, the next one was picked up, and so on. With time, the concept of timesharing was developed, whereby each program was given a specific amount of processor time, and when its time was up, the next program waiting in the queue was called upon (this is called multitasking, and we will learn more about it soon). Each running program (called a process) had its own memory space, its own stack and heap, and its own set of variables. One process could spawn another, but once that occurred the two behaved independently of each other. Then the next big thing happened: programs wanted to do more than one thing at the same time (this is called multithreading, and we will learn what it is soon). A browser, for example, might want to download one file in one window while uploading another and printing a third. This ability of a program to do multiple things simultaneously is implemented through threads (a detailed description of threads follows soon).

Multitasking vs. Multithreading

As explained above, multitasking is the ability of an operating system to execute more than one program simultaneously. Though we say "simultaneously," in reality no two programs on a single-processor machine can execute at the same instant: the CPU switches from one program to the next so quickly that it appears as if all of them are running at the same time. Multithreading is the ability of an operating system to execute the different parts of a single program, called threads, simultaneously. The program has to be designed well so that the different threads do not interfere with one another. This concept helps you create scalable applications, because you can add threads as and when needed. Individual programs are all isolated from each other in terms of their memory and data, but individual threads are not: they all share the same memory and data variables. Hence, implementing multitasking in an operating system is relatively easier than implementing multithreading.
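The point that threads share the program's memory can be seen directly. In the minimal sketch below (class and method names are my own, not from the article), two threads increment the same static field; both see and update the one copy, something two separate processes could not do. `Interlocked.Increment` is used so the concurrent updates do not corrupt the count.

```csharp
using System;
using System.Threading;

class SharedCounter
{
    // This field lives in the process's memory and is visible to every thread.
    static int counter = 0;

    static void Increment()
    {
        for (int i = 0; i < 100000; i++)
            Interlocked.Increment(ref counter); // atomic: safe across threads
    }

    static void Main()
    {
        Thread t1 = new Thread(new ThreadStart(Increment));
        Thread t2 = new Thread(new ThreadStart(Increment));
        t1.Start();
        t2.Start();
        t1.Join(); // wait for both threads to finish
        t2.Join();

        // Both threads updated the same variable.
        Console.WriteLine("Final count: " + counter); // prints 200000
    }
}
```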

Hey, wait! All that's fine, but I still do not fully understand what threads are!

What is a thread?

A thread can be defined as a semi-process with a definite starting point, an execution sequence and a terminating point. It maintains its own stack, where it keeps its exception handlers, its scheduling priority and the other details the system needs to reactivate it.

Well, that sounds like a complete process to me, so why do we call it a semi-process?

That is because a full-blown process has its own memory area and data, whereas a thread shares memory and data with the other threads.

A process (or program), therefore, can consist of many such threads, each running at the same time within the program and performing a unique task.

Threads are also called lightweight processes that appear to run in parallel with the main program. They are called lightweight because they run within the context of the full-blown program, taking advantage of the resources allocated to that program.

On a single processor system, threads can be run either in a preemptive mode or in a cooperative mode.

In the preemptive mode, the operating system distributes processor time among the threads and decides which thread should run next once the currently active thread has used up its time slice. The system interrupts the threads at regular intervals to give the next one waiting in the queue a chance, so no thread can monopolize the CPU at any given time. The amount of time given to each thread depends on the processor and the operating system. In reality, the system runs one thread for a couple of milliseconds, then switches to the next, keeping count of all the threads and cycling through them, giving each a small amount of CPU time. The switching between threads is so fast that it appears as if all the threads are running simultaneously.

But what does switching mean? It means that the processor stores the state of the outgoing thread (by noting its current register values and the instruction it was about to execute), restores the state of the incoming thread (by restoring its register values and picking up at the instruction where it left off) and then runs it. This style has its own flaws, though: one thread can interrupt another at any given time. Imagine what would happen if one thread were writing to a file and another interrupted it and started writing to the same file. Windows 95/NT and UNIX use this style of managing their programs and threads.
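Because a preemptive switch can land between any two statements, threads that share a resource must coordinate explicitly. A minimal sketch (names are hypothetical, and a shared StringBuilder stands in for the shared file from the example above): the `lock` statement ensures only one thread appends at a time, so each line comes out whole even though the scheduler interleaves the threads.

```csharp
using System;
using System.Text;
using System.Threading;

class SafeWriter
{
    static StringBuilder log = new StringBuilder();
    static object gate = new object();   // lock object guarding 'log'

    static void WriteLines(string tag)
    {
        for (int i = 0; i < 3; i++)
        {
            lock (gate)   // only one thread may append at a time
            {
                log.Append(tag).Append('-').Append(i).Append('\n');
            }
        }
    }

    static void WriteA() { WriteLines("A"); }
    static void WriteB() { WriteLines("B"); }

    static void Main()
    {
        Thread a = new Thread(new ThreadStart(WriteA));
        Thread b = new Thread(new ThreadStart(WriteB));
        a.Start();
        b.Start();
        a.Join();
        b.Join();
        // Six complete lines; their order depends on the scheduler,
        // but no line is ever torn apart mid-write.
        Console.Write(log.ToString());
    }
}
```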

In the cooperative mode, each thread can hold the CPU for as long as it needs it. In this implementation, one thread can starve all the others of processor time if it so chooses. However, if a thread is not using the processor, it can allow another thread to use it temporarily. A running thread gives up control only if it calls a yield function or does something that would cause it to block, such as performing I/O. Windows 3.x uses this kind of implementation.
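In .NET, one way for a thread to volunteer the rest of its time slice is `Thread.Sleep(0)`, which hints to the scheduler that other runnable threads may run now. A minimal sketch of a busy loop yielding at each step (the method names are my own):

```csharp
using System;
using System.Threading;

class YieldDemo
{
    static void BusyWork()
    {
        for (int i = 0; i < 5; i++)
        {
            Console.WriteLine("working step " + i);
            // Give up the remainder of this time slice so other
            // runnable threads get a chance to run.
            Thread.Sleep(0);
        }
    }

    static void Main()
    {
        Thread t = new Thread(new ThreadStart(BusyWork));
        t.Start();
        t.Join();
    }
}
```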

On some systems, you can have both cooperative and preemptive threads running simultaneously (threads running at high priorities often behave cooperatively, while threads running at normal priorities behave preemptively). Since you cannot be sure whether the system will run threads cooperatively or preemptively, it is always safer to assume that preemption is not available and to design your program so that processor-intensive threads yield control at specific intervals. When the currently running thread yields, it is signalling that it is willing to give up CPU control. The system then looks for ready-to-run threads of the same or higher priority than the current thread; if it finds one, it pauses the current thread and activates the next one waiting in the queue, and if it cannot find one, control returns to the thread that yielded. If a thread wants to give up control and let a thread of lower priority take over, it can instead go into a sleep mode for a certain amount of time, letting the lower-priority thread run.

On a multi-processor system, the operating system can allocate individual threads to separate processors, which speeds up execution of the program. The efficiency of the threads also increases significantly, because distributing threads across several processors is faster than sharing time slices on a single processor. A multi-processor system is particularly useful for tasks such as 3D modeling and image processing.

Are threads actually needed?

We have all fired a print command at some point. Imagine what would happen if the computer stopped responding while the printing was going on: our work would come to a stop until the print job finished. But, as we all know, nothing like that happens. We can carry on with our normal work (editing and saving a file, drawing a graphic, listening to music) without being bothered by the print job, because separate threads are executing all of these tasks. You will also have noticed that a database or a web server interacts with a number of users simultaneously. How is it able to do that? It maintains a separate thread for each user, and hence can maintain the state of every user. Moreover, if a program runs as one sequence, a failure in some part of it can disrupt the functioning of the entire program; but if the different tasks of the program are in separate threads, then even if one part fails, the other threads can execute independently of it and the entire program does not halt.
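The print-job scenario can be sketched in a few lines (a `Thread.Sleep` loop stands in for real spooling, and the names are hypothetical): the print work runs on its own thread while the main thread stays free to continue its own work.

```csharp
using System;
using System.Threading;

class BackgroundPrint
{
    // Simulates a long-running job such as spooling a print request.
    static void PrintJob()
    {
        for (int page = 1; page <= 3; page++)
        {
            Thread.Sleep(100);            // pretend each page takes a while
            Console.WriteLine("printed page " + page);
        }
    }

    static void Main()
    {
        Thread printer = new Thread(new ThreadStart(PrintJob));
        printer.Start();                   // printing continues in the background

        // Meanwhile the main thread stays free for other work.
        Console.WriteLine("editing the document...");

        printer.Join();                    // wait for the job before exiting
        Console.WriteLine("print job finished");
    }
}
```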

Wow! This sounds good. So if we start writing threaded applications, we will never come across those nasty crashes in our programs.

Hold on! We are not done yet.

No doubt, writing multithreaded applications gives you an edge over non-threaded ones, but threading can become very costly if not used judiciously. A few apparent drawbacks: if one program has many threads, threads in other programs naturally get less processor time; a large amount of processor time is consumed simply controlling the threads; and the system needs sufficient memory to store the context information of each thread. A large number of threads is therefore a drain on memory, bogging down and ultimately slowing the entire system. Besides, a program has to be designed really well to support a large number of threads, otherwise threading becomes more of a curse than a boon. And when killing a thread, you need to be aware of the repercussions that it might involve and handle them appropriately.

Designing threaded programs: a few tips

There are numerous ways to design a good multithreaded application. Here we get a general glimpse; as we proceed in the later parts of this series, you will understand things better. Threads can have different priorities (we will see in later parts how to decide their priority levels). Say we need to draw a graphic or do a big mathematical computation while also accepting user input. We should first put each individual task (drawing the image, doing the computation, asking for user input) in a separate thread. We should then give a higher priority to the thread expecting user input, so that its responsiveness is high, and a lower priority to the thread drawing the graphic or doing the calculation, so that these tasks do not bog down the entire CPU.
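In .NET, a thread's priority is set through its `Priority` property using the `ThreadPriority` enumeration. A minimal sketch of the arrangement just described (the two worker methods are hypothetical placeholders; part 2 of this series covers priorities in detail):

```csharp
using System;
using System.Threading;

class PriorityDemo
{
    static void Compute()     { Thread.Sleep(50); } // stands in for a heavy calculation
    static void HandleInput() { Thread.Sleep(10); } // stands in for reading user input

    static void Main()
    {
        Thread worker = new Thread(new ThreadStart(Compute));
        Thread ui     = new Thread(new ThreadStart(HandleInput));

        // Favor the input thread so the program stays responsive;
        // let the computation run when the CPU is otherwise free.
        worker.Priority = ThreadPriority.BelowNormal;
        ui.Priority     = ThreadPriority.AboveNormal;

        Console.WriteLine("worker priority: " + worker.Priority);
        Console.WriteLine("ui priority: " + ui.Priority);

        worker.Start();
        ui.Start();
        worker.Join();
        ui.Join();
    }
}
```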

Again, say the program has to do some processing based on user input. If the processing is long, the user is unnecessarily made to wait until it is over. In such cases we should keep separate threads: one to read user input and another to handle any lengthy operation based on that input. This makes the program more responsive. It also gives the user the flexibility to cancel the operation at any point while the thread is running. Hence, applications that take user input should always have one thread to handle input, which keeps the user interface active at all times, and let the processor-intensive tasks execute on separate threads.
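One simple way to make such a lengthy operation cancellable is a shared flag that the input thread sets and the worker checks between chunks of work. This is only a sketch under that assumption (the names are mine); the `volatile` keyword ensures the worker always sees the input thread's latest write to the flag.

```csharp
using System;
using System.Threading;

class CancellableJob
{
    // 'volatile' ensures the worker always sees the latest value
    // written by the input thread.
    static volatile bool cancelRequested = false;

    static void LongOperation()
    {
        for (int step = 0; step < 100; step++)
        {
            if (cancelRequested)
            {
                Console.WriteLine("operation cancelled at step " + step);
                return;
            }
            Thread.Sleep(10);   // one chunk of the lengthy work
        }
        Console.WriteLine("operation completed");
    }

    static void Main()
    {
        Thread worker = new Thread(new ThreadStart(LongOperation));
        worker.Start();

        // The input thread stays free; here it decides to cancel early.
        Thread.Sleep(50);
        cancelRequested = true;

        worker.Join();
    }
}
```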

In the case of drawing graphics, the application should always be listening for messages (like a repaint command) from the system. If the application gets busy doing other work, the screen might remain blank for a long time, which of course is not very appealing visually. So in such cases it is advisable to have one thread always dedicated to handling messages (like repaint) from the underlying system.

Always remember that a thread managing time-critical tasks should be given a high priority and the others a low priority. For example, a thread listening for client requests should always remain responsive and hence be allotted a high priority. A user-interface thread that manages interactions with users should delegate all requests immediately to worker threads rather than trying to process those requests itself. This way, it remains responsive to users at all times.
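The hand-off pattern just described is commonly built on a shared queue: the listener thread enqueues a request and returns immediately, while a worker thread pulls requests off and processes them. A minimal sketch under those assumptions (class names are hypothetical), using `Monitor.Wait`/`Monitor.Pulse` so the worker sleeps when the queue is empty:

```csharp
using System;
using System.Collections;
using System.Threading;

class Dispatcher
{
    static Queue requests = new Queue();
    static bool done = false;

    // Worker: pull requests off the queue and process them.
    static void Worker()
    {
        while (true)
        {
            string request;
            lock (requests)
            {
                while (requests.Count == 0 && !done)
                    Monitor.Wait(requests);      // sleep until pulsed
                if (requests.Count == 0) return; // done and nothing left
                request = (string)requests.Dequeue();
            }
            Console.WriteLine("handled " + request);
        }
    }

    static void Main()
    {
        Thread worker = new Thread(new ThreadStart(Worker));
        worker.Start();

        // The listener thread just hands work off and stays responsive.
        foreach (string r in new string[] { "req1", "req2", "req3" })
        {
            lock (requests)
            {
                requests.Enqueue(r);
                Monitor.Pulse(requests);         // wake the worker
            }
        }

        lock (requests) { done = true; Monitor.PulseAll(requests); }
        worker.Join();
    }
}
```

Because a single worker drains a FIFO queue, the requests are handled in the order they were enqueued.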

Conclusion

This is just the first in the series of articles on multithreading. In this part we became familiar with what threads are and why they are needed. In the next part we will learn how threading is implemented in C#.
