Bytemaster's Boost Libraries
In a cooperative implementation of multitasking, each task must explicitly yield control to the central scheduler to allow the next task to run. This means that a misbehaving task that never yields control can starve all other tasks.
Multithreading, on the other hand, implies preemptive multitasking on most implementations: each task is allowed to run for a certain amount of time, called a time-slice. When the time-slice is over, the task is forcibly interrupted and the scheduler selects the next task. If the interrupted task was manipulating some shared resource, that resource can be left in an undefined state. A task cannot control when it is preempted, so it must be pessimistic and lock every shared resource that it uses. As any programmer who has had to work with heavily threaded applications knows, dealing with complex locking is not a trivial task. In addition, both locking and thread switching impose considerable overhead.
Cooperative multitasking does not have these problems as long as a task never yields (waits) while manipulating shared state.
This does not mean that multithreading has no place; there are at least two scenarios where true concurrency and preemption are required.
Unfortunately threads are often abused for general multitasking, where preemption is a burden instead of a benefit. The primary use case for cooperative multi-tasking is waiting upon many asynchronous events and executing small, light-weight tasks asynchronously.
In this use case only one thread is needed and it can run other tasks any time one task needs to wait for more input.
The cooperative multi-tasking implementation is far superior to the QApplication/QThread event loop when it comes to waiting on asynchronous tasks. If you want to implement a method in Qt that synchronously invokes a remote procedure call, it must block the thread while it waits for the return value. If you want to keep the user interface responsive, you may optionally "recursively" process events.
There are many problems with recursive event loop invocations that lead to deadlocks, because the tasks must complete in the order in which they were called or the stack can never unwind.
Typically the solution to this problem is to use callbacks, signals, and other notification techniques. The problem with this approach is that you lose the "localization of code": variables and algorithms end up spread across multiple methods. Local variables then need to be maintained outside of function scope as class member variables, often allocated on the heap. This greatly increases the complexity of the code.
This complexity becomes obvious when you have many asynchronous operations that must be performed synchronously or have some non-trivial dependency. Suppose you need to invoke three remote procedure calls on three different servers, you need the return value from one of the calls before you can invoke the other two, and you need all three values before you can do your final calculation. This task creates a mess of spaghetti code with callbacks, state-machine variables, and so on, unless you are willing to accept the performance hit of blocking an entire "heavy weight", preemptively multi-tasked, operating system thread.
This same problem becomes trivial with the Boost.CMT library. Simply invoke each method asynchronously; each invocation returns a future object. Then pass the futures into the other methods, which will automatically run when the data is available. A complex asynchronous mess turns into what looks like synchronous code.