Threads and the future of computer programming
Threads are evil.
There, I said it, and I stand by it too. If you find this sentiment a strange one, I suggest you start with a little reading.
Here is a great paper that says what I’d like to say better than I ever will in this blog: The Problem with Threads.
Illuminating, isn’t it? I agree 100% with what was written here. I suppose at this point it would be useful to clarify that it’s not so much that threads themselves are evil. Instead, it is the hoops that programmers must jump through in order to write correct multi-threaded code in most popular procedural languages (think C/C++, Java, Python, and so on).
I’m going to repeat a choice phrase from the paper here:
To offer a third analogy, a folk definition of insanity is to do the same thing over and over again and to expect the results to be different. By this definition, we in fact require that programmers of multithreaded systems be insane. Were they sane, they could not understand their programs.
This is not really hyperbole. To exhaustively test a multi-threaded program, one must consider every possible execution order of the atomic instructions that make up the individual threads. In practice, this never happens because it’s not possible to reproduce all execution sequences. Furthermore, if you are coding in a high-level language like Python, it is probably not even clear which atomic operations make up any one method call or language construct.
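To make this concrete, here is a tiny sketch in Python (the counter and thread counts are arbitrary; any of the languages above exhibit the same problem). The innocent-looking `counter += 1` is really a separate load, add, and store, and two threads can interleave those steps:

```python
import threading

counter = 0

def bump(n):
    global counter
    for _ in range(n):
        counter += 1  # really three steps: load, add, store

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Often less than 400000: increments are lost whenever two threads
# interleave their load/add/store sequences. How often this happens
# depends on the interpreter and the scheduler -- which is the problem.
print(counter)
```

Run it twice and you may well get two different answers, and no number of clean runs proves the race is gone.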
Modern computer systems are organized chaos: considered as a whole, the system is nondeterministic. Each program and thread may be deterministic, but the way the system schedules them to execute on the hardware, relative to one another, is not.
Programmers are therefore required to guard against this nondeterminism and ensure that memory is modified in a sequential, controlled manner. Beyond the most basic embarrassingly parallel problems, programmers are forced to deal with mutexes, semaphores, and other synchronization tools. I feel that every moment spent thinking about these things is a moment wasted.
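For instance, fixing the racy counter from the sketch above means wrapping every increment in a mutex. A minimal version using Python’s threading.Lock:

```python
import threading

counter = 0
lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        with lock:  # serialize the load/add/store sequence
            counter += 1

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # now always 400000 -- at the cost of thinking about locks
```

One lock around one counter is easy; the wasted moments pile up when dozens of locks must be acquired in just the right order to avoid deadlock.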
To top it off, it is becoming increasingly difficult to write code without giving any thought to multiple threads. Try writing a basic GUI without threads and you will find that even the simplest cases really require additional threads to do the interesting work while the interface stays responsive.
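Here is a minimal Tkinter sketch of what I mean (the five-second slow_task is a stand-in for whatever the interesting work is). Do the sleep directly in the button handler and the whole window freezes; keeping it responsive requires a worker thread, plus a queue, because Tkinter widgets themselves shouldn’t be touched from other threads:

```python
import queue
import threading
import time
import tkinter as tk

results = queue.Queue()

def slow_task():
    time.sleep(5)        # stand-in for the interesting work
    results.put("done")  # hand the result back without touching any widget

root = tk.Tk()
label = tk.Label(root, text="idle")
label.pack()

def on_click():
    label.config(text="working...")
    threading.Thread(target=slow_task, daemon=True).start()

def poll():
    try:
        label.config(text=results.get_nowait())
    except queue.Empty:
        pass
    root.after(100, poll)  # check for a result again in 100 ms

tk.Button(root, text="start", command=on_click).pack()
poll()
root.mainloop()
```

One button and one label, and we are already juggling a thread, a queue, and a polling loop.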
Even worse, individual processors are not becoming any faster; instead, multiple processing units are placed on one chip, which requires the programmer to use multiple threads to make full use of the system’s hardware. This has implications beyond the application level; see SMP Scaling Considered Harmful for a look at how operating system performance is affected by SMP systems.
So, I think it has clearly been established that concurrent programming, in its current form, is bad and will not be able to support a future where hardware becomes massively parallel (think 32 or 64 computation cores on a single system).
What can be done?
Well, there are a few ideas being worked on. One is to add more features to programming languages to support concurrent execution. Take good old C as an example: it is clearly a language that was not designed for concurrent computing. Access to threads is supplied by a library (something like pthreads) rather than by the language itself. Some have proposed extensions to C to enable concurrent programming, possibly supported by the compiler. However, approaches like this don’t seem to have gained much traction lately.
Instead, I think a better alternative to extending a language like C will be to develop novel languages that allow programmers to express their designs in cooperation with a parallel computation environment. I don’t have any brilliant examples to offer, but I imagine that such a language might offer safe ways to communicate information from one computation element to another without burdening the programmer with the details of synchronization.
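We don’t have to wait for such a language to see the flavor of the idea. Even in today’s Python, two threads can avoid sharing mutable state entirely and just pass messages through a queue.Queue, which handles the locking internally (the producer/consumer split here is my own toy example, not a proposal for how the language should look):

```python
import queue
import threading

# queue.Queue does its own internal locking, so the two threads never
# share mutable state directly -- they only exchange messages.
inbox = queue.Queue()

def producer():
    for i in range(5):
        inbox.put(i * i)
    inbox.put(None)  # sentinel: no more work coming

def consumer():
    while True:
        item = inbox.get()
        if item is None:
            break
        print("got", item)

workers = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```

A language built around this idea could make message passing the default and shared memory the exception, rather than the other way around.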
Still, it seems that it will be some time before such languages gain any sort of traction for everyday programming. An alternative to entirely new languages might be frameworks designed to support parallel computations. One such framework, made famous by Google, is MapReduce (see Hadoop for an open source implementation of the same concept). I think MapReduce is a great idea because it allows the programmer to concentrate on the problem while the framework handles the details of actually performing the computation in parallel.
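To give a feel for the model, here is a toy word count written in MapReduce style. It runs sequentially here, and the function names are mine rather than Hadoop’s API, but the point is that each map call, and each per-key reduce, is independent, so the framework is free to spread them across many machines:

```python
from collections import defaultdict

def map_phase(document):
    for word in document.split():
        yield word, 1          # emit (key, value) pairs

def reduce_phase(word, counts):
    return word, sum(counts)   # combine all the values for one key

documents = ["the cat sat", "the cat ran"]

# The framework's job: run the maps, group every emitted value by key,
# then run the reduces. None of the user's code mentions threads at all.
groups = defaultdict(list)
for doc in documents:
    for word, count in map_phase(doc):
        groups[word].append(count)

print(dict(reduce_phase(w, c) for w, c in groups.items()))
# {'the': 2, 'cat': 2, 'sat': 1, 'ran': 1}
```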
It seems likely that more frameworks with goals similar to MapReduce’s will be created to allow the programmer to write code that executes in parallel. Another example of a framework that supports concurrent computation is CUDA, which is developed by NVIDIA to support computation on GPUs. Development with CUDA is largely based on the concept of many threads operating on independent areas of memory. While CUDA doesn’t guarantee thread-safe code, it goes a long way toward providing the resources necessary to make writing such code easier.
Another avenue of development lies along the lines of entirely new types of hardware. As the number of transistors that can be placed on a single chip increases, processors are going to gain more features and, most likely, more memory. Currently, cache memory is hidden from the programmer: you cannot control the contents of the cache. Instead, the data present is a product of previously requested memory locations.
I think that future systems will give programmers ways to control what resides in the fast memory available on the chip. An example of such a chip is the Cell processor, where many vector units operate on local memory only. Systems similar to the Cell, as well as various NUMA designs, will only become more common as transistor counts increase.
Another example of new hardware is the GPU, which I already mentioned above with the CUDA framework. When exposed through CUDA, a GPU is really nothing more than a very fast vector processor with its own on-board memory. I think that CPUs of the future will begin to more closely resemble the Cell and GPU chips that are available now. Expect these chips to provide more access to the low-level memory that directly feeds the computation units, and expect these new processors to resemble a system on a chip instead of just a processing unit.
The hurdle right now is to create software to support these sorts of systems, and I think there is a lot of development that can be done here. Imagine taking a language similar to Python and, without further involving the programmer, having it fully utilize all the computation units available. For some types of computations this may be trivial; for others it is likely very difficult.
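For the trivial end of that spectrum, today’s Python can already farm an embarrassingly parallel job out to every core in a few lines. A sketch using multiprocessing.Pool (the work function is an arbitrary CPU-bound stand-in):

```python
from multiprocessing import Pool

def work(n):
    return sum(i * i for i in range(n))  # arbitrary CPU-bound stand-in

if __name__ == "__main__":
    # Pool() starts one worker process per available core by default,
    # so the eight jobs below spread across the machine on their own.
    with Pool() as pool:
        print(pool.map(work, [10**6] * 8))
```

The hard part, of course, is doing this automatically for code whose pieces are not independent.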
In short, I think it might become necessary for programmers to rethink how they approach describing computation to the computer. It seems that, in addition to dividing our programs into subroutines or objects and tying them together, new abstractions will be created that make it easier for compilers and interpreters to produce machine code that works efficiently on massively parallel hardware.
Whew, this has been quite a long and rambling first post to this blog. The problems mentioned here are ones I’ve been giving more thought to recently. They are certainly problems that deserve to have a lot of brains thrown at them, as they stand in my mind as some of the fundamental problems facing computer science in the upcoming years. Outside of specialized code, hardware is quickly outpacing the state of the art in software. New and powerful chips aren’t useful if no one can program for them. I think John McCarthy stated it best when he said, “Such systems tend to be immune to programming.”
I hope you enjoyed this little rant. I’d like to finish by thanking the fine folks (namely btilly and Hangar) from the XKCD forum for posting links to the papers I’ve cited here. It’s a good thread and I’ve contributed some of my thoughts to it.