Now that Google Summer of Code is drawing to a close, it's time to summarize what has been achieved during these two months. Before anything else I would like to thank again everybody (and most notably the folks from Mono and Google) for this summer, it has been awesome :) !

Base API

First of all let's see how we compare to Microsoft's implementation, using the output of corcompare. As you can see, most of the reported missing members are boilerplate attributes and the like, so we are pretty much feature complete (except for PLinq).

In addition, I showed throughout this blog different examples coming from Microsoft CTP releases and running on our implementation. Each one showcased a different part of the available API, every time with a nice performance boost. In order: the parallelized ray tracer, various samples and the PLinq stress test.

As noted, PLinq lags a little behind and will require a good deal of love to complete the missing operators and to rework some internals until it's satisfactory.

Extensions

Even though most of these additions are still drafts, we already have some interesting bits.

The System.Threading namespace got two new classes: AtomicBoolean and InterlockedExtensions. AtomicBoolean is a simple wrapper that allows the same sort of operations provided by Interlocked.Exchange and Interlocked.CompareExchange, but on a boolean.
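
To give an idea of the shape of such a wrapper (this is only an illustrative sketch, the actual class in the tree may differ), it can be built on top of Interlocked.CompareExchange over an int field, since the Interlocked primitives don't operate on booleans directly:

// Illustrative sketch only; the real AtomicBoolean may differ.
public class AtomicBoolean
{
    int flag; // 0 = false, 1 = true; stored as an int because the
              // Interlocked primitives work on ints, not bools

    public bool Value {
        get { return flag == 1; }
        set { Interlocked.Exchange(ref flag, value ? 1 : 0); }
    }

    // Atomically set the value to newValue if the current value equals
    // expected; returns true when the swap actually happened.
    public bool CompareAndExchange(bool expected, bool newValue)
    {
        int e = expected ? 1 : 0;
        int n = newValue ? 1 : 0;
        return Interlocked.CompareExchange(ref flag, n, e) == e;
    }
}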

InterlockedExtensions only contains a work-in-progress MCAS operation at the moment. MCAS (Multiple Compare-and-Swap) is a handy primitive that can be used in a lot of situations. Its semantics are close to the traditional compare-and-swap instruction (Interlocked.CompareExchange in .NET) except that it works on multiple targets. Think of it as a series of CAS operations executed atomically, the whole thing failing if any single CAS fails. A naive (and decidedly non-atomic) illustration could be:

// Naive, sequential illustration: a real MCAS would perform all of this atomically.
public static bool MCas<T>(T[] locations, T[] expected, T[] newValues) where T : class
{
    // Fail if any location no longer holds its expected reference
    for (int i = 0; i < locations.Length; i++)
        if (locations[i] != expected[i])
            return false;

    // Every comparison succeeded, store the new values
    for (int i = 0; i < locations.Length; i++)
        locations[i] = newValues[i];

    return true;
}
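
To make the all-or-nothing semantics concrete (against the naive sketch above only, so no claim of atomicity here):

object a = new object(), b = new object();
object a2 = new object(), b2 = new object();
object[] slots = { a, b };

// Both slots hold the expected references, so both get updated at once.
bool first = MCas(slots, new [] { a, b }, new [] { a2, b2 });   // true

// slots[0] is now a2, the first comparison fails and nothing is written.
bool second = MCas(slots, new [] { a, b }, new [] { a2, b2 });  // false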

It's particularly useful with linked-list-like data structures that maintain more than one reference to their nodes.

In System.Threading.Collections we also have a first draft of a concurrent skip list that can be used both as a standard list (like List) and as a hash map (like Dictionary). Skip lists are very interesting structures that possess many of the advantages of balanced trees with much less complexity. The current version uses fine-grained locking in a few places, but I plan to use the aforementioned MCAS primitive to remove the locks completely.
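
As a reminder of why skip lists are attractive (a simplified, single-threaded sketch for illustration only, not the concurrent code in the tree): a search starts at the highest level and drops down a level whenever the next node would overshoot the key, which gives expected O(log n) lookups with nothing more than forward pointers.

// Simplified, non-concurrent skip-list node and lookup, for illustration only.
class SkipNode
{
    public int Key;
    public SkipNode[] Next; // Next[i] is the successor on level i

    public SkipNode(int key, int levels)
    {
        Key = key;
        Next = new SkipNode[levels];
    }
}

// 'head' is a sentinel whose key is smaller than every stored key.
static bool Contains(SkipNode head, int key)
{
    SkipNode current = head;
    // Walk each level from top to bottom, advancing while the next
    // key is still smaller than the one we are looking for.
    for (int level = head.Next.Length - 1; level >= 0; level--)
        while (current.Next[level] != null && current.Next[level].Key < key)
            current = current.Next[level];

    SkipNode candidate = current.Next[0];
    return candidate != null && candidate.Key == key;
}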

Finally, we have the new System.Threading.Actors namespace, which for the moment is also very much a playground for my experiments. It will allow actor-based concurrency in .NET, a feature already present in concurrent languages like Erlang and in JVM languages like Scala and Clojure. For now the model is close to a stripped-down version of the one used in Scala, but I have to look into Clojure's model, which seems lighter and doesn't require pattern matching to be usable.
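
To illustrate the programming model (and only that: this toy version is not the API taking shape in System.Threading.Actors, whose names are still moving), an actor owns a mailbox and processes one message at a time, so its internal state never needs additional locking:

using System.Collections.Generic;
using System.Threading;

// Deliberately tiny, illustrative actor: a mailbox plus a sequential
// message loop. Communication happens only through Send, never by
// sharing state directly.
abstract class ToyActor
{
    readonly Queue<object> mailbox = new Queue<object>();

    public void Send(object message)
    {
        lock (mailbox) {
            mailbox.Enqueue(message);
            Monitor.Pulse(mailbox);
        }
    }

    // Runs on a dedicated thread and handles one message at a time.
    public void Run()
    {
        while (true) {
            object message;
            lock (mailbox) {
                while (mailbox.Count == 0)
                    Monitor.Wait(mailbox);
                message = mailbox.Dequeue();
            }
            Act(message);
        }
    }

    protected abstract void Act(object message);
}

In Scala the body of Act would typically be a pattern match on the incoming message, which is precisely the part Clojure's model seems to do without.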

I still have more ideas that could be useful for concurrent programming, like more lock-free data structures and an STM manager. STM, which stands for Software Transactional Memory, is a concept close to the transaction model you have in databases, for example. The principle is that, when you want to perform a set of operations concurrently on shared data, you encapsulate them in a transaction where reads and writes are logged. If, in the middle of your operations, the data is changed by another thread, you are able to 'roll back' the transaction and try again. If no other thread got in the way, the modifications can be 'committed' to the shared data and thus become visible to other threads. In practice it should be possible to adopt a helping scheme where threads that collide with an existing transaction help it finish before doing their own work.
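
The optimistic read/validate/retry cycle at the heart of this can already be seen in miniature with a single shared variable and Interlocked.CompareExchange: take a snapshot, compute the new state on the side, and 'commit' only if nobody touched the data in between. An STM generalizes exactly this loop to arbitrary sets of reads and writes:

// The STM retry cycle in miniature, on a single shared integer:
// snapshot, compute aside, commit if unchanged, otherwise roll back and retry.
static int sharedCounter;

static void AddTransactionally(int amount)
{
    while (true) {
        int snapshot = sharedCounter;       // 'read log': remember what we saw
        int updated = snapshot + amount;    // do the work on a private copy

        // 'Commit': publish only if the value is still the one we read;
        // if another thread got in the way the CAS fails and we retry.
        if (Interlocked.CompareExchange(ref sharedCounter, updated, snapshot) == snapshot)
            return;
    }
}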

In conclusion, concurrency is becoming a first-class citizen these days and just cannot be ignored by programmers anymore. Big actors like Microsoft are already pushing for it with ParallelFx and, probably, with future evolutions of C# itself (C# 4?), so we mustn't lag behind ;) .