WaitForMultipleObjects and WIN32 events for Linux/OS X/*nix and Read-Write Locks for Windows

As every programmer worth his salt knows, synchronization primitives form the very building blocks of multithreaded programming. Without them, the world as we know it would cease to exist and chaos would reign unchecked.

All joking aside, synchronization objects such as mutexes and semaphores are essential to safe multithreading and are found on just about any platform under the sun. Mutexes and semaphores alike have one purpose: to keep one thread from messing around with bits and bytes while another thread is using them, keeping your code free of segfaults and memory access violations. But that’s about where the similarities between the synchronization primitives on different platforms end.

POSIX-compliant operating systems with pthreads offer additional really neat synchronization primitives not found on Windows, such as condition variables and read-write locks (the latter is now available on Windows Vista+). And Windows programmers have at their disposal automatic and manual reset events, which make designing certain types of multithreaded software incredibly easy, abstracting away much of the hard-core synchronization logic that lies beneath the hood.

We’ve decided to open source two libraries we’ve found useful in transitioning from Windows development to Linux and from Linux development to Windows. The first (and most important) is an implementation of WIN32 manual/auto-reset events for Linux. While there’s nothing WIN32 events can do that POSIX condition variables can’t, the differences in syntax and usage semantics between the two have resulted in entirely different programming paradigms on the two platforms, making it hard for developers to port code from one platform to the other or even to write code from scratch on a platform they’re unfamiliar with.

Enter pevents. pevents is a C++ library (easily portable to C) for *nix platforms that provides an implementation of WIN32 events on Linux, giving developers access to the CreateEvent, SetEvent, ResetEvent, and WaitForSingleObject functions that make Windows developers feel warm and fuzzy inside. More importantly, and unlike previous efforts at porting WIN32 events to *nix, pevents also supports the all-important WaitForMultipleObjects. WFMO is an important concept in multithreaded programming on Windows: it allows a developer to wait in the kernel until one or more events have fired (or, alternatively, until all of them have fired) with a single line of code, resulting in high-performance synchronization waits.

While *nix zealots have long maintained that WaitForMultipleObjects encourages bad programming practices, the fact remains that it can be a powerful tool in the arsenal of a good developer… and any claim that WaitForMultipleObjects is inherently flawed because it leads to the loss of events is outright incorrect, one that only those unfamiliar with correct multithreaded programming on Windows would make. With pevents, Windows developers can feel right at home on Linux/*nix with access to WIN32 events in both manual- and auto-reset flavors (see the MSDN explanation for the uninitiated), with both the WaitForSingleObject and WaitForMultipleObjects functions.

On the other hand, *nix developers have long had at their fingertips powerful and lightweight locks adapted to the readers-writers problem (see the Wikipedia overview). Read-write locks (pthread_rwlock_t) can drastically improve multithreaded performance by allowing unlimited simultaneous read-only access to shared variables while restricting write access to one thread at a time. Microsoft has come to realize the importance of this, and as of Windows Vista the kernel supports read-write locks (SRW Locks).

However, as very few developers today are free to target only Vista and above, we have written RWLocks for Windows, a library that provides three different flavors of read-write locks, with advanced features found in neither pthread_rwlock_t on POSIX nor SRW Locks on Vista, such as support for cross-process synchronization, reentrancy, and writer -> reader declination.

Both of these libraries are released under the terms of the MIT license and hosted on GitHub. They were developed from the ground up to be as minimalistic, lightweight, and fast as possible (though WFMO requires a bit more overhead and can be disabled at compile time for better performance). Fork, use, and contribute your changes back. Enjoy!

pevents Linux/OS X/*nix

RWLock for Windows

Please note that these are ongoing projects still undergoing development and maintenance. WFMO support in particular is in BETA and can be #define’d out if it’s not required.

Find anything wrong? Drop us an email at neosmart@neosmart.net, comment below, or fork the code at github.

7 thoughts on “WaitForMultipleObjects and WIN32 events for Linux/OS X/*nix and Read-Write Locks for Windows”

  1. I downloaded pevents and quickly browsed through it. I noticed that in function CreateEvent(), a mutexattr is being initialised and set to PTHREAD_MUTEX_RECURSIVE, but there it ends. Nothing is being done with the mutexattr instance. Shouldn’t it be fed to pthread_mutex_init() ?

    Best regards

  2. Dear Mahmoud,

    Last year, I wrote a C++ port of Windows Events to Linux. In testing it recently I had some problems with SetEvent and WFMO, so I was pleased to find your article and was hoping your code would indicate to me where I had gone wrong. Unfortunately not, as our approaches seem very much the same. A slightly different approach for WFMO, which I took, is to have the Event structs/objects that are to be used by WFMO share a parent “EventGroup” object. This can enable a simpler and more efficient solution at the expense of not being a 100% plug and play replacement for Windows code.

    I’d like to reiterate a question asked above. Why isn’t the recursive attribute used, and is it in fact really necessary? When using a Linux mutex to mimic a Windows critical section, the recursive attribute is essential, otherwise a thread cannot re-enter a critical section it has already locked. However my understanding of Linux condition variables, and please correct me if I am wrong, is that after locking the mutex, when you call pthread_cond_wait the mutex is atomically released and it is the condition that the thread is blocked on, not the mutex. After the condition is signalled the mutex is atomically locked again and should then be unlocked. I can’t think of any reason why a thread would need recursive re-entry in these circumstances. If there is a good reason, I’d certainly like to know what it is.

  3. Thanks for your comment, Sahlan. I must first apologize for my earlier reply; as Peter notes, the mutexattr was being set but not used during the call to pthread_mutex_init. The mutexattr was a leftover artifact from an earlier version of the code – there is (as Sahlan notes) no need for it whatsoever in the code as written. I have updated the pevents code on GitHub to remove this dead/unused code.

    If recursion support is required (note that Windows events are not recursive/reentrant), it can be easily implemented with pthread_getspecific and added to the WaitForEvent function. Even then, there is no need for the recursive mutex attribute – the mutex is not exposed outside of pevents.cpp. The condition variable is what needs to be reentrant to create a reentrant event; pthreads does not provide a reentrant condition variable implementation, so it would need to be implemented with thread-local storage (TLS is not supported via __thread on OS X, hence the pthread_get/setspecific requirement).
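    A rough sketch of the pthread_get/setspecific approach, purely for illustration (the helper names are invented, and this is not code from pevents):

    ```cpp
    // Purely illustrative: using pthread TLS to track a per-thread recursion
    // count, as an alternative to __thread (which OS X does not support).
    #include <pthread.h>
    #include <cassert>
    #include <cstdint>

    static pthread_key_t recursionKey;
    static pthread_once_t keyOnce = PTHREAD_ONCE_INIT;

    static void CreateKey() {
        pthread_key_create(&recursionKey, nullptr);
    }

    // Hypothetical helpers: bump/drop the calling thread's recursion depth.
    // A key's value starts out NULL, i.e. a depth of zero.
    static intptr_t EnterWait() {
        pthread_once(&keyOnce, CreateKey);
        intptr_t depth = (intptr_t)pthread_getspecific(recursionKey) + 1;
        pthread_setspecific(recursionKey, (void *)depth);
        return depth;
    }

    static intptr_t LeaveWait() {
        intptr_t depth = (intptr_t)pthread_getspecific(recursionKey) - 1;
        pthread_setspecific(recursionKey, (void *)depth);
        return depth;
    }

    int main() {
        assert(EnterWait() == 1);   // first (outer) entry
        assert(EnterWait() == 2);   // reentrant entry on the same thread
        assert(LeaveWait() == 1);
        assert(LeaveWait() == 0);
        return 0;
    }
    ```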

    Sahlan, there’s nothing inherently wrong with the method employed in pevents – it should (in theory) be a drop-in replacement for Windows events and WFMO. If you experience problems with it, it is likely due to a coding error – have you tried using our pevents library, or are you only commenting based on your similar, earlier attempt?

    The pevents library, when compiled without WFMO, should be very fast and easily usable for high-speed signaling without any problems. The performance of the WFMO version would depend on the compiler and C++ library – in particular, how optimized the vector implementation is. On most systems, a straightforward vector implementation is usually very highly optimized, and WFMO can then also be used for high-speed signaling with little to no overhead.

    There is one difference between pevents and Windows events when it comes to WFMO: on Windows, WFMO generally works on a “first come, first served” basis. That is to say, if four threads block on a call to WFMO/WFSO with an auto-reset event, they will generally be released in the order in which they arrived. With pevents, threads using WFMO are always released before threads using WFSO, as an algorithmic optimization. However, the order in which threads are signalled with WFMO/WFSO is not officially documented nor guaranteed, and MS explicitly says not to rely on it, so I don’t consider this to be a major difference.

  4. Mahmoud,

    Many thanks for your detailed reply and reassuring comments. I already sorted out my earlier problem, which was due to a stupid error I made unconnected with the main event mechanism. I also removed public dependence on my EventGroup helper class, so now everything is done behind the scenes and I have a true C++ drop-in replacement for the Windows event mechanism. If I hit upon any problems in future I will do what you suggest and drop in your own code to see what difference it makes. The remarks I made about performance were speculative rather than informed. In the context in which I am currently using Events, performance is not a critical factor, and therefore not an area that I have looked at in any depth.
