Philippe, I have observed a couple of odd things and wanted to check whether they are intentional:

1. According to the documentation for evl_attach_thread(), the CPU affinity is locked to the single CPU on which the thread is running at attach time. Although one can call sched_setaffinity() afterwards, at the cost of a one-time in-band switch, this does not appear to actually change the affinity: the thread seems to keep running on the single CPU it was on when it was attached to the EVL scheduler. Is this the intended effect?

2. Through experimentation, it appears that I cannot start multiple threads which are identical copies of each other. Specifically, if I start several threads with an identical entry point, only the last one actually remains active, at least as far as the EVL scheduler is concerned (as reported by "evl ps -l"). If, however, I start each thread with a unique entry point, I can see multiple threads starting up (although this is one of the scenarios that is still crashing on us).

The reason I ask is that we have a thread pool with a variable number of identical "worker" threads, any of which can take a task and operate on it. I would like the pool of worker threads to share a pool of CPUs, and let the scheduler decide how to allocate them. I can work around the first issue if necessary, but it may constrain the amount of parallelism we can achieve. I can work around the second issue as well, but it will make for some rather ugly code.