* [PATCH v2 0/3] thread-pool: fix performance regression
@ 2022-05-12 10:43 Paolo Bonzini
  2022-05-12 10:43 ` [PATCH v2 1/3] thread-pool: optimize scheduling of completion bottom half Paolo Bonzini
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Paolo Bonzini @ 2022-05-12 10:43 UTC (permalink / raw)
  To: qemu-devel; +Cc: nsaenzju, stefanha

Together, the first two patches fix the performance regression induced
by QemuSemaphore; neither of them is sufficient on its own.

The third patch is a small cleanup on top, enabled by the recent
introduction of the min_threads/max_threads knobs for the thread pool.

6.2:
   iops        : min=58051, max=62260, avg=60282.57, stdev=1081.18, samples=30
    clat percentiles (usec):   1.00th=[  490],   99.99th=[  775]
   iops        : min=59401, max=61290, avg=60651.27, stdev=468.24, samples=30
    clat percentiles (usec):   1.00th=[  490],   99.99th=[  717]
   iops        : min=59583, max=60816, avg=60353.43, stdev=282.69, samples=30
    clat percentiles (usec):   1.00th=[  490],   99.99th=[  701]
   iops        : min=58099, max=60713, avg=59739.53, stdev=755.49, samples=30
    clat percentiles (usec):   1.00th=[  494],   99.99th=[  717]
	
patched:
   iops        : min=60616, max=62522, avg=61654.37, stdev=555.67, samples=30
    clat percentiles (usec):   1.00th=[  474],   99.99th=[ 1303]
   iops        : min=61841, max=63600, avg=62878.47, stdev=442.40, samples=30
    clat percentiles (usec):   1.00th=[  465],   99.99th=[  685]
   iops        : min=62976, max=63910, avg=63531.60, stdev=261.05, samples=30
    clat percentiles (usec):   1.00th=[  461],   99.99th=[  693]
   iops        : min=60803, max=63623, avg=62653.37, stdev=808.76, samples=30
    clat percentiles (usec):   1.00th=[  465],   99.99th=[  685]
	
Paolo

v1->v2: support min_threads/max_threads

Paolo Bonzini (3):
  thread-pool: optimize scheduling of completion bottom half
  thread-pool: replace semaphore with condition variable
  thread-pool: remove stopping variable

 util/thread-pool.c | 70 +++++++++++++++++-----------------------------
 1 file changed, 26 insertions(+), 44 deletions(-)

-- 
2.36.0




* [PATCH v2 1/3] thread-pool: optimize scheduling of completion bottom half
  2022-05-12 10:43 [PATCH v2 0/3] thread-pool: fix performance regression Paolo Bonzini
@ 2022-05-12 10:43 ` Paolo Bonzini
  2022-05-12 10:43 ` [PATCH v2 2/3] thread-pool: replace semaphore with condition variable Paolo Bonzini
  2022-05-12 10:43 ` [PATCH v2 3/3] thread-pool: remove stopping variable Paolo Bonzini
  2 siblings, 0 replies; 7+ messages in thread
From: Paolo Bonzini @ 2022-05-12 10:43 UTC (permalink / raw)
  To: qemu-devel; +Cc: nsaenzju, stefanha

The completion bottom half was scheduled within the pool->lock
critical section.  That actually results in worse performance:
scheduling it before re-taking the lock lets the bottom half start
running while the worker thread finishes its own small critical
section and goes back to sleep.

Note that this simple change does not produce an improvement without
changing the thread pool QemuSemaphore to a condition variable.
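
In other words, the change just swaps the order of two calls in the
worker loop (a schematic recap; the full hunk is below):

    /* before: bottom half scheduled while re-taking pool->lock */
    qemu_mutex_lock(&pool->lock);
    qemu_bh_schedule(pool->completion_bh);

    /* after: bottom half scheduled first, so the main loop can start
     * completing the request while this worker contends for the lock */
    qemu_bh_schedule(pool->completion_bh);
    qemu_mutex_lock(&pool->lock);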

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 util/thread-pool.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/util/thread-pool.c b/util/thread-pool.c
index 196835b4d3..4979f30ca3 100644
--- a/util/thread-pool.c
+++ b/util/thread-pool.c
@@ -127,9 +127,8 @@ static void *worker_thread(void *opaque)
         smp_wmb();
         req->state = THREAD_DONE;
 
-        qemu_mutex_lock(&pool->lock);
-
         qemu_bh_schedule(pool->completion_bh);
+        qemu_mutex_lock(&pool->lock);
     }
 
     pool->cur_threads--;
-- 
2.36.0




* [PATCH v2 2/3] thread-pool: replace semaphore with condition variable
  2022-05-12 10:43 [PATCH v2 0/3] thread-pool: fix performance regression Paolo Bonzini
  2022-05-12 10:43 ` [PATCH v2 1/3] thread-pool: optimize scheduling of completion bottom half Paolo Bonzini
@ 2022-05-12 10:43 ` Paolo Bonzini
  2022-05-13 11:56   ` Nicolas Saenz Julienne
  2022-05-12 10:43 ` [PATCH v2 3/3] thread-pool: remove stopping variable Paolo Bonzini
  2 siblings, 1 reply; 7+ messages in thread
From: Paolo Bonzini @ 2022-05-12 10:43 UTC (permalink / raw)
  To: qemu-devel; +Cc: nsaenzju, stefanha, Lukáš Doktor

Since commit f9fc8932b1 ("thread-posix: remove the posix semaphore
support", 2022-04-06) QemuSemaphore has its own mutex and condition
variable; this adds unnecessary overhead on I/O with small block sizes.

Check the QTAILQ directly instead of going through the indirection of
a semaphore's count.  Using a semaphore has not been necessary since
qemu_cond_timedwait was introduced.  The new code has to be careful
about spurious wakeups, but it is simpler: for example, thread_pool_cancel
no longer has to worry about keeping the semaphore count synchronized
with the number of elements in pool->request_list.

Note that the return value of qemu_cond_timedwait (0 for timeout, 1 for
signal or spurious wakeup) is different from that of qemu_sem_timedwait
(-1 for timeout, 0 for success).
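
For reference, the two wait idioms side by side (a sketch based on the
hunks below, not a complete excerpt of worker_thread):

    /* old: semaphore; drop the lock around the wait.  -1 means timeout,
     * and each qemu_sem_post() wakes exactly one waiter. */
    qemu_mutex_unlock(&pool->lock);
    ret = qemu_sem_timedwait(&pool->sem, 10000);
    qemu_mutex_lock(&pool->lock);

    /* new: condition variable; waits with pool->lock held.  0 means
     * timeout, and any wakeup may be spurious, so the predicate
     * (here, QTAILQ_EMPTY(&pool->request_list)) must be rechecked. */
    ret = qemu_cond_timedwait(&pool->request_cond, &pool->lock, 10000);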

Reported-by: Lukáš Doktor <ldoktor@redhat.com>
Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 util/thread-pool.c | 64 +++++++++++++++++-----------------------------
 1 file changed, 24 insertions(+), 40 deletions(-)

diff --git a/util/thread-pool.c b/util/thread-pool.c
index 4979f30ca3..da189d9338 100644
--- a/util/thread-pool.c
+++ b/util/thread-pool.c
@@ -57,7 +57,7 @@ struct ThreadPool {
     QEMUBH *completion_bh;
     QemuMutex lock;
     QemuCond worker_stopped;
-    QemuSemaphore sem;
+    QemuCond request_cond;
     QEMUBH *new_thread_bh;
 
     /* The following variables are only accessed from one AioContext. */
@@ -74,23 +74,6 @@ struct ThreadPool {
     int max_threads;
 };
 
-static inline bool back_to_sleep(ThreadPool *pool, int ret)
-{
-    /*
-     * The semaphore timed out, we should exit the loop except when:
-     *  - There is work to do, we raced with the signal.
-     *  - The max threads threshold just changed, we raced with the signal.
-     *  - The thread pool forces a minimum number of readily available threads.
-     */
-    if (ret == -1 && (!QTAILQ_EMPTY(&pool->request_list) ||
-            pool->cur_threads > pool->max_threads ||
-            pool->cur_threads <= pool->min_threads)) {
-            return true;
-    }
-
-    return false;
-}
-
 static void *worker_thread(void *opaque)
 {
     ThreadPool *pool = opaque;
@@ -99,20 +82,27 @@ static void *worker_thread(void *opaque)
     pool->pending_threads--;
     do_spawn_thread(pool);
 
-    while (!pool->stopping) {
+    while (!pool->stopping && pool->cur_threads <= pool->max_threads) {
         ThreadPoolElement *req;
         int ret;
 
-        do {
+        if (QTAILQ_EMPTY(&pool->request_list)) {
             pool->idle_threads++;
-            qemu_mutex_unlock(&pool->lock);
-            ret = qemu_sem_timedwait(&pool->sem, 10000);
-            qemu_mutex_lock(&pool->lock);
+            ret = qemu_cond_timedwait(&pool->request_cond, &pool->lock, 10000);
             pool->idle_threads--;
-        } while (back_to_sleep(pool, ret));
-        if (ret == -1 || pool->stopping ||
-            pool->cur_threads > pool->max_threads) {
-            break;
+            if (ret == 0) {
+                if (QTAILQ_EMPTY(&pool->request_list) &&
+                    pool->cur_threads > pool->min_threads) {
+                    /* Timed out + no work to do + no need for warm threads = exit.  */
+                    break;
+                } else {
+                    /*
+                     * Even if there is some work to do, check if there aren't
+                     * too many worker threads before picking it up.
+                     */
+                    continue;
+                }
+            }
         }
 
         req = QTAILQ_FIRST(&pool->request_list);
@@ -229,13 +219,7 @@ static void thread_pool_cancel(BlockAIOCB *acb)
     trace_thread_pool_cancel(elem, elem->common.opaque);
 
     QEMU_LOCK_GUARD(&pool->lock);
-    if (elem->state == THREAD_QUEUED &&
-        /* No thread has yet started working on elem. we can try to "steal"
-         * the item from the worker if we can get a signal from the
-         * semaphore.  Because this is non-blocking, we can do it with
-         * the lock taken and ensure that elem will remain THREAD_QUEUED.
-         */
-        qemu_sem_timedwait(&pool->sem, 0) == 0) {
+    if (elem->state == THREAD_QUEUED) {
         QTAILQ_REMOVE(&pool->request_list, elem, reqs);
         qemu_bh_schedule(pool->completion_bh);
 
@@ -280,7 +264,7 @@ BlockAIOCB *thread_pool_submit_aio(ThreadPool *pool,
     }
     QTAILQ_INSERT_TAIL(&pool->request_list, req, reqs);
     qemu_mutex_unlock(&pool->lock);
-    qemu_sem_post(&pool->sem);
+    qemu_cond_signal(&pool->request_cond);
     return &req->common;
 }
 
@@ -323,7 +307,7 @@ void thread_pool_update_params(ThreadPool *pool, AioContext *ctx)
      * We either have to:
      *  - Increase the number available of threads until over the min_threads
      *    threshold.
-     *  - Decrease the number of available threads until under the max_threads
+     *  - Bump the worker threads so that they exit, until under the max_threads
      *    threshold.
      *  - Do nothing. The current number of threads fall in between the min and
      *    max thresholds. We'll let the pool manage itself.
@@ -333,7 +317,7 @@ void thread_pool_update_params(ThreadPool *pool, AioContext *ctx)
     }
 
     for (int i = pool->cur_threads; i > pool->max_threads; i--) {
-        qemu_sem_post(&pool->sem);
+        qemu_cond_signal(&pool->request_cond);
     }
 
     qemu_mutex_unlock(&pool->lock);
@@ -350,7 +334,7 @@ static void thread_pool_init_one(ThreadPool *pool, AioContext *ctx)
     pool->completion_bh = aio_bh_new(ctx, thread_pool_completion_bh, pool);
     qemu_mutex_init(&pool->lock);
     qemu_cond_init(&pool->worker_stopped);
-    qemu_sem_init(&pool->sem, 0);
+    qemu_cond_init(&pool->request_cond);
     pool->new_thread_bh = aio_bh_new(ctx, spawn_thread_bh_fn, pool);
 
     QLIST_INIT(&pool->head);
@@ -383,15 +367,15 @@ void thread_pool_free(ThreadPool *pool)
 
     /* Wait for worker threads to terminate */
     pool->stopping = true;
+    qemu_cond_broadcast(&pool->request_cond);
     while (pool->cur_threads > 0) {
-        qemu_sem_post(&pool->sem);
         qemu_cond_wait(&pool->worker_stopped, &pool->lock);
     }
 
     qemu_mutex_unlock(&pool->lock);
 
     qemu_bh_delete(pool->completion_bh);
-    qemu_sem_destroy(&pool->sem);
+    qemu_cond_destroy(&pool->request_cond);
     qemu_cond_destroy(&pool->worker_stopped);
     qemu_mutex_destroy(&pool->lock);
     g_free(pool);
-- 
2.36.0




* [PATCH v2 3/3] thread-pool: remove stopping variable
  2022-05-12 10:43 [PATCH v2 0/3] thread-pool: fix performance regression Paolo Bonzini
  2022-05-12 10:43 ` [PATCH v2 1/3] thread-pool: optimize scheduling of completion bottom half Paolo Bonzini
  2022-05-12 10:43 ` [PATCH v2 2/3] thread-pool: replace semaphore with condition variable Paolo Bonzini
@ 2022-05-12 10:43 ` Paolo Bonzini
  2 siblings, 0 replies; 7+ messages in thread
From: Paolo Bonzini @ 2022-05-12 10:43 UTC (permalink / raw)
  To: qemu-devel; +Cc: nsaenzju, stefanha

Just setting the max threads to 0 is enough to stop all workers.
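
Schematically (recapping the hunks below), worker exit and pool
teardown now share a single mechanism:

    /* worker loop: a thread exits as soon as there are too many threads */
    while (pool->cur_threads <= pool->max_threads) {
        ...
    }

    /* thread_pool_free(): make every worker fail the loop condition */
    pool->max_threads = 0;
    qemu_cond_broadcast(&pool->request_cond);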

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 util/thread-pool.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/util/thread-pool.c b/util/thread-pool.c
index da189d9338..392c7d7843 100644
--- a/util/thread-pool.c
+++ b/util/thread-pool.c
@@ -69,7 +69,6 @@ struct ThreadPool {
     int idle_threads;
     int new_threads;     /* backlog of threads we need to create */
     int pending_threads; /* threads created but not running yet */
-    bool stopping;
     int min_threads;
     int max_threads;
 };
@@ -82,7 +81,7 @@ static void *worker_thread(void *opaque)
     pool->pending_threads--;
     do_spawn_thread(pool);
 
-    while (!pool->stopping && pool->cur_threads <= pool->max_threads) {
+    while (pool->cur_threads <= pool->max_threads) {
         ThreadPoolElement *req;
         int ret;
 
@@ -366,7 +365,7 @@ void thread_pool_free(ThreadPool *pool)
     pool->new_threads = 0;
 
     /* Wait for worker threads to terminate */
-    pool->stopping = true;
+    pool->max_threads = 0;
     qemu_cond_broadcast(&pool->request_cond);
     while (pool->cur_threads > 0) {
         qemu_cond_wait(&pool->worker_stopped, &pool->lock);
-- 
2.36.0




* Re: [PATCH v2 2/3] thread-pool: replace semaphore with condition variable
  2022-05-12 10:43 ` [PATCH v2 2/3] thread-pool: replace semaphore with condition variable Paolo Bonzini
@ 2022-05-13 11:56   ` Nicolas Saenz Julienne
  2022-05-13 12:38     ` Paolo Bonzini
  0 siblings, 1 reply; 7+ messages in thread
From: Nicolas Saenz Julienne @ 2022-05-13 11:56 UTC (permalink / raw)
  To: Paolo Bonzini, qemu-devel; +Cc: stefanha, Lukáš Doktor

Hi Paolo,

On Thu, 2022-05-12 at 12:43 +0200, Paolo Bonzini wrote:

[...]

> diff --git a/util/thread-pool.c b/util/thread-pool.c
> index 4979f30ca3..da189d9338 100644
> --- a/util/thread-pool.c
> +++ b/util/thread-pool.c
> @@ -57,7 +57,7 @@ struct ThreadPool {
>      QEMUBH *completion_bh;
>      QemuMutex lock;
>      QemuCond worker_stopped;
> -    QemuSemaphore sem;
> +    QemuCond request_cond;
>      QEMUBH *new_thread_bh;
>  
>      /* The following variables are only accessed from one AioContext. */
> @@ -74,23 +74,6 @@ struct ThreadPool {
>      int max_threads;
>  };
>  
> -static inline bool back_to_sleep(ThreadPool *pool, int ret)
> -{
> -    /*
> -     * The semaphore timed out, we should exit the loop except when:
> -     *  - There is work to do, we raced with the signal.
> -     *  - The max threads threshold just changed, we raced with the signal.
> -     *  - The thread pool forces a minimum number of readily available threads.
> -     */
> -    if (ret == -1 && (!QTAILQ_EMPTY(&pool->request_list) ||
> -            pool->cur_threads > pool->max_threads ||
> -            pool->cur_threads <= pool->min_threads)) {
> -            return true;
> -    }
> -
> -    return false;
> -}
> -
>  static void *worker_thread(void *opaque)
>  {
>      ThreadPool *pool = opaque;
> @@ -99,20 +82,27 @@ static void *worker_thread(void *opaque)
>      pool->pending_threads--;
>      do_spawn_thread(pool);
>  
> -    while (!pool->stopping) {
> +    while (!pool->stopping && pool->cur_threads <= pool->max_threads) {
>          ThreadPoolElement *req;
>          int ret;
>  
> -        do {
> +        if (QTAILQ_EMPTY(&pool->request_list)) {
>              pool->idle_threads++;
> -            qemu_mutex_unlock(&pool->lock);
> -            ret = qemu_sem_timedwait(&pool->sem, 10000);
> -            qemu_mutex_lock(&pool->lock);
> +            ret = qemu_cond_timedwait(&pool->request_cond, &pool->lock, 10000);
>              pool->idle_threads--;
> -        } while (back_to_sleep(pool, ret));
> -        if (ret == -1 || pool->stopping ||

I think you need to check for 'pool->stopping' upon exiting
qemu_cond_timedwait(). Otherwise the worker will blindly try to dequeue a
request from a list that is actually empty.

BTW, I see there is no thread_pool_free() unit test.

> -            pool->cur_threads > pool->max_threads) {
> -            break;
> +            if (ret == 0) {
> +                if (QTAILQ_EMPTY(&pool->request_list) &&
> +                    pool->cur_threads > pool->min_threads) {
> +                    /* Timed out + no work to do + no need for warm threads = exit.  */
> +                    break;
> +                } else {
> +                    /*
> +                     * Even if there is some work to do, check if there aren't
> +                     * too many worker threads before picking it up.
> +                     */
> +                    continue;
> +                }
> +            }
>          }
>  
>          req = QTAILQ_FIRST(&pool->request_list);
> @@ -229,13 +219,7 @@ static void thread_pool_cancel(BlockAIOCB *acb)
>      trace_thread_pool_cancel(elem, elem->common.opaque);
>  
>      QEMU_LOCK_GUARD(&pool->lock);
> -    if (elem->state == THREAD_QUEUED &&
> -        /* No thread has yet started working on elem. we can try to "steal"
> -         * the item from the worker if we can get a signal from the
> -         * semaphore.  Because this is non-blocking, we can do it with
> -         * the lock taken and ensure that elem will remain THREAD_QUEUED.
> -         */
> -        qemu_sem_timedwait(&pool->sem, 0) == 0) {
> +    if (elem->state == THREAD_QUEUED) {
>          QTAILQ_REMOVE(&pool->request_list, elem, reqs);
>          qemu_bh_schedule(pool->completion_bh);


The 'thread-pool cancel' unit test fails.

I think it's because there is an assumption in worker_thread() that if you get
woken up, you'll have a pending request. And you're now 'stealing' work
requests, without 'stealing' a wakeup (what qemu_sem_timedwait(sem, 0) achieved
in the past).
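
To spell out the invariant as I read the old code (schematically):

    /* The semaphore kept one wakeup in flight per queued request:
     *   submit: QTAILQ_INSERT_TAIL(&pool->request_list, req, reqs);
     *           qemu_sem_post(&pool->sem);
     *   cancel: if (qemu_sem_timedwait(&pool->sem, 0) == 0)
     *               QTAILQ_REMOVE(&pool->request_list, elem, reqs);
     * Removing an element without consuming a wakeup breaks that
     * pairing: a worker can now wake up, find request_list empty,
     * and dereference QTAILQ_FIRST() == NULL.
     */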

Regards,

-- 
Nicolás Sáenz


* Re: [PATCH v2 2/3] thread-pool: replace semaphore with condition variable
  2022-05-13 11:56   ` Nicolas Saenz Julienne
@ 2022-05-13 12:38     ` Paolo Bonzini
  0 siblings, 0 replies; 7+ messages in thread
From: Paolo Bonzini @ 2022-05-13 12:38 UTC (permalink / raw)
  To: Nicolas Saenz Julienne, qemu-devel; +Cc: stefanha, Lukáš Doktor

On 5/13/22 13:56, Nicolas Saenz Julienne wrote:
>>               pool->idle_threads++;
>> -            qemu_mutex_unlock(&pool->lock);
>> -            ret = qemu_sem_timedwait(&pool->sem, 10000);
>> -            qemu_mutex_lock(&pool->lock);
>> +            ret = qemu_cond_timedwait(&pool->request_cond, &pool->lock, 10000);
>>               pool->idle_threads--;
>> -        } while (back_to_sleep(pool, ret));
>> -        if (ret == -1 || pool->stopping ||
>
> I think you need to check for 'pool->stopping' upon exiting
> qemu_cond_timedwait(). Otherwise the worker will blindly try to dequeue a
> request from a list that is actually empty.

Good point, thanks.

>> -    if (elem->state == THREAD_QUEUED &&
>> -        /* No thread has yet started working on elem. we can try to "steal"
>> -         * the item from the worker if we can get a signal from the
>> -         * semaphore.  Because this is non-blocking, we can do it with
>> -         * the lock taken and ensure that elem will remain THREAD_QUEUED.
>> -         */
>> -        qemu_sem_timedwait(&pool->sem, 0) == 0) {
>> +    if (elem->state == THREAD_QUEUED) {
>>           QTAILQ_REMOVE(&pool->request_list, elem, reqs);
>>           qemu_bh_schedule(pool->completion_bh);
> 
> The 'thread-pool cancel' unit test fails.
> 
> I think it's because there is an assumption in worker_thread() that if you get
> woken up, you'll have a pending request. And you're now 'stealing' work
> requests, without 'stealing' a wakeup (what qemu_sem_timedwait(sem, 0) achieved
> in the past).

You don't need to steal a wakeup because qemu_cond_wait() does not
"count" wakeups, but yeah, it's essentially the same issue that you
mentioned above.
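
i.e. with a condition variable it is enough for the worker to recheck
the predicate after every wakeup, something like (sketch):

    if (QTAILQ_EMPTY(&pool->request_list)) {
        continue;   /* spurious or "stolen" wakeup, go back to waiting */
    }
    req = QTAILQ_FIRST(&pool->request_list);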

Paolo


