* fix max-workers not correctly set on multi-node system
@ 2021-11-02 17:36 beld zhang
From: beld zhang @ 2021-11-02 17:36 UTC (permalink / raw)
  To: io-uring

In io-wq.c: io_wq_max_workers(), new_count[] was modified right after
each node's value was set, so every node after the first was given the
previous node's old limit instead of the requested value. The
following patch fixes it.

The returned values are copied from node 0.

Signed-off-by: Beld Zhang <beldzhang@gmail.com>
---
diff --git a/fs/io-wq.c b/fs/io-wq.c
index c51691262208..b6f903fa41bd 100644
--- a/fs/io-wq.c
+++ b/fs/io-wq.c
@@ -1308,7 +1308,8 @@ int io_wq_cpu_affinity(struct io_wq *wq, cpumask_var_t mask)
  */
 int io_wq_max_workers(struct io_wq *wq, int *new_count)
 {
-    int i, node, prev = 0;
+    int i, node;
+    int prev[IO_WQ_ACCT_NR];

     BUILD_BUG_ON((int) IO_WQ_ACCT_BOUND   != (int) IO_WQ_BOUND);
     BUILD_BUG_ON((int) IO_WQ_ACCT_UNBOUND != (int) IO_WQ_UNBOUND);
@@ -1319,6 +1320,9 @@ int io_wq_max_workers(struct io_wq *wq, int *new_count)
             new_count[i] = task_rlimit(current, RLIMIT_NPROC);
     }

+    for (i = 0; i < IO_WQ_ACCT_NR; i++)
+        prev[i] = 0;
+
     rcu_read_lock();
     for_each_node(node) {
         struct io_wqe *wqe = wq->wqes[node];
@@ -1327,13 +1331,16 @@ int io_wq_max_workers(struct io_wq *wq, int *new_count)
         raw_spin_lock(&wqe->lock);
         for (i = 0; i < IO_WQ_ACCT_NR; i++) {
             acct = &wqe->acct[i];
-            prev = max_t(int, acct->max_workers, prev);
+            if (node == 0)
+                prev[i] = max_t(int, acct->max_workers, prev[i]);
             if (new_count[i])
                 acct->max_workers = new_count[i];
-            new_count[i] = prev;
         }
         raw_spin_unlock(&wqe->lock);
     }
+    for (i = 0; i < IO_WQ_ACCT_NR; i++)
+        new_count[i] = prev[i];
+
     rcu_read_unlock();
     return 0;
 }
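
For context, a minimal userspace sketch of the interface this code backs,
assuming liburing's io_uring_register_iowq_max_workers() wrapper (the raw
equivalent is io_uring_register() with IORING_REGISTER_IOWQ_MAX_WORKERS);
the limit values below are hypothetical. On a multi-node system, the bug
means nodes after the first could end up with the first node's old limits
rather than the requested ones.

/* Hypothetical illustration; assumes liburing >= 2.1. */
#include <stdio.h>
#include <liburing.h>

int main(void)
{
    struct io_uring ring;
    /* [0] = bounded workers, [1] = unbounded workers; 0 = leave unchanged */
    unsigned int vals[2] = { 4, 8 };
    int ret;

    ret = io_uring_queue_init(8, &ring, 0);
    if (ret < 0) {
        fprintf(stderr, "queue_init: %d\n", ret);
        return 1;
    }

    /* On success, vals[] is replaced with the previous limits. */
    ret = io_uring_register_iowq_max_workers(&ring, vals);
    if (ret < 0)
        fprintf(stderr, "register_iowq_max_workers: %d\n", ret);
    else
        printf("previous limits: bounded=%u unbounded=%u\n",
               vals[0], vals[1]);

    io_uring_queue_exit(&ring);
    return 0;
}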


* Re: fix max-workers not correctly set on multi-node system
@ 2021-11-02 18:35 Jens Axboe
From: Jens Axboe @ 2021-11-02 18:35 UTC (permalink / raw)
  To: beld zhang, io-uring

On 11/2/21 11:36 AM, beld zhang wrote:
> In io-wq.c: io_wq_max_workers(), new_count[] was modified right after
> each node's value was set, so every node after the first was given the
> previous node's old limit instead of the requested value. The
> following patch fixes it.
> 
> The returned values are copied from node 0.

Thanks! I've applied this, with some minor changes:

- Your email client is not honoring tabs and spaces
- Improved the commit message a bit
- Use a separate bool to detect first node, instead of assuming
  nodes are numbered from 0..N
- Move last copy out of RCU read lock protection
- Add fixes line

Here's the end result:

https://git.kernel.dk/cgit/linux-block/commit/?h=io_uring-5.16&id=71c9ce27bb57c59d8d7f5298e730c8096eef3d1f

-- 
Jens Axboe
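
As a companion to the link above, here is a rough sketch of what the
applied loop looks like with the changes Jens lists (a separate first_node
bool instead of relying on nodes being numbered from 0, and the final copy
of prev[] into new_count[] moved outside the RCU read lock). This is a
paraphrase for illustration, not the authoritative commit:

    /*
     * Sketch of the applied loop: remember the old limits only while
     * visiting the first node, and copy them back to the caller only
     * after dropping the RCU read lock.
     */
    bool first_node = true;
    int prev[IO_WQ_ACCT_NR];
    int i, node;

    for (i = 0; i < IO_WQ_ACCT_NR; i++)
        prev[i] = 0;

    rcu_read_lock();
    for_each_node(node) {
        struct io_wqe *wqe = wq->wqes[node];
        struct io_wqe_acct *acct;

        raw_spin_lock(&wqe->lock);
        for (i = 0; i < IO_WQ_ACCT_NR; i++) {
            acct = &wqe->acct[i];
            /* record the previous limits from the first node only */
            if (first_node)
                prev[i] = max_t(int, acct->max_workers, prev[i]);
            if (new_count[i])
                acct->max_workers = new_count[i];
        }
        raw_spin_unlock(&wqe->lock);
        first_node = false;
    }
    rcu_read_unlock();

    /* report the previous limits back to the caller */
    for (i = 0; i < IO_WQ_ACCT_NR; i++)
        new_count[i] = prev[i];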


