linux-kernel.vger.kernel.org archive mirror
From: "Gautham R. Shenoy" <ego@linux.vnet.ibm.com>
To: Peter Zijlstra <peterz@infradead.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Tejun Heo <htejun@gmail.com>,
	Michael Ellerman <mpe@ellerman.id.au>,
	Abdul Haleem <abdhalee@linux.vnet.ibm.com>,
	Aneesh Kumar <aneesh.kumar@linux.vnet.ibm.com>
Cc: linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org,
	"Gautham R. Shenoy" <ego@linux.vnet.ibm.com>
Subject: [PATCH 1/2] workqueue: Move wq_update_unbound_numa() to the beginning of CPU_ONLINE
Date: Tue,  7 Jun 2016 20:44:02 +0530
Message-ID: <6b3c7059ec5d2d6157d23d619e4507692a42a5bd.1465311052.git.ego@linux.vnet.ibm.com>
In-Reply-To: <cover.1465311052.git.ego@linux.vnet.ibm.com>

Currently, in the CPU_ONLINE workqueue handler,
restore_unbound_workers_cpumask() never calls set_cpus_allowed_ptr()
for a newly created unbound worker thread.

This is because wq_update_unbound_numa(), the function that creates a
new unbound worker thread when the first CPU in a node comes online,
is invoked after the call to restore_unbound_workers_cpumask(). Thus
the affinity of this worker thread is never set when the first CPU in
the node comes online.
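
For reference, the pre-patch ordering in the CPU_ONLINE case of
workqueue_cpu_up_callback() is roughly the following (a condensed
sketch with unrelated hotplug cases trimmed; not the verbatim code):

	case CPU_ONLINE:
		mutex_lock(&wq_pool_mutex);

		for_each_pool(pool, pi) {
			mutex_lock(&pool->attach_mutex);

			if (pool->cpu == cpu)
				rebind_workers(pool);
			else if (pool->cpu < 0)
				/*
				 * The unbound worker for a node whose
				 * first CPU is coming online does not
				 * exist yet; it is only created by
				 * wq_update_unbound_numa() below.
				 */
				restore_unbound_workers_cpumask(pool, cpu);

			mutex_unlock(&pool->attach_mutex);
		}

		/* update NUMA affinity of unbound workqueues */
		list_for_each_entry(wq, &workqueues, list)
			wq_update_unbound_numa(wq, cpu, true);

		mutex_unlock(&wq_pool_mutex);
		break;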

Furthermore, due to an optimization in
restore_unbound_workers_cpumask(), set_cpus_allowed_ptr() is not
called when subsequent CPUs in the node come online, since it assumes
that the affinity was already set when the first CPU came online.
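
The optimization in question is the early return in
restore_unbound_workers_cpumask() when more than one CPU of the
pool's cpumask is already online. A rough sketch (approximated from
the v4.6-era kernel/workqueue.c, with comments added; not verbatim):

	static void restore_unbound_workers_cpumask(struct worker_pool *pool, int cpu)
	{
		static cpumask_t cpumask;
		struct worker *worker;

		lockdep_assert_held(&pool->attach_mutex);

		/* is @cpu allowed for @pool? */
		if (!cpumask_test_cpu(cpu, pool->attrs->cpumask))
			return;

		/*
		 * Only act when @cpu is the first (i.e. the only) online
		 * CPU in the pool's cpumask; for subsequent CPUs the
		 * affinity is assumed to have been restored already.
		 */
		cpumask_and(&cpumask, pool->attrs->cpumask, cpu_online_mask);
		if (cpumask_weight(&cpumask) != 1)
			return;

		/* rebind every worker of the pool to the pool's cpumask */
		for_each_pool_worker(worker, pool)
			WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
							  pool->attrs->cpumask) < 0);
	}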

This patch fixes the issue by invoking wq_update_unbound_numa()
before the call to restore_unbound_workers_cpumask().

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tejun Heo <htejun@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
---
 kernel/workqueue.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index e1c0e99..e412794 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -4638,6 +4638,10 @@ static int workqueue_cpu_up_callback(struct notifier_block *nfb,
 	case CPU_ONLINE:
 		mutex_lock(&wq_pool_mutex);
 
+		/* update NUMA affinity of unbound workqueues */
+		list_for_each_entry(wq, &workqueues, list)
+			wq_update_unbound_numa(wq, cpu, true);
+
 		for_each_pool(pool, pi) {
 			mutex_lock(&pool->attach_mutex);
 
@@ -4649,10 +4653,6 @@ static int workqueue_cpu_up_callback(struct notifier_block *nfb,
 			mutex_unlock(&pool->attach_mutex);
 		}
 
-		/* update NUMA affinity of unbound workqueues */
-		list_for_each_entry(wq, &workqueues, list)
-			wq_update_unbound_numa(wq, cpu, true);
-
 		mutex_unlock(&wq_pool_mutex);
 		break;
 	}
-- 
1.9.3

Thread overview: 28+ messages
     [not found] <573D9C2D.4020609@linux.vnet.ibm.com>
     [not found] ` <20160526151137.GA26508@in.ibm.com>
2016-06-07 12:29   ` WARNING at kernel/sched/core.c:1166 while booting 4.6.0 mainline on ppc64le bare metal Abdul Haleem
2016-06-07 15:14     ` [PATCH 0/2] Fix CPU Online handling for unbounded worker threads Gautham R. Shenoy
2016-06-07 15:14       ` Gautham R. Shenoy [this message]
2016-06-15 15:53         ` [PATCH 1/2] workqueue: Move wq_update_unbound_numa() to the beginning of CPU_ONLINE Tejun Heo
2016-06-15 19:28           ` Gautham R Shenoy
2016-06-16 19:35             ` Tejun Heo
2016-06-21 14:12               ` Gautham R Shenoy
2016-06-21 15:36                 ` Tejun Heo
2016-06-21 19:37                   ` Peter Zijlstra
2016-06-21 19:43                     ` Tejun Heo
2016-06-21 19:47                       ` Peter Zijlstra
2016-06-22  5:15                         ` Gautham R Shenoy
2016-06-24  9:00               ` [tip:sched/urgent] sched/core: Allow kthreads to fall back to online && !active cpus tip-bot for Tejun Heo
2016-06-07 15:14       ` [PATCH 2/2] workqueue:Fix affinity of an unbound worker of a node with 1 online CPU Gautham R. Shenoy
2016-06-08  6:03         ` Abdul Haleem
2016-06-14 11:22         ` Peter Zijlstra
2016-06-15 10:19           ` Gautham R Shenoy
2016-06-15 11:32             ` Peter Zijlstra
2016-06-15 12:50               ` Gautham R Shenoy
2016-06-15 13:14                 ` Peter Zijlstra
2016-06-15 16:01                   ` Tejun Heo
2016-06-16 12:11                     ` Michael Ellerman
2016-06-16 12:45                       ` Peter Zijlstra
2016-06-16 19:39                         ` Tejun Heo
2016-06-17  1:49                           ` Michael Ellerman
2016-07-15  5:27                           ` Gautham R Shenoy
     [not found]                           ` <57887507.911f240a.687de.08c5SMTPIN_ADDED_BROKEN@mx.google.com>
2016-07-15 12:10                             ` Tejun Heo
2016-06-13  5:44       ` [PATCH 0/2] Fix CPU Online handling for unbounded worker threads Gautham R Shenoy
