linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/6] rcu/nocb: Last prep work before cpuset interface
@ 2021-11-17 15:56 Frederic Weisbecker
  2021-11-17 15:56 ` [PATCH 1/6] rcu/nocb: Remove rdp from nocb list when de-offloaded Frederic Weisbecker
                   ` (5 more replies)
  0 siblings, 6 replies; 18+ messages in thread
From: Frederic Weisbecker @ 2021-11-17 15:56 UTC (permalink / raw)
  To: Paul E . McKenney
  Cc: LKML, Frederic Weisbecker, Uladzislau Rezki, Neeraj Upadhyay,
	Boqun Feng, Josh Triplett, Joel Fernandes, rcu

Hi,

Hopefully this is the last RCU-side preparation work before I manage
to add a cpuset interface to toggle nocb (de-)offloading.

git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
	rcu/dev

HEAD: 23dcef16e09a8a60cd15d001df56f72561d57a7f

Thanks,
	Frederic
---

Frederic Weisbecker (6):
      rcu/nocb: Remove rdp from nocb list when de-offloaded
      rcu/nocb: Prepare nocb_cb_wait() to start with a non-offloaded rdp
      rcu/nocb: Optimize kthreads and rdp initialization
      rcu/nocb: Create nocb kthreads on all CPUs as long as the "rcu_nocb=" is passed
      rcu/nocb: Allow empty "rcu_nocbs" kernel parameter
      rcu/nocb: Merge rcu_spawn_cpu_nocb_kthread() and rcu_spawn_one_nocb_kthread()


 kernel/rcu/tree.h      |   7 +++-
 kernel/rcu/tree_nocb.h | 111 +++++++++++++++++++++++++++++--------------------
 2 files changed, 70 insertions(+), 48 deletions(-)

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH 1/6] rcu/nocb: Remove rdp from nocb list when de-offloaded
  2021-11-17 15:56 [PATCH 0/6] rcu/nocb: Last prep work before cpuset interface Frederic Weisbecker
@ 2021-11-17 15:56 ` Frederic Weisbecker
  2021-11-17 18:53   ` Paul E. McKenney
  2021-11-17 15:56 ` [PATCH 2/6] rcu/nocb: Prepare nocb_cb_wait() to start with a non-offloaded rdp Frederic Weisbecker
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 18+ messages in thread
From: Frederic Weisbecker @ 2021-11-17 15:56 UTC (permalink / raw)
  To: Paul E . McKenney
  Cc: LKML, Frederic Weisbecker, Uladzislau Rezki, Neeraj Upadhyay,
	Boqun Feng, Josh Triplett, Joel Fernandes, rcu

nocb_gp_wait() iterates over all CPUs within the rcuog kthread's group,
even those that have been de-offloaded. This is suboptimal if only a few
CPUs within the group are offloaded, and it will become even more of a
problem once a nocb kthread is created for every possible CPU in the future.

Therefore use a standard doubly linked list to link all the offloaded
rdps, and safely add/del their nodes as CPUs are (de-)offloaded.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Neeraj Upadhyay <quic_neeraju@quicinc.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Uladzislau Rezki <urezki@gmail.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Joel Fernandes <joel@joelfernandes.org>
---
 kernel/rcu/tree.h      |  7 +++++--
 kernel/rcu/tree_nocb.h | 29 ++++++++++++++++++++++-------
 2 files changed, 27 insertions(+), 9 deletions(-)

diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
index deeaf2fee714..486fc901bd08 100644
--- a/kernel/rcu/tree.h
+++ b/kernel/rcu/tree.h
@@ -221,8 +221,11 @@ struct rcu_data {
 	struct swait_queue_head nocb_gp_wq; /* For nocb kthreads to sleep on. */
 	bool nocb_cb_sleep;		/* Is the nocb CB thread asleep? */
 	struct task_struct *nocb_cb_kthread;
-	struct rcu_data *nocb_next_cb_rdp;
-					/* Next rcu_data in wakeup chain. */
+	struct list_head nocb_head_rdp; /*
+					 * Head of rcu_data list in wakeup chain,
+					 * if rdp_gp.
+					 */
+	struct list_head nocb_entry_rdp; /* rcu_data node in wakeup chain. */
 
 	/* The following fields are used by CB kthread, hence new cacheline. */
 	struct rcu_data *nocb_gp_rdp ____cacheline_internodealigned_in_smp;
diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index 2461fe8d0c23..e728128be02a 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -625,7 +625,7 @@ static void nocb_gp_wait(struct rcu_data *my_rdp)
 	 * and the global grace-period kthread are awakened if needed.
 	 */
 	WARN_ON_ONCE(my_rdp->nocb_gp_rdp != my_rdp);
-	for (rdp = my_rdp; rdp; rdp = rdp->nocb_next_cb_rdp) {
+	list_for_each_entry_rcu(rdp, &my_rdp->nocb_head_rdp, nocb_entry_rdp, 1) {
 		bool needwake_state = false;
 
 		if (!nocb_gp_enabled_cb(rdp))
@@ -1003,6 +1003,8 @@ static long rcu_nocb_rdp_deoffload(void *arg)
 	swait_event_exclusive(rdp->nocb_state_wq,
 			      !rcu_segcblist_test_flags(cblist, SEGCBLIST_KTHREAD_CB |
 							SEGCBLIST_KTHREAD_GP));
+	/* Don't bother iterate this one anymore on nocb_gp_wait() */
+	list_del_rcu(&rdp->nocb_entry_rdp);
 	/*
 	 * Lock one last time to acquire latest callback updates from kthreads
 	 * so we can later handle callbacks locally without locking.
@@ -1066,6 +1068,15 @@ static long rcu_nocb_rdp_offload(void *arg)
 		return -EINVAL;
 
 	pr_info("Offloading %d\n", rdp->cpu);
+
+	/*
+	 * Iterate this CPU on nocb_gp_wait(). We do it before locking nocb_gp_lock,
+	 * resetting nocb_gp_sleep and waking up the related "rcuog". Since nocb_gp_wait()
+	 * in turn locks nocb_gp_lock before setting nocb_gp_sleep again, we are guaranteed
+	 * to iterate this new rdp before "rcuog" goes to sleep again.
+	 */
+	list_add_tail_rcu(&rdp->nocb_entry_rdp, &rdp->nocb_gp_rdp->nocb_head_rdp);
+
 	/*
 	 * Can't use rcu_nocb_lock_irqsave() before SEGCBLIST_LOCKING
 	 * is set.
@@ -1268,7 +1279,6 @@ static void __init rcu_organize_nocb_kthreads(void)
 	int nl = 0;  /* Next GP kthread. */
 	struct rcu_data *rdp;
 	struct rcu_data *rdp_gp = NULL;  /* Suppress misguided gcc warn. */
-	struct rcu_data *rdp_prev = NULL;
 
 	if (!cpumask_available(rcu_nocb_mask))
 		return;
@@ -1288,8 +1298,8 @@ static void __init rcu_organize_nocb_kthreads(void)
 			/* New GP kthread, set up for CBs & next GP. */
 			gotnocbs = true;
 			nl = DIV_ROUND_UP(rdp->cpu + 1, ls) * ls;
-			rdp->nocb_gp_rdp = rdp;
 			rdp_gp = rdp;
+			INIT_LIST_HEAD(&rdp->nocb_head_rdp);
 			if (dump_tree) {
 				if (!firsttime)
 					pr_cont("%s\n", gotnocbscbs
@@ -1302,12 +1312,11 @@ static void __init rcu_organize_nocb_kthreads(void)
 		} else {
 			/* Another CB kthread, link to previous GP kthread. */
 			gotnocbscbs = true;
-			rdp->nocb_gp_rdp = rdp_gp;
-			rdp_prev->nocb_next_cb_rdp = rdp;
 			if (dump_tree)
 				pr_cont(" %d", cpu);
 		}
-		rdp_prev = rdp;
+		rdp->nocb_gp_rdp = rdp_gp;
+		list_add_tail(&rdp->nocb_entry_rdp, &rdp_gp->nocb_head_rdp);
 	}
 	if (gotnocbs && dump_tree)
 		pr_cont("%s\n", gotnocbscbs ? "" : " (self only)");
@@ -1369,6 +1378,7 @@ static void show_rcu_nocb_state(struct rcu_data *rdp)
 {
 	char bufw[20];
 	char bufr[20];
+	struct rcu_data *nocb_next_rdp = NULL;
 	struct rcu_segcblist *rsclp = &rdp->cblist;
 	bool waslocked;
 	bool wassleep;
@@ -1376,11 +1386,16 @@ static void show_rcu_nocb_state(struct rcu_data *rdp)
 	if (rdp->nocb_gp_rdp == rdp)
 		show_rcu_nocb_gp_state(rdp);
 
+	nocb_next_rdp = list_next_or_null_rcu(&rdp->nocb_gp_rdp->nocb_head_rdp,
+					      &rdp->nocb_entry_rdp,
+					      typeof(*rdp),
+					      nocb_entry_rdp);
+
 	sprintf(bufw, "%ld", rsclp->gp_seq[RCU_WAIT_TAIL]);
 	sprintf(bufr, "%ld", rsclp->gp_seq[RCU_NEXT_READY_TAIL]);
 	pr_info("   CB %d^%d->%d %c%c%c%c%c%c F%ld L%ld C%d %c%c%s%c%s%c%c q%ld %c CPU %d%s\n",
 		rdp->cpu, rdp->nocb_gp_rdp->cpu,
-		rdp->nocb_next_cb_rdp ? rdp->nocb_next_cb_rdp->cpu : -1,
+		nocb_next_rdp ? nocb_next_rdp->cpu : -1,
 		"kK"[!!rdp->nocb_cb_kthread],
 		"bB"[raw_spin_is_locked(&rdp->nocb_bypass_lock)],
 		"cC"[!!atomic_read(&rdp->nocb_lock_contended)],
-- 
2.25.1



* [PATCH 2/6] rcu/nocb: Prepare nocb_cb_wait() to start with a non-offloaded rdp
  2021-11-17 15:56 [PATCH 0/6] rcu/nocb: Last prep work before cpuset interface Frederic Weisbecker
  2021-11-17 15:56 ` [PATCH 1/6] rcu/nocb: Remove rdp from nocb list when de-offloaded Frederic Weisbecker
@ 2021-11-17 15:56 ` Frederic Weisbecker
  2021-11-17 15:56 ` [PATCH 3/6] rcu/nocb: Optimize kthreads and rdp initialization Frederic Weisbecker
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 18+ messages in thread
From: Frederic Weisbecker @ 2021-11-17 15:56 UTC (permalink / raw)
  To: Paul E . McKenney
  Cc: LKML, Frederic Weisbecker, Uladzislau Rezki, Neeraj Upadhyay,
	Boqun Feng, Josh Triplett, Joel Fernandes, rcu

In order to be able to toggle the offloaded state from cpusets, a nocb
kthread will need to be created for all possible CPUs whenever the
"rcu_nocbs=" parameter is passed.

Therefore the nocb_cb_wait() kthread must be prepared to start running
on a de-offloaded rdp. Simply moving the sleeping condition to the
beginning of the kthread callback is enough to prevent it from running
callbacks before the rdp ever becomes offloaded.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Neeraj Upadhyay <quic_neeraju@quicinc.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Uladzislau Rezki <urezki@gmail.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Joel Fernandes <joel@joelfernandes.org>
---
 kernel/rcu/tree_nocb.h | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index e728128be02a..0543d5e913bb 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -789,6 +789,18 @@ static void nocb_cb_wait(struct rcu_data *rdp)
 	bool can_sleep = true;
 	struct rcu_node *rnp = rdp->mynode;
 
+	do {
+		swait_event_interruptible_exclusive(rdp->nocb_cb_wq,
+						    nocb_cb_wait_cond(rdp));
+
+		// VVV Ensure CB invocation follows _sleep test.
+		if (smp_load_acquire(&rdp->nocb_cb_sleep)) { // ^^^
+			WARN_ON(signal_pending(current));
+			trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, TPS("WokeEmpty"));
+		}
+	} while (!nocb_cb_can_run(rdp));
+
+
 	local_irq_save(flags);
 	rcu_momentary_dyntick_idle();
 	local_irq_restore(flags);
@@ -841,17 +853,6 @@ static void nocb_cb_wait(struct rcu_data *rdp)
 
 	if (needwake_state)
 		swake_up_one(&rdp->nocb_state_wq);
-
-	do {
-		swait_event_interruptible_exclusive(rdp->nocb_cb_wq,
-						    nocb_cb_wait_cond(rdp));
-
-		// VVV Ensure CB invocation follows _sleep test.
-		if (smp_load_acquire(&rdp->nocb_cb_sleep)) { // ^^^
-			WARN_ON(signal_pending(current));
-			trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, TPS("WokeEmpty"));
-		}
-	} while (!nocb_cb_can_run(rdp));
 }
 
 /*
-- 
2.25.1



* [PATCH 3/6] rcu/nocb: Optimize kthreads and rdp initialization
  2021-11-17 15:56 [PATCH 0/6] rcu/nocb: Last prep work before cpuset interface Frederic Weisbecker
  2021-11-17 15:56 ` [PATCH 1/6] rcu/nocb: Remove rdp from nocb list when de-offloaded Frederic Weisbecker
  2021-11-17 15:56 ` [PATCH 2/6] rcu/nocb: Prepare nocb_cb_wait() to start with a non-offloaded rdp Frederic Weisbecker
@ 2021-11-17 15:56 ` Frederic Weisbecker
  2021-11-17 15:56 ` [PATCH 4/6] rcu/nocb: Create nocb kthreads on all CPUs as long as the "rcu_nocb=" is passed Frederic Weisbecker
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 18+ messages in thread
From: Frederic Weisbecker @ 2021-11-17 15:56 UTC (permalink / raw)
  To: Paul E . McKenney
  Cc: LKML, Frederic Weisbecker, Uladzislau Rezki, Neeraj Upadhyay,
	Boqun Feng, Josh Triplett, Joel Fernandes, rcu

Currently cpumask_available() is used to prevent unwanted NOCB
initialization. However, if "rcu_nocbs=" isn't passed but
CONFIG_CPUMASK_OFFSTACK=n, the initialization path is still taken,
running through all sorts of needless operations and iterations on an
empty cpumask.

Fix this by relying on a real initialization state instead. This also
optimizes kthread creation, sparing the iteration over all online CPUs
when nocb isn't initialized.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Neeraj Upadhyay <quic_neeraju@quicinc.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Uladzislau Rezki <urezki@gmail.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Joel Fernandes <joel@joelfernandes.org>
---
 kernel/rcu/tree_nocb.h | 24 +++++++++++++++++-------
 1 file changed, 17 insertions(+), 7 deletions(-)

diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index 0543d5e913bb..9fe4be10fde7 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -60,6 +60,9 @@ static inline bool rcu_current_is_nocb_kthread(struct rcu_data *rdp)
  * Parse the boot-time rcu_nocb_mask CPU list from the kernel parameters.
  * If the list is invalid, a warning is emitted and all CPUs are offloaded.
  */
+
+static bool rcu_nocb_is_setup;
+
 static int __init rcu_nocb_setup(char *str)
 {
 	alloc_bootmem_cpumask_var(&rcu_nocb_mask);
@@ -67,6 +70,7 @@ static int __init rcu_nocb_setup(char *str)
 		pr_warn("rcu_nocbs= bad CPU range, all CPUs set\n");
 		cpumask_setall(rcu_nocb_mask);
 	}
+	rcu_nocb_is_setup = true;
 	return 1;
 }
 __setup("rcu_nocbs=", rcu_nocb_setup);
@@ -1151,13 +1155,17 @@ void __init rcu_init_nohz(void)
 		need_rcu_nocb_mask = true;
 #endif /* #if defined(CONFIG_NO_HZ_FULL) */
 
-	if (!cpumask_available(rcu_nocb_mask) && need_rcu_nocb_mask) {
-		if (!zalloc_cpumask_var(&rcu_nocb_mask, GFP_KERNEL)) {
-			pr_info("rcu_nocb_mask allocation failed, callback offloading disabled.\n");
-			return;
+	if (need_rcu_nocb_mask) {
+		if (!cpumask_available(rcu_nocb_mask)) {
+			if (!zalloc_cpumask_var(&rcu_nocb_mask, GFP_KERNEL)) {
+				pr_info("rcu_nocb_mask allocation failed, callback offloading disabled.\n");
+				return;
+			}
 		}
+		rcu_nocb_is_setup = true;
 	}
-	if (!cpumask_available(rcu_nocb_mask))
+
+	if (!rcu_nocb_is_setup)
 		return;
 
 #if defined(CONFIG_NO_HZ_FULL)
@@ -1259,8 +1267,10 @@ static void __init rcu_spawn_nocb_kthreads(void)
 {
 	int cpu;
 
-	for_each_online_cpu(cpu)
-		rcu_spawn_cpu_nocb_kthread(cpu);
+	if (rcu_nocb_is_setup) {
+		for_each_online_cpu(cpu)
+			rcu_spawn_cpu_nocb_kthread(cpu);
+	}
 }
 
 /* How many CB CPU IDs per GP kthread?  Default of -1 for sqrt(nr_cpu_ids). */
-- 
2.25.1



* [PATCH 4/6] rcu/nocb: Create nocb kthreads on all CPUs as long as the "rcu_nocb=" is passed
  2021-11-17 15:56 [PATCH 0/6] rcu/nocb: Last prep work before cpuset interface Frederic Weisbecker
                   ` (2 preceding siblings ...)
  2021-11-17 15:56 ` [PATCH 3/6] rcu/nocb: Optimize kthreads and rdp initialization Frederic Weisbecker
@ 2021-11-17 15:56 ` Frederic Weisbecker
  2021-11-17 19:27   ` Paul E. McKenney
  2021-11-17 15:56 ` [PATCH 5/6] rcu/nocb: Allow empty "rcu_nocbs" kernel parameter Frederic Weisbecker
  2021-11-17 15:56 ` [PATCH 6/6] rcu/nocb: Merge rcu_spawn_cpu_nocb_kthread() and rcu_spawn_one_nocb_kthread() Frederic Weisbecker
  5 siblings, 1 reply; 18+ messages in thread
From: Frederic Weisbecker @ 2021-11-17 15:56 UTC (permalink / raw)
  To: Paul E . McKenney
  Cc: LKML, Frederic Weisbecker, Uladzislau Rezki, Neeraj Upadhyay,
	Boqun Feng, Josh Triplett, Joel Fernandes, rcu

In order to be able to (de-)offload any CPU using cpuset in the future,
create a NOCB kthread for all possible CPUs. For now this is done only
when the "rcu_nocbs=" kernel parameter is passed, to avoid unnecessary
overhead for most users.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Neeraj Upadhyay <quic_neeraju@quicinc.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Uladzislau Rezki <urezki@gmail.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Joel Fernandes <joel@joelfernandes.org>
---
 kernel/rcu/tree_nocb.h | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index 9fe4be10fde7..1871f15b8472 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -1221,11 +1221,8 @@ static void rcu_spawn_one_nocb_kthread(int cpu)
 	struct rcu_data *rdp_gp;
 	struct task_struct *t;
 
-	/*
-	 * If this isn't a no-CBs CPU or if it already has an rcuo kthread,
-	 * then nothing to do.
-	 */
-	if (!rcu_is_nocb_cpu(cpu) || rdp->nocb_cb_kthread)
+	/* If it already has an rcuo kthread, then nothing to do. */
+	if (rdp->nocb_cb_kthread)
 		return;
 
 	/* If we didn't spawn the GP kthread first, reorganize! */
@@ -1253,7 +1250,7 @@ static void rcu_spawn_one_nocb_kthread(int cpu)
  */
 static void rcu_spawn_cpu_nocb_kthread(int cpu)
 {
-	if (rcu_scheduler_fully_active)
+	if (rcu_scheduler_fully_active && rcu_nocb_is_setup)
 		rcu_spawn_one_nocb_kthread(cpu);
 }
 
@@ -1268,7 +1265,7 @@ static void __init rcu_spawn_nocb_kthreads(void)
 	int cpu;
 
 	if (rcu_nocb_is_setup) {
-		for_each_online_cpu(cpu)
+		for_each_possible_cpu(cpu)
 			rcu_spawn_cpu_nocb_kthread(cpu);
 	}
 }
@@ -1303,7 +1300,7 @@ static void __init rcu_organize_nocb_kthreads(void)
 	 * Should the corresponding CPU come online in the future, then
 	 * we will spawn the needed set of rcu_nocb_kthread() kthreads.
 	 */
-	for_each_cpu(cpu, rcu_nocb_mask) {
+	for_each_possible_cpu(cpu) {
 		rdp = per_cpu_ptr(&rcu_data, cpu);
 		if (rdp->cpu >= nl) {
 			/* New GP kthread, set up for CBs & next GP. */
@@ -1327,7 +1324,8 @@ static void __init rcu_organize_nocb_kthreads(void)
 				pr_cont(" %d", cpu);
 		}
 		rdp->nocb_gp_rdp = rdp_gp;
-		list_add_tail(&rdp->nocb_entry_rdp, &rdp_gp->nocb_head_rdp);
+		if (cpumask_test_cpu(cpu, rcu_nocb_mask))
+			list_add_tail(&rdp->nocb_entry_rdp, &rdp_gp->nocb_head_rdp);
 	}
 	if (gotnocbs && dump_tree)
 		pr_cont("%s\n", gotnocbscbs ? "" : " (self only)");
-- 
2.25.1



* [PATCH 5/6] rcu/nocb: Allow empty "rcu_nocbs" kernel parameter
  2021-11-17 15:56 [PATCH 0/6] rcu/nocb: Last prep work before cpuset interface Frederic Weisbecker
                   ` (3 preceding siblings ...)
  2021-11-17 15:56 ` [PATCH 4/6] rcu/nocb: Create nocb kthreads on all CPUs as long as the "rcu_nocb=" is passed Frederic Weisbecker
@ 2021-11-17 15:56 ` Frederic Weisbecker
  2021-11-17 19:46   ` Paul E. McKenney
  2021-11-17 15:56 ` [PATCH 6/6] rcu/nocb: Merge rcu_spawn_cpu_nocb_kthread() and rcu_spawn_one_nocb_kthread() Frederic Weisbecker
  5 siblings, 1 reply; 18+ messages in thread
From: Frederic Weisbecker @ 2021-11-17 15:56 UTC (permalink / raw)
  To: Paul E . McKenney
  Cc: LKML, Frederic Weisbecker, Uladzislau Rezki, Neeraj Upadhyay,
	Boqun Feng, Josh Triplett, Joel Fernandes, rcu

If a user wants to boot without any CPU initially offloaded but with the
ability to offload CPUs later using cpusets, provide a way to pass an
empty "rcu_nocbs" kernel parameter, which enforces the creation of
dormant nocb kthreads.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Neeraj Upadhyay <quic_neeraju@quicinc.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Uladzislau Rezki <urezki@gmail.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Joel Fernandes <joel@joelfernandes.org>
---
 kernel/rcu/tree_nocb.h | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index 1871f15b8472..3845f1885ffc 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -66,14 +66,16 @@ static bool rcu_nocb_is_setup;
 static int __init rcu_nocb_setup(char *str)
 {
 	alloc_bootmem_cpumask_var(&rcu_nocb_mask);
-	if (cpulist_parse(str, rcu_nocb_mask)) {
-		pr_warn("rcu_nocbs= bad CPU range, all CPUs set\n");
-		cpumask_setall(rcu_nocb_mask);
+	if (*str == '=') {
+		if (cpulist_parse(++str, rcu_nocb_mask)) {
+			pr_warn("rcu_nocbs= bad CPU range, all CPUs set\n");
+			cpumask_setall(rcu_nocb_mask);
+		}
 	}
 	rcu_nocb_is_setup = true;
 	return 1;
 }
-__setup("rcu_nocbs=", rcu_nocb_setup);
+__setup("rcu_nocbs", rcu_nocb_setup);
 
 static int __init parse_rcu_nocb_poll(char *arg)
 {
-- 
2.25.1



* [PATCH 6/6] rcu/nocb: Merge rcu_spawn_cpu_nocb_kthread() and rcu_spawn_one_nocb_kthread()
  2021-11-17 15:56 [PATCH 0/6] rcu/nocb: Last prep work before cpuset interface Frederic Weisbecker
                   ` (4 preceding siblings ...)
  2021-11-17 15:56 ` [PATCH 5/6] rcu/nocb: Allow empty "rcu_nocbs" kernel parameter Frederic Weisbecker
@ 2021-11-17 15:56 ` Frederic Weisbecker
  5 siblings, 0 replies; 18+ messages in thread
From: Frederic Weisbecker @ 2021-11-17 15:56 UTC (permalink / raw)
  To: Paul E . McKenney
  Cc: LKML, Frederic Weisbecker, Uladzislau Rezki, Neeraj Upadhyay,
	Boqun Feng, Josh Triplett, Joel Fernandes, rcu

rcu_spawn_one_nocb_kthread() is only called by
rcu_spawn_cpu_nocb_kthread(). There is no need for two separate
functions, so merge them.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Neeraj Upadhyay <quic_neeraju@quicinc.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Uladzislau Rezki <urezki@gmail.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Joel Fernandes <joel@joelfernandes.org>
---
 kernel/rcu/tree_nocb.h | 15 ++++-----------
 1 file changed, 4 insertions(+), 11 deletions(-)

diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index 3845f1885ffc..248b90333b08 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -1217,12 +1217,15 @@ static void __init rcu_boot_init_nocb_percpu_data(struct rcu_data *rdp)
  * rcuo CB kthread, spawn it.  Additionally, if the rcuo GP kthread
  * for this CPU's group has not yet been created, spawn it as well.
  */
-static void rcu_spawn_one_nocb_kthread(int cpu)
+static void rcu_spawn_cpu_nocb_kthread(int cpu)
 {
 	struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
 	struct rcu_data *rdp_gp;
 	struct task_struct *t;
 
+	if (!rcu_scheduler_fully_active || !rcu_nocb_is_setup)
+		return;
+
 	/* If it already has an rcuo kthread, then nothing to do. */
 	if (rdp->nocb_cb_kthread)
 		return;
@@ -1246,16 +1249,6 @@ static void rcu_spawn_one_nocb_kthread(int cpu)
 	WRITE_ONCE(rdp->nocb_gp_kthread, rdp_gp->nocb_gp_kthread);
 }
 
-/*
- * If the specified CPU is a no-CBs CPU that does not already have its
- * rcuo kthread, spawn it.
- */
-static void rcu_spawn_cpu_nocb_kthread(int cpu)
-{
-	if (rcu_scheduler_fully_active && rcu_nocb_is_setup)
-		rcu_spawn_one_nocb_kthread(cpu);
-}
-
 /*
  * Once the scheduler is running, spawn rcuo kthreads for all online
  * no-CBs CPUs.  This assumes that the early_initcall()s happen before
-- 
2.25.1



* Re: [PATCH 1/6] rcu/nocb: Remove rdp from nocb list when de-offloaded
  2021-11-17 15:56 ` [PATCH 1/6] rcu/nocb: Remove rdp from nocb list when de-offloaded Frederic Weisbecker
@ 2021-11-17 18:53   ` Paul E. McKenney
  2021-11-17 21:47     ` Frederic Weisbecker
  0 siblings, 1 reply; 18+ messages in thread
From: Paul E. McKenney @ 2021-11-17 18:53 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Uladzislau Rezki, Neeraj Upadhyay, Boqun Feng,
	Josh Triplett, Joel Fernandes, rcu

On Wed, Nov 17, 2021 at 04:56:32PM +0100, Frederic Weisbecker wrote:
> nocb_gp_wait() iterates over all CPUs within the rcuog kthread's group,
> even those that have been de-offloaded. This is suboptimal if only a few
> CPUs within the group are offloaded, and it will become even more of a
> problem once a nocb kthread is created for every possible CPU in the future.
> 
> Therefore use a standard doubly linked list to link all the offloaded
> rdps, and safely add/del their nodes as CPUs are (de-)offloaded.
> 
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> Cc: Neeraj Upadhyay <quic_neeraju@quicinc.com>
> Cc: Boqun Feng <boqun.feng@gmail.com>
> Cc: Uladzislau Rezki <urezki@gmail.com>
> Cc: Josh Triplett <josh@joshtriplett.org>
> Cc: Joel Fernandes <joel@joelfernandes.org>
> ---
>  kernel/rcu/tree.h      |  7 +++++--
>  kernel/rcu/tree_nocb.h | 29 ++++++++++++++++++++++-------
>  2 files changed, 27 insertions(+), 9 deletions(-)
> 
> diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
> index deeaf2fee714..486fc901bd08 100644
> --- a/kernel/rcu/tree.h
> +++ b/kernel/rcu/tree.h
> @@ -221,8 +221,11 @@ struct rcu_data {
>  	struct swait_queue_head nocb_gp_wq; /* For nocb kthreads to sleep on. */
>  	bool nocb_cb_sleep;		/* Is the nocb CB thread asleep? */
>  	struct task_struct *nocb_cb_kthread;
> -	struct rcu_data *nocb_next_cb_rdp;
> -					/* Next rcu_data in wakeup chain. */
> +	struct list_head nocb_head_rdp; /*
> +					 * Head of rcu_data list in wakeup chain,
> +					 * if rdp_gp.
> +					 */
> +	struct list_head nocb_entry_rdp; /* rcu_data node in wakeup chain. */
>  
>  	/* The following fields are used by CB kthread, hence new cacheline. */
>  	struct rcu_data *nocb_gp_rdp ____cacheline_internodealigned_in_smp;
> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> index 2461fe8d0c23..e728128be02a 100644
> --- a/kernel/rcu/tree_nocb.h
> +++ b/kernel/rcu/tree_nocb.h
> @@ -625,7 +625,7 @@ static void nocb_gp_wait(struct rcu_data *my_rdp)
>  	 * and the global grace-period kthread are awakened if needed.
>  	 */
>  	WARN_ON_ONCE(my_rdp->nocb_gp_rdp != my_rdp);
> -	for (rdp = my_rdp; rdp; rdp = rdp->nocb_next_cb_rdp) {
> +	list_for_each_entry_rcu(rdp, &my_rdp->nocb_head_rdp, nocb_entry_rdp, 1) {
>  		bool needwake_state = false;
>  
>  		if (!nocb_gp_enabled_cb(rdp))
> @@ -1003,6 +1003,8 @@ static long rcu_nocb_rdp_deoffload(void *arg)
>  	swait_event_exclusive(rdp->nocb_state_wq,
>  			      !rcu_segcblist_test_flags(cblist, SEGCBLIST_KTHREAD_CB |
>  							SEGCBLIST_KTHREAD_GP));
> +	/* Don't bother iterate this one anymore on nocb_gp_wait() */
> +	list_del_rcu(&rdp->nocb_entry_rdp);
>  	/*
>  	 * Lock one last time to acquire latest callback updates from kthreads
>  	 * so we can later handle callbacks locally without locking.
> @@ -1066,6 +1068,15 @@ static long rcu_nocb_rdp_offload(void *arg)
>  		return -EINVAL;
>  
>  	pr_info("Offloading %d\n", rdp->cpu);
> +
> +	/*
> +	 * Iterate this CPU on nocb_gp_wait(). We do it before locking nocb_gp_lock,
> +	 * resetting nocb_gp_sleep and waking up the related "rcuog". Since nocb_gp_wait()
> +	 * in turn locks nocb_gp_lock before setting nocb_gp_sleep again, we are guaranteed
> +	 * to iterate this new rdp before "rcuog" goes to sleep again.

Just to make sure that I understand...

The ->nocb_entry_rdp list is RCU-protected, with an odd flavor of RCU.
The list_add_tail_rcu() handles list insertion.  For list deletion,
on the one hand, the rcu_data structures are never freed, so their
lifetime never ends.  But one could be removed during a de-offloading
operation, then re-added by a later offloading operation.  It would be
bad if a reader came along for that ride because that reader would end
up skipping all the rcu_data structures that were in the list after the
initial position of the one that was removed and added back.

The trick seems to be that the de-offloading process cannot complete
until the relevant rcuog kthread has acknowledged the de-offloading,
which it cannot do until it has completed the list_for_each_entry_rcu()
loop.  And the rcuog kthread is the only thing that traverses the list,
except for the show_rcu_nocb_state() function, more on which later.

Therefore, it is not possible for the rcuog kthread to come along for
that ride.

On to show_rcu_nocb_state()...

This does not actually traverse the list, but instead looks at the ->cpu
field of the next structure.  Because the ->next pointer is never NULLed,
the worst that can happen is that a confusing ->cpu field is printed,
for example, the one that was in effect prior to the de-offloading
operation.  But that number would have been printed anyway had the
show_rcu_nocb_state() function not been delayed across the de-offloading
and offloading.

So no harm done.

Did I get it right?  If so, the comment might need help.  If not,
what am I missing?

							Thanx, Paul

> +	 */
> +	list_add_tail_rcu(&rdp->nocb_entry_rdp, &rdp->nocb_gp_rdp->nocb_head_rdp);
> +
>  	/*
>  	 * Can't use rcu_nocb_lock_irqsave() before SEGCBLIST_LOCKING
>  	 * is set.
> @@ -1268,7 +1279,6 @@ static void __init rcu_organize_nocb_kthreads(void)
>  	int nl = 0;  /* Next GP kthread. */
>  	struct rcu_data *rdp;
>  	struct rcu_data *rdp_gp = NULL;  /* Suppress misguided gcc warn. */
> -	struct rcu_data *rdp_prev = NULL;
>  
>  	if (!cpumask_available(rcu_nocb_mask))
>  		return;
> @@ -1288,8 +1298,8 @@ static void __init rcu_organize_nocb_kthreads(void)
>  			/* New GP kthread, set up for CBs & next GP. */
>  			gotnocbs = true;
>  			nl = DIV_ROUND_UP(rdp->cpu + 1, ls) * ls;
> -			rdp->nocb_gp_rdp = rdp;
>  			rdp_gp = rdp;
> +			INIT_LIST_HEAD(&rdp->nocb_head_rdp);
>  			if (dump_tree) {
>  				if (!firsttime)
>  					pr_cont("%s\n", gotnocbscbs
> @@ -1302,12 +1312,11 @@ static void __init rcu_organize_nocb_kthreads(void)
>  		} else {
>  			/* Another CB kthread, link to previous GP kthread. */
>  			gotnocbscbs = true;
> -			rdp->nocb_gp_rdp = rdp_gp;
> -			rdp_prev->nocb_next_cb_rdp = rdp;
>  			if (dump_tree)
>  				pr_cont(" %d", cpu);
>  		}
> -		rdp_prev = rdp;
> +		rdp->nocb_gp_rdp = rdp_gp;
> +		list_add_tail(&rdp->nocb_entry_rdp, &rdp_gp->nocb_head_rdp);
>  	}
>  	if (gotnocbs && dump_tree)
>  		pr_cont("%s\n", gotnocbscbs ? "" : " (self only)");
> @@ -1369,6 +1378,7 @@ static void show_rcu_nocb_state(struct rcu_data *rdp)
>  {
>  	char bufw[20];
>  	char bufr[20];
> +	struct rcu_data *nocb_next_rdp = NULL;
>  	struct rcu_segcblist *rsclp = &rdp->cblist;
>  	bool waslocked;
>  	bool wassleep;
> @@ -1376,11 +1386,16 @@ static void show_rcu_nocb_state(struct rcu_data *rdp)
>  	if (rdp->nocb_gp_rdp == rdp)
>  		show_rcu_nocb_gp_state(rdp);
>  
> +	nocb_next_rdp = list_next_or_null_rcu(&rdp->nocb_gp_rdp->nocb_head_rdp,
> +					      &rdp->nocb_entry_rdp,
> +					      typeof(*rdp),
> +					      nocb_entry_rdp);
> +
>  	sprintf(bufw, "%ld", rsclp->gp_seq[RCU_WAIT_TAIL]);
>  	sprintf(bufr, "%ld", rsclp->gp_seq[RCU_NEXT_READY_TAIL]);
>  	pr_info("   CB %d^%d->%d %c%c%c%c%c%c F%ld L%ld C%d %c%c%s%c%s%c%c q%ld %c CPU %d%s\n",
>  		rdp->cpu, rdp->nocb_gp_rdp->cpu,
> -		rdp->nocb_next_cb_rdp ? rdp->nocb_next_cb_rdp->cpu : -1,
> +		nocb_next_rdp ? nocb_next_rdp->cpu : -1,
>  		"kK"[!!rdp->nocb_cb_kthread],
>  		"bB"[raw_spin_is_locked(&rdp->nocb_bypass_lock)],
>  		"cC"[!!atomic_read(&rdp->nocb_lock_contended)],
> -- 
> 2.25.1
> 


* Re: [PATCH 4/6] rcu/nocb: Create nocb kthreads on all CPUs as long as the "rcu_nocb=" is passed
  2021-11-17 15:56 ` [PATCH 4/6] rcu/nocb: Create nocb kthreads on all CPUs as long as the "rcu_nocb=" is passed Frederic Weisbecker
@ 2021-11-17 19:27   ` Paul E. McKenney
  2021-11-17 21:57     ` Frederic Weisbecker
  0 siblings, 1 reply; 18+ messages in thread
From: Paul E. McKenney @ 2021-11-17 19:27 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Uladzislau Rezki, Neeraj Upadhyay, Boqun Feng,
	Josh Triplett, Joel Fernandes, rcu

On Wed, Nov 17, 2021 at 04:56:35PM +0100, Frederic Weisbecker wrote:
> In order to be able to (de-)offload any CPU using cpuset in the future,
> create a NOCB kthread for all possible CPUs. For now this is done only
> when the "rcu_nocbs=" kernel parameter is passed, to avoid unnecessary
> overhead for most users.

The "nohz_full=" kernel parameter would also cause these kthreads to
be created, correct?  (Yeah, a nit, but...)

And some fallout of my forgetfulness below.  :-/

							Thanx, Paul

> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> Cc: Neeraj Upadhyay <quic_neeraju@quicinc.com>
> Cc: Boqun Feng <boqun.feng@gmail.com>
> Cc: Uladzislau Rezki <urezki@gmail.com>
> Cc: Josh Triplett <josh@joshtriplett.org>
> Cc: Joel Fernandes <joel@joelfernandes.org>
> ---
>  kernel/rcu/tree_nocb.h | 16 +++++++---------
>  1 file changed, 7 insertions(+), 9 deletions(-)
> 
> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> index 9fe4be10fde7..1871f15b8472 100644
> --- a/kernel/rcu/tree_nocb.h
> +++ b/kernel/rcu/tree_nocb.h
> @@ -1221,11 +1221,8 @@ static void rcu_spawn_one_nocb_kthread(int cpu)
>  	struct rcu_data *rdp_gp;
>  	struct task_struct *t;
>  
> -	/*
> -	 * If this isn't a no-CBs CPU or if it already has an rcuo kthread,
> -	 * then nothing to do.
> -	 */
> -	if (!rcu_is_nocb_cpu(cpu) || rdp->nocb_cb_kthread)
> +	/* If it already has an rcuo kthread, then nothing to do. */
> +	if (rdp->nocb_cb_kthread)
>  		return;
>  
>  	/* If we didn't spawn the GP kthread first, reorganize! */
> @@ -1253,7 +1250,7 @@ static void rcu_spawn_one_nocb_kthread(int cpu)
>   */
>  static void rcu_spawn_cpu_nocb_kthread(int cpu)
>  {
> -	if (rcu_scheduler_fully_active)
> +	if (rcu_scheduler_fully_active && rcu_nocb_is_setup)
>  		rcu_spawn_one_nocb_kthread(cpu);
>  }
>  
> @@ -1268,7 +1265,7 @@ static void __init rcu_spawn_nocb_kthreads(void)
>  	int cpu;
>  
>  	if (rcu_nocb_is_setup) {
> -		for_each_online_cpu(cpu)
> +		for_each_possible_cpu(cpu)

Gah...  I had forgotten.  :-/

Some firmware lies about the OS instance's age.  Other firmware lies
about the number of CPUs, sometimes claiming large multiples of the
actual number of CPUs.

So this needs to stay "for_each_online_cpu".  Otherwise, Paul Gortmaker
will once again be afflicted with hundreds of unnecessary rcuo kthreads.

The later calls to rcu_spawn_cpu_nocb_kthread() from rcutree_prepare_cpu()
will take care of any CPUs that first come online later.

Apologies for my earlier forgetfulness!!!

>  			rcu_spawn_cpu_nocb_kthread(cpu);
>  	}
>  }
> @@ -1303,7 +1300,7 @@ static void __init rcu_organize_nocb_kthreads(void)
>  	 * Should the corresponding CPU come online in the future, then
>  	 * we will spawn the needed set of rcu_nocb_kthread() kthreads.
>  	 */
> -	for_each_cpu(cpu, rcu_nocb_mask) {
> +	for_each_possible_cpu(cpu) {

This needs to change, but to for_each_online_cpu() instead of
for_each_possible_cpu().  That handles the case where the boot CPU is
not initially offloaded, but where the sysadm later needs to offload it.

>  		rdp = per_cpu_ptr(&rcu_data, cpu);
>  		if (rdp->cpu >= nl) {
>  			/* New GP kthread, set up for CBs & next GP. */
> @@ -1327,7 +1324,8 @@ static void __init rcu_organize_nocb_kthreads(void)
>  				pr_cont(" %d", cpu);
>  		}
>  		rdp->nocb_gp_rdp = rdp_gp;
> -		list_add_tail(&rdp->nocb_entry_rdp, &rdp_gp->nocb_head_rdp);
> +		if (cpumask_test_cpu(cpu, rcu_nocb_mask))
> +			list_add_tail(&rdp->nocb_entry_rdp, &rdp_gp->nocb_head_rdp);
>  	}
>  	if (gotnocbs && dump_tree)
>  		pr_cont("%s\n", gotnocbscbs ? "" : " (self only)");
> -- 
> 2.25.1
> 


* Re: [PATCH 5/6] rcu/nocb: Allow empty "rcu_nocbs" kernel parameter
  2021-11-17 15:56 ` [PATCH 5/6] rcu/nocb: Allow empty "rcu_nocbs" kernel parameter Frederic Weisbecker
@ 2021-11-17 19:46   ` Paul E. McKenney
  2021-11-17 22:02     ` Frederic Weisbecker
  0 siblings, 1 reply; 18+ messages in thread
From: Paul E. McKenney @ 2021-11-17 19:46 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Uladzislau Rezki, Neeraj Upadhyay, Boqun Feng,
	Josh Triplett, Joel Fernandes, rcu

On Wed, Nov 17, 2021 at 04:56:36PM +0100, Frederic Weisbecker wrote:
> If a user wants to boot without any CPU in offloaded mode initially but
> with the possibility to offload them later using cpusets, provide a way
> to simply pass an empty "rcu_nocbs" kernel parameter which will enforce
> the creation of dormant nocb kthreads.
> 
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> Cc: Neeraj Upadhyay <quic_neeraju@quicinc.com>
> Cc: Boqun Feng <boqun.feng@gmail.com>
> Cc: Uladzislau Rezki <urezki@gmail.com>
> Cc: Josh Triplett <josh@joshtriplett.org>
> Cc: Joel Fernandes <joel@joelfernandes.org>
> ---
>  kernel/rcu/tree_nocb.h | 10 ++++++----

Could you please also update kernel-parameters.txt?

>  1 file changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> index 1871f15b8472..3845f1885ffc 100644
> --- a/kernel/rcu/tree_nocb.h
> +++ b/kernel/rcu/tree_nocb.h
> @@ -66,14 +66,16 @@ static bool rcu_nocb_is_setup;
>  static int __init rcu_nocb_setup(char *str)
>  {
>  	alloc_bootmem_cpumask_var(&rcu_nocb_mask);
> -	if (cpulist_parse(str, rcu_nocb_mask)) {
> -		pr_warn("rcu_nocbs= bad CPU range, all CPUs set\n");
> -		cpumask_setall(rcu_nocb_mask);
> +	if (*str == '=') {
> +		if (cpulist_parse(++str, rcu_nocb_mask)) {
> +			pr_warn("rcu_nocbs= bad CPU range, all CPUs set\n");
> +			cpumask_setall(rcu_nocb_mask);
> +		}

Wouldn't "*str == '='" indicate that the parameter passed in was of
the form "rcu_nocbs==8"?

Or am I misreading the next_arg() function in lib/cmdline.c?

If I am reading it correctly, doesn't the test instead want to be
something of the form "if (str && *str)"?

							Thanx, Paul

>  	}
>  	rcu_nocb_is_setup = true;
>  	return 1;
>  }
> -__setup("rcu_nocbs=", rcu_nocb_setup);
> +__setup("rcu_nocbs", rcu_nocb_setup);
>  
>  static int __init parse_rcu_nocb_poll(char *arg)
>  {
> -- 
> 2.25.1
> 


* Re: [PATCH 1/6] rcu/nocb: Remove rdp from nocb list when de-offloaded
  2021-11-17 18:53   ` Paul E. McKenney
@ 2021-11-17 21:47     ` Frederic Weisbecker
  2021-11-17 22:25       ` Paul E. McKenney
  0 siblings, 1 reply; 18+ messages in thread
From: Frederic Weisbecker @ 2021-11-17 21:47 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: LKML, Uladzislau Rezki, Neeraj Upadhyay, Boqun Feng,
	Josh Triplett, Joel Fernandes, rcu

On Wed, Nov 17, 2021 at 10:53:41AM -0800, Paul E. McKenney wrote:
> On Wed, Nov 17, 2021 at 04:56:32PM +0100, Frederic Weisbecker wrote:
> >  	pr_info("Offloading %d\n", rdp->cpu);
> > +
> > +	/*
> > +	 * Iterate this CPU on nocb_gp_wait(). We do it before locking nocb_gp_lock,
> > +	 * resetting nocb_gp_sleep and waking up the related "rcuog". Since nocb_gp_wait()
> > +	 * in turn locks nocb_gp_lock before setting nocb_gp_sleep again, we are guaranteed
> > +	 * to iterate this new rdp before "rcuog" goes to sleep again.
> 
> Just to make sure that I understand...
> 
> The ->nocb_entry_rdp list is RCU-protected, with an odd flavor of RCU.
> The list_add_tail_rcu() handles list insertion.  For list deletion,
> on the one hand, the rcu_data structures are never freed, so their
> lifetime never ends.  But one could be removed during a de-offloading
> operation, then re-added by a later offloading operation.  It would be
> bad if a reader came along for that ride because that reader would end
> up skipping all the rcu_data structures that were in the list after the
> initial position of the one that was removed and added back.

How did I miss that :-(

> 
> The trick seems to be that the de-offloading process cannot complete
> until the relevant rcuog kthread has acknowledged the de-offloading,
> which it cannot do until it has completed the list_for_each_entry_rcu()
> loop.  And the rcuog kthread is the only thing that traverses the list,
> except for the show_rcu_nocb_state() function, more on which later.
> 
> Therefore, it is not possible for the rcuog kthread to come along for
> that ride.

Actually it's worse than that: the node is removed _after_ the kthread
acknowledges deoffloading and added _before_ the kthread acknowledges
offloading. So a single rcuog loop can well run through a deletion/re-add
pair altogether.

Now, since we force another loop with the new add guaranteed to be visible,
that extra pass should handle any rdp's that the racing pass skipped.

Let's hope I'm not missing something else... And yes, that definitely needs
a fat comment.

> 
> On to show_rcu_nocb_state()...
> 
> This does not actually traverse the list, but instead looks at the ->cpu
> field of the next structure.  Because the ->next pointer is never NULLed,
> the worst that can happen is that a confusing ->cpu field is printed,
> for example, the one that was in effect prior to the de-offloading
> operation.  But that number would have been printed anyway had the
> show_rcu_nocb_state() function not been delayed across the de-offloading
> and offloading.
> 
> So no harm done.

Exactly, that part is racy by nature.

> 
> Did I get it right?  If so, the comment might need help.  If not,
> what am I missing?

You got it right!

Thanks!


* Re: [PATCH 4/6] rcu/nocb: Create nocb kthreads on all CPUs as long as the "rcu_nocb=" is passed
  2021-11-17 19:27   ` Paul E. McKenney
@ 2021-11-17 21:57     ` Frederic Weisbecker
  2021-11-17 22:28       ` Paul E. McKenney
  0 siblings, 1 reply; 18+ messages in thread
From: Frederic Weisbecker @ 2021-11-17 21:57 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: LKML, Uladzislau Rezki, Neeraj Upadhyay, Boqun Feng,
	Josh Triplett, Joel Fernandes, rcu

On Wed, Nov 17, 2021 at 11:27:10AM -0800, Paul E. McKenney wrote:
> On Wed, Nov 17, 2021 at 04:56:35PM +0100, Frederic Weisbecker wrote:
> > In order to be able to (de-)offload any CPU using cpuset in the future,
> > create a NOCB kthread for all possible CPUs. For now this is done only
> > as long as the "rcu_nocb=" kernel parameter is passed to avoid
> > the unnecessary overhead for most users.
> 
> The "nohz_full=" kernel parameter would also cause these kthreads to
> be created, correct?  (Yeah, a nit, but...)
> 
> And some fallout of my forgetfulness below.  :-/

Ah right, that too!

> >  
> > @@ -1268,7 +1265,7 @@ static void __init rcu_spawn_nocb_kthreads(void)
> >  	int cpu;
> >  
> >  	if (rcu_nocb_is_setup) {
> > -		for_each_online_cpu(cpu)
> > +		for_each_possible_cpu(cpu)
> 
> Gah...  I had forgotten.  :-/
> 
> Some firmware lies about the OS instance's age.  Other firmware lies
> about the number of CPUs, sometimes claiming large multiples of the
> actual number of CPUs.
> 
> So this needs to stay "for_each_online_cpu".  Otherwise, Paul Gortmaker
> will once again be afflicted with hundreds of unnecessary rcuo kthreads.
> 
> > The later calls to rcu_spawn_cpu_nocb_kthread() from rcutree_prepare_cpu()
> > will take care of any CPUs that first come online later.

Ok that makes sense.

> 
> >  			rcu_spawn_cpu_nocb_kthread(cpu);
> >  	}
> >  }
> > @@ -1303,7 +1300,7 @@ static void __init rcu_organize_nocb_kthreads(void)
> >  	 * Should the corresponding CPU come online in the future, then
> >  	 * we will spawn the needed set of rcu_nocb_kthread() kthreads.
> >  	 */
> > -	for_each_cpu(cpu, rcu_nocb_mask) {
> > +	for_each_possible_cpu(cpu) {
> 
> This needs to change, but to for_each_online_cpu() instead of
> for_each_possible_cpu().  That handles the case where the boot CPU is
> not initially offloaded, but where the sysadm later needs to offload it.

I'm less sure about that one. This is called early from rcu_init_nohz(). Only
the boot CPU should be online at that point. We really need to integrate all
possible CPUs within the nocb groups.

Thanks.

> 
> >  		rdp = per_cpu_ptr(&rcu_data, cpu);
> >  		if (rdp->cpu >= nl) {
> >  			/* New GP kthread, set up for CBs & next GP. */
> > @@ -1327,7 +1324,8 @@ static void __init rcu_organize_nocb_kthreads(void)
> >  				pr_cont(" %d", cpu);
> >  		}
> >  		rdp->nocb_gp_rdp = rdp_gp;
> > -		list_add_tail(&rdp->nocb_entry_rdp, &rdp_gp->nocb_head_rdp);
> > +		if (cpumask_test_cpu(cpu, rcu_nocb_mask))
> > +			list_add_tail(&rdp->nocb_entry_rdp, &rdp_gp->nocb_head_rdp);
> >  	}
> >  	if (gotnocbs && dump_tree)
> >  		pr_cont("%s\n", gotnocbscbs ? "" : " (self only)");
> > -- 
> > 2.25.1
> > 


* Re: [PATCH 5/6] rcu/nocb: Allow empty "rcu_nocbs" kernel parameter
  2021-11-17 19:46   ` Paul E. McKenney
@ 2021-11-17 22:02     ` Frederic Weisbecker
  2021-11-17 22:33       ` Paul E. McKenney
  0 siblings, 1 reply; 18+ messages in thread
From: Frederic Weisbecker @ 2021-11-17 22:02 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: LKML, Uladzislau Rezki, Neeraj Upadhyay, Boqun Feng,
	Josh Triplett, Joel Fernandes, rcu

On Wed, Nov 17, 2021 at 11:46:05AM -0800, Paul E. McKenney wrote:
> On Wed, Nov 17, 2021 at 04:56:36PM +0100, Frederic Weisbecker wrote:
> > If a user wants to boot without any CPU in offloaded mode initially but
> > with the possibility to offload them later using cpusets, provide a way
> > to simply pass an empty "rcu_nocbs" kernel parameter which will enforce
> > the creation of dormant nocb kthreads.
> > 
> > Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> > Cc: Neeraj Upadhyay <quic_neeraju@quicinc.com>
> > Cc: Boqun Feng <boqun.feng@gmail.com>
> > Cc: Uladzislau Rezki <urezki@gmail.com>
> > Cc: Josh Triplett <josh@joshtriplett.org>
> > Cc: Joel Fernandes <joel@joelfernandes.org>
> > ---
> >  kernel/rcu/tree_nocb.h | 10 ++++++----
> 
> Could you please also update kernel-parameters.txt?

Ah right!

> 
> >  1 file changed, 6 insertions(+), 4 deletions(-)
> > 
> > diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> > index 1871f15b8472..3845f1885ffc 100644
> > --- a/kernel/rcu/tree_nocb.h
> > +++ b/kernel/rcu/tree_nocb.h
> > @@ -66,14 +66,16 @@ static bool rcu_nocb_is_setup;
> >  static int __init rcu_nocb_setup(char *str)
> >  {
> >  	alloc_bootmem_cpumask_var(&rcu_nocb_mask);
> > -	if (cpulist_parse(str, rcu_nocb_mask)) {
> > -		pr_warn("rcu_nocbs= bad CPU range, all CPUs set\n");
> > -		cpumask_setall(rcu_nocb_mask);
> > +	if (*str == '=') {
> > +		if (cpulist_parse(++str, rcu_nocb_mask)) {
> > +			pr_warn("rcu_nocbs= bad CPU range, all CPUs set\n");
> > +			cpumask_setall(rcu_nocb_mask);
> > +		}
> 
> Wouldn't "*str == '='" indicate that the parameter passed in was of
> the form "rcu_nocbs==8"?
> 
> Or am I misreading the next_arg() function in lib/cmdline.c?
> 
> If I am reading it correctly, doesn't the test instead want to be
> something of the form "if (str && *str)"?
> 
> 							Thanx, Paul
> 
> >  	}
> >  	rcu_nocb_is_setup = true;
> >  	return 1;
> >  }
> > -__setup("rcu_nocbs=", rcu_nocb_setup);
> > +__setup("rcu_nocbs", rcu_nocb_setup);

Don't miss that line; it should probably answer your question above, unless
I'm missing something on my end (which is not unlikely...)

> >  
> >  static int __init parse_rcu_nocb_poll(char *arg)
> >  {
> > -- 
> > 2.25.1
> > 


* Re: [PATCH 1/6] rcu/nocb: Remove rdp from nocb list when de-offloaded
  2021-11-17 21:47     ` Frederic Weisbecker
@ 2021-11-17 22:25       ` Paul E. McKenney
  0 siblings, 0 replies; 18+ messages in thread
From: Paul E. McKenney @ 2021-11-17 22:25 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Uladzislau Rezki, Neeraj Upadhyay, Boqun Feng,
	Josh Triplett, Joel Fernandes, rcu

On Wed, Nov 17, 2021 at 10:47:50PM +0100, Frederic Weisbecker wrote:
> On Wed, Nov 17, 2021 at 10:53:41AM -0800, Paul E. McKenney wrote:
> > On Wed, Nov 17, 2021 at 04:56:32PM +0100, Frederic Weisbecker wrote:
> > >  	pr_info("Offloading %d\n", rdp->cpu);
> > > +
> > > +	/*
> > > +	 * Iterate this CPU on nocb_gp_wait(). We do it before locking nocb_gp_lock,
> > > +	 * resetting nocb_gp_sleep and waking up the related "rcuog". Since nocb_gp_wait()
> > > +	 * in turn locks nocb_gp_lock before setting nocb_gp_sleep again, we are guaranteed
> > > +	 * to iterate this new rdp before "rcuog" goes to sleep again.
> > 
> > Just to make sure that I understand...
> > 
> > The ->nocb_entry_rdp list is RCU-protected, with an odd flavor of RCU.
> > The list_add_tail_rcu() handles list insertion.  For list deletion,
> > on the one hand, the rcu_data structures are never freed, so their
> > lifetime never ends.  But one could be removed during a de-offloading
> > operation, then re-added by a later offloading operation.  It would be
> > bad if a reader came along for that ride because that reader would end
> > up skipping all the rcu_data structures that were in the list after the
> > initial position of the one that was removed and added back.
> 
> How did I miss that :-(
> 
> > 
> > The trick seems to be that the de-offloading process cannot complete
> > until the relevant rcuog kthread has acknowledged the de-offloading,
> > which it cannot do until it has completed the list_for_each_entry_rcu()
> > loop.  And the rcuog kthread is the only thing that traverses the list,
> > except for the show_rcu_nocb_state() function, more on which later.
> > 
> > Therefore, it is not possible for the rcuog kthread to come along for
> > that ride.
> 
> Actually it's worse than that: the node is removed _after_ the kthread
> acknowledges deoffloading and added _before_ the kthread acknowledges
> offloading. So a single rcuog loop can well run through a deletion/re-add
> pair altogether.
> 
> Now, since we force another loop with the new add guaranteed to be visible,
> that extra pass should handle any rdp's that the racing pass skipped.
> 
> Let's hope I'm not missing something else... And yes, that definitely needs
> a fat comment.
> 
> > 
> > On to show_rcu_nocb_state()...
> > 
> > This does not actually traverse the list, but instead looks at the ->cpu
> > field of the next structure.  Because the ->next pointer is never NULLed,
> > the worst that can happen is that a confusing ->cpu field is printed,
> > for example, the one that was in effect prior to the de-offloading
> > operation.  But that number would have been printed anyway had the
> > show_rcu_nocb_state() function not been delayed across the de-offloading
> > and offloading.
> > 
> > So no harm done.
> 
> Exactly, that part is racy by nature.
> 
> > 
> > Did I get it right?  If so, the comment might need help.  If not,
> > what am I missing?
> 
> You got it right!

Next question...  What things are the two of us put together missing?  ;-)

							Thanx, Paul


* Re: [PATCH 4/6] rcu/nocb: Create nocb kthreads on all CPUs as long as the "rcu_nocb=" is passed
  2021-11-17 21:57     ` Frederic Weisbecker
@ 2021-11-17 22:28       ` Paul E. McKenney
  0 siblings, 0 replies; 18+ messages in thread
From: Paul E. McKenney @ 2021-11-17 22:28 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Uladzislau Rezki, Neeraj Upadhyay, Boqun Feng,
	Josh Triplett, Joel Fernandes, rcu

On Wed, Nov 17, 2021 at 10:57:53PM +0100, Frederic Weisbecker wrote:
> On Wed, Nov 17, 2021 at 11:27:10AM -0800, Paul E. McKenney wrote:
> > On Wed, Nov 17, 2021 at 04:56:35PM +0100, Frederic Weisbecker wrote:
> > > In order to be able to (de-)offload any CPU using cpuset in the future,
> > > create a NOCB kthread for all possible CPUs. For now this is done only
> > > as long as the "rcu_nocb=" kernel parameter is passed to avoid
> > > the unnecessary overhead for most users.
> > 
> > The "nohz_full=" kernel parameter would also cause these kthreads to
> > be created, correct?  (Yeah, a nit, but...)
> > 
> > And some fallout of my forgetfulness below.  :-/
> 
> Ah right, that too!
> > >  
> > > @@ -1268,7 +1265,7 @@ static void __init rcu_spawn_nocb_kthreads(void)
> > >  	int cpu;
> > >  
> > >  	if (rcu_nocb_is_setup) {
> > > -		for_each_online_cpu(cpu)
> > > +		for_each_possible_cpu(cpu)
> > 
> > Gah...  I had forgotten.  :-/
> > 
> > Some firmware lies about the OS instance's age.  Other firmware lies
> > about the number of CPUs, sometimes claiming large multiples of the
> > actual number of CPUs.
> > 
> > So this needs to stay "for_each_online_cpu".  Otherwise, Paul Gortmaker
> > will once again be afflicted with hundreds of unnecessary rcuo kthreads.
> > 
> > The later calls to rcu_spawn_cpu_nocb_kthread() from rcutree_prepare_cpu()
> > will take care of any CPUs that first come online later.
> 
> Ok that makes sense.
> 
> > 
> > >  			rcu_spawn_cpu_nocb_kthread(cpu);
> > >  	}
> > >  }
> > > @@ -1303,7 +1300,7 @@ static void __init rcu_organize_nocb_kthreads(void)
> > >  	 * Should the corresponding CPU come online in the future, then
> > >  	 * we will spawn the needed set of rcu_nocb_kthread() kthreads.
> > >  	 */
> > > -	for_each_cpu(cpu, rcu_nocb_mask) {
> > > +	for_each_possible_cpu(cpu) {
> > 
> > This needs to change, but to for_each_online_cpu() instead of
> > for_each_possible_cpu().  That handles the case where the boot CPU is
> > not initially offloaded, but where the sysadm later needs to offload it.
> 
> I'm less sure about that one. This is called early from rcu_init_nohz(). Only
> the boot CPU should be online at that point. We really need to integrate all
> possible CPUs within the nocb groups.

Ah, yes, this is creating the data-structure linkages, not the kthreads.
Your for_each_possible_cpu() is quite correct, good!

							Thanx, Paul

> Thanks.
> 
> > 
> > >  		rdp = per_cpu_ptr(&rcu_data, cpu);
> > >  		if (rdp->cpu >= nl) {
> > >  			/* New GP kthread, set up for CBs & next GP. */
> > > @@ -1327,7 +1324,8 @@ static void __init rcu_organize_nocb_kthreads(void)
> > >  				pr_cont(" %d", cpu);
> > >  		}
> > >  		rdp->nocb_gp_rdp = rdp_gp;
> > > -		list_add_tail(&rdp->nocb_entry_rdp, &rdp_gp->nocb_head_rdp);
> > > +		if (cpumask_test_cpu(cpu, rcu_nocb_mask))
> > > +			list_add_tail(&rdp->nocb_entry_rdp, &rdp_gp->nocb_head_rdp);
> > >  	}
> > >  	if (gotnocbs && dump_tree)
> > >  		pr_cont("%s\n", gotnocbscbs ? "" : " (self only)");
> > > -- 
> > > 2.25.1
> > > 


* Re: [PATCH 5/6] rcu/nocb: Allow empty "rcu_nocbs" kernel parameter
  2021-11-17 22:02     ` Frederic Weisbecker
@ 2021-11-17 22:33       ` Paul E. McKenney
  0 siblings, 0 replies; 18+ messages in thread
From: Paul E. McKenney @ 2021-11-17 22:33 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Uladzislau Rezki, Neeraj Upadhyay, Boqun Feng,
	Josh Triplett, Joel Fernandes, rcu

On Wed, Nov 17, 2021 at 11:02:30PM +0100, Frederic Weisbecker wrote:
> On Wed, Nov 17, 2021 at 11:46:05AM -0800, Paul E. McKenney wrote:
> > On Wed, Nov 17, 2021 at 04:56:36PM +0100, Frederic Weisbecker wrote:
> > > If a user wants to boot without any CPU in offloaded mode initially but
> > > with the possibility to offload them later using cpusets, provide a way
> > > to simply pass an empty "rcu_nocbs" kernel parameter which will enforce
> > > the creation of dormant nocb kthreads.
> > > 
> > > Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> > > Cc: Neeraj Upadhyay <quic_neeraju@quicinc.com>
> > > Cc: Boqun Feng <boqun.feng@gmail.com>
> > > Cc: Uladzislau Rezki <urezki@gmail.com>
> > > Cc: Josh Triplett <josh@joshtriplett.org>
> > > Cc: Joel Fernandes <joel@joelfernandes.org>
> > > ---
> > >  kernel/rcu/tree_nocb.h | 10 ++++++----
> > 
> > Could you please also update kernel-parameters.txt?
> 
> Ah right!
> 
> > 
> > >  1 file changed, 6 insertions(+), 4 deletions(-)
> > > 
> > > diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> > > index 1871f15b8472..3845f1885ffc 100644
> > > --- a/kernel/rcu/tree_nocb.h
> > > +++ b/kernel/rcu/tree_nocb.h
> > > @@ -66,14 +66,16 @@ static bool rcu_nocb_is_setup;
> > >  static int __init rcu_nocb_setup(char *str)
> > >  {
> > >  	alloc_bootmem_cpumask_var(&rcu_nocb_mask);
> > > -	if (cpulist_parse(str, rcu_nocb_mask)) {
> > > -		pr_warn("rcu_nocbs= bad CPU range, all CPUs set\n");
> > > -		cpumask_setall(rcu_nocb_mask);
> > > +	if (*str == '=') {
> > > +		if (cpulist_parse(++str, rcu_nocb_mask)) {
> > > +			pr_warn("rcu_nocbs= bad CPU range, all CPUs set\n");
> > > +			cpumask_setall(rcu_nocb_mask);
> > > +		}
> > 
> > Wouldn't "*str == '='" indicate that the parameter passed in was of
> > the form "rcu_nocbs==8"?
> > 
> > Or am I misreading the next_arg() function in lib/cmdline.c?
> > 
> > If I am reading it correctly, doesn't the test instead want to be
> > something of the form "if (str && *str)"?
> > 
> > 							Thanx, Paul
> > 
> > >  	}
> > >  	rcu_nocb_is_setup = true;
> > >  	return 1;
> > >  }
> > > -__setup("rcu_nocbs=", rcu_nocb_setup);
> > > +__setup("rcu_nocbs", rcu_nocb_setup);
> 
> Don't miss that line; it should probably answer your question above, unless
> I'm missing something on my end (which is not unlikely...)

My next step would be to add a printk() and try booting with different
rcu_nocbs parameter settings.  ;-)

							Thanx, Paul

> > >  static int __init parse_rcu_nocb_poll(char *arg)
> > >  {
> > > -- 
> > > 2.25.1
> > > 


* Re: [PATCH 6/6] rcu/nocb: Merge rcu_spawn_cpu_nocb_kthread() and rcu_spawn_one_nocb_kthread()
  2021-11-23  0:37 ` [PATCH 6/6] rcu/nocb: Merge rcu_spawn_cpu_nocb_kthread() and rcu_spawn_one_nocb_kthread() Frederic Weisbecker
@ 2021-12-01  9:28   ` Neeraj Upadhyay
  0 siblings, 0 replies; 18+ messages in thread
From: Neeraj Upadhyay @ 2021-12-01  9:28 UTC (permalink / raw)
  To: Frederic Weisbecker, Paul E . McKenney
  Cc: LKML, Uladzislau Rezki, Boqun Feng, Josh Triplett, Joel Fernandes, rcu



On 11/23/2021 6:07 AM, Frederic Weisbecker wrote:
> rcu_spawn_one_nocb_kthread() is only called by
> rcu_spawn_cpu_nocb_kthread(). Don't bother with two separate functions
> and merge them.
> 
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> Cc: Neeraj Upadhyay <quic_neeraju@quicinc.com>
> Cc: Boqun Feng <boqun.feng@gmail.com>
> Cc: Uladzislau Rezki <urezki@gmail.com>
> Cc: Josh Triplett <josh@joshtriplett.org>
> Cc: Joel Fernandes <joel@joelfernandes.org>
> ---


Reviewed-by: Neeraj Upadhyay <quic_neeraju@quicinc.com>

>   kernel/rcu/tree_nocb.h | 15 ++++-----------
>   1 file changed, 4 insertions(+), 11 deletions(-)
> 
> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> index d915780d40c8..7e8da346127a 100644
> --- a/kernel/rcu/tree_nocb.h
> +++ b/kernel/rcu/tree_nocb.h
> @@ -1225,12 +1225,15 @@ static void __init rcu_boot_init_nocb_percpu_data(struct rcu_data *rdp)
>    * rcuo CB kthread, spawn it.  Additionally, if the rcuo GP kthread
>    * for this CPU's group has not yet been created, spawn it as well.
>    */
> -static void rcu_spawn_one_nocb_kthread(int cpu)
> +static void rcu_spawn_cpu_nocb_kthread(int cpu)
>   {
>   	struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
>   	struct rcu_data *rdp_gp;
>   	struct task_struct *t;
>   
> +	if (!rcu_scheduler_fully_active || !rcu_nocb_is_setup)
> +		return;
> +
>   	/* If it already has an rcuo kthread, then nothing to do. */
>   	if (rdp->nocb_cb_kthread)
>   		return;
> @@ -1254,16 +1257,6 @@ static void rcu_spawn_one_nocb_kthread(int cpu)
>   	WRITE_ONCE(rdp->nocb_gp_kthread, rdp_gp->nocb_gp_kthread);
>   }
>   
> -/*
> - * If the specified CPU is a no-CBs CPU that does not already have its
> - * rcuo kthread, spawn it.
> - */
> -static void rcu_spawn_cpu_nocb_kthread(int cpu)
> -{
> -	if (rcu_scheduler_fully_active && rcu_nocb_is_setup)
> -		rcu_spawn_one_nocb_kthread(cpu);
> -}
> -
>   /*
>    * Once the scheduler is running, spawn rcuo kthreads for all online
>    * no-CBs CPUs.  This assumes that the early_initcall()s happen before
> 


* [PATCH 6/6] rcu/nocb: Merge rcu_spawn_cpu_nocb_kthread() and rcu_spawn_one_nocb_kthread()
  2021-11-23  0:37 [PATCH 0/6] rcu/nocb: Last prep work before cpuset interface v2 Frederic Weisbecker
@ 2021-11-23  0:37 ` Frederic Weisbecker
  2021-12-01  9:28   ` Neeraj Upadhyay
  0 siblings, 1 reply; 18+ messages in thread
From: Frederic Weisbecker @ 2021-11-23  0:37 UTC (permalink / raw)
  To: Paul E . McKenney
  Cc: LKML, Frederic Weisbecker, Uladzislau Rezki, Neeraj Upadhyay,
	Boqun Feng, Josh Triplett, Joel Fernandes, rcu

rcu_spawn_one_nocb_kthread() is only called by
rcu_spawn_cpu_nocb_kthread(). Don't bother with two separate functions
and merge them.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Neeraj Upadhyay <quic_neeraju@quicinc.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Uladzislau Rezki <urezki@gmail.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Joel Fernandes <joel@joelfernandes.org>
---
 kernel/rcu/tree_nocb.h | 15 ++++-----------
 1 file changed, 4 insertions(+), 11 deletions(-)

diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index d915780d40c8..7e8da346127a 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -1225,12 +1225,15 @@ static void __init rcu_boot_init_nocb_percpu_data(struct rcu_data *rdp)
  * rcuo CB kthread, spawn it.  Additionally, if the rcuo GP kthread
  * for this CPU's group has not yet been created, spawn it as well.
  */
-static void rcu_spawn_one_nocb_kthread(int cpu)
+static void rcu_spawn_cpu_nocb_kthread(int cpu)
 {
 	struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
 	struct rcu_data *rdp_gp;
 	struct task_struct *t;
 
+	if (!rcu_scheduler_fully_active || !rcu_nocb_is_setup)
+		return;
+
 	/* If it already has an rcuo kthread, then nothing to do. */
 	if (rdp->nocb_cb_kthread)
 		return;
@@ -1254,16 +1257,6 @@ static void rcu_spawn_one_nocb_kthread(int cpu)
 	WRITE_ONCE(rdp->nocb_gp_kthread, rdp_gp->nocb_gp_kthread);
 }
 
-/*
- * If the specified CPU is a no-CBs CPU that does not already have its
- * rcuo kthread, spawn it.
- */
-static void rcu_spawn_cpu_nocb_kthread(int cpu)
-{
-	if (rcu_scheduler_fully_active && rcu_nocb_is_setup)
-		rcu_spawn_one_nocb_kthread(cpu);
-}
-
 /*
  * Once the scheduler is running, spawn rcuo kthreads for all online
  * no-CBs CPUs.  This assumes that the early_initcall()s happen before
-- 
2.25.1


