* [PATCH kvm-unit-tests v2 0/2] arm/arm64: smp: on_cpu improvements
@ 2017-06-02 15:31 Andrew Jones
  2017-06-02 15:31 ` [PATCH kvm-unit-tests v2 1/2] arm/arm64: smp: cpu0 is special Andrew Jones
                   ` (3 more replies)
  0 siblings, 4 replies; 5+ messages in thread
From: Andrew Jones @ 2017-06-02 15:31 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, rkrcmar

After posting on_cpu() support yesterday I thought of some potential
issues.
 1) cpu0 was the only cpu allowed to call on_cpus, and no cpu was
    allowed to call on_cpu on cpu0 (as it'll never idle).
 2) deadlocks are easy with this idle state scheduling model, but
    also easy to detect, so we should.

v2:
 - fix race in cpu_wait() by ordering set/check correctly

Andrew Jones (2):
  arm/arm64: smp: cpu0 is special
  arm/arm64: smp: add deadlock detection

 lib/arm/asm/smp.h |  2 ++
 lib/arm/smp.c     | 46 ++++++++++++++++++++++++++++++++++++----------
 2 files changed, 38 insertions(+), 10 deletions(-)

-- 
2.9.4


* [PATCH kvm-unit-tests v2 1/2] arm/arm64: smp: cpu0 is special
  2017-06-02 15:31 [PATCH kvm-unit-tests v2 0/2] arm/arm64: smp: on_cpu improvements Andrew Jones
@ 2017-06-02 15:31 ` Andrew Jones
  2017-06-02 15:31 ` [PATCH kvm-unit-tests v2 2/2] arm/arm64: smp: add deadlock detection Andrew Jones
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 5+ messages in thread
From: Andrew Jones @ 2017-06-02 15:31 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, rkrcmar

cpu0 never calls do_idle() unless its code does so explicitly, so
any on_cpu(0, ...) call will hang. Let's assume that if cpu0 hasn't
already called do_idle(), then an on_cpu(0, ...) call from a
secondary CPU is a programming error, and assert. However, it's
possible the developer intends cpu0 to call do_idle() later, so
also give the developer a way to disable the assert.

Also, while cpu0 is special, it's not that special. It shouldn't
be the only CPU that can call on_cpus().

Finally, let's not mess with cpu0's idle state in on_cpus(), as
this could potentially confuse another CPU that's checking it.

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 lib/arm/asm/smp.h |  2 ++
 lib/arm/smp.c     | 16 +++++++++++-----
 2 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/lib/arm/asm/smp.h b/lib/arm/asm/smp.h
index 4272e0fd9253..62d14b07bc51 100644
--- a/lib/arm/asm/smp.h
+++ b/lib/arm/asm/smp.h
@@ -10,6 +10,8 @@
 
 #define smp_processor_id()		(current_thread_info()->cpu)
 
+extern bool cpu0_calls_idle;
+
 extern void halt(void);
 extern void do_idle(void);
 
diff --git a/lib/arm/smp.c b/lib/arm/smp.c
index 7e3c91ce3b48..b4b43237e32e 100644
--- a/lib/arm/smp.c
+++ b/lib/arm/smp.c
@@ -14,6 +14,8 @@
 #include <asm/psci.h>
 #include <asm/smp.h>
 
+bool cpu0_calls_idle;
+
 cpumask_t cpu_present_mask;
 cpumask_t cpu_online_mask;
 cpumask_t cpu_idle_mask;
@@ -81,6 +83,9 @@ void do_idle(void)
 {
 	int cpu = smp_processor_id();
 
+	if (cpu == 0)
+		cpu0_calls_idle = true;
+
 	set_cpu_idle(cpu, true);
 	sev();
 
@@ -103,6 +108,9 @@ void on_cpu_async(int cpu, void (*func)(void *data), void *data)
 		return;
 	}
 
 +	assert_msg(cpu != 0 || cpu0_calls_idle, "Waiting on CPU0, which is unlikely to idle. "
 +						"If this is intended, set cpu0_calls_idle=1");
+
 	spin_lock(&lock);
 	if (!cpu_online(cpu))
 		__smp_boot_secondary(cpu, do_idle);
@@ -133,17 +141,15 @@ void on_cpu(int cpu, void (*func)(void *data), void *data)
 
 void on_cpus(void (*func)(void))
 {
-	int cpu;
+	int cpu, me = smp_processor_id();
 
 	for_each_present_cpu(cpu) {
-		if (cpu == 0)
+		if (cpu == me)
 			continue;
 		on_cpu_async(cpu, (on_cpu_func)func, NULL);
 	}
 	func();
 
-	set_cpu_idle(0, true);
-	while (!cpumask_full(&cpu_idle_mask))
+	while (cpumask_weight(&cpu_idle_mask) < nr_cpus - 1)
 		wfe();
-	set_cpu_idle(0, false);
 }
-- 
2.9.4


* [PATCH kvm-unit-tests v2 2/2] arm/arm64: smp: add deadlock detection
  2017-06-02 15:31 [PATCH kvm-unit-tests v2 0/2] arm/arm64: smp: on_cpu improvements Andrew Jones
  2017-06-02 15:31 ` [PATCH kvm-unit-tests v2 1/2] arm/arm64: smp: cpu0 is special Andrew Jones
@ 2017-06-02 15:31 ` Andrew Jones
  2017-06-03  8:28 ` [PATCH kvm-unit-tests v2 3/2] arm/arm64: smp: detect deadlock cycles Andrew Jones
  2017-06-07 15:00 ` [PATCH kvm-unit-tests v2 0/2] arm/arm64: smp: on_cpu improvements Radim Krčmář
  3 siblings, 0 replies; 5+ messages in thread
From: Andrew Jones @ 2017-06-02 15:31 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, rkrcmar

on_cpu() and friends are a bit risky when implemented without IPIs
(no preemption), because we can easily get deadlocks. Luckily, those
deadlocks are also easy to detect, so detect them and assert, to at
least make them easier to debug.

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 lib/arm/smp.c | 30 +++++++++++++++++++++++++-----
 1 file changed, 25 insertions(+), 5 deletions(-)

diff --git a/lib/arm/smp.c b/lib/arm/smp.c
index b4b43237e32e..bb999243de63 100644
--- a/lib/arm/smp.c
+++ b/lib/arm/smp.c
@@ -76,9 +76,24 @@ typedef void (*on_cpu_func)(void *);
 struct on_cpu_info {
 	on_cpu_func func;
 	void *data;
+	cpumask_t waiters;
 };
 static struct on_cpu_info on_cpu_info[NR_CPUS];
 
+static void cpu_wait(int cpu)
+{
+	int me = smp_processor_id();
+
+	if (cpu == me)
+		return;
+
+	cpumask_set_cpu(me, &on_cpu_info[cpu].waiters);
+	assert_msg(!cpumask_test_cpu(cpu, &on_cpu_info[me].waiters), "CPU%d <=> CPU%d deadlock detected", me, cpu);
+	while (!cpu_idle(cpu))
+		wfe();
+	cpumask_clear_cpu(me, &on_cpu_info[cpu].waiters);
+}
+
 void do_idle(void)
 {
 	int cpu = smp_processor_id();
@@ -117,8 +132,7 @@ void on_cpu_async(int cpu, void (*func)(void *data), void *data)
 	spin_unlock(&lock);
 
 	for (;;) {
-		while (!cpu_idle(cpu))
-			wfe();
+		cpu_wait(cpu);
 		spin_lock(&lock);
 		if ((volatile void *)on_cpu_info[cpu].func == NULL)
 			break;
@@ -134,9 +148,7 @@ void on_cpu_async(int cpu, void (*func)(void *data), void *data)
 void on_cpu(int cpu, void (*func)(void *data), void *data)
 {
 	on_cpu_async(cpu, func, data);
-
-	while (!cpu_idle(cpu))
-		wfe();
+	cpu_wait(cpu);
 }
 
 void on_cpus(void (*func)(void))
@@ -150,6 +162,14 @@ void on_cpus(void (*func)(void))
 	}
 	func();
 
+	for_each_present_cpu(cpu) {
+		if (cpu == me)
+			continue;
+		cpumask_set_cpu(me, &on_cpu_info[cpu].waiters);
+		assert_msg(!cpumask_test_cpu(cpu, &on_cpu_info[me].waiters), "CPU%d <=> CPU%d deadlock detected", me, cpu);
+	}
 	while (cpumask_weight(&cpu_idle_mask) < nr_cpus - 1)
 		wfe();
+	for_each_present_cpu(cpu)
+		cpumask_clear_cpu(me, &on_cpu_info[cpu].waiters);
 }
-- 
2.9.4


* [PATCH kvm-unit-tests v2 3/2] arm/arm64: smp: detect deadlock cycles
  2017-06-02 15:31 [PATCH kvm-unit-tests v2 0/2] arm/arm64: smp: on_cpu improvements Andrew Jones
  2017-06-02 15:31 ` [PATCH kvm-unit-tests v2 1/2] arm/arm64: smp: cpu0 is special Andrew Jones
  2017-06-02 15:31 ` [PATCH kvm-unit-tests v2 2/2] arm/arm64: smp: add deadlock detection Andrew Jones
@ 2017-06-03  8:28 ` Andrew Jones
  2017-06-07 15:00 ` [PATCH kvm-unit-tests v2 0/2] arm/arm64: smp: on_cpu improvements Radim Krčmář
  3 siblings, 0 replies; 5+ messages in thread
From: Andrew Jones @ 2017-06-03  8:28 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, rkrcmar

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 lib/arm/smp.c | 33 +++++++++++++++++++++++++++++++--
 1 file changed, 31 insertions(+), 2 deletions(-)

diff --git a/lib/arm/smp.c b/lib/arm/smp.c
index bb999243de63..5a6209ebcbfd 100644
--- a/lib/arm/smp.c
+++ b/lib/arm/smp.c
@@ -80,6 +80,35 @@ struct on_cpu_info {
 };
 static struct on_cpu_info on_cpu_info[NR_CPUS];
 
+static void __deadlock_check(int cpu, const cpumask_t *waiters, bool *found)
+{
+	int i;
+
+	for_each_cpu(i, waiters) {
+		if (i == cpu) {
+			printf("CPU%d", cpu);
+			*found = true;
+			return;
+		}
+		__deadlock_check(cpu, &on_cpu_info[i].waiters, found);
+		if (*found) {
+			printf(" <=> CPU%d", i);
+			return;
+		}
+	}
+}
+
+static void deadlock_check(int me, int cpu)
+{
+	bool found = false;
+
+	__deadlock_check(cpu, &on_cpu_info[me].waiters, &found);
+	if (found) {
 +		printf(" <=> CPU%d deadlock detected\n", me);
+		assert(0);
+	}
+}
+
 static void cpu_wait(int cpu)
 {
 	int me = smp_processor_id();
@@ -88,7 +117,7 @@ static void cpu_wait(int cpu)
 		return;
 
 	cpumask_set_cpu(me, &on_cpu_info[cpu].waiters);
-	assert_msg(!cpumask_test_cpu(cpu, &on_cpu_info[me].waiters), "CPU%d <=> CPU%d deadlock detected", me, cpu);
+	deadlock_check(me, cpu);
 	while (!cpu_idle(cpu))
 		wfe();
 	cpumask_clear_cpu(me, &on_cpu_info[cpu].waiters);
@@ -166,7 +195,7 @@ void on_cpus(void (*func)(void))
 		if (cpu == me)
 			continue;
 		cpumask_set_cpu(me, &on_cpu_info[cpu].waiters);
-		assert_msg(!cpumask_test_cpu(cpu, &on_cpu_info[me].waiters), "CPU%d <=> CPU%d deadlock detected", me, cpu);
+		deadlock_check(me, cpu);
 	}
 	while (cpumask_weight(&cpu_idle_mask) < nr_cpus - 1)
 		wfe();
-- 
2.9.4


* Re: [PATCH kvm-unit-tests v2 0/2] arm/arm64: smp: on_cpu improvements
  2017-06-02 15:31 [PATCH kvm-unit-tests v2 0/2] arm/arm64: smp: on_cpu improvements Andrew Jones
                   ` (2 preceding siblings ...)
  2017-06-03  8:28 ` [PATCH kvm-unit-tests v2 3/2] arm/arm64: smp: detect deadlock cycles Andrew Jones
@ 2017-06-07 15:00 ` Radim Krčmář
  3 siblings, 0 replies; 5+ messages in thread
From: Radim Krčmář @ 2017-06-07 15:00 UTC (permalink / raw)
  To: Andrew Jones; +Cc: kvm, pbonzini

2017-06-02 17:31+0200, Andrew Jones:
> After posting on_cpu() support yesterday I thought of some potential
> issues.
>  1) cpu0 was the only cpu allowed to call on_cpus, and no cpu was
>     allowed to call on_cpu on cpu0 (as it'll never idle).
>  2) deadlocks are easy with this idle state scheduling model, but
>     also easy to detect, so we should.
> 
> v2:
>  - fix race in cpu_wait() by ordering set/check correctly

Applied, thanks.

