* [PATCH kvm-unit-tests 0/3] arm/arm64: smp: prepare for on_cpu
From: Andrew Jones @ 2017-05-29 13:58 UTC
  To: kvm, pbonzini; +Cc: rkrcmar

Paolo inspired me to add x86's on_cpu and on_cpu_async to ARM's API.
I messed around with it for a while, but haven't really settled on
what I want to do. I played with the following approaches:
 - wfe/sev, requires the concept of an idle state and knowing that the
   target CPU is in that state - i.e. no arbitrary preemption.
 - IPI, allows arbitrary preemption, but requires the GIC to be set up
   enough to receive SGIs, and also a dedicated SGI for scheduling
 - PSCI, a poweroff state isn't better than an idle state, and PSCI
   works asynchronously wrt CPUs
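
For reference, the x86 API being mirrored is declared roughly like
this (lib/x86/smp.h):

 void on_cpu(int cpu, void (*function)(void *data), void *data);
 void on_cpu_async(int cpu, void (*function)(void *data), void *data);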

I need to switch to more pressing work now, so I'm going to put this
on the back burner. However, I'll probably come back to it sometime.
While messing with it I wrote the three patches in this series, which
we can take now.

drew

Andrew Jones (3):
  arm/arm64: smp: assert secondaries only boot once
  arm/arm64: psci: cpu_psci_cpu_die operates on itself
  arm/arm64: psci: avoid calling halt directly

 arm/psci.c         |  5 ++---
 lib/arm/asm/psci.h |  2 +-
 lib/arm/asm/smp.h  |  8 --------
 lib/arm/psci.c     |  5 +++--
 lib/arm/smp.c      | 13 +++++++++++++
 5 files changed, 19 insertions(+), 14 deletions(-)

-- 
2.9.4


* [PATCH kvm-unit-tests 1/3] arm/arm64: smp: assert secondaries only boot once
From: Andrew Jones @ 2017-05-29 13:58 UTC
  To: kvm, pbonzini; +Cc: rkrcmar

It's tempting to try running smp_run() more than once, but that
won't currently work. In the meantime, let's fail more gracefully.
Also, there's no reason to expose the secondary_data structure in
smp.h.

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 lib/arm/asm/smp.h |  8 --------
 lib/arm/smp.c     | 13 +++++++++++++
 2 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/lib/arm/asm/smp.h b/lib/arm/asm/smp.h
index 1d1cd7a29991..a3551a40e4c5 100644
--- a/lib/arm/asm/smp.h
+++ b/lib/arm/asm/smp.h
@@ -37,14 +37,6 @@ static inline void set_cpu_online(int cpu, bool online)
 }
 
 typedef void (*secondary_entry_fn)(void);
-
-/* secondary_data is reused for each cpu, so only boot one at a time */
-struct secondary_data {
-	void *stack;		/* must be first member of struct */
-	secondary_entry_fn entry;
-};
-extern struct secondary_data secondary_data;
-
 extern void smp_boot_secondary(int cpu, secondary_entry_fn entry);
 extern void smp_run(void (*func)(void));
 
diff --git a/lib/arm/smp.c b/lib/arm/smp.c
index 8528aa454108..fe64ed410377 100644
--- a/lib/arm/smp.c
+++ b/lib/arm/smp.c
@@ -7,6 +7,7 @@
  */
 #include <libcflat.h>
 #include <asm/thread_info.h>
+#include <asm/spinlock.h>
 #include <asm/cpumask.h>
 #include <asm/barrier.h>
 #include <asm/mmu.h>
@@ -16,7 +17,13 @@
 cpumask_t cpu_present_mask;
 cpumask_t cpu_online_mask;
 cpumask_t cpu_halted_mask;
+
+struct secondary_data {
+	void *stack;            /* must be first member of struct */
+	secondary_entry_fn entry;
+};
 struct secondary_data secondary_data;
+static struct spinlock lock;
 
 secondary_entry_fn secondary_cinit(void)
 {
@@ -45,6 +52,10 @@ void smp_boot_secondary(int cpu, secondary_entry_fn entry)
 {
 	int ret;
 
+	spin_lock(&lock);
+
+	assert_msg(!cpu_online(cpu), "CPU%d already booted once", cpu);
+
 	secondary_data.stack = thread_stack_alloc();
 	secondary_data.entry = entry;
 	mmu_mark_disabled(cpu);
@@ -53,6 +64,8 @@ void smp_boot_secondary(int cpu, secondary_entry_fn entry)
 
 	while (!cpu_online(cpu))
 		wfe();
+
+	spin_unlock(&lock);
 }
 
 void secondary_halt(void)
-- 
2.9.4


* [PATCH kvm-unit-tests 2/3] arm/arm64: psci: cpu_psci_cpu_die operates on itself
From: Andrew Jones @ 2017-05-29 13:58 UTC
  To: kvm, pbonzini; +Cc: rkrcmar

PSCI CPU_OFF can only power down the calling CPU, so drop the cpu
parameter and have cpu_psci_cpu_die() report the current CPU in its
error message.

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 lib/arm/asm/psci.h | 2 +-
 lib/arm/psci.c     | 5 +++--
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/lib/arm/asm/psci.h b/lib/arm/asm/psci.h
index eeb55cf299d1..ed51708fd265 100644
--- a/lib/arm/asm/psci.h
+++ b/lib/arm/asm/psci.h
@@ -8,6 +8,6 @@ extern int psci_invoke(unsigned long function_id, unsigned long arg0,
 extern int psci_cpu_on(unsigned long cpuid, unsigned long entry_point);
 extern void psci_sys_reset(void);
 extern int cpu_psci_cpu_boot(unsigned int cpu);
-extern void cpu_psci_cpu_die(unsigned int cpu);
+extern void cpu_psci_cpu_die(void);
 
 #endif /* _ASMARM_PSCI_H_ */
diff --git a/lib/arm/psci.c b/lib/arm/psci.c
index a14acddeefd3..119f74e57e91 100644
--- a/lib/arm/psci.c
+++ b/lib/arm/psci.c
@@ -9,6 +9,7 @@
 #include <asm/psci.h>
 #include <asm/setup.h>
 #include <asm/page.h>
+#include <asm/smp.h>
 
 __attribute__((noinline))
 int psci_invoke(unsigned long function_id, unsigned long arg0,
@@ -40,11 +41,11 @@ int cpu_psci_cpu_boot(unsigned int cpu)
 }
 
 #define PSCI_POWER_STATE_TYPE_POWER_DOWN (1U << 16)
-void cpu_psci_cpu_die(unsigned int cpu)
+void cpu_psci_cpu_die(void)
 {
 	int err = psci_invoke(PSCI_0_2_FN_CPU_OFF,
 			PSCI_POWER_STATE_TYPE_POWER_DOWN, 0, 0);
-	printf("unable to power off CPU%d (%d)\n", cpu, err);
+	printf("CPU%d unable to power off (error = %d)\n", smp_processor_id(), err);
 }
 
 void psci_sys_reset(void)
-- 
2.9.4


* [PATCH kvm-unit-tests 3/3] arm/arm64: psci: avoid calling halt directly
From: Andrew Jones @ 2017-05-29 13:58 UTC
  To: kvm, pbonzini; +Cc: rkrcmar

Eventually we'll allow calling functions on CPUs more than once, so
that multiple subtests can run in a single execution. But that'll
only work if the subtests don't halt the CPUs...

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 arm/psci.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/arm/psci.c b/arm/psci.c
index f1004339b30f..4e6bc58dbe42 100644
--- a/arm/psci.c
+++ b/arm/psci.c
@@ -78,9 +78,8 @@ static void cpu_on_secondary_entry(void)
 	cpumask_set_cpu(cpu, &cpu_on_ready);
 	while (!cpu_on_start)
 		cpu_relax();
-	cpu_on_ret[cpu] = psci_cpu_on(cpus[1], __pa(halt));
+	cpu_on_ret[cpu] = psci_cpu_on(cpus[1], __pa(cpu_psci_cpu_die));
 	cpumask_set_cpu(cpu, &cpu_on_done);
-	halt();
 }
 
 static bool psci_cpu_on_test(void)
@@ -104,7 +103,7 @@ static bool psci_cpu_on_test(void)
 	cpu_on_start = 1;
 	smp_mb();
 
-	cpu_on_ret[0] = psci_cpu_on(cpus[1], __pa(halt));
+	cpu_on_ret[0] = psci_cpu_on(cpus[1], __pa(cpu_psci_cpu_die));
 	cpumask_set_cpu(0, &cpu_on_done);
 
 	while (!cpumask_full(&cpu_on_done))
-- 
2.9.4


* Re: [PATCH kvm-unit-tests 0/3] arm/arm64: smp: prepare for on_cpu
From: Paolo Bonzini @ 2017-05-30 12:13 UTC
  To: Andrew Jones, kvm; +Cc: rkrcmar



On 29/05/2017 15:58, Andrew Jones wrote:
> Paolo inspired me to add x86's on_cpu and on_cpu_async to ARM's API.
> I messed around with it for a while, but haven't really settled on
> what I want to do. I played with the following approaches:
>  - wfe/sev, requires the concept of an idle state and knowing that the
>    target CPU is in that state - i.e. no arbitrary preemption.

WFE really is just a power-save NOP.  I think you'd do something like

   spin_lock(&dest->ipi_lock);
   assert(dest->ipi_cb == NULL);
   dest->ipi_cb = fn;
   dest->ipi_opaque = opaque;
   spin_unlock(&dest->ipi_lock);
   asm("dsb; sev");
   while (dest->ipi_cb != NULL)
     asm("wfe");

and on the other side:

   while (dest->ipi_cb == NULL)
      asm("wfe");
   spin_lock(&dest->ipi_lock);
   cb = dest->ipi_cb;
   opaque = dest->ipi_opaque;
   dest->ipi_cb = NULL;
   spin_unlock(&dest->ipi_lock);
   asm("dsb; sev");
   cb(opaque);
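
Filled out with the declarations the sketch above assumes (the mailbox
struct and all of the names below are made up for illustration; also
note that on arm64 a dsb needs an option, e.g. "dsb ish"), the other
side becomes a small idle loop:

   struct cpu_mbox {
       struct spinlock ipi_lock;
       void (*ipi_cb)(void *opaque);
       void *ipi_opaque;
   };
   static struct cpu_mbox cpu_mboxes[NR_CPUS];

   /* runs on each secondary; "dest" above is the cpu's own mbox here */
   static void mbox_idle_loop(void)
   {
       struct cpu_mbox *me = &cpu_mboxes[smp_processor_id()];
       void (*cb)(void *opaque);
       void *opaque;

       for (;;) {
           /* the "memory" clobber forces ipi_cb to be re-read */
           while (me->ipi_cb == NULL)
               asm volatile("wfe" ::: "memory");
           spin_lock(&me->ipi_lock);
           cb = me->ipi_cb;
           opaque = me->ipi_opaque;
           me->ipi_cb = NULL;
           spin_unlock(&me->ipi_lock);
           asm volatile("dsb ish; sev" ::: "memory");
           cb(opaque);
       }
   }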

Thanks,

Paolo

>  - IPI, allows arbitrary preemption, but requires the GIC to be set up
>    enough to receive SGIs, and also a dedicated SGI for scheduling
>  - PSCI, a poweroff state isn't better than an idle state, and PSCI
>    works asynchronously wrt CPUs


* Re: [PATCH kvm-unit-tests 0/3] arm/arm64: smp: prepare for on_cpu
From: Andrew Jones @ 2017-05-30 13:39 UTC
  To: Paolo Bonzini; +Cc: kvm, rkrcmar

On Tue, May 30, 2017 at 02:13:34PM +0200, Paolo Bonzini wrote:
> 
> 
> On 29/05/2017 15:58, Andrew Jones wrote:
> > Paolo inspired me to add x86's on_cpu and on_cpu_async to ARM's API.
> > I messed around with it for a while, but haven't really settled on
> > what I want to do. I played with the following approaches:
> >  - wfe/sev, requires the concept of an idle state and knowing that the
> >    target CPU is in that state - i.e. no arbitrary preemption.
> 
> WFE really is just a power-save NOP.  I think you'd do something like
> 
>    spin_lock(&dest->ipi_lock);
>    assert(dest->ipi_cb == NULL);
>    dest->ipi_cb = fn;
>    dest->ipi_opaque = opaque;
>    spin_unlock(&dest->ipi_lock);
>    asm("dsb; sev");
>    while (dest->ipi_cb != NULL)
>      asm("wfe");
> 
> and on the other side:
> 
>    while (dest->ipi_cb == NULL)
>       asm("wfe");
>    spin_lock(&dest->ipi_lock);
>    cb = dest->ipi_cb;
>    opaque = dest->ipi_opaque;
>    dest->ipi_cb = NULL;
>    spin_unlock(&dest->ipi_lock);
>    asm("dsb; sev");
>    cb(opaque);

Yes, I know, I even wrote something similar to that already and it
works fine, *if* the calling cpu can always expect the target cpu
to eventually (for some value of eventually) run its code (the "other
side" code above). When I implemented it I decided that that code
implements an idle loop, i.e.

 void secondary_halt(void)  <-- maybe rename to do_idle()
 {
   cpu = smp_processor_id();
   set_cpu_idle(cpu, true);
   for (;;) {
       while (cpu_idle(cpu))
           wfe();
       ...
       cb(opaque);
       set_cpu_idle(cpu, true);
   }
 }

This implies on_cpu*() must wait for a target cpu's currently running
task to finish, i.e., as I said above, no arbitrary preemption.
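
The caller side of that scheme would then look something like this
(sketch only; cpu_idle()/set_cpu_idle() are the hypothetical idle
state accessors used above, and the on_cpu_info mailbox is made up):

 static struct {
     void (*fn)(void *data);
     void *data;
 } on_cpu_info[NR_CPUS];

 void on_cpu_async(int cpu, void (*fn)(void *data), void *data)
 {
     /* no arbitrary preemption: wait for the target to go idle */
     while (!cpu_idle(cpu))
         wfe();
     on_cpu_info[cpu].fn = fn;
     on_cpu_info[cpu].data = data;
     set_cpu_idle(cpu, false);
     /* make the stores visible, then kick the target out of wfe */
     asm volatile("dsb ish; sev" ::: "memory");
 }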

So it's not as nice as an IPI solution, which wouldn't block an
asynchronous caller (possibly indefinitely); the blocking also means
this approach doesn't actually implement the semantics of on_cpu_async
anyway. However, the IPI solution requires the GIC, which may or may
not be in the expected state after a unit test mucks with it...

Right now I think I should just wait until the need arises for something
like on_cpu{_async}, and then implement it. That way I'll be sure it
provides the needed functionality.

Thanks,
drew


* Re: [PATCH kvm-unit-tests 0/3] arm/arm64: smp: prepare for on_cpu
From: Paolo Bonzini @ 2017-05-30 14:26 UTC
  To: Andrew Jones; +Cc: kvm, rkrcmar



On 30/05/2017 15:39, Andrew Jones wrote:
> On Tue, May 30, 2017 at 02:13:34PM +0200, Paolo Bonzini wrote:
> Yes, I know, I even wrote something similar to that already and it
> works fine, *if* the calling cpu can always expect the target cpu
> to eventually (for some value of eventually) run its code (the "other
> side" code above). When I implemented it I decided that that code
> implements an idle loop, i.e.
> 
>  void secondary_halt(void)  <-- maybe rename to do_idle()
>  {
>    cpu = smp_processor_id();
>    set_cpu_idle(cpu, true);
>    for (;;) {
>        while (cpu_idle(cpu))
>            wfe();
>        ...
>        cb(opaque);
>        set_cpu_idle(cpu, true);
>    }
>  }
> 
> This implies on_cpu*() must wait for a target cpu's currently running
> task to finish, i.e., as I said above, no arbitrary preemption.

Yes, that's okay.

> So it's not as nice as an IPI solution, which wouldn't block an
> asynchronous caller (possibly indefinitely); the blocking also means
> this approach doesn't actually implement the semantics of on_cpu_async
> anyway.

It's probably a bug anyway to call on_cpu_async while a previous
on_cpu_async is still running.  If you look at the x86 tests,
on_cpu_async is usually followed by a busy wait for all CPUs to
complete.
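
Schematically (worker() and the done counter are made-up names, and
I'm using the arm lib's nr_cpus for the cpu count):

   static volatile int nr_cpus_done;

   static void worker(void *data)
   {
       /* per-cpu work goes here */
       __sync_fetch_and_add(&nr_cpus_done, 1);
   }

   /* fan out asynchronously... */
   for (cpu = 1; cpu < nr_cpus; ++cpu)
       on_cpu_async(cpu, worker, NULL);
   worker(NULL);                   /* cpu0 does its share too */
   /* ...then busy wait for everyone to finish */
   while (nr_cpus_done < nr_cpus)
       cpu_relax();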

Paolo


* Re: [PATCH kvm-unit-tests 0/3] arm/arm64: smp: prepare for on_cpu
From: Andrew Jones @ 2017-05-30 14:37 UTC
  To: Paolo Bonzini; +Cc: kvm, rkrcmar

On Tue, May 30, 2017 at 04:26:34PM +0200, Paolo Bonzini wrote:
> 
> 
> On 30/05/2017 15:39, Andrew Jones wrote:
> > On Tue, May 30, 2017 at 02:13:34PM +0200, Paolo Bonzini wrote:
> > Yes, I know, I even wrote something similar to that already and it
> > works fine, *if* the calling cpu can always expect the target cpu
> > to eventually (for some value of eventually) run its code (the "other
> > side" code above). When I implemented it I decided that that code
> > implements an idle loop, i.e.
> > 
> >  void secondary_halt(void)  <-- maybe rename to do_idle()
> >  {
> >    cpu = smp_processor_id();
> >    set_cpu_idle(cpu, true);
> >    for (;;) {
> >        while (cpu_idle(cpu))
> >            wfe();
> >        ...
> >        cb(opaque);
> >        set_cpu_idle(cpu, true);
> >    }
> >  }
> > 
> > This implies on_cpu*() must wait for a target cpu's currently running
> > task to finish, i.e., as I said above, no arbitrary preemption.
> 
> Yes, that's okay.
> 
> > So it's not as nice as an IPI solution, which wouldn't block an
> > asynchronous caller (possibly indefinitely); the blocking also means
> > this approach doesn't actually implement the semantics of on_cpu_async
> > anyway.
> 
> It's probably a bug anyway to call on_cpu_async while a previous
> on_cpu_async is still running.  If you look at the x86 tests,
> on_cpu_async is usually followed by a busy wait for all CPUs to
> complete.

OK, I'll clean up my idle-loop wfe/sev series and post soon then.

Thanks,
drew

