* [PATCH 0/4] Fix queued spinlocks and a bit more
@ 2021-04-23  3:11 Nicholas Piggin
  2021-04-23  3:11 ` [PATCH 1/4] powerpc/pseries: Fix hcall tracing recursion in pv queued spinlocks Nicholas Piggin
  0 siblings, 4 replies; 15+ messages in thread
From: Nicholas Piggin @ 2021-04-23  3:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Naveen N . Rao, Nicholas Piggin

Patch 1 is the important fix. Patch 2 might fix something, although I
haven't provoked a crash with it yet.

Patch 3 is a small cleanup, and patch 4 is, I think, the right thing to
do, but both could wait for later.

Thanks,
Nick

Nicholas Piggin (4):
  powerpc/pseries: Fix hcall tracing recursion in pv queued spinlocks
  powerpc/pseries: Don't trace hcall tracing wrapper
  powerpc/pseries: use notrace hcall variant for H_CEDE idle
  powerpc/pseries: warn if recursing into the hcall tracing code

 arch/powerpc/include/asm/hvcall.h         |  3 +++
 arch/powerpc/include/asm/paravirt.h       | 22 +++++++++++++++++---
 arch/powerpc/include/asm/plpar_wrappers.h |  6 +++++-
 arch/powerpc/platforms/pseries/hvCall.S   | 10 +++++++++
 arch/powerpc/platforms/pseries/lpar.c     | 25 ++++++++---------------
 5 files changed, 46 insertions(+), 20 deletions(-)

-- 
2.23.0



* [PATCH 1/4] powerpc/pseries: Fix hcall tracing recursion in pv queued spinlocks
  2021-04-23  3:11 [PATCH 0/4] Fix queued spinlocks and a bit more Nicholas Piggin
@ 2021-04-23  3:11 ` Nicholas Piggin
  2021-04-27 13:43   ` Naveen N. Rao
  2021-04-23  3:11 ` [PATCH 2/4] powerpc/pseries: Don't trace hcall tracing wrapper Nicholas Piggin
  3 siblings, 1 reply; 15+ messages in thread
From: Nicholas Piggin @ 2021-04-23  3:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Naveen N . Rao, Nicholas Piggin

The paravirt queued spinlock slow path adds itself to the queue, then
calls pv_wait to wait for the lock to become free. This is implemented
by calling H_CONFER to donate cycles.

When hcall tracing is enabled, this H_CONFER call can lead to a spin
lock being taken in the tracing code, which can result in the same lock
being taken again; the nested acquisition also goes to the slow path
because it queues behind itself, and so never makes progress.

An example trace of a deadlock:

  __pv_queued_spin_lock_slowpath
  trace_clock_global
  ring_buffer_lock_reserve
  trace_event_buffer_lock_reserve
  trace_event_buffer_reserve
  trace_event_raw_event_hcall_exit
  __trace_hcall_exit
  plpar_hcall_norets_trace
  __pv_queued_spin_lock_slowpath
  trace_clock_global
  ring_buffer_lock_reserve
  trace_event_buffer_lock_reserve
  trace_event_buffer_reserve
  trace_event_raw_event_rcu_dyntick
  rcu_irq_exit
  irq_exit
  __do_irq
  call_do_irq
  do_IRQ
  hardware_interrupt_common_virt

Fix this by introducing plpar_hcall_norets_notrace(), and using it for
the SPLPAR virtual processor dispatching hcalls made by the paravirt
spinlock code.
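
As a simplified sketch of the recursion (illustrative pseudocode only,
not actual kernel code -- compare with the trace above, which reads
innermost-first):

  __pv_queued_spin_lock_slowpath(lock)    /* queues this context */
    pv_wait()                             /* H_CONFER hcall */
      trace_hcall_exit()                  /* hcall tracepoint fires */
        trace_clock_global()              /* tracing takes a spinlock */
          __pv_queued_spin_lock_slowpath(lock)
            /* queues a second time behind its own entry: deadlock */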

Fixes: 20c0e8269e9d ("powerpc/pseries: Implement paravirt qspinlocks for SPLPAR")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/hvcall.h       |  3 +++
 arch/powerpc/include/asm/paravirt.h     | 22 +++++++++++++++++++---
 arch/powerpc/platforms/pseries/hvCall.S | 10 ++++++++++
 arch/powerpc/platforms/pseries/lpar.c   |  4 ++--
 4 files changed, 34 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
index ed6086d57b22..0c92b01a3c3c 100644
--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -446,6 +446,9 @@
  */
 long plpar_hcall_norets(unsigned long opcode, ...);
 
+/* Variant which does not do hcall tracing */
+long plpar_hcall_norets_notrace(unsigned long opcode, ...);
+
 /**
  * plpar_hcall: - Make a pseries hypervisor call
  * @opcode: The hypervisor call to make.
diff --git a/arch/powerpc/include/asm/paravirt.h b/arch/powerpc/include/asm/paravirt.h
index 5d1726bb28e7..3c13c2ec70a9 100644
--- a/arch/powerpc/include/asm/paravirt.h
+++ b/arch/powerpc/include/asm/paravirt.h
@@ -30,17 +30,33 @@ static inline u32 yield_count_of(int cpu)
 
 static inline void yield_to_preempted(int cpu, u32 yield_count)
 {
-	plpar_hcall_norets(H_CONFER, get_hard_smp_processor_id(cpu), yield_count);
+	/*
+	 * Spinlock code yields and prods, so don't trace the hcalls because
+	 * tracing code takes spinlocks which could recurse.
+	 *
+	 * These calls are made while the lock is not held, the lock slowpath
+	 * yields if it cannot acquire the lock, and unlock slow path might
+	 * prod if a waiter has yielded. So this did not seem to be a problem
+	 * for simple spin locks because technically it didn't recuse on the
+	 * lock. However the queued spin lock contended path is more strictly
+	 * ordered: the H_CONFER hcall is made after the task has queued itself
+	 * on the lock, so then recursing on the lock will queue up behind that
+	 * (or worse: queued spinlocks use tricks that assume a context never
+	 * waits on more than one spinlock, so that may cause random
+	 * corruption).
+	 */
+	plpar_hcall_norets_notrace(H_CONFER,
+				   get_hard_smp_processor_id(cpu), yield_count);
 }
 
 static inline void prod_cpu(int cpu)
 {
-	plpar_hcall_norets(H_PROD, get_hard_smp_processor_id(cpu));
+	plpar_hcall_norets_notrace(H_PROD, get_hard_smp_processor_id(cpu));
 }
 
 static inline void yield_to_any(void)
 {
-	plpar_hcall_norets(H_CONFER, -1, 0);
+	plpar_hcall_norets_notrace(H_CONFER, -1, 0);
 }
 #else
 static inline bool is_shared_processor(void)
diff --git a/arch/powerpc/platforms/pseries/hvCall.S b/arch/powerpc/platforms/pseries/hvCall.S
index 2136e42833af..8a2b8d64265b 100644
--- a/arch/powerpc/platforms/pseries/hvCall.S
+++ b/arch/powerpc/platforms/pseries/hvCall.S
@@ -102,6 +102,16 @@ END_FTR_SECTION(0, 1);						\
 #define HCALL_BRANCH(LABEL)
 #endif
 
+_GLOBAL_TOC(plpar_hcall_norets_notrace)
+	HMT_MEDIUM
+
+	mfcr	r0
+	stw	r0,8(r1)
+	HVSC				/* invoke the hypervisor */
+	lwz	r0,8(r1)
+	mtcrf	0xff,r0
+	blr				/* return r3 = status */
+
 _GLOBAL_TOC(plpar_hcall_norets)
 	HMT_MEDIUM
 
diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
index 3805519a6469..6011b7b80946 100644
--- a/arch/powerpc/platforms/pseries/lpar.c
+++ b/arch/powerpc/platforms/pseries/lpar.c
@@ -1828,8 +1828,8 @@ void hcall_tracepoint_unregfunc(void)
 
 /*
  * Since the tracing code might execute hcalls we need to guard against
- * recursion. One example of this are spinlocks calling H_YIELD on
- * shared processor partitions.
+ * recursion. H_CONFER from spin locks must be treated separately though
+ * and use _notrace plpar_hcall variants, see yield_to_preempted().
  */
 static DEFINE_PER_CPU(unsigned int, hcall_trace_depth);
 
-- 
2.23.0



* [PATCH 2/4] powerpc/pseries: Don't trace hcall tracing wrapper
  2021-04-23  3:11 [PATCH 0/4] Fix queued spinlocks and a bit more Nicholas Piggin
  2021-04-23  3:11 ` [PATCH 1/4] powerpc/pseries: Fix hcall tracing recursion in pv queued spinlocks Nicholas Piggin
@ 2021-04-23  3:11 ` Nicholas Piggin
  2021-04-27 13:52   ` Naveen N. Rao
  2021-04-23  3:11 ` [PATCH 3/4] powerpc/pseries: use notrace hcall variant for H_CEDE idle Nicholas Piggin
  2021-04-23  3:11 ` [PATCH 4/4] powerpc/pseries: warn if recursing into the hcall tracing code Nicholas Piggin
  3 siblings, 1 reply; 15+ messages in thread
From: Nicholas Piggin @ 2021-04-23  3:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Naveen N . Rao, Nicholas Piggin

It doesn't seem very useful to trace these wrappers before the
recursion check, even if the ftrace code has recursion checks of its
own. Be on the safe side and don't trace the hcall trace wrappers.
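
(For context, 'notrace' keeps the function tracer out of a function.
Roughly -- simplified, see include/linux/compiler_types.h for the real
definition and its per-toolchain variants:

  #define notrace __attribute__((__no_instrument_function__))

so once annotated, the wrappers below can no longer be hooked by
ftrace.)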

Reported-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/platforms/pseries/lpar.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
index 6011b7b80946..0641779e5cde 100644
--- a/arch/powerpc/platforms/pseries/lpar.c
+++ b/arch/powerpc/platforms/pseries/lpar.c
@@ -1834,7 +1834,7 @@ void hcall_tracepoint_unregfunc(void)
 static DEFINE_PER_CPU(unsigned int, hcall_trace_depth);
 
 
-void __trace_hcall_entry(unsigned long opcode, unsigned long *args)
+notrace void __trace_hcall_entry(unsigned long opcode, unsigned long *args)
 {
 	unsigned long flags;
 	unsigned int *depth;
@@ -1862,7 +1862,7 @@ void __trace_hcall_entry(unsigned long opcode, unsigned long *args)
 	local_irq_restore(flags);
 }
 
-void __trace_hcall_exit(long opcode, long retval, unsigned long *retbuf)
+notrace void __trace_hcall_exit(long opcode, long retval, unsigned long *retbuf)
 {
 	unsigned long flags;
 	unsigned int *depth;
-- 
2.23.0



* [PATCH 3/4] powerpc/pseries: use notrace hcall variant for H_CEDE idle
  2021-04-23  3:11 [PATCH 0/4] Fix queued spinlocks and a bit more Nicholas Piggin
  2021-04-23  3:11 ` [PATCH 1/4] powerpc/pseries: Fix hcall tracing recursion in pv queued spinlocks Nicholas Piggin
  2021-04-23  3:11 ` [PATCH 2/4] powerpc/pseries: Don't trace hcall tracing wrapper Nicholas Piggin
@ 2021-04-23  3:11 ` Nicholas Piggin
  2021-04-27 13:53   ` Naveen N. Rao
  2021-04-23  3:11 ` [PATCH 4/4] powerpc/pseries: warn if recursing into the hcall tracing code Nicholas Piggin
  3 siblings, 1 reply; 15+ messages in thread
From: Nicholas Piggin @ 2021-04-23  3:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Naveen N . Rao, Nicholas Piggin

Rather than special-case H_CEDE in the hcall trace wrappers, make the
idle H_CEDE call use plpar_hcall_norets_notrace().

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/plpar_wrappers.h |  6 +++++-
 arch/powerpc/platforms/pseries/lpar.c     | 10 ----------
 2 files changed, 5 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/include/asm/plpar_wrappers.h b/arch/powerpc/include/asm/plpar_wrappers.h
index ece84a430701..83e0f701ebc6 100644
--- a/arch/powerpc/include/asm/plpar_wrappers.h
+++ b/arch/powerpc/include/asm/plpar_wrappers.h
@@ -28,7 +28,11 @@ static inline void set_cede_latency_hint(u8 latency_hint)
 
 static inline long cede_processor(void)
 {
-	return plpar_hcall_norets(H_CEDE);
+	/*
+	 * We cannot call tracepoints inside RCU idle regions which
+	 * means we must not trace H_CEDE.
+	 */
+	return plpar_hcall_norets_notrace(H_CEDE);
 }
 
 static inline long extended_cede_processor(unsigned long latency_hint)
diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
index 0641779e5cde..835e7f661a05 100644
--- a/arch/powerpc/platforms/pseries/lpar.c
+++ b/arch/powerpc/platforms/pseries/lpar.c
@@ -1839,13 +1839,6 @@ notrace void __trace_hcall_entry(unsigned long opcode, unsigned long *args)
 	unsigned long flags;
 	unsigned int *depth;
 
-	/*
-	 * We cannot call tracepoints inside RCU idle regions which
-	 * means we must not trace H_CEDE.
-	 */
-	if (opcode == H_CEDE)
-		return;
-
 	local_irq_save(flags);
 
 	depth = this_cpu_ptr(&hcall_trace_depth);
@@ -1867,9 +1860,6 @@ notrace void __trace_hcall_exit(long opcode, long retval, unsigned long *retbuf)
 	unsigned long flags;
 	unsigned int *depth;
 
-	if (opcode == H_CEDE)
-		return;
-
 	local_irq_save(flags);
 
 	depth = this_cpu_ptr(&hcall_trace_depth);
-- 
2.23.0



* [PATCH 4/4] powerpc/pseries: warn if recursing into the hcall tracing code
  2021-04-23  3:11 [PATCH 0/4] Fix queued spinlocks and a bit more Nicholas Piggin
  2021-04-23  3:11 ` [PATCH 3/4] powerpc/pseries: use notrace hcall variant for H_CEDE idle Nicholas Piggin
@ 2021-04-23  3:11 ` Nicholas Piggin
  2021-04-27 13:59   ` Naveen N. Rao
  3 siblings, 1 reply; 15+ messages in thread
From: Nicholas Piggin @ 2021-04-23  3:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Naveen N . Rao, Nicholas Piggin

---
 arch/powerpc/platforms/pseries/lpar.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
index 835e7f661a05..a961a7ebeab3 100644
--- a/arch/powerpc/platforms/pseries/lpar.c
+++ b/arch/powerpc/platforms/pseries/lpar.c
@@ -1828,8 +1828,11 @@ void hcall_tracepoint_unregfunc(void)
 
 /*
  * Since the tracing code might execute hcalls we need to guard against
- * recursion. H_CONFER from spin locks must be treated separately though
- * and use _notrace plpar_hcall variants, see yield_to_preempted().
+ * recursion, but this always seems risky -- __trace_hcall_entry might be
+ * ftraced, for example. So warn in this case.
+ *
+ * H_CONFER from spin locks must be treated separately though and use _notrace
+ * plpar_hcall variants, see yield_to_preempted().
  */
 static DEFINE_PER_CPU(unsigned int, hcall_trace_depth);
 
@@ -1843,7 +1846,7 @@ notrace void __trace_hcall_entry(unsigned long opcode, unsigned long *args)
 
 	depth = this_cpu_ptr(&hcall_trace_depth);
 
-	if (*depth)
+	if (WARN_ON_ONCE(*depth))
 		goto out;
 
 	(*depth)++;
@@ -1864,7 +1867,7 @@ notrace void __trace_hcall_exit(long opcode, long retval, unsigned long *retbuf)
 
 	depth = this_cpu_ptr(&hcall_trace_depth);
 
-	if (*depth)
+	if (*depth) /* Don't warn again on the way out */
 		goto out;
 
 	(*depth)++;
-- 
2.23.0



* Re: [PATCH 1/4] powerpc/pseries: Fix hcall tracing recursion in pv queued spinlocks
  2021-04-23  3:11 ` [PATCH 1/4] powerpc/pseries: Fix hcall tracing recursion in pv queued spinlocks Nicholas Piggin
@ 2021-04-27 13:43   ` Naveen N. Rao
  2021-05-01  1:22     ` Nicholas Piggin
  0 siblings, 1 reply; 15+ messages in thread
From: Naveen N. Rao @ 2021-04-27 13:43 UTC (permalink / raw)
  To: linuxppc-dev, Nicholas Piggin

Nicholas Piggin wrote:
> The paravirt queued spinlock slow path adds itself to the queue, then
> calls pv_wait to wait for the lock to become free. This is implemented
> by calling H_CONFER to donate cycles.
> 
> When hcall tracing is enabled, this H_CONFER call can lead to a spin
> lock being taken in the tracing code, which can result in the same lock
> being taken again; the nested acquisition also goes to the slow path
> because it queues behind itself, and so never makes progress.
> 
> An example trace of a deadlock:
> 
>   __pv_queued_spin_lock_slowpath
>   trace_clock_global
>   ring_buffer_lock_reserve
>   trace_event_buffer_lock_reserve
>   trace_event_buffer_reserve
>   trace_event_raw_event_hcall_exit
>   __trace_hcall_exit
>   plpar_hcall_norets_trace
>   __pv_queued_spin_lock_slowpath
>   trace_clock_global
>   ring_buffer_lock_reserve
>   trace_event_buffer_lock_reserve
>   trace_event_buffer_reserve
>   trace_event_raw_event_rcu_dyntick
>   rcu_irq_exit
>   irq_exit
>   __do_irq
>   call_do_irq
>   do_IRQ
>   hardware_interrupt_common_virt
> 
> Fix this by introducing plpar_hcall_norets_notrace(), and using it for
> the SPLPAR virtual processor dispatching hcalls made by the paravirt
> spinlock code.
> 
> Fixes: 20c0e8269e9d ("powerpc/pseries: Implement paravirt qspinlocks for SPLPAR")
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>  arch/powerpc/include/asm/hvcall.h       |  3 +++
>  arch/powerpc/include/asm/paravirt.h     | 22 +++++++++++++++++++---
>  arch/powerpc/platforms/pseries/hvCall.S | 10 ++++++++++
>  arch/powerpc/platforms/pseries/lpar.c   |  4 ++--
>  4 files changed, 34 insertions(+), 5 deletions(-)

Thanks for the fix! Some very minor nits below, but none the less:
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>

> 
> diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
> index ed6086d57b22..0c92b01a3c3c 100644
> --- a/arch/powerpc/include/asm/hvcall.h
> +++ b/arch/powerpc/include/asm/hvcall.h
> @@ -446,6 +446,9 @@
>   */
>  long plpar_hcall_norets(unsigned long opcode, ...);
> 
> +/* Variant which does not do hcall tracing */
> +long plpar_hcall_norets_notrace(unsigned long opcode, ...);
> +
>  /**
>   * plpar_hcall: - Make a pseries hypervisor call
>   * @opcode: The hypervisor call to make.
> diff --git a/arch/powerpc/include/asm/paravirt.h b/arch/powerpc/include/asm/paravirt.h
> index 5d1726bb28e7..3c13c2ec70a9 100644
> --- a/arch/powerpc/include/asm/paravirt.h
> +++ b/arch/powerpc/include/asm/paravirt.h
> @@ -30,17 +30,33 @@ static inline u32 yield_count_of(int cpu)
> 
>  static inline void yield_to_preempted(int cpu, u32 yield_count)
>  {

It looks like yield_to_preempted() is only used by simple spin locks 
today. I wonder if it makes more sense to put the below comment in 
yield_to_any() which is used by the qspinlock code.

> -	plpar_hcall_norets(H_CONFER, get_hard_smp_processor_id(cpu), yield_count);
> +	/*
> +	 * Spinlock code yields and prods, so don't trace the hcalls because
> +	 * tracing code takes spinlocks which could recurse.
> +	 *
> +	 * These calls are made while the lock is not held, the lock slowpath
> +	 * yields if it cannot acquire the lock, and unlock slow path might
> +	 * prod if a waiter has yielded. So this did not seem to be a problem
> +	 * for simple spin locks because technically it didn't recuse on the
							       ^^^^^^
							       recurse

> +	 * lock. However the queued spin lock contended path is more strictly
> +	 * ordered: the H_CONFER hcall is made after the task has queued itself
> +	 * on the lock, so then recursing on the lock will queue up behind that
> +	 * (or worse: queued spinlocks use tricks that assume a context never
> +	 * waits on more than one spinlock, so that may cause random
> +	 * corruption).
> +	 */
> +	plpar_hcall_norets_notrace(H_CONFER,
> +				   get_hard_smp_processor_id(cpu), yield_count);

This can all be on a single line.


- Naveen



* Re: [PATCH 2/4] powerpc/pseries: Don't trace hcall tracing wrapper
  2021-04-23  3:11 ` [PATCH 2/4] powerpc/pseries: Don't trace hcall tracing wrapper Nicholas Piggin
@ 2021-04-27 13:52   ` Naveen N. Rao
  0 siblings, 0 replies; 15+ messages in thread
From: Naveen N. Rao @ 2021-04-27 13:52 UTC (permalink / raw)
  To: linuxppc-dev, Nicholas Piggin

Nicholas Piggin wrote:
> It doesn't seem very useful to trace these wrappers before the
> recursion check, even if the ftrace code has recursion checks of its
> own. Be on the safe side and don't trace the hcall trace wrappers.

These functions exist precisely to allow hcalls to be traced, so it 
doesn't make sense to "trace the tracer". Users wanting to know about 
hcalls are better off enabling the tracepoint here instead.
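
For example (tracefs path assumed to be the usual debugfs location):

  # echo 1 > /sys/kernel/debug/tracing/events/powerpc/hcall_entry/enable
  # echo 1 > /sys/kernel/debug/tracing/events/powerpc/hcall_exit/enable

which records every hcall via the tracepoints these wrappers serve,
without tracing the wrappers themselves.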

> 
> Reported-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>  arch/powerpc/platforms/pseries/lpar.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)

Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>


- Naveen


* Re: [PATCH 3/4] powerpc/pseries: use notrace hcall variant for H_CEDE idle
  2021-04-23  3:11 ` [PATCH 3/4] powerpc/pseries: use notrace hcall variant for H_CEDE idle Nicholas Piggin
@ 2021-04-27 13:53   ` Naveen N. Rao
  0 siblings, 0 replies; 15+ messages in thread
From: Naveen N. Rao @ 2021-04-27 13:53 UTC (permalink / raw)
  To: linuxppc-dev, Nicholas Piggin

Nicholas Piggin wrote:
> Rather than special-case H_CEDE in the hcall trace wrappers, make the
> idle H_CEDE call use plpar_hcall_norets_notrace().
> 
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>  arch/powerpc/include/asm/plpar_wrappers.h |  6 +++++-
>  arch/powerpc/platforms/pseries/lpar.c     | 10 ----------
>  2 files changed, 5 insertions(+), 11 deletions(-)

Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>


- Naveen



* Re: [PATCH 4/4] powerpc/pseries: warn if recursing into the hcall tracing code
  2021-04-23  3:11 ` [PATCH 4/4] powerpc/pseries: warn if recursing into the hcall tracing code Nicholas Piggin
@ 2021-04-27 13:59   ` Naveen N. Rao
  2021-05-01  1:24     ` Nicholas Piggin
  0 siblings, 1 reply; 15+ messages in thread
From: Naveen N. Rao @ 2021-04-27 13:59 UTC (permalink / raw)
  To: linuxppc-dev, Nicholas Piggin

Nicholas Piggin wrote:
> ---
>  arch/powerpc/platforms/pseries/lpar.c | 11 +++++++----
>  1 file changed, 7 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
> index 835e7f661a05..a961a7ebeab3 100644
> --- a/arch/powerpc/platforms/pseries/lpar.c
> +++ b/arch/powerpc/platforms/pseries/lpar.c
> @@ -1828,8 +1828,11 @@ void hcall_tracepoint_unregfunc(void)
> 
>  /*
>   * Since the tracing code might execute hcalls we need to guard against
> - * recursion. H_CONFER from spin locks must be treated separately though
> - * and use _notrace plpar_hcall variants, see yield_to_preempted().
> + * recursion, but this always seems risky -- __trace_hcall_entry might be
> + * ftraced, for example. So warn in this case.

__trace_hcall_[entry|exit] aren't traced anymore since they now have the 
'notrace' annotation.

> + *
> + * H_CONFER from spin locks must be treated separately though and use _notrace
> + * plpar_hcall variants, see yield_to_preempted().
>   */
>  static DEFINE_PER_CPU(unsigned int, hcall_trace_depth);
> 
> @@ -1843,7 +1846,7 @@ notrace void __trace_hcall_entry(unsigned long opcode, unsigned long *args)
> 
>  	depth = this_cpu_ptr(&hcall_trace_depth);
> 
> -	if (*depth)
> +	if (WARN_ON_ONCE(*depth))
>  		goto out;

I don't think this will be helpful. The hcall trace depth tracking is 
for the tracepoint and I suspect that this warning will be triggered 
quite easily. Since we have recursion protection, I don't think we 
should warn here.
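
For reference, the recursion protection being discussed looks roughly
like this (reconstructed from the hunks in this thread; simplified, the
real function also fiddles with preemption):

  notrace void __trace_hcall_entry(unsigned long opcode, unsigned long *args)
  {
  	unsigned long flags;
  	unsigned int *depth;

  	local_irq_save(flags);

  	depth = this_cpu_ptr(&hcall_trace_depth);
  	if (*depth)	/* already inside the tracing path: bail */
  		goto out;

  	(*depth)++;
  	trace_hcall_entry(opcode, args);	/* may itself end up in an hcall */
  	(*depth)--;
  out:
  	local_irq_restore(flags);
  }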


- Naveen



* Re: [PATCH 1/4] powerpc/pseries: Fix hcall tracing recursion in pv queued spinlocks
  2021-04-27 13:43   ` Naveen N. Rao
@ 2021-05-01  1:22     ` Nicholas Piggin
  2021-05-02 13:48       ` Michael Ellerman
  0 siblings, 1 reply; 15+ messages in thread
From: Nicholas Piggin @ 2021-05-01  1:22 UTC (permalink / raw)
  To: linuxppc-dev, Naveen N. Rao

Excerpts from Naveen N. Rao's message of April 27, 2021 11:43 pm:
> Nicholas Piggin wrote:
>> The paravirt queued spinlock slow path adds itself to the queue, then
>> calls pv_wait to wait for the lock to become free. This is implemented
>> by calling H_CONFER to donate cycles.
>> 
>> When hcall tracing is enabled, this H_CONFER call can lead to a spin
>> lock being taken in the tracing code, which can result in the same lock
>> being taken again; the nested acquisition also goes to the slow path
>> because it queues behind itself, and so never makes progress.
>> 
>> An example trace of a deadlock:
>> 
>>   __pv_queued_spin_lock_slowpath
>>   trace_clock_global
>>   ring_buffer_lock_reserve
>>   trace_event_buffer_lock_reserve
>>   trace_event_buffer_reserve
>>   trace_event_raw_event_hcall_exit
>>   __trace_hcall_exit
>>   plpar_hcall_norets_trace
>>   __pv_queued_spin_lock_slowpath
>>   trace_clock_global
>>   ring_buffer_lock_reserve
>>   trace_event_buffer_lock_reserve
>>   trace_event_buffer_reserve
>>   trace_event_raw_event_rcu_dyntick
>>   rcu_irq_exit
>>   irq_exit
>>   __do_irq
>>   call_do_irq
>>   do_IRQ
>>   hardware_interrupt_common_virt
>> 
>> Fix this by introducing plpar_hcall_norets_notrace(), and using it for
>> the SPLPAR virtual processor dispatching hcalls made by the paravirt
>> spinlock code.
>> 
>> Fixes: 20c0e8269e9d ("powerpc/pseries: Implement paravirt qspinlocks for SPLPAR")
>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> ---
>>  arch/powerpc/include/asm/hvcall.h       |  3 +++
>>  arch/powerpc/include/asm/paravirt.h     | 22 +++++++++++++++++++---
>>  arch/powerpc/platforms/pseries/hvCall.S | 10 ++++++++++
>>  arch/powerpc/platforms/pseries/lpar.c   |  4 ++--
>>  4 files changed, 34 insertions(+), 5 deletions(-)
> 
> Thanks for the fix! Some very minor nits below, but none the less:
> Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> 
>> 
>> diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
>> index ed6086d57b22..0c92b01a3c3c 100644
>> --- a/arch/powerpc/include/asm/hvcall.h
>> +++ b/arch/powerpc/include/asm/hvcall.h
>> @@ -446,6 +446,9 @@
>>   */
>>  long plpar_hcall_norets(unsigned long opcode, ...);
>> 
>> +/* Variant which does not do hcall tracing */
>> +long plpar_hcall_norets_notrace(unsigned long opcode, ...);
>> +
>>  /**
>>   * plpar_hcall: - Make a pseries hypervisor call
>>   * @opcode: The hypervisor call to make.
>> diff --git a/arch/powerpc/include/asm/paravirt.h b/arch/powerpc/include/asm/paravirt.h
>> index 5d1726bb28e7..3c13c2ec70a9 100644
>> --- a/arch/powerpc/include/asm/paravirt.h
>> +++ b/arch/powerpc/include/asm/paravirt.h
>> @@ -30,17 +30,33 @@ static inline u32 yield_count_of(int cpu)
>> 
>>  static inline void yield_to_preempted(int cpu, u32 yield_count)
>>  {
> 
> It looks like yield_to_preempted() is only used by simple spin locks 
> today. I wonder if it makes more sense to put the below comment in 
> yield_to_any() which is used by the qspinlock code.

Yeah, I just put it above the functions entirely because it refers to 
all of them.

> 
>> -	plpar_hcall_norets(H_CONFER, get_hard_smp_processor_id(cpu), yield_count);
>> +	/*
>> +	 * Spinlock code yields and prods, so don't trace the hcalls because
>> +	 * tracing code takes spinlocks which could recurse.
>> +	 *
>> +	 * These calls are made while the lock is not held, the lock slowpath
>> +	 * yields if it cannot acquire the lock, and unlock slow path might
>> +	 * prod if a waiter has yielded. So this did not seem to be a problem
>> +	 * for simple spin locks because technically it didn't recuse on the
> 							       ^^^^^^
> 							       recurse
> 
>> +	 * lock. However the queued spin lock contended path is more strictly
>> +	 * ordered: the H_CONFER hcall is made after the task has queued itself
>> +	 * on the lock, so then recursing on the lock will queue up behind that
>> +	 * (or worse: queued spinlocks use tricks that assume a context never
>> +	 * waits on more than one spinlock, so that may cause random
>> +	 * corruption).
>> +	 */
>> +	plpar_hcall_norets_notrace(H_CONFER,
>> +				   get_hard_smp_processor_id(cpu), yield_count);
> 
> This can all be on a single line.

Should it though? Linux in general allegedly changed to 100 column 
lines for checkpatch, but it seems to still be frowned upon to go
beyond 80 deliberately. What about arch/powerpc?

Thanks,
Nick


* Re: [PATCH 4/4] powerpc/pseries: warn if recursing into the hcall tracing code
  2021-04-27 13:59   ` Naveen N. Rao
@ 2021-05-01  1:24     ` Nicholas Piggin
  2021-05-04 10:25       ` Naveen N. Rao
  0 siblings, 1 reply; 15+ messages in thread
From: Nicholas Piggin @ 2021-05-01  1:24 UTC (permalink / raw)
  To: linuxppc-dev, Naveen N. Rao

Excerpts from Naveen N. Rao's message of April 27, 2021 11:59 pm:
> Nicholas Piggin wrote:
>> ---
>>  arch/powerpc/platforms/pseries/lpar.c | 11 +++++++----
>>  1 file changed, 7 insertions(+), 4 deletions(-)
>> 
>> diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
>> index 835e7f661a05..a961a7ebeab3 100644
>> --- a/arch/powerpc/platforms/pseries/lpar.c
>> +++ b/arch/powerpc/platforms/pseries/lpar.c
>> @@ -1828,8 +1828,11 @@ void hcall_tracepoint_unregfunc(void)
>> 
>>  /*
>>   * Since the tracing code might execute hcalls we need to guard against
>> - * recursion. H_CONFER from spin locks must be treated separately though
>> - * and use _notrace plpar_hcall variants, see yield_to_preempted().
>> + * recursion, but this always seems risky -- __trace_hcall_entry might be
>> + * ftraced, for example. So warn in this case.
> 
> __trace_hcall_[entry|exit] aren't traced anymore since they now have the 
> 'notrace' annotation.

Yes, that's true. I went back and added the other patch, so I should
fix this comment.

>> + *
>> + * H_CONFER from spin locks must be treated separately though and use _notrace
>> + * plpar_hcall variants, see yield_to_preempted().
>>   */
>>  static DEFINE_PER_CPU(unsigned int, hcall_trace_depth);
>> 
>> @@ -1843,7 +1846,7 @@ notrace void __trace_hcall_entry(unsigned long opcode, unsigned long *args)
>> 
>>  	depth = this_cpu_ptr(&hcall_trace_depth);
>> 
>> -	if (*depth)
>> +	if (WARN_ON_ONCE(*depth))
>>  		goto out;
> 
> I don't think this will be helpful. The hcall trace depth tracking is 
> for the tracepoint and I suspect that this warning will be triggered 
> quite easily. Since we have recursion protection, I don't think we 
> should warn here.

What would trigger recursion?

Thanks,
Nick


* Re: [PATCH 1/4] powerpc/pseries: Fix hcall tracing recursion in pv queued spinlocks
  2021-05-01  1:22     ` Nicholas Piggin
@ 2021-05-02 13:48       ` Michael Ellerman
  0 siblings, 0 replies; 15+ messages in thread
From: Michael Ellerman @ 2021-05-02 13:48 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev, Naveen N. Rao

Nicholas Piggin <npiggin@gmail.com> writes:
> Excerpts from Naveen N. Rao's message of April 27, 2021 11:43 pm:
>> Nicholas Piggin wrote:
>>> The paravirt queued spinlock slow path adds itself to the queue, then
>>> calls pv_wait to wait for the lock to become free. This is implemented
>>> by calling H_CONFER to donate cycles.
>>> 
>>> When hcall tracing is enabled, this H_CONFER call can lead to a spin
>>> lock being taken in the tracing code, which can result in the same lock
>>> being taken again; the nested acquisition also goes to the slow path
>>> because it queues behind itself, and so never makes progress.
>>> 
>>> An example trace of a deadlock:
>>> 
>>>   __pv_queued_spin_lock_slowpath
>>>   trace_clock_global
>>>   ring_buffer_lock_reserve
>>>   trace_event_buffer_lock_reserve
>>>   trace_event_buffer_reserve
>>>   trace_event_raw_event_hcall_exit
>>>   __trace_hcall_exit
>>>   plpar_hcall_norets_trace
>>>   __pv_queued_spin_lock_slowpath
>>>   trace_clock_global
>>>   ring_buffer_lock_reserve
>>>   trace_event_buffer_lock_reserve
>>>   trace_event_buffer_reserve
>>>   trace_event_raw_event_rcu_dyntick
>>>   rcu_irq_exit
>>>   irq_exit
>>>   __do_irq
>>>   call_do_irq
>>>   do_IRQ
>>>   hardware_interrupt_common_virt
>>> 
>>> Fix this by introducing plpar_hcall_norets_notrace(), and using it for
>>> the SPLPAR virtual processor dispatching hcalls made by the paravirt
>>> spinlock code.
>>> 
>>> Fixes: 20c0e8269e9d ("powerpc/pseries: Implement paravirt qspinlocks for SPLPAR")
>>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>>> ---
>>>  arch/powerpc/include/asm/hvcall.h       |  3 +++
>>>  arch/powerpc/include/asm/paravirt.h     | 22 +++++++++++++++++++---
>>>  arch/powerpc/platforms/pseries/hvCall.S | 10 ++++++++++
>>>  arch/powerpc/platforms/pseries/lpar.c   |  4 ++--
>>>  4 files changed, 34 insertions(+), 5 deletions(-)
>> 
>> Thanks for the fix! Some very minor nits below, but none the less:
>> Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
>> 
>>> 
>>> diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
>>> index ed6086d57b22..0c92b01a3c3c 100644
>>> --- a/arch/powerpc/include/asm/hvcall.h
>>> +++ b/arch/powerpc/include/asm/hvcall.h
>>> @@ -446,6 +446,9 @@
>>>   */
>>>  long plpar_hcall_norets(unsigned long opcode, ...);
>>> 
>>> +/* Variant which does not do hcall tracing */
>>> +long plpar_hcall_norets_notrace(unsigned long opcode, ...);
>>> +
>>>  /**
>>>   * plpar_hcall: - Make a pseries hypervisor call
>>>   * @opcode: The hypervisor call to make.
>>> diff --git a/arch/powerpc/include/asm/paravirt.h b/arch/powerpc/include/asm/paravirt.h
>>> index 5d1726bb28e7..3c13c2ec70a9 100644
>>> --- a/arch/powerpc/include/asm/paravirt.h
>>> +++ b/arch/powerpc/include/asm/paravirt.h
>>> @@ -30,17 +30,33 @@ static inline u32 yield_count_of(int cpu)
>>> 
>>>  static inline void yield_to_preempted(int cpu, u32 yield_count)
>>>  {
>> 
>> It looks like yield_to_preempted() is only used by simple spin locks 
>> today. I wonder if it makes more sense to put the below comment in 
>> yield_to_any() which is used by the qspinlock code.
>
> Yeah, I just put it above the functions entirely because it refers to 
> all of them.
>
>> 
>>> -	plpar_hcall_norets(H_CONFER, get_hard_smp_processor_id(cpu), yield_count);
>>> +	/*
>>> +	 * Spinlock code yields and prods, so don't trace the hcalls because
>>> +	 * tracing code takes spinlocks which could recurse.
>>> +	 *
>>> +	 * These calls are made while the lock is not held, the lock slowpath
>>> +	 * yields if it cannot acquire the lock, and unlock slow path might
>>> +	 * prod if a waiter has yielded. So this did not seem to be a problem
>>> +	 * for simple spin locks because technically it didn't recuse on the
>> 							       ^^^^^^
>> 							       recurse
>> 
>>> +	 * lock. However the queued spin lock contended path is more strictly
>>> +	 * ordered: the H_CONFER hcall is made after the task has queued itself
>>> +	 * on the lock, so then recursing on the lock will queue up behind that
>>> +	 * (or worse: queued spinlocks use tricks that assume a context never
>>> +	 * waits on more than one spinlock, so that may cause random
>>> +	 * corruption).
>>> +	 */
>>> +	plpar_hcall_norets_notrace(H_CONFER,
>>> +				   get_hard_smp_processor_id(cpu), yield_count);
>> 
>> This can all be on a single line.
>
> Should it though? Linux in general allegedly changed to 100 column 
> lines for checkpatch, but it seems to still be frowned upon to go
> beyond 80 deliberately. What about arch/powerpc?

Splitting it provides zero benefit to code readability IMO. And it would
be only 89 by my count, which is not grossly long.

cheers


* Re: [PATCH 4/4] powerpc/pseries: warn if recursing into the hcall tracing code
  2021-05-01  1:24     ` Nicholas Piggin
@ 2021-05-04 10:25       ` Naveen N. Rao
  2021-05-04 10:45         ` Nicholas Piggin
  0 siblings, 1 reply; 15+ messages in thread
From: Naveen N. Rao @ 2021-05-04 10:25 UTC (permalink / raw)
  To: linuxppc-dev, Nicholas Piggin

Nicholas Piggin wrote:
> Excerpts from Naveen N. Rao's message of April 27, 2021 11:59 pm:
>> Nicholas Piggin wrote:
>>> + *
>>> + * H_CONFER from spin locks must be treated separately though and use _notrace
>>> + * plpar_hcall variants, see yield_to_preempted().
>>>   */
>>>  static DEFINE_PER_CPU(unsigned int, hcall_trace_depth);
>>> 
>>> @@ -1843,7 +1846,7 @@ notrace void __trace_hcall_entry(unsigned long opcode, unsigned long *args)
>>> 
>>>  	depth = this_cpu_ptr(&hcall_trace_depth);
>>> 
>>> -	if (*depth)
>>> +	if (WARN_ON_ONCE(*depth))
>>>  		goto out;
>> 
>> I don't think this will be helpful. The hcall trace depth tracking is 
>> for the tracepoint and I suspect that this warning will be triggered 
>> quite easily. Since we have recursion protection, I don't think we 
>> should warn here.
> 
> What would trigger recursion?

The trace code that this protects: trace_hcall_entry(). The tracing code 
itself can end up doing a hcall as we see in the first patch in this 
series:
  plpar_hcall_norets_trace+0x34/0x8c (unreliable)
  __pv_queued_spin_lock_slowpath+0x684/0x710
  trace_clock_global+0x148/0x150
  ring_buffer_lock_reserve+0x12c/0x630
  trace_event_buffer_lock_reserve+0x80/0x220
  trace_event_buffer_reserve+0x7c/0xd0
  trace_event_raw_event_hcall_entry+0x68/0x150
  __trace_hcall_entry+0x160/0x180


There is also a comment around hcall_trace_depth that mentions this:

  /*
   * Since the tracing code might execute hcalls we need to guard against
   * recursion. One example of this are spinlocks calling H_YIELD on
   * shared processor partitions.
   */


Thanks,
Naveen



* Re: [PATCH 4/4] powerpc/pseries: warn if recursing into the hcall tracing code
  2021-05-04 10:25       ` Naveen N. Rao
@ 2021-05-04 10:45         ` Nicholas Piggin
  2021-05-04 16:18           ` Naveen N. Rao
  0 siblings, 1 reply; 15+ messages in thread
From: Nicholas Piggin @ 2021-05-04 10:45 UTC (permalink / raw)
  To: linuxppc-dev, Naveen N. Rao

Excerpts from Naveen N. Rao's message of May 4, 2021 8:25 pm:
> Nicholas Piggin wrote:
>> Excerpts from Naveen N. Rao's message of April 27, 2021 11:59 pm:
>>> Nicholas Piggin wrote:
>>>> + *
>>>> + * H_CONFER from spin locks must be treated separately though and use _notrace
>>>> + * plpar_hcall variants, see yield_to_preempted().
>>>>   */
>>>>  static DEFINE_PER_CPU(unsigned int, hcall_trace_depth);
>>>> 
>>>> @@ -1843,7 +1846,7 @@ notrace void __trace_hcall_entry(unsigned long opcode, unsigned long *args)
>>>> 
>>>>  	depth = this_cpu_ptr(&hcall_trace_depth);
>>>> 
>>>> -	if (*depth)
>>>> +	if (WARN_ON_ONCE(*depth))
>>>>  		goto out;
>>> 
>>> I don't think this will be helpful. The hcall trace depth tracking is 
>>> for the tracepoint and I suspect that this warning will be triggered 
>>> quite easily. Since we have recursion protection, I don't think we 
>>> should warn here.
>> 
>> What would trigger recursion?
> 
> The trace code that this protects: trace_hcall_entry(). The tracing code 
> itself can end up doing a hcall as we see in the first patch in this 
> series:
>   plpar_hcall_norets_trace+0x34/0x8c (unreliable)
>   __pv_queued_spin_lock_slowpath+0x684/0x710
>   trace_clock_global+0x148/0x150
>   ring_buffer_lock_reserve+0x12c/0x630
>   trace_event_buffer_lock_reserve+0x80/0x220
>   trace_event_buffer_reserve+0x7c/0xd0
>   trace_event_raw_event_hcall_entry+0x68/0x150
>   __trace_hcall_entry+0x160/0x180
> 
> 
> There is also a comment around hcall_trace_depth that mentions this:
> 
>   /*
>    * Since the tracing code might execute hcalls we need to guard against
>    * recursion. One example of this are spinlocks calling H_YIELD on
>    * shared processor partitions.
>    */

Right, but since fixing those, my thought is we had better not cause
any more recursion, so we should fix anything that does.

Thanks,
Nick


* Re: [PATCH 4/4] powerpc/pseries: warn if recursing into the hcall tracing code
  2021-05-04 10:45         ` Nicholas Piggin
@ 2021-05-04 16:18           ` Naveen N. Rao
  0 siblings, 0 replies; 15+ messages in thread
From: Naveen N. Rao @ 2021-05-04 16:18 UTC (permalink / raw)
  To: linuxppc-dev, Nicholas Piggin

Nicholas Piggin wrote:
> Excerpts from Naveen N. Rao's message of May 4, 2021 8:25 pm:
>> Nicholas Piggin wrote:
>>> Excerpts from Naveen N. Rao's message of April 27, 2021 11:59 pm:
>>>> Nicholas Piggin wrote:
>>>>> + *
>>>>> + * H_CONFER from spin locks must be treated separately though and use _notrace
>>>>> + * plpar_hcall variants, see yield_to_preempted().
>>>>>   */
>>>>>  static DEFINE_PER_CPU(unsigned int, hcall_trace_depth);
>>>>> 
>>>>> @@ -1843,7 +1846,7 @@ notrace void __trace_hcall_entry(unsigned long opcode, unsigned long *args)
>>>>> 
>>>>>  	depth = this_cpu_ptr(&hcall_trace_depth);
>>>>> 
>>>>> -	if (*depth)
>>>>> +	if (WARN_ON_ONCE(*depth))
>>>>>  		goto out;
>>>> 
>>>> I don't think this will be helpful. The hcall trace depth tracking is 
>>>> for the tracepoint and I suspect that this warning will be triggered 
>>>> quite easily. Since we have recursion protection, I don't think we 
>>>> should warn here.
>>> 
>>> What would trigger recursion?
>> 
>> The trace code that this protects: trace_hcall_entry(). The tracing code 
>> itself can end up doing a hcall as we see in the first patch in this 
>> series:
>>   plpar_hcall_norets_trace+0x34/0x8c (unreliable)
>>   __pv_queued_spin_lock_slowpath+0x684/0x710
>>   trace_clock_global+0x148/0x150
>>   ring_buffer_lock_reserve+0x12c/0x630
>>   trace_event_buffer_lock_reserve+0x80/0x220
>>   trace_event_buffer_reserve+0x7c/0xd0
>>   trace_event_raw_event_hcall_entry+0x68/0x150
>>   __trace_hcall_entry+0x160/0x180
>> 
>> 
>> There is also a comment around hcall_trace_depth that mentions this:
>> 
>>   /*
>>    * Since the tracing code might execute hcalls we need to guard against
>>    * recursion. One example of this are spinlocks calling H_YIELD on
>>    * shared processor partitions.
>>    */
> 
> Right, but since fixing those, my thought is we had better not cause
> any more recursion, so we should fix anything that does.

Ah ok, I got confused by the reference to recursion, since the current 
fix did not involve hcall trace recursion per se.

Yes, with the current patch set, we have ensured that at least the
qspinlock code doesn't invoke tracing if it has to do the H_CONFER
hcall. I am not entirely sure that's the only hcall that could be
invoked by the tracing code.

The reason I feel this warning isn't useful is that it can be triggered
by the perf NMI -- if the perf NMI happens while we are tracing a
different hcall.


Thanks,
Naveen


