* [PATCH] arm64: use preempt_disable_notrace in _percpu_read/write
@ 2016-09-08 12:45 Chunyan Zhang
From: Chunyan Zhang @ 2016-09-08 12:45 UTC
  To: rostedt, mingo
  Cc: zhang.lyra, linux-kernel, linux-arm-kernel, takahiro.akashi, mark.yang

When CONFIG_DEBUG_PREEMPT or the preempt tracer is enabled,
preempt_count_add/sub() can themselves be traced by the function and
function graph tracers. Since preempt_disable/enable() call
preempt_count_add/sub(), code in the Ftrace subsystem should use
preempt_disable/enable_notrace() instead.
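
For context, a simplified sketch of the two variants, based on
include/linux/preempt.h of that era (the exact definitions depend on
kernel configuration):

/*
 * With CONFIG_DEBUG_PREEMPT or CONFIG_PREEMPT_TRACER enabled,
 * preempt_count_inc() maps to the out-of-line preempt_count_add(),
 * which is visible to the function (graph) tracer.
 */
#define preempt_disable() \
do { \
	preempt_count_inc(); \
	barrier(); \
} while (0)

/*
 * The _notrace variant uses the inline __preempt_count_inc(),
 * which never shows up as a traced function.
 */
#define preempt_disable_notrace() \
do { \
	__preempt_count_inc(); \
	barrier(); \
} while (0)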

Commit 345ddcc882d8 ("ftrace: Have set_ftrace_pid use the bitmap
like events do") added a this_cpu_read() call to trace_graph_entry().
If this_cpu_read() calls preempt_disable(), the graph tracer goes into
a recursive loop, even when tracing_on is disabled.
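
Sketched as a call chain (illustrative, not verbatim from the source):

trace_graph_entry()
  -> this_cpu_read()			/* added by 345ddcc882d8 */
    -> preempt_disable()
      -> preempt_count_add()		/* itself a traced function */
        -> function graph entry hook
          -> trace_graph_entry()	/* recursion, and so on */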

So change the arm64 _percpu_read/write() macros, which this_cpu_read()
ends up using, to preempt_disable/enable_notrace() instead.
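
On arm64, this_cpu_read/write() reach these macros through the
per-size accessors defined in the same file (quoted for context, not
part of the diff):

#define this_cpu_read_1(pcp) _percpu_read(pcp)
#define this_cpu_read_2(pcp) _percpu_read(pcp)
#define this_cpu_read_4(pcp) _percpu_read(pcp)
#define this_cpu_read_8(pcp) _percpu_read(pcp)
#define this_cpu_write_1(pcp, val) _percpu_write(pcp, val)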

Yonghui Yang helped a lot in finding the root cause of this problem,
so his Signed-off-by is added as well.

Signed-off-by: Yonghui Yang <mark.yang@spreadtrum.com>
Signed-off-by: Chunyan Zhang <zhang.chunyan@linaro.org>
---
 arch/arm64/include/asm/percpu.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/percpu.h b/arch/arm64/include/asm/percpu.h
index 0a456be..2fee2f5 100644
--- a/arch/arm64/include/asm/percpu.h
+++ b/arch/arm64/include/asm/percpu.h
@@ -199,19 +199,19 @@ static inline unsigned long __percpu_xchg(void *ptr, unsigned long val,
 #define _percpu_read(pcp)						\
 ({									\
 	typeof(pcp) __retval;						\
-	preempt_disable();						\
+	preempt_disable_notrace();					\
 	__retval = (typeof(pcp))__percpu_read(raw_cpu_ptr(&(pcp)), 	\
 					      sizeof(pcp));		\
-	preempt_enable();						\
+	preempt_enable_notrace();					\
 	__retval;							\
 })
 
 #define _percpu_write(pcp, val)						\
 do {									\
-	preempt_disable();						\
+	preempt_disable_notrace();					\
 	__percpu_write(raw_cpu_ptr(&(pcp)), (unsigned long)(val), 	\
 				sizeof(pcp));				\
-	preempt_enable();						\
+	preempt_enable_notrace();					\
 } while(0)								\
 
 #define _pcp_protect(operation, pcp, val)			\
-- 
2.7.4


Thread overview:
2016-09-08 12:45 [PATCH] arm64: use preempt_disable_notrace in _percpu_read/write Chunyan Zhang
2016-09-08 13:02 ` Mark Rutland
2016-09-08 13:17   ` Chunyan Zhang
2016-09-09 10:07 ` Will Deacon
2016-09-09 11:43   ` Chunyan Zhang
2016-09-09 11:35 ` Catalin Marinas
