* [PATCH] arm64: use preempt_disable_notrace in _percpu_read/write
@ 2016-09-08 12:46 ` Chunyan Zhang
  0 siblings, 0 replies; 13+ messages in thread
From: Chunyan Zhang @ 2016-09-08 12:46 UTC (permalink / raw)
  To: rostedt, mingo
  Cc: zhang.lyra, linux-kernel, linux-arm-kernel, takahiro.akashi, mark.yang

When debug preempt or the preempt tracer is enabled,
preempt_count_add/sub() can be traced by the function and function
graph tracers, and preempt_disable/enable() call
preempt_count_add/sub(), so within the Ftrace subsystem we should use
preempt_disable/enable_notrace instead.

Commit 345ddcc882d8 ("ftrace: Have set_ftrace_pid use the bitmap like
events do") added a this_cpu_read() call to trace_graph_entry(); if
this_cpu_read() calls preempt_disable(), the graph tracer goes into a
recursive loop, even when tracing_on is disabled.

So this patch switches the arm64 _percpu_read/write helpers, which back
this_cpu_read(), to preempt_enable/disable_notrace().

Since Yonghui Yang helped a lot in finding the root cause of this
problem, his Signed-off-by is added as well.

Signed-off-by: Yonghui Yang <mark.yang@spreadtrum.com>
Signed-off-by: Chunyan Zhang <zhang.chunyan@linaro.org>
---
 arch/arm64/include/asm/percpu.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/percpu.h b/arch/arm64/include/asm/percpu.h
index 0a456be..2fee2f5 100644
--- a/arch/arm64/include/asm/percpu.h
+++ b/arch/arm64/include/asm/percpu.h
@@ -199,19 +199,19 @@ static inline unsigned long __percpu_xchg(void *ptr, unsigned long val,
 #define _percpu_read(pcp)						\
 ({									\
 	typeof(pcp) __retval;						\
-	preempt_disable();						\
+	preempt_disable_notrace();					\
 	__retval = (typeof(pcp))__percpu_read(raw_cpu_ptr(&(pcp)), 	\
 					      sizeof(pcp));		\
-	preempt_enable();						\
+	preempt_enable_notrace();					\
 	__retval;							\
 })
 
 #define _percpu_write(pcp, val)						\
 do {									\
-	preempt_disable();						\
+	preempt_disable_notrace();					\
 	__percpu_write(raw_cpu_ptr(&(pcp)), (unsigned long)(val), 	\
 				sizeof(pcp));				\
-	preempt_enable();						\
+	preempt_enable_notrace();					\
 } while(0)								\
 
 #define _pcp_protect(operation, pcp, val)			\
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 13+ messages in thread
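
The recursion described above hinges on the difference between the
traced and notrace preempt helpers. Below is a simplified sketch,
paraphrasing include/linux/preempt.h of that era (not verbatim kernel
source), showing why the _notrace variants break the loop:

/*
 * With CONFIG_DEBUG_PREEMPT or the preempt tracer enabled,
 * preempt_count_add()/preempt_count_sub() are out-of-line functions
 * that ftrace can hook, and the graph-tracer entry hook itself uses
 * this_cpu_read(), closing the loop:
 *
 *   this_cpu_read()
 *     -> preempt_disable()
 *       -> preempt_count_add()      [traceable]
 *         -> trace_graph_entry()
 *           -> this_cpu_read()      ... and so on
 */
#define preempt_disable() \
do { \
	preempt_count_inc();	/* -> preempt_count_add(1), traceable */ \
	barrier(); \
} while (0)

/*
 * The notrace variant bumps the preempt count via the inline
 * __preempt_count_inc() helper, which the function tracers never see,
 * so the recursion cannot start.
 */
#define preempt_disable_notrace() \
do { \
	__preempt_count_inc(); \
	barrier(); \
} while (0)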

* Re: [PATCH] arm64: use preempt_disable_notrace in _percpu_read/write
  2016-09-08 12:46 ` Chunyan Zhang
@ 2016-09-08 13:02   ` Mark Rutland
  -1 siblings, 0 replies; 13+ messages in thread
From: Mark Rutland @ 2016-09-08 13:02 UTC (permalink / raw)
  To: Chunyan Zhang, will.deacon, catalin.marinas
  Cc: rostedt, mingo, mark.yang, zhang.lyra, linux-kernel,
	linux-arm-kernel, takahiro.akashi

Hi,

In future, please ensure that you include the arm64 maintainers when
sending changes to core arm64 code. I've copied Catalin and Will for you
this time.

Thanks,
Mark.

On Thu, Sep 08, 2016 at 08:46:42PM +0800, Chunyan Zhang wrote:
> When debug preempt or the preempt tracer is enabled,
> preempt_count_add/sub() can be traced by the function and function
> graph tracers, and preempt_disable/enable() call
> preempt_count_add/sub(), so within the Ftrace subsystem we should use
> preempt_disable/enable_notrace instead.
> 
> Commit 345ddcc882d8 ("ftrace: Have set_ftrace_pid use the bitmap like
> events do") added a this_cpu_read() call to trace_graph_entry(); if
> this_cpu_read() calls preempt_disable(), the graph tracer goes into a
> recursive loop, even when tracing_on is disabled.
> 
> So this patch switches the arm64 _percpu_read/write helpers, which back
> this_cpu_read(), to preempt_enable/disable_notrace().
> 
> Since Yonghui Yang helped a lot in finding the root cause of this
> problem, his Signed-off-by is added as well.
> 
> Signed-off-by: Yonghui Yang <mark.yang@spreadtrum.com>
> Signed-off-by: Chunyan Zhang <zhang.chunyan@linaro.org>
> ---
>  arch/arm64/include/asm/percpu.h | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/percpu.h b/arch/arm64/include/asm/percpu.h
> index 0a456be..2fee2f5 100644
> --- a/arch/arm64/include/asm/percpu.h
> +++ b/arch/arm64/include/asm/percpu.h
> @@ -199,19 +199,19 @@ static inline unsigned long __percpu_xchg(void *ptr, unsigned long val,
>  #define _percpu_read(pcp)						\
>  ({									\
>  	typeof(pcp) __retval;						\
> -	preempt_disable();						\
> +	preempt_disable_notrace();					\
>  	__retval = (typeof(pcp))__percpu_read(raw_cpu_ptr(&(pcp)), 	\
>  					      sizeof(pcp));		\
> -	preempt_enable();						\
> +	preempt_enable_notrace();					\
>  	__retval;							\
>  })
>  
>  #define _percpu_write(pcp, val)						\
>  do {									\
> -	preempt_disable();						\
> +	preempt_disable_notrace();					\
>  	__percpu_write(raw_cpu_ptr(&(pcp)), (unsigned long)(val), 	\
>  				sizeof(pcp));				\
> -	preempt_enable();						\
> +	preempt_enable_notrace();					\
>  } while(0)								\
>  
>  #define _pcp_protect(operation, pcp, val)			\
> -- 
> 2.7.4

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH] arm64: use preempt_disable_notrace in _percpu_read/write
  2016-09-08 13:02   ` Mark Rutland
@ 2016-09-08 13:17     ` Chunyan Zhang
  -1 siblings, 0 replies; 13+ messages in thread
From: Chunyan Zhang @ 2016-09-08 13:17 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Chunyan Zhang, Will Deacon, Catalin Marinas, rostedt, mingo,
	mark.yang, linux-kernel, linux-arm-kernel, takahiro.akashi

Thanks Mark.

On 8 September 2016 at 21:02, Mark Rutland <mark.rutland@arm.com> wrote:
> Hi,
>
> In future, please ensure that you include the arm64 maintainers when
> sending changes to core arm64 code. I've copied Catalin and Will for you
> this time.

Sorry about this.

Chunyan

>
> Thanks,
> Mark.
>
> On Thu, Sep 08, 2016 at 08:46:42PM +0800, Chunyan Zhang wrote:
>> When debug preempt or the preempt tracer is enabled,
>> preempt_count_add/sub() can be traced by the function and function
>> graph tracers, and preempt_disable/enable() call
>> preempt_count_add/sub(), so within the Ftrace subsystem we should use
>> preempt_disable/enable_notrace instead.
>>
>> Commit 345ddcc882d8 ("ftrace: Have set_ftrace_pid use the bitmap like
>> events do") added a this_cpu_read() call to trace_graph_entry(); if
>> this_cpu_read() calls preempt_disable(), the graph tracer goes into a
>> recursive loop, even when tracing_on is disabled.
>>
>> So this patch switches the arm64 _percpu_read/write helpers, which back
>> this_cpu_read(), to preempt_enable/disable_notrace().
>>
>> Since Yonghui Yang helped a lot in finding the root cause of this
>> problem, his Signed-off-by is added as well.
>>
>> Signed-off-by: Yonghui Yang <mark.yang@spreadtrum.com>
>> Signed-off-by: Chunyan Zhang <zhang.chunyan@linaro.org>
>> ---
>>  arch/arm64/include/asm/percpu.h | 8 ++++----
>>  1 file changed, 4 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/percpu.h b/arch/arm64/include/asm/percpu.h
>> index 0a456be..2fee2f5 100644
>> --- a/arch/arm64/include/asm/percpu.h
>> +++ b/arch/arm64/include/asm/percpu.h
>> @@ -199,19 +199,19 @@ static inline unsigned long __percpu_xchg(void *ptr, unsigned long val,
>>  #define _percpu_read(pcp)                                            \
>>  ({                                                                   \
>>       typeof(pcp) __retval;                                           \
>> -     preempt_disable();                                              \
>> +     preempt_disable_notrace();                                      \
>>       __retval = (typeof(pcp))__percpu_read(raw_cpu_ptr(&(pcp)),      \
>>                                             sizeof(pcp));             \
>> -     preempt_enable();                                               \
>> +     preempt_enable_notrace();                                       \
>>       __retval;                                                       \
>>  })
>>
>>  #define _percpu_write(pcp, val)                                              \
>>  do {                                                                 \
>> -     preempt_disable();                                              \
>> +     preempt_disable_notrace();                                      \
>>       __percpu_write(raw_cpu_ptr(&(pcp)), (unsigned long)(val),       \
>>                               sizeof(pcp));                           \
>> -     preempt_enable();                                               \
>> +     preempt_enable_notrace();                                       \
>>  } while(0)                                                           \
>>
>>  #define _pcp_protect(operation, pcp, val)                    \
>> --
>> 2.7.4

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH] arm64: use preempt_disable_notrace in _percpu_read/write
  2016-09-08 12:46 ` Chunyan Zhang
@ 2016-09-09 10:07   ` Will Deacon
  -1 siblings, 0 replies; 13+ messages in thread
From: Will Deacon @ 2016-09-09 10:07 UTC (permalink / raw)
  To: Chunyan Zhang
  Cc: rostedt, mingo, mark.yang, zhang.lyra, linux-kernel,
	linux-arm-kernel, takahiro.akashi, catalin.marinas

On Thu, Sep 08, 2016 at 08:46:42PM +0800, Chunyan Zhang wrote:
> When debug preempt or the preempt tracer is enabled,
> preempt_count_add/sub() can be traced by the function and function
> graph tracers, and preempt_disable/enable() call
> preempt_count_add/sub(), so within the Ftrace subsystem we should use
> preempt_disable/enable_notrace instead.
> 
> Commit 345ddcc882d8 ("ftrace: Have set_ftrace_pid use the bitmap like
> events do") added a this_cpu_read() call to trace_graph_entry(); if
> this_cpu_read() calls preempt_disable(), the graph tracer goes into a
> recursive loop, even when tracing_on is disabled.
> 
> So this patch switches the arm64 _percpu_read/write helpers, which back
> this_cpu_read(), to preempt_enable/disable_notrace().
> 
> Since Yonghui Yang helped a lot in finding the root cause of this
> problem, his Signed-off-by is added as well.
> 
> Signed-off-by: Yonghui Yang <mark.yang@spreadtrum.com>
> Signed-off-by: Chunyan Zhang <zhang.chunyan@linaro.org>
> ---
>  arch/arm64/include/asm/percpu.h | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)

Looks good to me:

Acked-by: Will Deacon <will.deacon@arm.com>

However, don't you need to make a similar change to asm-generic/percpu.h
for other architectures (e.g. arch/arm/)?

Will

^ permalink raw reply	[flat|nested] 13+ messages in thread
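
A hedged sketch of the follow-up Will is suggesting: in the v4.8-era
include/asm-generic/percpu.h, this_cpu_generic_read() (the fallback
used by architectures such as arch/arm/ that do not provide their own
per-cpu accessors) wraps the read in preempt_disable()/preempt_enable()
in the same way. The macro body below is paraphrased rather than quoted
verbatim, and the change shown is only what such a patch might look
like:

/*
 * Sketch of the analogous asm-generic/percpu.h change; paraphrased
 * from the v4.8-era this_cpu_generic_read(), not verbatim source.
 */
#define this_cpu_generic_read(pcp)					\
({									\
	typeof(pcp) __ret;						\
	preempt_disable_notrace();	/* was: preempt_disable() */	\
	__ret = raw_cpu_generic_read(pcp);				\
	preempt_enable_notrace();	/* was: preempt_enable() */	\
	__ret;								\
})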

* Re: [PATCH] arm64: use preempt_disable_notrace in _percpu_read/write
  2016-09-08 12:46 ` Chunyan Zhang
@ 2016-09-09 11:35   ` Catalin Marinas
  -1 siblings, 0 replies; 13+ messages in thread
From: Catalin Marinas @ 2016-09-09 11:35 UTC (permalink / raw)
  To: Chunyan Zhang
  Cc: rostedt, mingo, mark.yang, zhang.lyra, linux-kernel,
	linux-arm-kernel, takahiro.akashi

On Thu, Sep 08, 2016 at 08:46:42PM +0800, Chunyan Zhang wrote:
> When debug preempt or the preempt tracer is enabled,
> preempt_count_add/sub() can be traced by the function and function
> graph tracers, and preempt_disable/enable() call
> preempt_count_add/sub(), so within the Ftrace subsystem we should use
> preempt_disable/enable_notrace instead.
> 
> Commit 345ddcc882d8 ("ftrace: Have set_ftrace_pid use the bitmap like
> events do") added a this_cpu_read() call to trace_graph_entry(); if
> this_cpu_read() calls preempt_disable(), the graph tracer goes into a
> recursive loop, even when tracing_on is disabled.
> 
> So this patch switches the arm64 _percpu_read/write helpers, which back
> this_cpu_read(), to preempt_enable/disable_notrace().
> 
> Since Yonghui Yang helped a lot in finding the root cause of this
> problem, his Signed-off-by is added as well.
> 
> Signed-off-by: Yonghui Yang <mark.yang@spreadtrum.com>
> Signed-off-by: Chunyan Zhang <zhang.chunyan@linaro.org>

Queued for 4.8-rc6. Thanks.

-- 
Catalin

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH] arm64: use preempt_disable_notrace in _percpu_read/write
  2016-09-09 10:07   ` Will Deacon
@ 2016-09-09 11:43     ` Chunyan Zhang
  -1 siblings, 0 replies; 13+ messages in thread
From: Chunyan Zhang @ 2016-09-09 11:43 UTC (permalink / raw)
  To: Will Deacon
  Cc: Steven Rostedt, mingo, Mark Yang, Lyra Zhang, linux-kernel,
	linux-arm-kernel, Takahiro Akashi, Catalin Marinas

On 9 September 2016 at 18:07, Will Deacon <will.deacon@arm.com> wrote:
> On Thu, Sep 08, 2016 at 08:46:42PM +0800, Chunyan Zhang wrote:
>> When debug preempt or the preempt tracer is enabled,
>> preempt_count_add/sub() can be traced by the function and function
>> graph tracers, and preempt_disable/enable() call
>> preempt_count_add/sub(), so within the Ftrace subsystem we should use
>> preempt_disable/enable_notrace instead.
>>
>> Commit 345ddcc882d8 ("ftrace: Have set_ftrace_pid use the bitmap like
>> events do") added a this_cpu_read() call to trace_graph_entry(); if
>> this_cpu_read() calls preempt_disable(), the graph tracer goes into a
>> recursive loop, even when tracing_on is disabled.
>>
>> So this patch switches the arm64 _percpu_read/write helpers, which back
>> this_cpu_read(), to preempt_enable/disable_notrace().
>>
>> Since Yonghui Yang helped a lot in finding the root cause of this
>> problem, his Signed-off-by is added as well.
>>
>> Signed-off-by: Yonghui Yang <mark.yang@spreadtrum.com>
>> Signed-off-by: Chunyan Zhang <zhang.chunyan@linaro.org>
>> ---
>>  arch/arm64/include/asm/percpu.h | 8 ++++----
>>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> Looks good to me:
>
> Acked-by: Will Deacon <will.deacon@arm.com>
>
> However, don't you need to make a similar change to asm-generic/percpu.h
> for other architectures (e.g. arch/arm/)?

Yes, I will send out another patch to fix that.

Thanks,
Chunyan

>
> Will

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH] arm64: use preempt_disable_notrace in _percpu_read/write
@ 2016-09-08 12:45 Chunyan Zhang
  0 siblings, 0 replies; 13+ messages in thread
From: Chunyan Zhang @ 2016-09-08 12:45 UTC (permalink / raw)
  To: rostedt, mingo
  Cc: zhang.lyra, linux-kernel, linux-arm-kernel, takahiro.akashi, mark.yang

When debug preempt or the preempt tracer is enabled,
preempt_count_add/sub() can be traced by the function and function
graph tracers, and preempt_disable/enable() call
preempt_count_add/sub(), so within the Ftrace subsystem we should use
preempt_disable/enable_notrace instead.

Commit 345ddcc882d8 ("ftrace: Have set_ftrace_pid use the bitmap like
events do") added a this_cpu_read() call to trace_graph_entry(); if
this_cpu_read() calls preempt_disable(), the graph tracer goes into a
recursive loop, even when tracing_on is disabled.

So this patch switches the arm64 _percpu_read/write helpers, which back
this_cpu_read(), to preempt_enable/disable_notrace().

Since Yonghui Yang helped a lot in finding the root cause of this
problem, his Signed-off-by is added as well.

Signed-off-by: Yonghui Yang <mark.yang@spreadtrum.com>
Signed-off-by: Chunyan Zhang <zhang.chunyan@linaro.org>
---
 arch/arm64/include/asm/percpu.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/percpu.h b/arch/arm64/include/asm/percpu.h
index 0a456be..2fee2f5 100644
--- a/arch/arm64/include/asm/percpu.h
+++ b/arch/arm64/include/asm/percpu.h
@@ -199,19 +199,19 @@ static inline unsigned long __percpu_xchg(void *ptr, unsigned long val,
 #define _percpu_read(pcp)						\
 ({									\
 	typeof(pcp) __retval;						\
-	preempt_disable();						\
+	preempt_disable_notrace();					\
 	__retval = (typeof(pcp))__percpu_read(raw_cpu_ptr(&(pcp)), 	\
 					      sizeof(pcp));		\
-	preempt_enable();						\
+	preempt_enable_notrace();					\
 	__retval;							\
 })
 
 #define _percpu_write(pcp, val)						\
 do {									\
-	preempt_disable();						\
+	preempt_disable_notrace();					\
 	__percpu_write(raw_cpu_ptr(&(pcp)), (unsigned long)(val), 	\
 				sizeof(pcp));				\
-	preempt_enable();						\
+	preempt_enable_notrace();					\
 } while(0)								\
 
 #define _pcp_protect(operation, pcp, val)			\
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 13+ messages in thread

Thread overview:
2016-09-08 12:46 [PATCH] arm64: use preempt_disable_notrace in _percpu_read/write Chunyan Zhang
2016-09-08 13:02 ` Mark Rutland
2016-09-08 13:17   ` Chunyan Zhang
2016-09-09 10:07 ` Will Deacon
2016-09-09 11:43   ` Chunyan Zhang
2016-09-09 11:35 ` Catalin Marinas
  -- strict thread matches above, loose matches on Subject: below --
2016-09-08 12:45 Chunyan Zhang
