* Re: [PATCH 2/5] tracing: Disable preemption when using the filter buffer
@ 2021-11-30  6:13 kernel test robot
  0 siblings, 0 replies; 4+ messages in thread
From: kernel test robot @ 2021-11-30  6:13 UTC (permalink / raw)
  To: kbuild


CC: kbuild-all@lists.01.org
In-Reply-To: <20211130024318.880190623@goodmis.org>
References: <20211130024318.880190623@goodmis.org>
TO: Steven Rostedt <rostedt@goodmis.org>
TO: linux-kernel@vger.kernel.org
CC: Ingo Molnar <mingo@kernel.org>
CC: Andrew Morton <akpm@linux-foundation.org>
CC: Linux Memory Management List <linux-mm@kvack.org>

Hi Steven,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on rostedt-trace/for-next]
[also build test WARNING on linux/master hnaz-mm/master linus/master v5.16-rc3 next-20211129]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Steven-Rostedt/tracing-Various-updates/20211130-104342
base:   https://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace.git for-next
:::::: branch date: 3 hours ago
:::::: commit date: 3 hours ago
config: x86_64-randconfig-s032-20211128 (https://download.01.org/0day-ci/archive/20211130/202111301454.dzbIJfyp-lkp@intel.com/config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
reproduce:
        # apt-get install sparse
        # sparse version: v0.6.4-dirty
        # https://github.com/0day-ci/linux/commit/1ac91c8764ae50601cd41dceb620205607ab59f6
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Steven-Rostedt/tracing-Various-updates/20211130-104342
        git checkout 1ac91c8764ae50601cd41dceb620205607ab59f6
        # save the config file to linux build tree
        make W=1 C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' O=build_dir ARCH=x86_64 SHELL=/bin/bash kernel/trace/

If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>


sparse warnings: (new ones prefixed by >>)
   kernel/trace/trace.c:5710:1: sparse: sparse: trying to concatenate 9583-character string (8191 bytes max)
   kernel/trace/trace.c:392:28: sparse: sparse: incorrect type in argument 1 (different address spaces) @@     expected struct trace_export **list @@     got struct trace_export [noderef] __rcu ** @@
   kernel/trace/trace.c:392:28: sparse:     expected struct trace_export **list
   kernel/trace/trace.c:392:28: sparse:     got struct trace_export [noderef] __rcu **
   kernel/trace/trace.c:406:33: sparse: sparse: incorrect type in argument 1 (different address spaces) @@     expected struct trace_export **list @@     got struct trace_export [noderef] __rcu ** @@
   kernel/trace/trace.c:406:33: sparse:     expected struct trace_export **list
   kernel/trace/trace.c:406:33: sparse:     got struct trace_export [noderef] __rcu **
>> kernel/trace/trace.c:2769:27: sparse: sparse: assignment expression in conditional
   kernel/trace/trace.c:2843:38: sparse: sparse: incorrect type in argument 1 (different address spaces) @@     expected struct event_filter *filter @@     got struct event_filter [noderef] __rcu *filter @@
   kernel/trace/trace.c:2843:38: sparse:     expected struct event_filter *filter
   kernel/trace/trace.c:2843:38: sparse:     got struct event_filter [noderef] __rcu *filter
   kernel/trace/trace.c:3225:46: sparse: sparse: incorrect type in initializer (different address spaces) @@     expected void const [noderef] __percpu *__vpp_verify @@     got struct trace_buffer_struct * @@
   kernel/trace/trace.c:3225:46: sparse:     expected void const [noderef] __percpu *__vpp_verify
   kernel/trace/trace.c:3225:46: sparse:     got struct trace_buffer_struct *
   kernel/trace/trace.c:3241:9: sparse: sparse: incorrect type in initializer (different address spaces) @@     expected void const [noderef] __percpu *__vpp_verify @@     got int * @@
   kernel/trace/trace.c:3241:9: sparse:     expected void const [noderef] __percpu *__vpp_verify
   kernel/trace/trace.c:3241:9: sparse:     got int *
   kernel/trace/trace.c:3251:17: sparse: sparse: incorrect type in assignment (different address spaces) @@     expected struct trace_buffer_struct *buffers @@     got struct trace_buffer_struct [noderef] __percpu * @@
   kernel/trace/trace.c:3251:17: sparse:     expected struct trace_buffer_struct *buffers
   kernel/trace/trace.c:3251:17: sparse:     got struct trace_buffer_struct [noderef] __percpu *
   kernel/trace/trace.c:346:9: sparse: sparse: incompatible types in comparison expression (different address spaces):
   kernel/trace/trace.c:346:9: sparse:    struct trace_export [noderef] __rcu *
   kernel/trace/trace.c:346:9: sparse:    struct trace_export *
   kernel/trace/trace.c:361:9: sparse: sparse: incompatible types in comparison expression (different address spaces):
   kernel/trace/trace.c:361:9: sparse:    struct trace_export [noderef] __rcu *
   kernel/trace/trace.c:361:9: sparse:    struct trace_export *
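As editorial context (not part of the robot report): the "different address spaces" lines above are sparse's pre-existing complaints about passing `__rcu`-annotated pointers to functions declared with plain pointers. A minimal userspace sketch of that shape, under the assumption that `__rcu` expands to sparse's named-address-space attribute as in the kernel headers of this era:

```c
#include <assert.h>
#include <stddef.h>

/* Under sparse (__CHECKER__), __rcu places a pointer in a separate
 * address space; ordinary compilers see an empty macro, so this
 * builds anywhere. The attribute spelling is an assumption mirroring
 * the kernel's definition. */
#ifdef __CHECKER__
# define __rcu __attribute__((noderef, address_space(__rcu)))
#else
# define __rcu
#endif

struct node { struct node __rcu *next; };

/* Declared with an unannotated pointer-to-pointer: sparse flags any
 * caller that passes a `struct node __rcu **` here, which is the shape
 * of the trace.c:392/406 warnings. The kernel quiets such mismatches
 * with rcu_dereference()/rcu_assign_pointer() or explicit casts. */
static int list_len(struct node **head)
{
	int n = 0;

	for (struct node *p = *head; p; p = (struct node *)p->next)
		n++;
	return n;
}
```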

vim +2769 kernel/trace/trace.c

2c4a33aba5f9ea Steven Rostedt (Red Hat   2014-03-25  2736) 
ccb469a198cffa Steven Rostedt            2012-08-02  2737  struct ring_buffer_event *
13292494379f92 Steven Rostedt (VMware    2019-12-13  2738) trace_event_buffer_lock_reserve(struct trace_buffer **current_rb,
7f1d2f8210195c Steven Rostedt (Red Hat   2015-05-05  2739) 			  struct trace_event_file *trace_file,
ccb469a198cffa Steven Rostedt            2012-08-02  2740  			  int type, unsigned long len,
36590c50b2d072 Sebastian Andrzej Siewior 2021-01-25  2741  			  unsigned int trace_ctx)
ccb469a198cffa Steven Rostedt            2012-08-02  2742  {
2c4a33aba5f9ea Steven Rostedt (Red Hat   2014-03-25  2743) 	struct ring_buffer_event *entry;
b94bc80df64823 Steven Rostedt (VMware    2021-03-16  2744) 	struct trace_array *tr = trace_file->tr;
0fc1b09ff1ff40 Steven Rostedt (Red Hat   2016-05-03  2745) 	int val;
2c4a33aba5f9ea Steven Rostedt (Red Hat   2014-03-25  2746) 
b94bc80df64823 Steven Rostedt (VMware    2021-03-16  2747) 	*current_rb = tr->array_buffer.buffer;
0fc1b09ff1ff40 Steven Rostedt (Red Hat   2016-05-03  2748) 
b94bc80df64823 Steven Rostedt (VMware    2021-03-16  2749) 	if (!tr->no_filter_buffering_ref &&
1ac91c8764ae50 Steven Rostedt (VMware    2021-11-29  2750) 	    (trace_file->flags & (EVENT_FILE_FL_SOFT_DISABLED | EVENT_FILE_FL_FILTERED))) {
1ac91c8764ae50 Steven Rostedt (VMware    2021-11-29  2751) 		preempt_disable_notrace();
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2752) 		/*
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2753) 		 * Filtering is on, so try to use the per cpu buffer first.
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2754) 		 * This buffer will simulate a ring_buffer_event,
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2755) 		 * where the type_len is zero and the array[0] will
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2756) 		 * hold the full length.
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2757) 		 * (see include/linux/ring-buffer.h for details on
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2758) 		 *  how the ring_buffer_event is structured).
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2759) 		 *
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2760) 		 * Using a temp buffer during filtering and copying it
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2761) 		 * on a matched filter is quicker than writing directly
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2762) 		 * into the ring buffer and then discarding it when
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2763) 		 * it doesn't match. That is because the discard
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2764) 		 * requires several atomic operations to get right.
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2765) 		 * Copying on match and doing nothing on a failed match
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2766) 		 * is still quicker than no copy on match, but having
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2767) 		 * to discard out of the ring buffer on a failed match.
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2768) 		 */
1ac91c8764ae50 Steven Rostedt (VMware    2021-11-29 @2769) 		if (entry = __this_cpu_read(trace_buffered_event)) {
faa76a6c289f43 Steven Rostedt (VMware    2021-06-09  2770) 			int max_len = PAGE_SIZE - struct_size(entry, array, 1);
faa76a6c289f43 Steven Rostedt (VMware    2021-06-09  2771) 
0fc1b09ff1ff40 Steven Rostedt (Red Hat   2016-05-03  2772) 			val = this_cpu_inc_return(trace_buffered_event_cnt);
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2773) 
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2774) 			/*
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2775) 			 * Preemption is disabled, but interrupts and NMIs
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2776) 			 * can still come in now. If that happens after
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2777) 			 * the above increment, then it will have to go
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2778) 			 * back to the old method of allocating the event
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2779) 			 * on the ring buffer, and if the filter fails, it
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2780) 			 * will have to call ring_buffer_discard_commit()
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2781) 			 * to remove it.
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2782) 			 *
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2783) 			 * Need to also check the unlikely case that the
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2784) 			 * length is bigger than the temp buffer size.
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2785) 			 * If that happens, then the reserve is pretty much
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2786) 			 * guaranteed to fail, as the ring buffer currently
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2787) 			 * only allows events less than a page. But that may
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2788) 			 * change in the future, so let the ring buffer reserve
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2789) 			 * handle the failure in that case.
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2790) 			 */
faa76a6c289f43 Steven Rostedt (VMware    2021-06-09  2791) 			if (val == 1 && likely(len <= max_len)) {
36590c50b2d072 Sebastian Andrzej Siewior 2021-01-25  2792  				trace_event_setup(entry, type, trace_ctx);
0fc1b09ff1ff40 Steven Rostedt (Red Hat   2016-05-03  2793) 				entry->array[0] = len;
1ac91c8764ae50 Steven Rostedt (VMware    2021-11-29  2794) 				/* Return with preemption disabled */
0fc1b09ff1ff40 Steven Rostedt (Red Hat   2016-05-03  2795) 				return entry;
0fc1b09ff1ff40 Steven Rostedt (Red Hat   2016-05-03  2796) 			}
0fc1b09ff1ff40 Steven Rostedt (Red Hat   2016-05-03  2797) 			this_cpu_dec(trace_buffered_event_cnt);
0fc1b09ff1ff40 Steven Rostedt (Red Hat   2016-05-03  2798) 		}
1ac91c8764ae50 Steven Rostedt (VMware    2021-11-29  2799) 		/* __trace_buffer_lock_reserve() disables preemption */
1ac91c8764ae50 Steven Rostedt (VMware    2021-11-29  2800) 		preempt_enable_notrace();
1ac91c8764ae50 Steven Rostedt (VMware    2021-11-29  2801) 	}
0fc1b09ff1ff40 Steven Rostedt (Red Hat   2016-05-03  2802) 
36590c50b2d072 Sebastian Andrzej Siewior 2021-01-25  2803  	entry = __trace_buffer_lock_reserve(*current_rb, type, len,
36590c50b2d072 Sebastian Andrzej Siewior 2021-01-25  2804  					    trace_ctx);
2c4a33aba5f9ea Steven Rostedt (Red Hat   2014-03-25  2805) 	/*
2c4a33aba5f9ea Steven Rostedt (Red Hat   2014-03-25  2806) 	 * If tracing is off, but we have triggers enabled
2c4a33aba5f9ea Steven Rostedt (Red Hat   2014-03-25  2807) 	 * we still need to look at the event data. Use the temp_buffer
906695e5932463 Qiujun Huang              2020-10-31  2808  	 * to store the trace event for the trigger to use. It's recursive
2c4a33aba5f9ea Steven Rostedt (Red Hat   2014-03-25  2809) 	 * safe and will not be recorded anywhere.
2c4a33aba5f9ea Steven Rostedt (Red Hat   2014-03-25  2810) 	 */
5d6ad960a71f0b Steven Rostedt (Red Hat   2015-05-13  2811) 	if (!entry && trace_file->flags & EVENT_FILE_FL_TRIGGER_COND) {
2c4a33aba5f9ea Steven Rostedt (Red Hat   2014-03-25  2812) 		*current_rb = temp_buffer;
36590c50b2d072 Sebastian Andrzej Siewior 2021-01-25  2813  		entry = __trace_buffer_lock_reserve(*current_rb, type, len,
36590c50b2d072 Sebastian Andrzej Siewior 2021-01-25  2814  						    trace_ctx);
ccb469a198cffa Steven Rostedt            2012-08-02  2815  	}
2c4a33aba5f9ea Steven Rostedt (Red Hat   2014-03-25  2816) 	return entry;
2c4a33aba5f9ea Steven Rostedt (Red Hat   2014-03-25  2817) }
ccb469a198cffa Steven Rostedt            2012-08-02  2818  EXPORT_SYMBOL_GPL(trace_event_buffer_lock_reserve);
ccb469a198cffa Steven Rostedt            2012-08-02  2819  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org


* Re: [PATCH 2/5] tracing: Disable preemption when using the filter buffer
  2021-11-30  2:39 ` [PATCH 2/5] tracing: Disable preemption when using the filter buffer Steven Rostedt
@ 2021-11-30  5:02     ` kernel test robot
  0 siblings, 0 replies; 4+ messages in thread
From: kernel test robot @ 2021-11-30  5:02 UTC (permalink / raw)
  To: Steven Rostedt, linux-kernel
  Cc: kbuild-all, Ingo Molnar, Andrew Morton, Linux Memory Management List

Hi Steven,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on rostedt-trace/for-next]
[also build test WARNING on linux/master hnaz-mm/master linus/master v5.16-rc3 next-20211129]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Steven-Rostedt/tracing-Various-updates/20211130-104342
base:   https://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace.git for-next
config: mips-allyesconfig (https://download.01.org/0day-ci/archive/20211130/202111301209.GarWNR3z-lkp@intel.com/config)
compiler: mips-linux-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/1ac91c8764ae50601cd41dceb620205607ab59f6
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Steven-Rostedt/tracing-Various-updates/20211130-104342
        git checkout 1ac91c8764ae50601cd41dceb620205607ab59f6
        # save the config file to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross O=build_dir ARCH=mips SHELL=/bin/bash kernel/trace/

If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   kernel/trace/trace.c: In function 'trace_event_buffer_lock_reserve':
>> kernel/trace/trace.c:2769:21: warning: suggest parentheses around assignment used as truth value [-Wparentheses]
    2769 |                 if (entry = __this_cpu_read(trace_buffered_event)) {
         |                     ^~~~~
   kernel/trace/trace.c: In function 'trace_check_vprintf':
   kernel/trace/trace.c:3820:17: warning: function 'trace_check_vprintf' might be a candidate for 'gnu_printf' format attribute [-Wsuggest-attribute=format]
    3820 |                 trace_seq_vprintf(&iter->seq, iter->fmt, ap);
         |                 ^~~~~~~~~~~~~~~~~
   kernel/trace/trace.c:3887:17: warning: function 'trace_check_vprintf' might be a candidate for 'gnu_printf' format attribute [-Wsuggest-attribute=format]
    3887 |                 trace_seq_vprintf(&iter->seq, p, ap);
         |                 ^~~~~~~~~~~~~~~~~
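As editorial context (not part of the robot report): gcc's -Wparentheses fires on `if (entry = ...)` because an assignment used as a truth value is usually a mistyped `==`. A hedged sketch of the two conventional fixes, with `read_slot()` standing in for `__this_cpu_read(trace_buffered_event)` (a hypothetical stand-in, not the kernel call):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for the per-CPU read: returns the temp-buffer
 * slot, which may be NULL when no buffer has been allocated. */
static int slot = 42;

static int *read_slot(void)
{
	return &slot;
}

/* Fix 1: a second pair of parentheses tells the compiler the
 * assignment inside the condition is intentional. */
static int demo_parens(void)
{
	int *entry;

	if ((entry = read_slot()))
		return *entry;
	return -1;
}

/* Fix 2 (often preferred in kernel style): separate the assignment
 * from the test entirely. */
static int demo_split(void)
{
	int *entry;

	entry = read_slot();
	if (entry)
		return *entry;
	return -1;
}
```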


vim +2769 kernel/trace/trace.c

  2736	
  2737	struct ring_buffer_event *
  2738	trace_event_buffer_lock_reserve(struct trace_buffer **current_rb,
  2739				  struct trace_event_file *trace_file,
  2740				  int type, unsigned long len,
  2741				  unsigned int trace_ctx)
  2742	{
  2743		struct ring_buffer_event *entry;
  2744		struct trace_array *tr = trace_file->tr;
  2745		int val;
  2746	
  2747		*current_rb = tr->array_buffer.buffer;
  2748	
  2749		if (!tr->no_filter_buffering_ref &&
  2750		    (trace_file->flags & (EVENT_FILE_FL_SOFT_DISABLED | EVENT_FILE_FL_FILTERED))) {
  2751			preempt_disable_notrace();
  2752			/*
  2753			 * Filtering is on, so try to use the per cpu buffer first.
  2754			 * This buffer will simulate a ring_buffer_event,
  2755			 * where the type_len is zero and the array[0] will
  2756			 * hold the full length.
  2757			 * (see include/linux/ring-buffer.h for details on
  2758			 *  how the ring_buffer_event is structured).
  2759			 *
  2760			 * Using a temp buffer during filtering and copying it
  2761			 * on a matched filter is quicker than writing directly
  2762			 * into the ring buffer and then discarding it when
  2763			 * it doesn't match. That is because the discard
  2764			 * requires several atomic operations to get right.
  2765			 * Copying on match and doing nothing on a failed match
  2766			 * is still quicker than no copy on match, but having
  2767			 * to discard out of the ring buffer on a failed match.
  2768			 */
> 2769			if (entry = __this_cpu_read(trace_buffered_event)) {
  2770				int max_len = PAGE_SIZE - struct_size(entry, array, 1);
  2771	
  2772				val = this_cpu_inc_return(trace_buffered_event_cnt);
  2773	
  2774				/*
  2775				 * Preemption is disabled, but interrupts and NMIs
  2776				 * can still come in now. If that happens after
  2777				 * the above increment, then it will have to go
  2778				 * back to the old method of allocating the event
  2779				 * on the ring buffer, and if the filter fails, it
  2780				 * will have to call ring_buffer_discard_commit()
  2781				 * to remove it.
  2782				 *
  2783				 * Need to also check the unlikely case that the
  2784				 * length is bigger than the temp buffer size.
  2785				 * If that happens, then the reserve is pretty much
  2786				 * guaranteed to fail, as the ring buffer currently
  2787				 * only allows events less than a page. But that may
  2788				 * change in the future, so let the ring buffer reserve
  2789				 * handle the failure in that case.
  2790				 */
  2791				if (val == 1 && likely(len <= max_len)) {
  2792					trace_event_setup(entry, type, trace_ctx);
  2793					entry->array[0] = len;
  2794					/* Return with preemption disabled */
  2795					return entry;
  2796				}
  2797				this_cpu_dec(trace_buffered_event_cnt);
  2798			}
  2799			/* __trace_buffer_lock_reserve() disables preemption */
  2800			preempt_enable_notrace();
  2801		}
  2802	
  2803		entry = __trace_buffer_lock_reserve(*current_rb, type, len,
  2804						    trace_ctx);
  2805		/*
  2806		 * If tracing is off, but we have triggers enabled
  2807		 * we still need to look at the event data. Use the temp_buffer
  2808		 * to store the trace event for the trigger to use. It's recursive
  2809		 * safe and will not be recorded anywhere.
  2810		 */
  2811		if (!entry && trace_file->flags & EVENT_FILE_FL_TRIGGER_COND) {
  2812			*current_rb = temp_buffer;
  2813			entry = __trace_buffer_lock_reserve(*current_rb, type, len,
  2814							    trace_ctx);
  2815		}
  2816		return entry;
  2817	}
  2818	EXPORT_SYMBOL_GPL(trace_event_buffer_lock_reserve);
  2819	




* [PATCH 2/5] tracing: Disable preemption when using the filter buffer
  2021-11-30  2:39 [PATCH 0/5] tracing: Various updates Steven Rostedt
@ 2021-11-30  2:39 ` Steven Rostedt
  2021-11-30  5:02     ` kernel test robot
  0 siblings, 1 reply; 4+ messages in thread
From: Steven Rostedt @ 2021-11-30  2:39 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

In case trace_event_buffer_lock_reserve() is called with preemption
enabled, the algorithm that governs use of the per-CPU filter buffer
may fail if the task is scheduled to another CPU after determining
which buffer it will use.

Disable preemption when using the filter buffer. And because that same
buffer must be used throughout the call, keep preemption disabled until
the filter buffer is released.

This also keeps the semantics consistent between the filter-buffer case
and the plain ring-buffer case, as the latter likewise disables
preemption until the ring buffer is released.

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/trace/trace.c | 59 +++++++++++++++++++++++++-------------------
 kernel/trace/trace.h |  4 ++-
 2 files changed, 36 insertions(+), 27 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 2e87b7bf2ba7..415f00d70b15 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -980,6 +980,8 @@ __buffer_unlock_commit(struct trace_buffer *buffer, struct ring_buffer_event *ev
 		ring_buffer_write(buffer, event->array[0], &event->array[1]);
 		/* Release the temp buffer */
 		this_cpu_dec(trace_buffered_event_cnt);
+		/* ring_buffer_unlock_commit() enables preemption */
+		preempt_enable_notrace();
 	} else
 		ring_buffer_unlock_commit(buffer, event);
 }
@@ -2745,8 +2747,8 @@ trace_event_buffer_lock_reserve(struct trace_buffer **current_rb,
 	*current_rb = tr->array_buffer.buffer;
 
 	if (!tr->no_filter_buffering_ref &&
-	    (trace_file->flags & (EVENT_FILE_FL_SOFT_DISABLED | EVENT_FILE_FL_FILTERED)) &&
-	    (entry = __this_cpu_read(trace_buffered_event))) {
+	    (trace_file->flags & (EVENT_FILE_FL_SOFT_DISABLED | EVENT_FILE_FL_FILTERED))) {
+		preempt_disable_notrace();
 		/*
 		 * Filtering is on, so try to use the per cpu buffer first.
 		 * This buffer will simulate a ring_buffer_event,
@@ -2764,33 +2766,38 @@ trace_event_buffer_lock_reserve(struct trace_buffer **current_rb,
 		 * is still quicker than no copy on match, but having
 		 * to discard out of the ring buffer on a failed match.
 		 */
-		int max_len = PAGE_SIZE - struct_size(entry, array, 1);
+		if (entry = __this_cpu_read(trace_buffered_event)) {
+			int max_len = PAGE_SIZE - struct_size(entry, array, 1);
 
-		val = this_cpu_inc_return(trace_buffered_event_cnt);
+			val = this_cpu_inc_return(trace_buffered_event_cnt);
 
-		/*
-		 * Preemption is disabled, but interrupts and NMIs
-		 * can still come in now. If that happens after
-		 * the above increment, then it will have to go
-		 * back to the old method of allocating the event
-		 * on the ring buffer, and if the filter fails, it
-		 * will have to call ring_buffer_discard_commit()
-		 * to remove it.
-		 *
-		 * Need to also check the unlikely case that the
-		 * length is bigger than the temp buffer size.
-		 * If that happens, then the reserve is pretty much
-		 * guaranteed to fail, as the ring buffer currently
-		 * only allows events less than a page. But that may
-		 * change in the future, so let the ring buffer reserve
-		 * handle the failure in that case.
-		 */
-		if (val == 1 && likely(len <= max_len)) {
-			trace_event_setup(entry, type, trace_ctx);
-			entry->array[0] = len;
-			return entry;
+			/*
+			 * Preemption is disabled, but interrupts and NMIs
+			 * can still come in now. If that happens after
+			 * the above increment, then it will have to go
+			 * back to the old method of allocating the event
+			 * on the ring buffer, and if the filter fails, it
+			 * will have to call ring_buffer_discard_commit()
+			 * to remove it.
+			 *
+			 * Need to also check the unlikely case that the
+			 * length is bigger than the temp buffer size.
+			 * If that happens, then the reserve is pretty much
+			 * guaranteed to fail, as the ring buffer currently
+			 * only allows events less than a page. But that may
+			 * change in the future, so let the ring buffer reserve
+			 * handle the failure in that case.
+			 */
+			if (val == 1 && likely(len <= max_len)) {
+				trace_event_setup(entry, type, trace_ctx);
+				entry->array[0] = len;
+				/* Return with preemption disabled */
+				return entry;
+			}
+			this_cpu_dec(trace_buffered_event_cnt);
 		}
-		this_cpu_dec(trace_buffered_event_cnt);
+		/* __trace_buffer_lock_reserve() disables preemption */
+		preempt_enable_notrace();
 	}
 
 	entry = __trace_buffer_lock_reserve(*current_rb, type, len,
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 7162157b970b..8bd1a815ce90 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -1337,10 +1337,12 @@ __trace_event_discard_commit(struct trace_buffer *buffer,
 			     struct ring_buffer_event *event)
 {
 	if (this_cpu_read(trace_buffered_event) == event) {
-		/* Simply release the temp buffer */
+		/* Simply release the temp buffer and enable preemption */
 		this_cpu_dec(trace_buffered_event_cnt);
+		preempt_enable_notrace();
 		return;
 	}
+	/* ring_buffer_discard_commit() enables preemption */
 	ring_buffer_discard_commit(buffer, event);
 }
 
-- 
2.33.0
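As editorial context (not part of the patch): the invariant the patch establishes is that every exit path balances the preempt disable, with the temp-buffer path deferring the enable to commit/discard time. A userspace sketch of that control flow, using a plain counter and stub flags as stand-ins for the kernel primitives (all names here are illustrative, not the real API):

```c
#include <assert.h>

/* A counter models the preempt count adjusted by the notrace
 * disable/enable pair. */
static int preempt_count;
static void preempt_disable_notrace(void) { preempt_count++; }
static void preempt_enable_notrace(void)  { preempt_count--; }

static int have_temp_buffer; /* models __this_cpu_read(trace_buffered_event) */
static int filter_path_ok;   /* models val == 1 && len <= max_len            */

/* Every path that hands back the temp buffer returns with preemption
 * still disabled; the matching enable happens at commit/discard time.
 * Every other path re-enables before falling back to the ring buffer. */
static int reserve(void)
{
	preempt_disable_notrace();
	if (have_temp_buffer && filter_path_ok)
		return 1;		/* stay disabled until release */
	preempt_enable_notrace();
	return 0;			/* ring-buffer path */
}

static void commit_or_discard(int used_temp)
{
	if (used_temp)
		preempt_enable_notrace();	/* balances reserve() */
}
```

Tracing the two paths shows the count returning to zero either way, which is the property the patch's comments ("Return with preemption disabled", "enables preemption") document.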



Thread overview: 4+ messages
-- links below jump to the message on this page --
2021-11-30  6:13 [PATCH 2/5] tracing: Disable preemption when using the filter buffer kernel test robot
  -- strict thread matches above, loose matches on Subject: below --
2021-11-30  2:39 [PATCH 0/5] tracing: Various updates Steven Rostedt
2021-11-30  2:39 ` [PATCH 2/5] tracing: Disable preemption when using the filter buffer Steven Rostedt
2021-11-30  5:02   ` kernel test robot
2021-11-30  5:02     ` kernel test robot
