* [PATCH 0/5] tracing: Various updates
@ 2021-11-30  2:39 Steven Rostedt
  2021-11-30  2:39 ` [PATCH 1/5] tracing: Use __this_cpu_read() in trace_event_buffer_lock_reserve() Steven Rostedt
                   ` (4 more replies)
  0 siblings, 5 replies; 10+ messages in thread
From: Steven Rostedt @ 2021-11-30  2:39 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

 - Make trace_event_buffer_lock_reserve() always disable preemption
   (it claims to do so, but did not in all cases)

 - Have kprobes, uprobes and eprobes use the trace_event_buffer_reserve()
   and trace_event_buffer_commit() helpers.

Steven Rostedt (VMware) (5):
      tracing: Use __this_cpu_read() in trace_event_buffer_lock_reserve()
      tracing: Disable preemption when using the filter buffer
      tracing: Have eprobes use filtering logic of trace events
      tracing/kprobes: Do not open code event reserve logic
      tracing/uprobes: Use trace_event_buffer_reserve() helper

----
 kernel/trace/trace.c        | 59 +++++++++++++++++++++++++--------------------
 kernel/trace/trace.h        |  4 ++-
 kernel/trace/trace_eprobe.c | 16 ++++--------
 kernel/trace/trace_kprobe.c | 25 ++++++-------------
 kernel/trace/trace_uprobe.c | 10 +++-----
 5 files changed, 52 insertions(+), 62 deletions(-)

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH 1/5] tracing: Use __this_cpu_read() in trace_event_buffer_lock_reserve()
  2021-11-30  2:39 [PATCH 0/5] tracing: Various updates Steven Rostedt
@ 2021-11-30  2:39 ` Steven Rostedt
  2021-11-30  2:39 ` [PATCH 2/5] tracing: Disable preemption when using the filter buffer Steven Rostedt
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 10+ messages in thread
From: Steven Rostedt @ 2021-11-30  2:39 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

The value read by this_cpu_read() is used later, and its use is expected
to stay on the same CPU as the one it was read on. But this_cpu_read()
does not warn if it is called with preemption enabled, whereas
__this_cpu_read() will check that preemption is disabled when
CONFIG_DEBUG_PREEMPT is enabled.

Currently all callers have preemption disabled, but there may be new
callers in the future that may not.
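
To illustrate what the CONFIG_DEBUG_PREEMPT check catches (a minimal
sketch, not code from this patch):

	struct ring_buffer_event *entry;

	/* preemptible context: __this_cpu_read() emits a warning here */
	entry = __this_cpu_read(trace_buffered_event);

	/* with the CPU pinned, the same read is silent */
	preempt_disable_notrace();
	entry = __this_cpu_read(trace_buffered_event);
	preempt_enable_notrace();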

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/trace/trace.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 4821fe6a40a5..2e87b7bf2ba7 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2746,7 +2746,7 @@ trace_event_buffer_lock_reserve(struct trace_buffer **current_rb,
 
 	if (!tr->no_filter_buffering_ref &&
 	    (trace_file->flags & (EVENT_FILE_FL_SOFT_DISABLED | EVENT_FILE_FL_FILTERED)) &&
-	    (entry = this_cpu_read(trace_buffered_event))) {
+	    (entry = __this_cpu_read(trace_buffered_event))) {
 		/*
 		 * Filtering is on, so try to use the per cpu buffer first.
 		 * This buffer will simulate a ring_buffer_event,
-- 
2.33.0

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH 2/5] tracing: Disable preemption when using the filter buffer
  2021-11-30  2:39 [PATCH 0/5] tracing: Various updates Steven Rostedt
  2021-11-30  2:39 ` [PATCH 1/5] tracing: Use __this_cpu_read() in trace_event_buffer_lock_reserve() Steven Rostedt
@ 2021-11-30  2:39 ` Steven Rostedt
  2021-11-30  5:02     ` kernel test robot
  2021-11-30  2:39 ` [PATCH 3/5] tracing: Have eprobes use filtering logic of trace events Steven Rostedt
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 10+ messages in thread
From: Steven Rostedt @ 2021-11-30  2:39 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

If trace_event_buffer_lock_reserve() is called with preemption enabled,
the logic that selects the per-CPU filter buffer can fail when the task
migrates to another CPU after determining which buffer it will use.

Disable preemption when using the filter buffer. And because that same
buffer must be used throughout the call, keep preemption disabled until
the filter buffer is released.

This also keeps the semantics consistent between the filter buffer case
and the case where the ring buffer itself is used, as the latter likewise
disables preemption until the ring buffer is released.
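
For illustration, the failure mode this closes (a hypothetical
interleaving, not code from this series):

	/* running on CPU 0, preemption enabled */
	entry = __this_cpu_read(trace_buffered_event);	/* CPU 0's buffer */

	/* the task is preempted and migrates to CPU 1 */

	this_cpu_inc(trace_buffered_event_cnt);	/* bumps CPU 1's counter */
	/*
	 * entry still points at CPU 0's buffer, so the counter that is
	 * supposed to guard the buffer no longer matches the buffer in use.
	 */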

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/trace/trace.c | 59 +++++++++++++++++++++++++-------------------
 kernel/trace/trace.h |  4 ++-
 2 files changed, 36 insertions(+), 27 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 2e87b7bf2ba7..415f00d70b15 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -980,6 +980,8 @@ __buffer_unlock_commit(struct trace_buffer *buffer, struct ring_buffer_event *ev
 		ring_buffer_write(buffer, event->array[0], &event->array[1]);
 		/* Release the temp buffer */
 		this_cpu_dec(trace_buffered_event_cnt);
+		/* ring_buffer_unlock_commit() enables preemption */
+		preempt_enable_notrace();
 	} else
 		ring_buffer_unlock_commit(buffer, event);
 }
@@ -2745,8 +2747,8 @@ trace_event_buffer_lock_reserve(struct trace_buffer **current_rb,
 	*current_rb = tr->array_buffer.buffer;
 
 	if (!tr->no_filter_buffering_ref &&
-	    (trace_file->flags & (EVENT_FILE_FL_SOFT_DISABLED | EVENT_FILE_FL_FILTERED)) &&
-	    (entry = __this_cpu_read(trace_buffered_event))) {
+	    (trace_file->flags & (EVENT_FILE_FL_SOFT_DISABLED | EVENT_FILE_FL_FILTERED))) {
+		preempt_disable_notrace();
 		/*
 		 * Filtering is on, so try to use the per cpu buffer first.
 		 * This buffer will simulate a ring_buffer_event,
@@ -2764,33 +2766,38 @@ trace_event_buffer_lock_reserve(struct trace_buffer **current_rb,
 		 * is still quicker than no copy on match, but having
 		 * to discard out of the ring buffer on a failed match.
 		 */
-		int max_len = PAGE_SIZE - struct_size(entry, array, 1);
+		if (entry = __this_cpu_read(trace_buffered_event)) {
+			int max_len = PAGE_SIZE - struct_size(entry, array, 1);
 
-		val = this_cpu_inc_return(trace_buffered_event_cnt);
+			val = this_cpu_inc_return(trace_buffered_event_cnt);
 
-		/*
-		 * Preemption is disabled, but interrupts and NMIs
-		 * can still come in now. If that happens after
-		 * the above increment, then it will have to go
-		 * back to the old method of allocating the event
-		 * on the ring buffer, and if the filter fails, it
-		 * will have to call ring_buffer_discard_commit()
-		 * to remove it.
-		 *
-		 * Need to also check the unlikely case that the
-		 * length is bigger than the temp buffer size.
-		 * If that happens, then the reserve is pretty much
-		 * guaranteed to fail, as the ring buffer currently
-		 * only allows events less than a page. But that may
-		 * change in the future, so let the ring buffer reserve
-		 * handle the failure in that case.
-		 */
-		if (val == 1 && likely(len <= max_len)) {
-			trace_event_setup(entry, type, trace_ctx);
-			entry->array[0] = len;
-			return entry;
+			/*
+			 * Preemption is disabled, but interrupts and NMIs
+			 * can still come in now. If that happens after
+			 * the above increment, then it will have to go
+			 * back to the old method of allocating the event
+			 * on the ring buffer, and if the filter fails, it
+			 * will have to call ring_buffer_discard_commit()
+			 * to remove it.
+			 *
+			 * Need to also check the unlikely case that the
+			 * length is bigger than the temp buffer size.
+			 * If that happens, then the reserve is pretty much
+			 * guaranteed to fail, as the ring buffer currently
+			 * only allows events less than a page. But that may
+			 * change in the future, so let the ring buffer reserve
+			 * handle the failure in that case.
+			 */
+			if (val == 1 && likely(len <= max_len)) {
+				trace_event_setup(entry, type, trace_ctx);
+				entry->array[0] = len;
+				/* Return with preemption disabled */
+				return entry;
+			}
+			this_cpu_dec(trace_buffered_event_cnt);
 		}
-		this_cpu_dec(trace_buffered_event_cnt);
+		/* __trace_buffer_lock_reserve() disables preemption */
+		preempt_enable_notrace();
 	}
 
 	entry = __trace_buffer_lock_reserve(*current_rb, type, len,
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 7162157b970b..8bd1a815ce90 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -1337,10 +1337,12 @@ __trace_event_discard_commit(struct trace_buffer *buffer,
 			     struct ring_buffer_event *event)
 {
 	if (this_cpu_read(trace_buffered_event) == event) {
-		/* Simply release the temp buffer */
+		/* Simply release the temp buffer and enable preemption */
 		this_cpu_dec(trace_buffered_event_cnt);
+		preempt_enable_notrace();
 		return;
 	}
+	/* ring_buffer_discard_commit() enables preemption */
 	ring_buffer_discard_commit(buffer, event);
 }
 
-- 
2.33.0

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH 3/5] tracing: Have eprobes use filtering logic of trace events
  2021-11-30  2:39 [PATCH 0/5] tracing: Various updates Steven Rostedt
  2021-11-30  2:39 ` [PATCH 1/5] tracing: Use __this_cpu_read() in trace_event_buffer_lock_reserve() Steven Rostedt
  2021-11-30  2:39 ` [PATCH 2/5] tracing: Disable preemption when using the filter buffer Steven Rostedt
@ 2021-11-30  2:39 ` Steven Rostedt
  2021-11-30  2:39 ` [PATCH 4/5] tracing/kprobes: Do not open code event reserve logic Steven Rostedt
  2021-11-30  2:39 ` [PATCH 5/5] tracing/uprobes: Use trace_event_buffer_reserve() helper Steven Rostedt
  4 siblings, 0 replies; 10+ messages in thread
From: Steven Rostedt @ 2021-11-30  2:39 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

The eprobes open code the reserving of the event on the ftrace ring
buffer instead of using the ftrace event wrappers, which means they are
not affected by the filters, breaking the filtering that user space
configures.
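
For reference, the shape of the wrapper-based pattern the probes are
converted to (a sketch with a hypothetical entry layout; the real calls
are in the diff below):

	struct trace_event_buffer fbuffer;
	struct my_entry *entry;	/* hypothetical event layout */

	entry = trace_event_buffer_reserve(&fbuffer, trace_file,
					   sizeof(*entry));
	if (!entry)
		return;	/* soft disabled, or the reserve failed */

	/* fill in the entry fields */

	trace_event_buffer_commit(&fbuffer);	/* filters and triggers run here */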

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/trace/trace_eprobe.c | 16 +++++-----------
 1 file changed, 5 insertions(+), 11 deletions(-)

diff --git a/kernel/trace/trace_eprobe.c b/kernel/trace/trace_eprobe.c
index 928867f527e7..88487752d307 100644
--- a/kernel/trace/trace_eprobe.c
+++ b/kernel/trace/trace_eprobe.c
@@ -489,18 +489,12 @@ __eprobe_trace_func(struct eprobe_data *edata, void *rec)
 	if (trace_trigger_soft_disabled(edata->file))
 		return;
 
-	fbuffer.trace_ctx = tracing_gen_ctx();
-	fbuffer.trace_file = edata->file;
-
 	dsize = get_eprobe_size(&edata->ep->tp, rec);
-	fbuffer.regs = NULL;
-
-	fbuffer.event =
-		trace_event_buffer_lock_reserve(&fbuffer.buffer, edata->file,
-					call->event.type,
-					sizeof(*entry) + edata->ep->tp.size + dsize,
-					fbuffer.trace_ctx);
-	if (!fbuffer.event)
+
+	entry = trace_event_buffer_reserve(&fbuffer, edata->file,
+					   sizeof(*entry) + edata->ep->tp.size + dsize);
+
+	if (!entry)
 		return;
 
 	entry = fbuffer.entry = ring_buffer_event_data(fbuffer.event);
-- 
2.33.0

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH 4/5] tracing/kprobes: Do not open code event reserve logic
  2021-11-30  2:39 [PATCH 0/5] tracing: Various updates Steven Rostedt
                   ` (2 preceding siblings ...)
  2021-11-30  2:39 ` [PATCH 3/5] tracing: Have eprobes use filtering logic of trace events Steven Rostedt
@ 2021-11-30  2:39 ` Steven Rostedt
  2021-11-30  4:38   ` Masami Hiramatsu
  2021-11-30  2:39 ` [PATCH 5/5] tracing/uprobes: Use trace_event_buffer_reserve() helper Steven Rostedt
  4 siblings, 1 reply; 10+ messages in thread
From: Steven Rostedt @ 2021-11-30  2:39 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Masami Hiramatsu

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

As kprobe events use trace_event_buffer_commit() to commit the event to
the ftrace ring buffer, they should, for consistency, use
trace_event_buffer_reserve() to allocate it, as the two functions are
designed to work as a pair.

Cc: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/trace/trace_kprobe.c | 25 +++++++------------------
 1 file changed, 7 insertions(+), 18 deletions(-)

diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
index 33272a7b6912..d10c01948e68 100644
--- a/kernel/trace/trace_kprobe.c
+++ b/kernel/trace/trace_kprobe.c
@@ -1383,17 +1383,11 @@ __kprobe_trace_func(struct trace_kprobe *tk, struct pt_regs *regs,
 	if (trace_trigger_soft_disabled(trace_file))
 		return;
 
-	fbuffer.trace_ctx = tracing_gen_ctx();
-	fbuffer.trace_file = trace_file;
-
 	dsize = __get_data_size(&tk->tp, regs);
 
-	fbuffer.event =
-		trace_event_buffer_lock_reserve(&fbuffer.buffer, trace_file,
-					call->event.type,
-					sizeof(*entry) + tk->tp.size + dsize,
-					fbuffer.trace_ctx);
-	if (!fbuffer.event)
+	entry = trace_event_buffer_reserve(&fbuffer, trace_file,
+					   sizeof(*entry) + tk->tp.size + dsize);
+	if (!entry)
 		return;
 
 	fbuffer.regs = regs;
@@ -1430,16 +1424,11 @@ __kretprobe_trace_func(struct trace_kprobe *tk, struct kretprobe_instance *ri,
 	if (trace_trigger_soft_disabled(trace_file))
 		return;
 
-	fbuffer.trace_ctx = tracing_gen_ctx();
-	fbuffer.trace_file = trace_file;
-
 	dsize = __get_data_size(&tk->tp, regs);
-	fbuffer.event =
-		trace_event_buffer_lock_reserve(&fbuffer.buffer, trace_file,
-					call->event.type,
-					sizeof(*entry) + tk->tp.size + dsize,
-					fbuffer.trace_ctx);
-	if (!fbuffer.event)
+
+	entry = trace_event_buffer_reserve(&fbuffer, trace_file,
+					   sizeof(*entry) + tk->tp.size + dsize);
+	if (!entry)
 		return;
 
 	fbuffer.regs = regs;
-- 
2.33.0

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH 5/5] tracing/uprobes: Use trace_event_buffer_reserve() helper
  2021-11-30  2:39 [PATCH 0/5] tracing: Various updates Steven Rostedt
                   ` (3 preceding siblings ...)
  2021-11-30  2:39 ` [PATCH 4/5] tracing/kprobes: Do not open code event reserve logic Steven Rostedt
@ 2021-11-30  2:39 ` Steven Rostedt
  2021-11-30  4:40   ` Masami Hiramatsu
  4 siblings, 1 reply; 10+ messages in thread
From: Steven Rostedt @ 2021-11-30  2:39 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Masami Hiramatsu

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

To be consistent with kprobes and eprobes, use
trace_event_buffer_reserve() and trace_event_buffer_commit(). This
ensures that any updates to the trace event handling will also be
picked up by uprobe events.

Cc: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/trace/trace_uprobe.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
index f5f0039d31e5..177717c4e74a 100644
--- a/kernel/trace/trace_uprobe.c
+++ b/kernel/trace/trace_uprobe.c
@@ -949,8 +949,8 @@ static void __uprobe_trace_func(struct trace_uprobe *tu,
 				struct trace_event_file *trace_file)
 {
 	struct uprobe_trace_entry_head *entry;
+	struct trace_event_buffer fbuffer;
 	struct trace_buffer *buffer;
-	struct ring_buffer_event *event;
 	void *data;
 	int size, esize;
 	struct trace_event_call *call = trace_probe_event_call(&tu->tp);
@@ -965,12 +965,10 @@ static void __uprobe_trace_func(struct trace_uprobe *tu,
 
 	esize = SIZEOF_TRACE_ENTRY(is_ret_probe(tu));
 	size = esize + tu->tp.size + dsize;
-	event = trace_event_buffer_lock_reserve(&buffer, trace_file,
-						call->event.type, size, 0);
-	if (!event)
+	entry = trace_event_buffer_reserve(&fbuffer, trace_file, size);
+	if (!entry)
 		return;
 
-	entry = ring_buffer_event_data(event);
 	if (is_ret_probe(tu)) {
 		entry->vaddr[0] = func;
 		entry->vaddr[1] = instruction_pointer(regs);
@@ -982,7 +980,7 @@ static void __uprobe_trace_func(struct trace_uprobe *tu,
 
 	memcpy(data, ucb->buf, tu->tp.size + dsize);
 
-	event_trigger_unlock_commit(trace_file, buffer, event, entry, 0);
+	trace_event_buffer_commit(&fbuffer);
 }
 
 /* uprobe handler */
-- 
2.33.0

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCH 4/5] tracing/kprobes: Do not open code event reserve logic
  2021-11-30  2:39 ` [PATCH 4/5] tracing/kprobes: Do not open code event reserve logic Steven Rostedt
@ 2021-11-30  4:38   ` Masami Hiramatsu
  0 siblings, 0 replies; 10+ messages in thread
From: Masami Hiramatsu @ 2021-11-30  4:38 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: linux-kernel, Ingo Molnar, Andrew Morton, Masami Hiramatsu

On Mon, 29 Nov 2021 21:39:49 -0500
Steven Rostedt <rostedt@goodmis.org> wrote:

> From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
> 
> As kprobe events use trace_event_buffer_commit() to commit the event to
> the ftrace ring buffer, they should, for consistency, use
> trace_event_buffer_reserve() to allocate it, as the two functions are
> designed to work as a pair.

OK, I got the rule.

Acked-by: Masami Hiramatsu <mhiramat@kernel.org>

BTW, what should be used as the pairing commit call for trace_event_buffer_lock_reserve()?

Thanks!

> Cc: Masami Hiramatsu <mhiramat@kernel.org>
> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
> ---
>  kernel/trace/trace_kprobe.c | 25 +++++++------------------
>  1 file changed, 7 insertions(+), 18 deletions(-)
> 
> diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
> index 33272a7b6912..d10c01948e68 100644
> --- a/kernel/trace/trace_kprobe.c
> +++ b/kernel/trace/trace_kprobe.c
> @@ -1383,17 +1383,11 @@ __kprobe_trace_func(struct trace_kprobe *tk, struct pt_regs *regs,
>  	if (trace_trigger_soft_disabled(trace_file))
>  		return;
>  
> -	fbuffer.trace_ctx = tracing_gen_ctx();
> -	fbuffer.trace_file = trace_file;
> -
>  	dsize = __get_data_size(&tk->tp, regs);
>  
> -	fbuffer.event =
> -		trace_event_buffer_lock_reserve(&fbuffer.buffer, trace_file,
> -					call->event.type,
> -					sizeof(*entry) + tk->tp.size + dsize,
> -					fbuffer.trace_ctx);
> -	if (!fbuffer.event)
> +	entry = trace_event_buffer_reserve(&fbuffer, trace_file,
> +					   sizeof(*entry) + tk->tp.size + dsize);
> +	if (!entry)
>  		return;
>  
>  	fbuffer.regs = regs;
> @@ -1430,16 +1424,11 @@ __kretprobe_trace_func(struct trace_kprobe *tk, struct kretprobe_instance *ri,
>  	if (trace_trigger_soft_disabled(trace_file))
>  		return;
>  
> -	fbuffer.trace_ctx = tracing_gen_ctx();
> -	fbuffer.trace_file = trace_file;
> -
>  	dsize = __get_data_size(&tk->tp, regs);
> -	fbuffer.event =
> -		trace_event_buffer_lock_reserve(&fbuffer.buffer, trace_file,
> -					call->event.type,
> -					sizeof(*entry) + tk->tp.size + dsize,
> -					fbuffer.trace_ctx);
> -	if (!fbuffer.event)
> +
> +	entry = trace_event_buffer_reserve(&fbuffer, trace_file,
> +					   sizeof(*entry) + tk->tp.size + dsize);
> +	if (!entry)
>  		return;
>  
>  	fbuffer.regs = regs;
> -- 
> 2.33.0


-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH 5/5] tracing/uprobes: Use trace_event_buffer_reserve() helper
  2021-11-30  2:39 ` [PATCH 5/5] tracing/uprobes: Use trace_event_buffer_reserve() helper Steven Rostedt
@ 2021-11-30  4:40   ` Masami Hiramatsu
  0 siblings, 0 replies; 10+ messages in thread
From: Masami Hiramatsu @ 2021-11-30  4:40 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: linux-kernel, Ingo Molnar, Andrew Morton, Masami Hiramatsu

On Mon, 29 Nov 2021 21:39:50 -0500
Steven Rostedt <rostedt@goodmis.org> wrote:

> From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
> 
> To be consistent with kprobes and eprobes, use
> trace_event_buffer_reserve() and trace_event_buffer_commit(). This
> ensures that any updates to the trace event handling will also be
> picked up by uprobe events.
> 

Ah, event_trigger_unlock_commit() is the pairing commit function. I see.
Anyway, the trace_event_buffer_reserve() and trace_event_buffer_commit()
pair should be used.

Acked-by: Masami Hiramatsu <mhiramat@kernel.org>

Thank you,

> Cc: Masami Hiramatsu <mhiramat@kernel.org>
> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
> ---
>  kernel/trace/trace_uprobe.c | 10 ++++------
>  1 file changed, 4 insertions(+), 6 deletions(-)
> 
> diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
> index f5f0039d31e5..177717c4e74a 100644
> --- a/kernel/trace/trace_uprobe.c
> +++ b/kernel/trace/trace_uprobe.c
> @@ -949,8 +949,8 @@ static void __uprobe_trace_func(struct trace_uprobe *tu,
>  				struct trace_event_file *trace_file)
>  {
>  	struct uprobe_trace_entry_head *entry;
> +	struct trace_event_buffer fbuffer;
>  	struct trace_buffer *buffer;
> -	struct ring_buffer_event *event;
>  	void *data;
>  	int size, esize;
>  	struct trace_event_call *call = trace_probe_event_call(&tu->tp);
> @@ -965,12 +965,10 @@ static void __uprobe_trace_func(struct trace_uprobe *tu,
>  
>  	esize = SIZEOF_TRACE_ENTRY(is_ret_probe(tu));
>  	size = esize + tu->tp.size + dsize;
> -	event = trace_event_buffer_lock_reserve(&buffer, trace_file,
> -						call->event.type, size, 0);
> -	if (!event)
> +	entry = trace_event_buffer_reserve(&fbuffer, trace_file, size);
> +	if (!entry)
>  		return;
>  
> -	entry = ring_buffer_event_data(event);
>  	if (is_ret_probe(tu)) {
>  		entry->vaddr[0] = func;
>  		entry->vaddr[1] = instruction_pointer(regs);
> @@ -982,7 +980,7 @@ static void __uprobe_trace_func(struct trace_uprobe *tu,
>  
>  	memcpy(data, ucb->buf, tu->tp.size + dsize);
>  
> -	event_trigger_unlock_commit(trace_file, buffer, event, entry, 0);
> +	trace_event_buffer_commit(&fbuffer);
>  }
>  
>  /* uprobe handler */
> -- 
> 2.33.0


-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH 2/5] tracing: Disable preemption when using the filter buffer
  2021-11-30  2:39 ` [PATCH 2/5] tracing: Disable preemption when using the filter buffer Steven Rostedt
@ 2021-11-30  5:02     ` kernel test robot
  0 siblings, 0 replies; 10+ messages in thread
From: kernel test robot @ 2021-11-30  5:02 UTC (permalink / raw)
  To: Steven Rostedt, linux-kernel
  Cc: kbuild-all, Ingo Molnar, Andrew Morton, Linux Memory Management List

Hi Steven,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on rostedt-trace/for-next]
[also build test WARNING on linux/master hnaz-mm/master linus/master v5.16-rc3 next-20211129]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Steven-Rostedt/tracing-Various-updates/20211130-104342
base:   https://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace.git for-next
config: mips-allyesconfig (https://download.01.org/0day-ci/archive/20211130/202111301209.GarWNR3z-lkp@intel.com/config)
compiler: mips-linux-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/1ac91c8764ae50601cd41dceb620205607ab59f6
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Steven-Rostedt/tracing-Various-updates/20211130-104342
        git checkout 1ac91c8764ae50601cd41dceb620205607ab59f6
        # save the config file to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross O=build_dir ARCH=mips SHELL=/bin/bash kernel/trace/

If you fix the issue, kindly add the following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   kernel/trace/trace.c: In function 'trace_event_buffer_lock_reserve':
>> kernel/trace/trace.c:2769:21: warning: suggest parentheses around assignment used as truth value [-Wparentheses]
    2769 |                 if (entry = __this_cpu_read(trace_buffered_event)) {
         |                     ^~~~~
   kernel/trace/trace.c: In function 'trace_check_vprintf':
   kernel/trace/trace.c:3820:17: warning: function 'trace_check_vprintf' might be a candidate for 'gnu_printf' format attribute [-Wsuggest-attribute=format]
    3820 |                 trace_seq_vprintf(&iter->seq, iter->fmt, ap);
         |                 ^~~~~~~~~~~~~~~~~
   kernel/trace/trace.c:3887:17: warning: function 'trace_check_vprintf' might be a candidate for 'gnu_printf' format attribute [-Wsuggest-attribute=format]
    3887 |                 trace_seq_vprintf(&iter->seq, p, ap);
         |                 ^~~~~~~~~~~~~~~~~
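
The new warning flags an assignment used directly as the if condition;
since the assignment is intentional here, the conventional fix is a
second set of parentheses:

		if ((entry = __this_cpu_read(trace_buffered_event))) {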


vim +2769 kernel/trace/trace.c

  2736	
  2737	struct ring_buffer_event *
  2738	trace_event_buffer_lock_reserve(struct trace_buffer **current_rb,
  2739				  struct trace_event_file *trace_file,
  2740				  int type, unsigned long len,
  2741				  unsigned int trace_ctx)
  2742	{
  2743		struct ring_buffer_event *entry;
  2744		struct trace_array *tr = trace_file->tr;
  2745		int val;
  2746	
  2747		*current_rb = tr->array_buffer.buffer;
  2748	
  2749		if (!tr->no_filter_buffering_ref &&
  2750		    (trace_file->flags & (EVENT_FILE_FL_SOFT_DISABLED | EVENT_FILE_FL_FILTERED))) {
  2751			preempt_disable_notrace();
  2752			/*
  2753			 * Filtering is on, so try to use the per cpu buffer first.
  2754			 * This buffer will simulate a ring_buffer_event,
  2755			 * where the type_len is zero and the array[0] will
  2756			 * hold the full length.
  2757			 * (see include/linux/ring-buffer.h for details on
  2758			 *  how the ring_buffer_event is structured).
  2759			 *
  2760			 * Using a temp buffer during filtering and copying it
  2761			 * on a matched filter is quicker than writing directly
  2762			 * into the ring buffer and then discarding it when
  2763			 * it doesn't match. That is because the discard
  2764			 * requires several atomic operations to get right.
  2765			 * Copying on match and doing nothing on a failed match
  2766			 * is still quicker than no copy on match, but having
  2767			 * to discard out of the ring buffer on a failed match.
  2768			 */
> 2769			if (entry = __this_cpu_read(trace_buffered_event)) {
  2770				int max_len = PAGE_SIZE - struct_size(entry, array, 1);
  2771	
  2772				val = this_cpu_inc_return(trace_buffered_event_cnt);
  2773	
  2774				/*
  2775				 * Preemption is disabled, but interrupts and NMIs
  2776				 * can still come in now. If that happens after
  2777				 * the above increment, then it will have to go
  2778				 * back to the old method of allocating the event
  2779				 * on the ring buffer, and if the filter fails, it
  2780				 * will have to call ring_buffer_discard_commit()
  2781				 * to remove it.
  2782				 *
  2783				 * Need to also check the unlikely case that the
  2784				 * length is bigger than the temp buffer size.
  2785				 * If that happens, then the reserve is pretty much
  2786				 * guaranteed to fail, as the ring buffer currently
  2787				 * only allows events less than a page. But that may
  2788				 * change in the future, so let the ring buffer reserve
  2789				 * handle the failure in that case.
  2790				 */
  2791				if (val == 1 && likely(len <= max_len)) {
  2792					trace_event_setup(entry, type, trace_ctx);
  2793					entry->array[0] = len;
  2794					/* Return with preemption disabled */
  2795					return entry;
  2796				}
  2797				this_cpu_dec(trace_buffered_event_cnt);
  2798			}
  2799			/* __trace_buffer_lock_reserve() disables preemption */
  2800			preempt_enable_notrace();
  2801		}
  2802	
  2803		entry = __trace_buffer_lock_reserve(*current_rb, type, len,
  2804						    trace_ctx);
  2805		/*
  2806		 * If tracing is off, but we have triggers enabled
  2807		 * we still need to look at the event data. Use the temp_buffer
  2808		 * to store the trace event for the trigger to use. It's recursive
  2809		 * safe and will not be recorded anywhere.
  2810		 */
  2811		if (!entry && trace_file->flags & EVENT_FILE_FL_TRIGGER_COND) {
  2812			*current_rb = temp_buffer;
  2813			entry = __trace_buffer_lock_reserve(*current_rb, type, len,
  2814							    trace_ctx);
  2815		}
  2816		return entry;
  2817	}
  2818	EXPORT_SYMBOL_GPL(trace_event_buffer_lock_reserve);
  2819	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2021-11-30  5:02 UTC | newest]

Thread overview: 10+ messages
-- links below jump to the message on this page --
2021-11-30  2:39 [PATCH 0/5] tracing: Various updates Steven Rostedt
2021-11-30  2:39 ` [PATCH 1/5] tracing: Use __this_cpu_read() in trace_event_buffer_lock_reserve() Steven Rostedt
2021-11-30  2:39 ` [PATCH 2/5] tracing: Disable preemption when using the filter buffer Steven Rostedt
2021-11-30  5:02   ` kernel test robot
2021-11-30  2:39 ` [PATCH 3/5] tracing: Have eprobes use filtering logic of trace events Steven Rostedt
2021-11-30  2:39 ` [PATCH 4/5] tracing/kprobes: Do not open code event reserve logic Steven Rostedt
2021-11-30  4:38   ` Masami Hiramatsu
2021-11-30  2:39 ` [PATCH 5/5] tracing/uprobes: Use trace_event_buffer_reserve() helper Steven Rostedt
2021-11-30  4:40   ` Masami Hiramatsu
