* [PATCH 5.10] tracing: Add tracing_reset_all_online_cpus_unlocked() function
@ 2023-06-15 20:49 Zheng Yejian
  2023-06-19  8:26 ` Greg KH
  0 siblings, 1 reply; 4+ messages in thread
From: Zheng Yejian @ 2023-06-15 20:49 UTC (permalink / raw)
  To: rostedt, gregkh
  Cc: mhiramat, linux-kernel, linux-trace-kernel, stable, zhengyejian1

From: "Steven Rostedt (Google)" <rostedt@goodmis.org>

commit e18eb8783ec4949adebc7d7b0fdb65f65bfeefd9 upstream.

Currently tracing_reset_all_online_cpus() requires trace_types_lock to be
held. But only one caller of this function actually has that lock held
before calling it, and the other just takes the lock so that it can call
it. More users of this function are needed where the lock is not held.

Add a tracing_reset_all_online_cpus_unlocked() function for the one caller
that already holds the lock, and add a lockdep_assert_held() to make sure
the lock is held when it is called.

Then have tracing_reset_all_online_cpus() take the lock internally, such
that callers do not need to worry about taking it.
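
To make the resulting calling convention concrete, here is a minimal
sketch (illustrative only, not part of the diff below; the example_*
callers are hypothetical, while the helpers and trace_types_lock are the
ones this patch touches):

/*
 * A caller that does not hold trace_types_lock uses the wrapper,
 * which takes and releases the lock internally.
 */
static void example_reset_without_lock(void)
{
	tracing_reset_all_online_cpus();
}

/*
 * A caller that already holds trace_types_lock (e.g. the module
 * notifier path) uses the _unlocked variant; lockdep_assert_held()
 * verifies that the lock really is held.
 */
static void example_reset_with_lock_held(void)
{
	mutex_lock(&trace_types_lock);
	tracing_reset_all_online_cpus_unlocked();
	mutex_unlock(&trace_types_lock);
}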

Link: https://lkml.kernel.org/r/20221123192741.658273220@goodmis.org

Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Zheng Yejian <zhengyejian1@huawei.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
[Commit be111ebd8868d4b7c041cb3c6102e1ae27d6dc1d depends on this patch:
tracing_reset_all_online_cpus() must be called with trace_types_lock held.]
Fixes: be111ebd8868 ("tracing: Free buffers when a used dynamic event is removed")
Signed-off-by: Zheng Yejian <zhengyejian1@huawei.com>
---
 kernel/trace/trace.c              | 11 ++++++++++-
 kernel/trace/trace.h              |  1 +
 kernel/trace/trace_events.c       |  2 +-
 kernel/trace/trace_events_synth.c |  2 --
 4 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 482ec6606b7b..70526400e05c 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2178,10 +2178,12 @@ void tracing_reset_online_cpus(struct array_buffer *buf)
 }
 
 /* Must have trace_types_lock held */
-void tracing_reset_all_online_cpus(void)
+void tracing_reset_all_online_cpus_unlocked(void)
 {
 	struct trace_array *tr;
 
+	lockdep_assert_held(&trace_types_lock);
+
 	list_for_each_entry(tr, &ftrace_trace_arrays, list) {
 		if (!tr->clear_trace)
 			continue;
@@ -2193,6 +2195,13 @@ void tracing_reset_all_online_cpus(void)
 	}
 }
 
+void tracing_reset_all_online_cpus(void)
+{
+	mutex_lock(&trace_types_lock);
+	tracing_reset_all_online_cpus_unlocked();
+	mutex_unlock(&trace_types_lock);
+}
+
 /*
  * The tgid_map array maps from pid to tgid; i.e. the value stored at index i
  * is the tgid last observed corresponding to pid=i.
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 37f616bf5fa9..e5b505b5b7d0 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -725,6 +725,7 @@ int tracing_is_enabled(void);
 void tracing_reset_online_cpus(struct array_buffer *buf);
 void tracing_reset_current(int cpu);
 void tracing_reset_all_online_cpus(void);
+void tracing_reset_all_online_cpus_unlocked(void);
 int tracing_open_generic(struct inode *inode, struct file *filp);
 int tracing_open_generic_tr(struct inode *inode, struct file *filp);
 bool tracing_is_disabled(void);
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index bac13f24a96e..f8ed66f38175 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -2661,7 +2661,7 @@ static void trace_module_remove_events(struct module *mod)
 	 * over from this module may be passed to the new module events and
 	 * unexpected results may occur.
 	 */
-	tracing_reset_all_online_cpus();
+	tracing_reset_all_online_cpus_unlocked();
 }
 
 static int trace_module_notify(struct notifier_block *self,
diff --git a/kernel/trace/trace_events_synth.c b/kernel/trace/trace_events_synth.c
index 18291ab35657..ee174de0b8f6 100644
--- a/kernel/trace/trace_events_synth.c
+++ b/kernel/trace/trace_events_synth.c
@@ -1363,7 +1363,6 @@ int synth_event_delete(const char *event_name)
 	mutex_unlock(&event_mutex);
 
 	if (mod) {
-		mutex_lock(&trace_types_lock);
 		/*
 		 * It is safest to reset the ring buffer if the module
 		 * being unloaded registered any events that were
@@ -1375,7 +1374,6 @@ int synth_event_delete(const char *event_name)
 		 * occur.
 		 */
 		tracing_reset_all_online_cpus();
-		mutex_unlock(&trace_types_lock);
 	}
 
 	return ret;
-- 
2.25.1



* Re: [PATCH 5.10] tracing: Add tracing_reset_all_online_cpus_unlocked() function
  2023-06-15 20:49 [PATCH 5.10] tracing: Add tracing_reset_all_online_cpus_unlocked() function Zheng Yejian
@ 2023-06-19  8:26 ` Greg KH
  2023-06-19 13:38   ` Zheng Yejian
  0 siblings, 1 reply; 4+ messages in thread
From: Greg KH @ 2023-06-19  8:26 UTC (permalink / raw)
  To: Zheng Yejian; +Cc: rostedt, mhiramat, linux-kernel, linux-trace-kernel, stable

On Fri, Jun 16, 2023 at 04:49:31AM +0800, Zheng Yejian wrote:
> From: "Steven Rostedt (Google)" <rostedt@goodmis.org>
> 
> commit e18eb8783ec4949adebc7d7b0fdb65f65bfeefd9 upstream.
> 
> Currently tracing_reset_all_online_cpus() requires trace_types_lock to be
> held. But only one caller of this function actually has that lock held
> before calling it, and the other just takes the lock so that it can call
> it. More users of this function are needed where the lock is not held.
> 
> Add a tracing_reset_all_online_cpus_unlocked() function for the one caller
> that already holds the lock, and add a lockdep_assert_held() to make sure
> the lock is held when it is called.
> 
> Then have tracing_reset_all_online_cpus() take the lock internally, such
> that callers do not need to worry about taking it.
> 
> Link: https://lkml.kernel.org/r/20221123192741.658273220@goodmis.org
> 
> Cc: Masami Hiramatsu <mhiramat@kernel.org>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Zheng Yejian <zhengyejian1@huawei.com>
> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
> [Commit be111ebd8868d4b7c041cb3c6102e1ae27d6dc1d depends on this patch:
> tracing_reset_all_online_cpus() must be called with trace_types_lock held.]
> Fixes: be111ebd8868 ("tracing: Free buffers when a used dynamic event is removed")
> Signed-off-by: Zheng Yejian <zhengyejian1@huawei.com>
> ---


What about 5.15.y?  You can't apply a fix to just an older tree, as
you will then have a regression when you update.

I'll drop this one from my queue; please resend a backport for all
relevant stable releases.

thanks,

greg k-h


* Re: [PATCH 5.10] tracing: Add tracing_reset_all_online_cpus_unlocked() function
  2023-06-19  8:26 ` Greg KH
@ 2023-06-19 13:38   ` Zheng Yejian
  0 siblings, 0 replies; 4+ messages in thread
From: Zheng Yejian @ 2023-06-19 13:38 UTC (permalink / raw)
  To: Greg KH; +Cc: rostedt, mhiramat, linux-kernel, linux-trace-kernel, stable

On 2023/6/19 16:26, Greg KH wrote:
> On Fri, Jun 16, 2023 at 04:49:31AM +0800, Zheng Yejian wrote:
>> From: "Steven Rostedt (Google)" <rostedt@goodmis.org>
>>
>> commit e18eb8783ec4949adebc7d7b0fdb65f65bfeefd9 upstream.
>>
>> Currently tracing_reset_all_online_cpus() requires trace_types_lock to be
>> held. But only one caller of this function actually has that lock held
>> before calling it, and the other just takes the lock so that it can call
>> it. More users of this function are needed where the lock is not held.
>>
>> Add a tracing_reset_all_online_cpus_unlocked() function for the one caller
>> that already holds the lock, and add a lockdep_assert_held() to make sure
>> the lock is held when it is called.
>>
>> Then have tracing_reset_all_online_cpus() take the lock internally, such
>> that callers do not need to worry about taking it.
>>
>> Link: https://lkml.kernel.org/r/20221123192741.658273220@goodmis.org
>>
>> Cc: Masami Hiramatsu <mhiramat@kernel.org>
>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Cc: Zheng Yejian <zhengyejian1@huawei.com>
>> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
>> [Commit be111ebd8868d4b7c041cb3c6102e1ae27d6dc1d depends on this patch:
>> tracing_reset_all_online_cpus() must be called with trace_types_lock held.]
>> Fixes: be111ebd8868 ("tracing: Free buffers when a used dynamic event is removed")
>> Signed-off-by: Zheng Yejian <zhengyejian1@huawei.com>
>> ---
> 
> 
> What about 5.15.y?  You can't apply a fix to just an older tree, as
> you will then have a regression when you update.
> 
> I'll drop this one from my queue; please resend a backport for all
> relevant stable releases.

Hi, greg,

I have resent the patch to the relevant stable releases:
5.15.y: 
https://lore.kernel.org/all/20230620013052.1127047-1-zhengyejian1@huawei.com/
5.10.y: 
https://lore.kernel.org/all/20230620013104.1127100-1-zhengyejian1@huawei.com/
5.4.y: 
https://lore.kernel.org/all/20230620013113.1127152-1-zhengyejian1@huawei.com/

---

Thanks,
Zheng Yejian

> 
> thanks,
> 
> greg k-h



* [PATCH 5.10] tracing: Add tracing_reset_all_online_cpus_unlocked() function
@ 2023-06-20  1:31 Zheng Yejian
  0 siblings, 0 replies; 4+ messages in thread
From: Zheng Yejian @ 2023-06-20  1:31 UTC (permalink / raw)
  To: gregkh, rostedt, mhiramat
  Cc: linux-kernel, linux-trace-kernel, stable, zhengyejian1

From: "Steven Rostedt (Google)" <rostedt@goodmis.org>

commit e18eb8783ec4949adebc7d7b0fdb65f65bfeefd9 upstream.

Currently tracing_reset_all_online_cpus() requires trace_types_lock to be
held. But only one caller of this function actually has that lock held
before calling it, and the other just takes the lock so that it can call
it. More users of this function are needed where the lock is not held.

Add a tracing_reset_all_online_cpus_unlocked() function for the one caller
that already holds the lock, and add a lockdep_assert_held() to make sure
the lock is held when it is called.

Then have tracing_reset_all_online_cpus() take the lock internally, such
that callers do not need to worry about taking it.

Link: https://lkml.kernel.org/r/20221123192741.658273220@goodmis.org

Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Zheng Yejian <zhengyejian1@huawei.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>

[Per the commit message of be111ebd8868d4b7c041cb3c6102e1ae27d6dc1d,
that commit depends on this patch; tracing_reset_all_online_cpus() must
be called with trace_types_lock held, as its comment describes.]
Fixes: be111ebd8868 ("tracing: Free buffers when a used dynamic event is removed")
Signed-off-by: Zheng Yejian <zhengyejian1@huawei.com>
---
 kernel/trace/trace.c              | 11 ++++++++++-
 kernel/trace/trace.h              |  1 +
 kernel/trace/trace_events.c       |  2 +-
 kernel/trace/trace_events_synth.c |  2 --
 4 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 482ec6606b7b..70526400e05c 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2178,10 +2178,12 @@ void tracing_reset_online_cpus(struct array_buffer *buf)
 }
 
 /* Must have trace_types_lock held */
-void tracing_reset_all_online_cpus(void)
+void tracing_reset_all_online_cpus_unlocked(void)
 {
 	struct trace_array *tr;
 
+	lockdep_assert_held(&trace_types_lock);
+
 	list_for_each_entry(tr, &ftrace_trace_arrays, list) {
 		if (!tr->clear_trace)
 			continue;
@@ -2193,6 +2195,13 @@ void tracing_reset_all_online_cpus(void)
 	}
 }
 
+void tracing_reset_all_online_cpus(void)
+{
+	mutex_lock(&trace_types_lock);
+	tracing_reset_all_online_cpus_unlocked();
+	mutex_unlock(&trace_types_lock);
+}
+
 /*
  * The tgid_map array maps from pid to tgid; i.e. the value stored at index i
  * is the tgid last observed corresponding to pid=i.
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 37f616bf5fa9..e5b505b5b7d0 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -725,6 +725,7 @@ int tracing_is_enabled(void);
 void tracing_reset_online_cpus(struct array_buffer *buf);
 void tracing_reset_current(int cpu);
 void tracing_reset_all_online_cpus(void);
+void tracing_reset_all_online_cpus_unlocked(void);
 int tracing_open_generic(struct inode *inode, struct file *filp);
 int tracing_open_generic_tr(struct inode *inode, struct file *filp);
 bool tracing_is_disabled(void);
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index bac13f24a96e..f8ed66f38175 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -2661,7 +2661,7 @@ static void trace_module_remove_events(struct module *mod)
 	 * over from this module may be passed to the new module events and
 	 * unexpected results may occur.
 	 */
-	tracing_reset_all_online_cpus();
+	tracing_reset_all_online_cpus_unlocked();
 }
 
 static int trace_module_notify(struct notifier_block *self,
diff --git a/kernel/trace/trace_events_synth.c b/kernel/trace/trace_events_synth.c
index 18291ab35657..ee174de0b8f6 100644
--- a/kernel/trace/trace_events_synth.c
+++ b/kernel/trace/trace_events_synth.c
@@ -1363,7 +1363,6 @@ int synth_event_delete(const char *event_name)
 	mutex_unlock(&event_mutex);
 
 	if (mod) {
-		mutex_lock(&trace_types_lock);
 		/*
 		 * It is safest to reset the ring buffer if the module
 		 * being unloaded registered any events that were
@@ -1375,7 +1374,6 @@ int synth_event_delete(const char *event_name)
 		 * occur.
 		 */
 		tracing_reset_all_online_cpus();
-		mutex_unlock(&trace_types_lock);
 	}
 
 	return ret;
-- 
2.25.1



