linux-rt-users.vger.kernel.org archive mirror
* [PATCH] tracing: Fix trace entry and trace common fields for preempt_lazy_count
@ 2020-02-21 15:35 Jiri Olsa
  2020-02-21 15:49 ` Steven Rostedt
  0 siblings, 1 reply; 15+ messages in thread
From: Jiri Olsa @ 2020-02-21 15:35 UTC (permalink / raw)
  To: linux-rt-users
  Cc: Thomas Gleixner, Juri Lelli, Sebastian Sewior,
	Arnaldo Carvalho de Melo, Steven Rostedt

When commit 65fd07df3588 added preempt_lazy_count into 'struct trace_entry',
it did not add the 4 bytes of padding. We also need to update the common
fields for tracepoints; otherwise some tools (bpftrace) stop working due to
the missing common fields.

Fixes: 65fd07df3588 ("x86: Support for lazy preemption")
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/linux/trace_events.h | 2 ++
 kernel/trace/trace_events.c  | 3 +++
 2 files changed, 5 insertions(+)

diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index f3b1ef07e4a5..51a3f5188923 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -65,6 +65,8 @@ struct trace_entry {
 	unsigned short		migrate_disable;
 	unsigned short		padding;
 	unsigned char		preempt_lazy_count;
+	unsigned char		padding1;
+	unsigned short		padding2;
 };
 
 #define TRACE_EVENT_TYPE_MAX						\
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index accaae59a762..1fe37b7aeaff 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -183,6 +183,9 @@ static int trace_define_common_fields(void)
 	__common_field(int, pid);
 	__common_field(unsigned short, migrate_disable);
 	__common_field(unsigned short, padding);
+	__common_field(unsigned char,  preempt_lazy_count);
+	__common_field(unsigned char,  padding1);
+	__common_field(unsigned short, padding2);
 
 	return ret;
 }
-- 
2.24.1



* Re: [PATCH] tracing: Fix trace entry and trace common fields for preempt_lazy_count
  2020-02-21 15:35 [PATCH] tracing: Fix trace entry and trace common fields for preempt_lazy_count Jiri Olsa
@ 2020-02-21 15:49 ` Steven Rostedt
  2020-02-21 16:10   ` Jiri Olsa
  0 siblings, 1 reply; 15+ messages in thread
From: Steven Rostedt @ 2020-02-21 15:49 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: linux-rt-users, Thomas Gleixner, Juri Lelli, Sebastian Sewior,
	Arnaldo Carvalho de Melo

On Fri, 21 Feb 2020 16:35:41 +0100
Jiri Olsa <jolsa@kernel.org> wrote:

> When commit 65fd07df3588 added preempt_lazy_count into 'struct trace_entry',
> it did not add the 4 bytes of padding. We also need to update the common
> fields for tracepoints; otherwise some tools (bpftrace) stop working due to
> the missing common fields.
> 
> Fixes: 65fd07df3588 ("x86: Support for lazy preemption")
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  include/linux/trace_events.h | 2 ++
>  kernel/trace/trace_events.c  | 3 +++
>  2 files changed, 5 insertions(+)
> 
> diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
> index f3b1ef07e4a5..51a3f5188923 100644
> --- a/include/linux/trace_events.h
> +++ b/include/linux/trace_events.h
> @@ -65,6 +65,8 @@ struct trace_entry {
>  	unsigned short		migrate_disable;
>  	unsigned short		padding;
>  	unsigned char		preempt_lazy_count;
> +	unsigned char		padding1;
> +	unsigned short		padding2;


Wait! I don't have these changes in my tree, nor do I see them in
Linus's. This really bloats the trace events! This header is very
sensitive to size, and just willy-nilly adding to it is unacceptable;
it's like adding to struct page. This gets added to *every* event, so a
single added byte costs 1MB extra for a million events (very common in
tracing), and 1GB extra for a billion events.

Let's find a better way to handle this.

-- Steve


>  };
>  
>  #define TRACE_EVENT_TYPE_MAX						\
> diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
> index accaae59a762..1fe37b7aeaff 100644
> --- a/kernel/trace/trace_events.c
> +++ b/kernel/trace/trace_events.c
> @@ -183,6 +183,9 @@ static int trace_define_common_fields(void)
>  	__common_field(int, pid);
>  	__common_field(unsigned short, migrate_disable);
>  	__common_field(unsigned short, padding);
> +	__common_field(unsigned char,  preempt_lazy_count);
> +	__common_field(unsigned char,  padding1);
> +	__common_field(unsigned short, padding2);
>  
>  	return ret;
>  }



* Re: [PATCH] tracing: Fix trace entry and trace common fields for preempt_lazy_count
  2020-02-21 15:49 ` Steven Rostedt
@ 2020-02-21 16:10   ` Jiri Olsa
  2020-02-21 16:21     ` Steven Rostedt
  0 siblings, 1 reply; 15+ messages in thread
From: Jiri Olsa @ 2020-02-21 16:10 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Jiri Olsa, linux-rt-users, Thomas Gleixner, Juri Lelli,
	Sebastian Sewior, Arnaldo Carvalho de Melo

On Fri, Feb 21, 2020 at 10:49:22AM -0500, Steven Rostedt wrote:
> On Fri, 21 Feb 2020 16:35:41 +0100
> Jiri Olsa <jolsa@kernel.org> wrote:
> 
> > When commit 65fd07df3588 added preempt_lazy_count into 'struct trace_entry',
> > it did not add the 4 bytes of padding. We also need to update the common
> > fields for tracepoints; otherwise some tools (bpftrace) stop working due to
> > the missing common fields.
> > 
> > Fixes: 65fd07df3588 ("x86: Support for lazy preemption")
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >  include/linux/trace_events.h | 2 ++
> >  kernel/trace/trace_events.c  | 3 +++
> >  2 files changed, 5 insertions(+)
> > 
> > diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
> > index f3b1ef07e4a5..51a3f5188923 100644
> > --- a/include/linux/trace_events.h
> > +++ b/include/linux/trace_events.h
> > @@ -65,6 +65,8 @@ struct trace_entry {
> >  	unsigned short		migrate_disable;
> >  	unsigned short		padding;
> >  	unsigned char		preempt_lazy_count;
> > +	unsigned char		padding1;
> > +	unsigned short		padding2;
> 
> 
> Wait! I don't have these changes in my tree, nor do I see them in
> Linus's. This really bloats the trace events! This header is very
> sensitive to size, and just willy-nilly adding to it is unacceptable;
> it's like adding to struct page. This gets added to *every* event, so a
> single added byte costs 1MB extra for a million events (very common in
> tracing), and 1GB extra for a billion events.

I'm on top of:
  git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git
  v5.4.19-rt11-rebase

> 
> Let's find a better way to handle this.

I can fix the bpftrace tool, I guess, though it's not
so convenient the way it's used there

jirka

> 
> -- Steve
> 
> 
> >  };
> >  
> >  #define TRACE_EVENT_TYPE_MAX						\
> > diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
> > index accaae59a762..1fe37b7aeaff 100644
> > --- a/kernel/trace/trace_events.c
> > +++ b/kernel/trace/trace_events.c
> > @@ -183,6 +183,9 @@ static int trace_define_common_fields(void)
> >  	__common_field(int, pid);
> >  	__common_field(unsigned short, migrate_disable);
> >  	__common_field(unsigned short, padding);
> > +	__common_field(unsigned char,  preempt_lazy_count);
> > +	__common_field(unsigned char,  padding1);
> > +	__common_field(unsigned short, padding2);
> >  
> >  	return ret;
> >  }
> 



* Re: [PATCH] tracing: Fix trace entry and trace common fields for preempt_lazy_count
  2020-02-21 16:10   ` Jiri Olsa
@ 2020-02-21 16:21     ` Steven Rostedt
  2020-02-21 16:29       ` Jiri Olsa
  2020-02-21 16:37       ` Sebastian Andrzej Siewior
  0 siblings, 2 replies; 15+ messages in thread
From: Steven Rostedt @ 2020-02-21 16:21 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Jiri Olsa, linux-rt-users, Thomas Gleixner, Juri Lelli,
	Sebastian Sewior, Arnaldo Carvalho de Melo

On Fri, 21 Feb 2020 17:10:30 +0100
Jiri Olsa <jolsa@redhat.com> wrote:

> On Fri, Feb 21, 2020 at 10:49:22AM -0500, Steven Rostedt wrote:
> > On Fri, 21 Feb 2020 16:35:41 +0100
> > Jiri Olsa <jolsa@kernel.org> wrote:
> >   
> > > When commit 65fd07df3588 added preempt_lazy_count into 'struct trace_entry',
> > > it did not add the 4 bytes of padding. We also need to update the common
> > > fields for tracepoints; otherwise some tools (bpftrace) stop working due to
> > > the missing common fields.
> > > 
> > > Fixes: 65fd07df3588 ("x86: Support for lazy preemption")
> > > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > > ---
> > >  include/linux/trace_events.h | 2 ++
> > >  kernel/trace/trace_events.c  | 3 +++
> > >  2 files changed, 5 insertions(+)
> > > 
> > > diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
> > > index f3b1ef07e4a5..51a3f5188923 100644
> > > --- a/include/linux/trace_events.h
> > > +++ b/include/linux/trace_events.h
> > > @@ -65,6 +65,8 @@ struct trace_entry {
> > >  	unsigned short		migrate_disable;
> > >  	unsigned short		padding;
> > >  	unsigned char		preempt_lazy_count;
> > > +	unsigned char		padding1;
> > > +	unsigned short		padding2;  
> > 
> > 
> > Wait! I don't have these changes in my tree, nor do I see them in
> > Linus's. This really bloats the trace events! This header is very
> > sensitive to size, and just willy-nilly adding to it is unacceptable;
> > it's like adding to struct page. This gets added to *every* event, so a
> > single added byte costs 1MB extra for a million events (very common in
> > tracing), and 1GB extra for a billion events.  
> 
> I'm on top of:
>   git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git
>   v5.4.19-rt11-rebase
> 

Ug, I now see it:

struct trace_entry {
	unsigned short		type;
	unsigned char		flags;
	unsigned char		preempt_count;
	int			pid;
	unsigned short		migrate_disable;
	unsigned short		padding;
	unsigned char		preempt_lazy_count;
};

Which adds a ton of bloat.

> > 
> > Let's find a better way to handle this.  
> 
> I can fix the bpftrace tool, I guess, though it's not
> so convenient the way it's used there

Not as inconvenient as dropping events due to wasted space in the ring
buffer. Note, this is attached to function tracing events. Any increase
here will cause more function events to be dropped.

Why is migrate_disable a short? Is there going to be more than 256
levels of nesting?

-- Steve


* Re: [PATCH] tracing: Fix trace entry and trace common fields for preempt_lazy_count
  2020-02-21 16:21     ` Steven Rostedt
@ 2020-02-21 16:29       ` Jiri Olsa
  2020-02-21 16:50         ` Steven Rostedt
  2020-02-21 16:37       ` Sebastian Andrzej Siewior
  1 sibling, 1 reply; 15+ messages in thread
From: Jiri Olsa @ 2020-02-21 16:29 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Jiri Olsa, linux-rt-users, Thomas Gleixner, Juri Lelli,
	Sebastian Sewior, Arnaldo Carvalho de Melo

On Fri, Feb 21, 2020 at 11:21:52AM -0500, Steven Rostedt wrote:
> On Fri, 21 Feb 2020 17:10:30 +0100
> Jiri Olsa <jolsa@redhat.com> wrote:
> 
> > On Fri, Feb 21, 2020 at 10:49:22AM -0500, Steven Rostedt wrote:
> > > On Fri, 21 Feb 2020 16:35:41 +0100
> > > Jiri Olsa <jolsa@kernel.org> wrote:
> > >   
> > > > When commit 65fd07df3588 added preempt_lazy_count into 'struct trace_entry',
> > > > it did not add the 4 bytes of padding. We also need to update the common
> > > > fields for tracepoints; otherwise some tools (bpftrace) stop working due to
> > > > the missing common fields.
> > > > 
> > > > Fixes: 65fd07df3588 ("x86: Support for lazy preemption")
> > > > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > > > ---
> > > >  include/linux/trace_events.h | 2 ++
> > > >  kernel/trace/trace_events.c  | 3 +++
> > > >  2 files changed, 5 insertions(+)
> > > > 
> > > > diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
> > > > index f3b1ef07e4a5..51a3f5188923 100644
> > > > --- a/include/linux/trace_events.h
> > > > +++ b/include/linux/trace_events.h
> > > > @@ -65,6 +65,8 @@ struct trace_entry {
> > > >  	unsigned short		migrate_disable;
> > > >  	unsigned short		padding;
> > > >  	unsigned char		preempt_lazy_count;
> > > > +	unsigned char		padding1;
> > > > +	unsigned short		padding2;  
> > > 
> > > 
> > > Wait! I don't have these changes in my tree, nor do I see them in
> > > Linus's. This really bloats the trace events! This header is very
> > > sensitive to size, and just willy-nilly adding to it is unacceptable;
> > > it's like adding to struct page. This gets added to *every* event, so a
> > > single added byte costs 1MB extra for a million events (very common in
> > > tracing), and 1GB extra for a billion events.  
> > 
> > I'm on top of:
> >   git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git
> >   v5.4.19-rt11-rebase
> > 
> 
> Ug, I now see it:
> 
> struct trace_entry {
> 	unsigned short		type;
> 	unsigned char		flags;
> 	unsigned char		preempt_count;
> 	int			pid;
> 	unsigned short		migrate_disable;
> 	unsigned short		padding;
> 	unsigned char		preempt_lazy_count;
> };
> 
> Which adds a ton of bloat.
> 
> > > 
> > > Let's find a better way to handle this.  
> > 
> > I can fix the bpftrace tool I guess, through it's not
> > so convenient the way it's used in it
> 
> Not as inconvenient as dropping events due to wasted space in the ring
> buffer. Note, this is attached to function tracing events. Any increase
> here will cause more function events to be dropped.

sure, I'll probably fix it anyway, but there might be other broken tools ;-)

libtraceevent/perf is actually OK with this, probably following the
offsets and sizes directly.. actually, bpftrace might be a special case
because it creates a C struct out of the fields, so there's a gap between
the common fields and the rest of the fields

jirka

> 
> Why is migrate_disable a short? Is there going to be more than 256
> levels of nesting?
> 
> -- Steve
> 



* Re: [PATCH] tracing: Fix trace entry and trace common fields for preempt_lazy_count
  2020-02-21 16:21     ` Steven Rostedt
  2020-02-21 16:29       ` Jiri Olsa
@ 2020-02-21 16:37       ` Sebastian Andrzej Siewior
  2020-02-21 17:44         ` [PATCH RT] tracing: make preempt_lazy and migrate_disable counter smaller Sebastian Andrzej Siewior
  1 sibling, 1 reply; 15+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-02-21 16:37 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Jiri Olsa, Jiri Olsa, linux-rt-users, Thomas Gleixner,
	Juri Lelli, Arnaldo Carvalho de Melo

On 2020-02-21 11:21:52 [-0500], Steven Rostedt wrote:
…
> struct trace_entry {
> 	unsigned short		type;
> 	unsigned char		flags;
> 	unsigned char		preempt_count;
> 	int			pid;
> 	unsigned short		migrate_disable;
> 	unsigned short		padding;
> 	unsigned char		preempt_lazy_count;
> };
> 
> Which adds a ton of bloat.
> Why is migrate_disable a short? Is there going to be more than 256
> levels of nesting?

I don't think so. We go above 10 now and then, and the trace file shows
only one digit, so I think migrate_disable can use a byte field
here.
So we make migrate_disable and preempt_lazy_count a `char' type and keep
the `short' padding afterwards?

> -- Steve

Sebastian


* Re: [PATCH] tracing: Fix trace entry and trace common fields for preempt_lazy_count
  2020-02-21 16:29       ` Jiri Olsa
@ 2020-02-21 16:50         ` Steven Rostedt
  0 siblings, 0 replies; 15+ messages in thread
From: Steven Rostedt @ 2020-02-21 16:50 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Jiri Olsa, linux-rt-users, Thomas Gleixner, Juri Lelli,
	Sebastian Sewior, Arnaldo Carvalho de Melo

On Fri, 21 Feb 2020 17:29:44 +0100
Jiri Olsa <jolsa@redhat.com> wrote:

> sure, I'll probably fix it anyway, but there might be other broken tools ;-)
> 
> libtraceevent/perf is actually OK with this, probably following the

Which is the whole point of libtraceevent. Everything should be using
it.

> offsets and sizes directly.. actually, bpftrace might be a special case
> because it creates a C struct out of the fields, so there's a gap between
> the common fields and the rest of the fields

Hmm, hopefully this can be fixed, as I really don't want wasted space
in the ring buffers.

-- Steve



* [PATCH RT] tracing: make preempt_lazy and migrate_disable counter smaller
  2020-02-21 16:37       ` Sebastian Andrzej Siewior
@ 2020-02-21 17:44         ` Sebastian Andrzej Siewior
  2020-02-21 19:51           ` Steven Rostedt
  2020-02-21 20:49           ` Jiri Olsa
  0 siblings, 2 replies; 15+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-02-21 17:44 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Jiri Olsa, Jiri Olsa, linux-rt-users, Thomas Gleixner,
	Juri Lelli, Arnaldo Carvalho de Melo

The migrate_disable counter should not exceed 255 so it is enough to
store it in an 8bit field.
With this change we can move the `preempt_lazy_count' member into the
gap so the whole struct shrinks by 4 bytes to 12 bytes in total.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 include/linux/trace_events.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index f3b1ef07e4a5f..c8b3f06cb5ed7 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -62,9 +62,9 @@ struct trace_entry {
 	unsigned char		flags;
 	unsigned char		preempt_count;
 	int			pid;
-	unsigned short		migrate_disable;
-	unsigned short		padding;
+	unsigned char		migrate_disable;
 	unsigned char		preempt_lazy_count;
+	unsigned short		padding;
 };
 
 #define TRACE_EVENT_TYPE_MAX						\
-- 
2.25.0



* Re: [PATCH RT] tracing: make preempt_lazy and migrate_disable counter smaller
  2020-02-21 17:44         ` [PATCH RT] tracing: make preempt_lazy and migrate_disable counter smaller Sebastian Andrzej Siewior
@ 2020-02-21 19:51           ` Steven Rostedt
  2020-02-21 20:20             ` Sebastian Andrzej Siewior
  2020-02-21 20:49           ` Jiri Olsa
  1 sibling, 1 reply; 15+ messages in thread
From: Steven Rostedt @ 2020-02-21 19:51 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Jiri Olsa, Jiri Olsa, linux-rt-users, Thomas Gleixner,
	Juri Lelli, Arnaldo Carvalho de Melo

On Fri, 21 Feb 2020 18:44:19 +0100
Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:

> diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
> index f3b1ef07e4a5f..c8b3f06cb5ed7 100644
> --- a/include/linux/trace_events.h
> +++ b/include/linux/trace_events.h
> @@ -62,9 +62,9 @@ struct trace_entry {
>  	unsigned char		flags;
>  	unsigned char		preempt_count;
>  	int			pid;
> -	unsigned short		migrate_disable;
> -	unsigned short		padding;
> +	unsigned char		migrate_disable;
>  	unsigned char		preempt_lazy_count;
> +	unsigned short		padding;
>  };

Do we really need the padding here?

-- Steve


* Re: [PATCH RT] tracing: make preempt_lazy and migrate_disable counter smaller
  2020-02-21 19:51           ` Steven Rostedt
@ 2020-02-21 20:20             ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 15+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-02-21 20:20 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Jiri Olsa, Jiri Olsa, linux-rt-users, Thomas Gleixner,
	Juri Lelli, Arnaldo Carvalho de Melo

On 2020-02-21 14:51:48 [-0500], Steven Rostedt wrote:
> On Fri, 21 Feb 2020 18:44:19 +0100
> Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:
> 
> > diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
> > index f3b1ef07e4a5f..c8b3f06cb5ed7 100644
> > --- a/include/linux/trace_events.h
> > +++ b/include/linux/trace_events.h
> > @@ -62,9 +62,9 @@ struct trace_entry {
> >  	unsigned char		flags;
> >  	unsigned char		preempt_count;
> >  	int			pid;
> > -	unsigned short		migrate_disable;
> > -	unsigned short		padding;
> > +	unsigned char		migrate_disable;
> >  	unsigned char		preempt_lazy_count;
> > +	unsigned short		padding;
> >  };
> 
> Do we really need the padding here?

Not really. The two bytes are consumed anyway since the struct is not
__packed; the explicit padding member just makes that more obvious.
However, that padding wasn't used in the past, so I can remove it if you
want.
The struct can't be made __packed to save the additional two bytes, correct?

> -- Steve

Sebastian


* Re: [PATCH RT] tracing: make preempt_lazy and migrate_disable counter smaller
  2020-02-21 17:44         ` [PATCH RT] tracing: make preempt_lazy and migrate_disable counter smaller Sebastian Andrzej Siewior
  2020-02-21 19:51           ` Steven Rostedt
@ 2020-02-21 20:49           ` Jiri Olsa
  2020-02-24 10:01             ` Sebastian Andrzej Siewior
  1 sibling, 1 reply; 15+ messages in thread
From: Jiri Olsa @ 2020-02-21 20:49 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Steven Rostedt, Jiri Olsa, linux-rt-users, Thomas Gleixner,
	Juri Lelli, Arnaldo Carvalho de Melo

On Fri, Feb 21, 2020 at 06:44:19PM +0100, Sebastian Andrzej Siewior wrote:
> The migrate_disable counter should not exceed 255 so it is enough to
> store it in an 8bit field.
> With this change we can move the `preempt_lazy_count' member into the
> gap so the whole struct shrinks by 4 bytes to 12 bytes in total.
> 
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> ---
>  include/linux/trace_events.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
> index f3b1ef07e4a5f..c8b3f06cb5ed7 100644
> --- a/include/linux/trace_events.h
> +++ b/include/linux/trace_events.h
> @@ -62,9 +62,9 @@ struct trace_entry {
>  	unsigned char		flags;
>  	unsigned char		preempt_count;
>  	int			pid;
> -	unsigned short		migrate_disable;
> -	unsigned short		padding;
> +	unsigned char		migrate_disable;
>  	unsigned char		preempt_lazy_count;
> +	unsigned short		padding;

also we still need to put preempt_lazy_count as common field:

	__common_field(unsigned char,  preempt_lazy_count);

jirka



* Re: [PATCH RT] tracing: make preempt_lazy and migrate_disable counter smaller
  2020-02-21 20:49           ` Jiri Olsa
@ 2020-02-24 10:01             ` Sebastian Andrzej Siewior
  2020-02-24 11:54               ` Jiri Olsa
  0 siblings, 1 reply; 15+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-02-24 10:01 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Steven Rostedt, Jiri Olsa, linux-rt-users, Thomas Gleixner,
	Juri Lelli, Arnaldo Carvalho de Melo

On 2020-02-21 21:49:57 [+0100], Jiri Olsa wrote:
> On Fri, Feb 21, 2020 at 06:44:19PM +0100, Sebastian Andrzej Siewior wrote:
> > The migrate_disable counter should not exceed 255 so it is enough to
> > store it in an 8bit field.
> > With this change we can move the `preempt_lazy_count' member into the
> > gap so the whole struct shrinks by 4 bytes to 12 bytes in total.
> > 
> > Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> > ---
> >  include/linux/trace_events.h | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
> > index f3b1ef07e4a5f..c8b3f06cb5ed7 100644
> > --- a/include/linux/trace_events.h
> > +++ b/include/linux/trace_events.h
> > @@ -62,9 +62,9 @@ struct trace_entry {
> >  	unsigned char		flags;
> >  	unsigned char		preempt_count;
> >  	int			pid;
> > -	unsigned short		migrate_disable;
> > -	unsigned short		padding;
> > +	unsigned char		migrate_disable;
> >  	unsigned char		preempt_lazy_count;
> > +	unsigned short		padding;
> 
> also we still need to put preempt_lazy_count as common field:
> 
> 	__common_field(unsigned char,  preempt_lazy_count);

so I need to update the struct as I did (and drop the padding field like
Steven suggested) and then do the "same" change to
"trace_define_common_fields" ?

> 
> jirka

Sebastian


* Re: [PATCH RT] tracing: make preempt_lazy and migrate_disable counter smaller
  2020-02-24 10:01             ` Sebastian Andrzej Siewior
@ 2020-02-24 11:54               ` Jiri Olsa
  2020-02-24 17:01                 ` [PATCH RT v2] " Sebastian Andrzej Siewior
  0 siblings, 1 reply; 15+ messages in thread
From: Jiri Olsa @ 2020-02-24 11:54 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Steven Rostedt, Jiri Olsa, linux-rt-users, Thomas Gleixner,
	Juri Lelli, Arnaldo Carvalho de Melo

On Mon, Feb 24, 2020 at 11:01:04AM +0100, Sebastian Andrzej Siewior wrote:
> On 2020-02-21 21:49:57 [+0100], Jiri Olsa wrote:
> > On Fri, Feb 21, 2020 at 06:44:19PM +0100, Sebastian Andrzej Siewior wrote:
> > > The migrate_disable counter should not exceed 255 so it is enough to
> > > store it in an 8bit field.
> > > With this change we can move the `preempt_lazy_count' member into the
> > > gap so the whole struct shrinks by 4 bytes to 12 bytes in total.
> > > 
> > > Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> > > ---
> > >  include/linux/trace_events.h | 4 ++--
> > >  1 file changed, 2 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
> > > index f3b1ef07e4a5f..c8b3f06cb5ed7 100644
> > > --- a/include/linux/trace_events.h
> > > +++ b/include/linux/trace_events.h
> > > @@ -62,9 +62,9 @@ struct trace_entry {
> > >  	unsigned char		flags;
> > >  	unsigned char		preempt_count;
> > >  	int			pid;
> > > -	unsigned short		migrate_disable;
> > > -	unsigned short		padding;
> > > +	unsigned char		migrate_disable;
> > >  	unsigned char		preempt_lazy_count;
> > > +	unsigned short		padding;
> > 
> > also we still need to put preempt_lazy_count as common field:
> > 
> > 	__common_field(unsigned char,  preempt_lazy_count);
> 
> so I need to update the struct as I did (and drop the padding field like
> Steven suggested) and then do the "same" change to
> "trace_define_common_fields" ?

yep, AFAICS that's it

thanks,
jirka



* [PATCH RT v2] tracing: make preempt_lazy and migrate_disable counter smaller
  2020-02-24 11:54               ` Jiri Olsa
@ 2020-02-24 17:01                 ` Sebastian Andrzej Siewior
  2020-03-09 12:30                   ` Jiri Olsa
  0 siblings, 1 reply; 15+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-02-24 17:01 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Steven Rostedt, Jiri Olsa, linux-rt-users, Thomas Gleixner,
	Juri Lelli, Arnaldo Carvalho de Melo

The migrate_disable counter should not exceed 255 so it is enough to
store it in an 8bit field.
With this change we can move the `preempt_lazy_count' member into the
gap so the whole struct shrinks by 4 bytes to 12 bytes in total.
Remove the `padding' field, it is not needed.
Update the tracing fields in trace_define_common_fields() (it was
missing the preempt_lazy_count field).

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
v1…v2:
  - drop the `padding' member
  - update the members in trace_define_common_fields() to match struct
    trace_entry.

 include/linux/trace_events.h | 3 +--
 kernel/trace/trace_events.c  | 4 ++--
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index f3b1ef07e4a5f..10bbf986bc1d5 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -62,8 +62,7 @@ struct trace_entry {
 	unsigned char		flags;
 	unsigned char		preempt_count;
 	int			pid;
-	unsigned short		migrate_disable;
-	unsigned short		padding;
+	unsigned char		migrate_disable;
 	unsigned char		preempt_lazy_count;
 };
 
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index accaae59a7626..8bfb187cce65a 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -181,8 +181,8 @@ static int trace_define_common_fields(void)
 	__common_field(unsigned char, flags);
 	__common_field(unsigned char, preempt_count);
 	__common_field(int, pid);
-	__common_field(unsigned short, migrate_disable);
-	__common_field(unsigned short, padding);
+	__common_field(unsigned char, migrate_disable);
+	__common_field(unsigned char, preempt_lazy_count);
 
 	return ret;
 }
-- 
2.25.1



* Re: [PATCH RT v2] tracing: make preempt_lazy and migrate_disable counter smaller
  2020-02-24 17:01                 ` [PATCH RT v2] " Sebastian Andrzej Siewior
@ 2020-03-09 12:30                   ` Jiri Olsa
  0 siblings, 0 replies; 15+ messages in thread
From: Jiri Olsa @ 2020-03-09 12:30 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Steven Rostedt, Jiri Olsa, linux-rt-users, Thomas Gleixner,
	Juri Lelli, Arnaldo Carvalho de Melo

On Mon, Feb 24, 2020 at 06:01:01PM +0100, Sebastian Andrzej Siewior wrote:
> The migrate_disable counter should not exceed 255 so it is enough to
> store it in an 8bit field.
> With this change we can move the `preempt_lazy_count' member into the
> gap so the whole struct shrinks by 4 bytes to 12 bytes in total.
> Remove the `padding' field, it is not needed.
> Update the tracing fields in trace_define_common_fields() (it was
> missing the preempt_lazy_count field).
> 
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> ---
> v1…v2:
>   - drop the `padding' member
>   - update the members in trace_define_common_fields() to match struct
>     trace_entry.

seems OK to me.. not sure we care, but I just wanted to point out
that there's an undescribed gap in the sched_wakeup format
file, and probably in other formats as well:

	# cat /sys/kernel/debug/tracing/events/sched/sched_wakeup/format
	name: sched_wakeup
	ID: 310
	format:
		field:unsigned short common_type;       offset:0;       size:2; signed:0;
		field:unsigned char common_flags;       offset:2;       size:1; signed:0;
		field:unsigned char common_preempt_count;       offset:3;       size:1; signed:0;
		field:int common_pid;   offset:4;       size:4; signed:1;
		field:unsigned char common_migrate_disable;     offset:8;       size:1; signed:0;
		field:unsigned char common_preempt_lazy_count;  offset:9;       size:1; signed:0;

		field:char comm[16];    offset:12;      size:16;        signed:1;
		field:pid_t pid;        offset:28;      size:4; signed:1;
		field:int prio; offset:32;      size:4; signed:1;
		field:int success;      offset:36;      size:4; signed:1;
		field:int target_cpu;   offset:40;      size:4; signed:1;


there's the "common_preempt_lazy_count" field at offset 9 with size 1:
	common_preempt_lazy_count;  offset:9;       size:1;

followed by the "comm" field at offset 12:
	field:char comm[16];    offset:12;      size:16;        signed:1;


which makes a 2-byte gap in between that might confuse some applications
like bpftrace ;-) however, there's still enough data to work around that,
so I'm OK with that

thanks,
jirka


