* [PATCH] sched: task_struct: Fill unconditional hole induced by sched_entity
From: Kees Cook @ 2021-09-24 2:54 UTC
To: Peter Zijlstra
Cc: Kees Cook, Ingo Molnar, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Daniel Bristot de Oliveira, linux-kernel, linux-hardening
With struct sched_entity before the other sched entities, its alignment
won't induce a struct hole. This saves 64 bytes in defconfig task_struct:
Before:

...
unsigned int rt_priority; /* 120 4 */

/* XXX 4 bytes hole, try to pack */

/* --- cacheline 2 boundary (128 bytes) --- */
const struct sched_class * sched_class; /* 128 8 */

/* XXX 56 bytes hole, try to pack */

/* --- cacheline 3 boundary (192 bytes) --- */
struct sched_entity se __attribute__((__aligned__(64))); /* 192 448 */
/* --- cacheline 10 boundary (640 bytes) --- */
struct sched_rt_entity rt; /* 640 48 */
struct sched_dl_entity dl __attribute__((__aligned__(8))); /* 688 224 */
/* --- cacheline 14 boundary (896 bytes) was 16 bytes ago --- */

After:

...
unsigned int rt_priority; /* 120 4 */

/* XXX 4 bytes hole, try to pack */

/* --- cacheline 2 boundary (128 bytes) --- */
struct sched_entity se __attribute__((__aligned__(64))); /* 128 448 */
/* --- cacheline 9 boundary (576 bytes) --- */
struct sched_rt_entity rt; /* 576 48 */
struct sched_dl_entity dl __attribute__((__aligned__(8))); /* 624 224 */
/* --- cacheline 13 boundary (832 bytes) was 16 bytes ago --- */
Summary diff:
- /* size: 7040, cachelines: 110, members: 188 */
+ /* size: 6976, cachelines: 109, members: 188 */
Signed-off-by: Kees Cook <keescook@chromium.org>
---
include/linux/sched.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 39039ce8ac4c..27ed1d40028f 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -775,10 +775,10 @@ struct task_struct {
 	int				normal_prio;
 	unsigned int			rt_priority;
 
-	const struct sched_class	*sched_class;
 	struct sched_entity		se;
 	struct sched_rt_entity		rt;
 	struct sched_dl_entity		dl;
+	const struct sched_class	*sched_class;
 
 #ifdef CONFIG_SCHED_CORE
 	struct rb_node			core_node;
--
2.30.2
* Re: [PATCH] sched: task_struct: Fill unconditional hole induced by sched_entity
From: Kees Cook @ 2021-10-06 4:48 UTC
To: Peter Zijlstra, Ingo Molnar, Thomas Gleixner, Borislav Petkov
Cc: x86, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman,
Daniel Bristot de Oliveira, linux-kernel, linux-hardening
On Thu, Sep 23, 2021 at 07:54:50PM -0700, Kees Cook wrote:
> With struct sched_entity before the other sched entities, its alignment
> won't induce a struct hole. This saves 64 bytes in defconfig task_struct:
Friendly ping. Can someone snag this for -tip please?
Thanks!
-Kees
>
> Before:
> ...
> unsigned int rt_priority; /* 120 4 */
>
> /* XXX 4 bytes hole, try to pack */
>
> /* --- cacheline 2 boundary (128 bytes) --- */
> const struct sched_class * sched_class; /* 128 8 */
>
> /* XXX 56 bytes hole, try to pack */
>
> /* --- cacheline 3 boundary (192 bytes) --- */
> struct sched_entity se __attribute__((__aligned__(64))); /* 192 448 */
> /* --- cacheline 10 boundary (640 bytes) --- */
> struct sched_rt_entity rt; /* 640 48 */
> struct sched_dl_entity dl __attribute__((__aligned__(8))); /* 688 224 */
> /* --- cacheline 14 boundary (896 bytes) was 16 bytes ago --- */
>
> After:
> ...
> unsigned int rt_priority; /* 120 4 */
>
> /* XXX 4 bytes hole, try to pack */
>
> /* --- cacheline 2 boundary (128 bytes) --- */
> struct sched_entity se __attribute__((__aligned__(64))); /* 128 448 */
> /* --- cacheline 9 boundary (576 bytes) --- */
> struct sched_rt_entity rt; /* 576 48 */
> struct sched_dl_entity dl __attribute__((__aligned__(8))); /* 624 224 */
> /* --- cacheline 13 boundary (832 bytes) was 16 bytes ago --- */
>
> Summary diff:
> - /* size: 7040, cachelines: 110, members: 188 */
> + /* size: 6976, cachelines: 109, members: 188 */
>
> Signed-off-by: Kees Cook <keescook@chromium.org>
> ---
> include/linux/sched.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 39039ce8ac4c..27ed1d40028f 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -775,10 +775,10 @@ struct task_struct {
> int normal_prio;
> unsigned int rt_priority;
>
> - const struct sched_class *sched_class;
> struct sched_entity se;
> struct sched_rt_entity rt;
> struct sched_dl_entity dl;
> + const struct sched_class *sched_class;
>
> #ifdef CONFIG_SCHED_CORE
> struct rb_node core_node;
> --
> 2.30.2
>
--
Kees Cook
* Re: [PATCH] sched: task_struct: Fill unconditional hole induced by sched_entity
From: Peter Zijlstra @ 2021-10-06 9:27 UTC
To: Kees Cook
Cc: Ingo Molnar, Thomas Gleixner, Borislav Petkov, x86, Juri Lelli,
Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
Mel Gorman, Daniel Bristot de Oliveira, linux-kernel,
linux-hardening
On Tue, Oct 05, 2021 at 09:48:51PM -0700, Kees Cook wrote:
> On Thu, Sep 23, 2021 at 07:54:50PM -0700, Kees Cook wrote:
> > With struct sched_entity before the other sched entities, its alignment
> > won't induce a struct hole. This saves 64 bytes in defconfig task_struct:
>
> Friendly ping. Can someone snag this for -tip please?
Hurpmf... if only we had like perf driven pahole output :/
Picked it up, we'll see what if anything hurts.
* Re: [PATCH] sched: task_struct: Fill unconditional hole induced by sched_entity
From: Kees Cook @ 2021-10-06 16:31 UTC
To: Peter Zijlstra
Cc: Ingo Molnar, Thomas Gleixner, Borislav Petkov, x86, Juri Lelli,
Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
Mel Gorman, Daniel Bristot de Oliveira, linux-kernel,
linux-hardening
On Wed, Oct 06, 2021 at 11:27:03AM +0200, Peter Zijlstra wrote:
> On Tue, Oct 05, 2021 at 09:48:51PM -0700, Kees Cook wrote:
> > On Thu, Sep 23, 2021 at 07:54:50PM -0700, Kees Cook wrote:
> > > With struct sched_entity before the other sched entities, its alignment
> > > won't induce a struct hole. This saves 64 bytes in defconfig task_struct:
> >
> > Friendly ping. Can someone snag this for -tip please?
>
> Hurpmf... if only we had like perf driven pahole output :/
Normally I wouldn't even make suggestions in task_struct given the high
variability due to CONFIG options, but this case was pretty universal.
> Picked it up, we'll see what if anything hurts.
Thanks! *fingers crossed*
--
Kees Cook