On Wed, Aug 24, 2022 at 11:47 PM Michal Hocko <mhocko@suse.com> wrote:
>
> On Thu 25-08-22 00:05:05, Shakeel Butt wrote:
> > With memcg v2 enabled, memcg->memory.usage is a very hot member for
> > workloads doing memcg charging on multiple CPUs concurrently,
> > particularly network-intensive workloads. In addition, there is false
> > cacheline sharing between memory.usage and memory.high on the charge
> > path. This patch moves usage into a separate cacheline and moves all
> > the read-mostly fields into another separate cacheline.
> >
> > To evaluate the impact of this optimization, we ran the following
> > workload on a 72-CPU machine in a three-level cgroup hierarchy.
> >
> >  $ netserver -6
> >  # 36 instances of netperf with the following params
> >  $ netperf -6 -H ::1 -l 60 -t TCP_SENDFILE -- -m 10K
> >
> > Results (average throughput of netperf):
> >  Without (6.0-rc1)    10482.7 Mbps
> >  With patch           12413.7 Mbps (18.4% improvement)
> >
> > With the patch, the throughput improved by 18.4%.
> >
> > One side-effect of this patch is the increase in the size of struct
> > mem_cgroup. For example, with this patch on a 64-bit build, the size
> > of struct mem_cgroup increased from 4032 bytes to 4416 bytes. However,
> > the additional size is worth the performance improvement. In addition,
> > there are opportunities to reduce the size of struct mem_cgroup, such
> > as deprecating the kmem and tcpmem page counters and better packing.
> >
> > Signed-off-by: Shakeel Butt <shakeelb@google.com>
> > Reported-by: kernel test robot <lkp@intel.com>
> > Reviewed-by: Feng Tang <feng.tang@intel.com>
> > Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
> > Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
>
> Acked-by: Michal Hocko <mhocko@suse.com>
>
> Thanks.
> One nit below.
>
> > ---
> > Changes since v1:
> > - Updated the commit message.
> > - Made struct page_counter cache-aligned.
> >
> >  include/linux/page_counter.h | 35 +++++++++++++++++++++++------------
> >  1 file changed, 23 insertions(+), 12 deletions(-)
> >
> > diff --git a/include/linux/page_counter.h b/include/linux/page_counter.h
> > index 679591301994..78a1c934e416 100644
> > --- a/include/linux/page_counter.h
> > +++ b/include/linux/page_counter.h
> > @@ -3,15 +3,26 @@
> >  #define _LINUX_PAGE_COUNTER_H
> >
> >  #include <linux/atomic.h>
> > +#include <linux/cache.h>
> >  #include <linux/kernel.h>
> >  #include <asm/page.h>
> >
> > +#if defined(CONFIG_SMP)
> > +struct pc_padding {
> > +	char x[0];
> > +} ____cacheline_internodealigned_in_smp;
> > +#define PC_PADDING(name)	struct pc_padding name
> > +#else
> > +#define PC_PADDING(name)
> > +#endif
> > +
> >  struct page_counter {
> > +	/*
> > +	 * Make sure 'usage' does not share cacheline with any other field. The
> > +	 * memcg->memory.usage is a hot member of struct mem_cgroup.
> > +	 */
> >  	atomic_long_t usage;
> > -	unsigned long min;
> > -	unsigned long low;
> > -	unsigned long high;
> > -	unsigned long max;
> > +	PC_PADDING(_pad1_);
> >
> >  	/* effective memory.min and memory.min usage tracking */
> >  	unsigned long emin;
> > @@ -23,18 +34,18 @@ struct page_counter {
> >  	atomic_long_t low_usage;
> >  	atomic_long_t children_low_usage;
> >
> > -	/* legacy */
> >  	unsigned long watermark;
> >  	unsigned long failcnt;
>
> These two are also touched in the charging path, so we could squeeze them
> into the same cacheline as usage.
>
> 0-day machinery has been quite good at hitting noticeable regressions
> anytime we have changed a layout, so let's see what it comes up with
> after this patch ;)

I will try this locally first (after some cleanups) to see if there is
any positive or negative impact and report back here.

> --
> Michal Hocko
> SUSE Labs
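
[Editor's note on the padding idiom in the patch above: a zero-size struct
member whose type is cacheline-aligned forces the *next* member onto a
fresh cacheline, so the CPUs hammering 'usage' stop invalidating the line
that holds the read-mostly limits. Below is a minimal userspace sketch of
the same trick; the hard-coded 64-byte alignment and all the names here
are illustrative assumptions, while the kernel derives the real alignment
from ____cacheline_internodealigned_in_smp.]

/* pad_demo.c - userspace sketch of the PC_PADDING idiom. Assumes
 * 64-byte cachelines and GCC/Clang extensions (zero-length array,
 * __attribute__((aligned))). */
#include <stdio.h>
#include <stddef.h>

struct pc_padding {
	char x[0];
} __attribute__((aligned(64)));		/* stand-in for the kernel macro */
#define PC_PADDING(name)	struct pc_padding name

struct counter {
	long usage;		/* hot: written on every charge/uncharge */
	PC_PADDING(_pad1_);	/* pushes everything below onto a new line */
	long high;		/* read-mostly limit */
	long max;		/* read-mostly limit */
};

int main(void)
{
	printf("usage at %zu, high at %zu, sizeof %zu\n",
	       offsetof(struct counter, usage),
	       offsetof(struct counter, high),
	       sizeof(struct counter));
	return 0;
}

[On a typical x86-64 build this prints usage at offset 0 and high at
offset 64, i.e. the hot counter and the limits no longer share a
cacheline.]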
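
[To make Michal's nit concrete: since watermark and failcnt are written on
the charge path anyway, they could ride along in the usage cacheline
instead of taking up space among the other fields. The rearrangement below
is a hypothetical sketch of what the suggestion implies, not the actual
follow-up patch; the final upstream ordering may differ.]

/* Sketch only: pack the charge-path writers into one cacheline. */
struct page_counter {
	atomic_long_t usage;		/* hot: every charge/uncharge */
	unsigned long watermark;	/* updated when usage grows */
	unsigned long failcnt;		/* updated when a charge fails */
	PC_PADDING(_pad1_);

	/* effective memory.min and memory.min usage tracking */
	unsigned long emin;
	/* ... the remaining read-mostly fields ... */
};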
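
[Since the whole point of the patch is the field layout, a build-time
guard could catch accidental reshuffles by later patches. A hypothetical
check, not part of the patch itself, might read:]

#ifdef CONFIG_SMP
/* Hypothetical: fail the build if a later edit drags another field
 * back into the cacheline that holds 'usage'. */
static_assert(offsetof(struct page_counter, emin) >= SMP_CACHE_BYTES,
	      "'usage' must not share its cacheline with other fields");
#endif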