From: Feng Tang <feng.tang@intel.com>
To: Michal Hocko <mhocko@suse.com>
Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com>,
	Waiman Long <longman@redhat.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Shakeel Butt <shakeelb@google.com>,
	Chris Down <chris@chrisdown.name>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Roman Gushchin <guro@fb.com>, Tejun Heo <tj@kernel.org>,
	Vladimir Davydov <vdavydov.dev@gmail.com>,
	Yafang Shao <laoar.shao@gmail.com>,
	LKML <linux-kernel@vger.kernel.org>,
	lkp@lists.01.org, lkp@intel.com, zhengjun.xing@intel.com,
	ying.huang@intel.com
Subject: Re: [LKP] Re: [mm/memcg] bd0b230fe1: will-it-scale.per_process_ops -22.7% regression
Date: Fri, 20 Nov 2020 22:30:12 +0800
Message-ID: <20201120143012.GB103521@shbuild999.sh.intel.com>
In-Reply-To: <20201120131944.GP3200@dhcp22.suse.cz>

On Fri, Nov 20, 2020 at 02:19:44PM +0100, Michal Hocko wrote:
> On Fri 20-11-20 19:44:24, Feng Tang wrote:
> > On Fri, Nov 13, 2020 at 03:34:36PM +0800, Feng Tang wrote:
> > > On Thu, Nov 12, 2020 at 03:16:54PM +0100, Michal Hocko wrote:
> > > > > > > I added one phony page_counter after the union and re-tested; the regression
> > > > > > > dropped to -1.2%. It looks like the regression is caused by the data structure
> > > > > > > layout change.
> > > > > > 
> > > > > > Thanks for double checking. Could you try to cache-align the
> > > > > > page_counter struct? If that helps then we should figure out which counters
> > > > > > clash against each other by adding the alignment between the respective
> > > > > > counters.
> > > > > 
> > > > > We tried the patch below to make 'page_counter' cache-aligned.
> > > > >   
> > > > >   diff --git a/include/linux/page_counter.h b/include/linux/page_counter.h
> > > > >   index bab7e57..9efa6f7 100644
> > > > >   --- a/include/linux/page_counter.h
> > > > >   +++ b/include/linux/page_counter.h
> > > > >   @@ -26,7 +26,7 @@ struct page_counter {
> > > > >    	/* legacy */
> > > > >    	unsigned long watermark;
> > > > >    	unsigned long failcnt;
> > > > >   -};
> > > > >   +} ____cacheline_internodealigned_in_smp;
> > > > >    
> > > > > and with it, the -22.7% performance change turns into a small -1.7%, which
> > > > > confirms the regression is caused by the change in data alignment.
> > > > > 
> > > > > After the patch, the size of 'page_counter' increases from 104 bytes to 128
> > > > > bytes, and the size of 'mem_cgroup' increases from 2880 bytes to 3008
> > > > > bytes (with our kernel config). Another major data structure which
> > > > > contains a 'page_counter' is 'hugetlb_cgroup', whose size would grow
> > > > > from 912 bytes to 1024 bytes.
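> > > > > 
> > > > > (For reference: 104 bytes rounded up to the 64-byte alignment is 128, so
> > > > > each embedded page_counter grows by 24 bytes plus whatever extra padding
> > > > > the alignment forces inside the containing struct.)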
> > > > > 
> > > > > Should we make these page_counters cache-aligned to reduce cacheline conflicts?
> > > > 
> > > > I would rather focus on a more effective mem_cgroup layout. It is very
> > > > likely that we are just stumbling over two counters here.
> > > > 
> > > > Could you try adding cache alignment to the counters after memory and see
> > > > which one makes the difference? I do not expect memsw to be the one,
> > > > because it is used together with the main counter. But who knows, maybe
> > > > the way it crosses the cache line has exactly this effect. Hard to
> > > > tell without other numbers.
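> > > > 
> > > > Something along these lines should be enough for that experiment (a
> > > > hypothetical sketch of the per-member attribute, picking kmem only as an
> > > > example):
> > > > 
> > > >   --- a/include/linux/memcontrol.h
> > > >   +++ b/include/linux/memcontrol.h
> > > >   @@ ... @@ struct mem_cgroup {
> > > >   -	struct page_counter kmem;		/* v1 only */
> > > >   +	struct page_counter kmem ____cacheline_aligned_in_smp;	/* v1 only */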
> > > 
> > > I added some alignment changes around 'memsw', but none of them could
> > > restore the -22.7%. The following log shows how the counters end up
> > > aligned:
> > > 
> > > t1: memcg=0x7cd1000 memory=0x7cd10d0 memsw=0x7cd1140 kmem=0x7cd11b0 tcpmem=0x7cd1220
> > > t2: memcg=0x7cd0000 memory=0x7cd00d0 memsw=0x7cd0140 kmem=0x7cd01c0 tcpmem=0x7cd0230
> > > 
> > > So 'memsw' is cache-aligned in both, but t2's 'kmem' is aligned while
> > > t1's is not.
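> > > 
> > > (Reading the offsets off those addresses: 'memory' is at +0xd0 (208) in
> > > both runs; 'memsw' is at +0x140 (320 = 5 * 64), i.e. on a cacheline
> > > boundary in both; 'kmem' is at +0x1b0 (432, not a multiple of 64) in t1
> > > but at +0x1c0 (448 = 7 * 64) in t2.)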
> > > 
> > > I will check more on the perf data about detailed hotspots.
> > 
> > Some more check updates about it:
> > 
> > Waiman's patch effectively removes one 'struct page_counter' between
> > 'memory' and 'memsw'. And the mem_cgroup layout is:
> > 
> > struct mem_cgroup {
> > 
> > 	...
> > 
> > 	struct page_counter memory;		/* Both v1 & v2 */
> > 
> > 	union {
> > 		struct page_counter swap;	/* v2 only */
> > 		struct page_counter memsw;	/* v1 only */
> > 	};
> > 
> > 	/* Legacy consumer-oriented counters */
> > 	struct page_counter kmem;		/* v1 only */
> > 	struct page_counter tcpmem;		/* v1 only */
> > 
> > 	...
> > 	...
> > 
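> > 	/* MEMCG_PADDING() makes the next field start on a new cache line */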
> > 	MEMCG_PADDING(_pad1_);
> > 
> > 	atomic_t		moving_account;
> > 	struct task_struct	*move_lock_task;
> > 	
> > 	...
> > };
> > 
> > 
> > I did experiments inserting a 'page_counter' between 'memory'
> > and 'MEMCG_PADDING(_pad1_)'. No matter where I put it, the
> > benchmark result recovers from 145K to 185K, which is
> > really confusing, as adding a 'page_counter' right before
> > '_pad1_' doesn't change the cache alignment of any members.
> 
> Have you checked the result of pahole before and after your modification
> to see whether something stands out?

I cannot find anything abnormal. (I attached pahole logs for 2 kernels:
one whose head commit is Waiman's patch, and one that adds a page_counter
before '_pad1_'.)
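
(For anyone who wants to reproduce the dump, something like

  pahole -C mem_cgroup vmlinux
  pahole -C page_counter vmlinux

against a vmlinux built with debug info should show the same layouts.)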

> Btw. is this reproducible on different CPU models?

This is a Haswell 4S platform. I've also tried Cascade Lake 2S and 4S, which
show -7.7% and -4.2% regressions respectively, though the perf data shows a
similar trend.
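
On the real workload, 'perf c2c record' / 'perf c2c report' is probably the
most direct way to confirm which cache lines the processes are bouncing.
For the suspected mechanism in isolation, below is a tiny userspace sketch
(purely illustrative, not memcg code, and it assumes 64-byte cache lines) of
two counters false-sharing one cache line versus sitting on separate lines:

  /* build: gcc -O2 -pthread false_share.c */
  #include <pthread.h>
  #include <stdatomic.h>
  #include <stdio.h>
  #include <time.h>

  #define ITERS 50000000L

  /* two counters packed together, like adjacent page_counter fields */
  static _Alignas(64) struct { atomic_long a, b; } packed_ctr;

  /* same two counters, but 'b' pushed onto its own 64-byte line */
  static _Alignas(64) struct { atomic_long a; char pad[64]; atomic_long b; } padded_ctr;

  static void *hit(void *p)
  {
  	atomic_long *c = p;

  	for (long i = 0; i < ITERS; i++)
  		atomic_fetch_add_explicit(c, 1, memory_order_relaxed);
  	return NULL;
  }

  static double run(atomic_long *a, atomic_long *b)
  {
  	pthread_t t1, t2;
  	struct timespec s, e;

  	clock_gettime(CLOCK_MONOTONIC, &s);
  	pthread_create(&t1, NULL, hit, a);
  	pthread_create(&t2, NULL, hit, b);
  	pthread_join(t1, NULL);
  	pthread_join(t2, NULL);
  	clock_gettime(CLOCK_MONOTONIC, &e);
  	return (e.tv_sec - s.tv_sec) + (e.tv_nsec - s.tv_nsec) / 1e9;
  }

  int main(void)
  {
  	printf("counters on one cache line : %.2fs\n",
  	       run(&packed_ctr.a, &packed_ctr.b));
  	printf("counters on separate lines : %.2fs\n",
  	       run(&padded_ctr.a, &padded_ctr.b));
  	return 0;
  }

The two runs differ only in whether the counters share a line, which is the
same kind of difference the mem_cgroup layout change seems to introduce
between the hot page_counter fields.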

Thanks,
Feng

>
> -- 
> Michal Hocko
> SUSE Labs
