From: Mel Gorman <mgorman@suse.de>
To: Huang Ying <ying.huang@intel.com>
Cc: LKML <linux-kernel@vger.kernel.org>, LKP ML <lkp@01.org>
Subject: Re: [LKP] [mm] 3484b2de949: -46.2% aim7.jobs-per-min
Date: Thu, 5 Mar 2015 10:26:09 +0000
Message-ID: <20150305102609.GS3087@suse.de>
In-Reply-To: <1425533699.6711.48.camel@intel.com>

On Thu, Mar 05, 2015 at 01:34:59PM +0800, Huang Ying wrote:
> Hi, Mel,
> 
> On Sat, 2015-02-28 at 15:30 +0800, Huang Ying wrote:
> > On Sat, 2015-02-28 at 01:46 +0000, Mel Gorman wrote:
> > > On Fri, Feb 27, 2015 at 03:21:36PM +0800, Huang Ying wrote:
> > > > FYI, we noticed the below changes on
> > > > 
> > > > git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
> > > > commit 3484b2de9499df23c4604a513b36f96326ae81ad ("mm: rearrange zone fields into read-only, page alloc, statistics and page reclaim lines")
> > > > 
> > > > The perf cpu-cycles for the spinlock (zone->lock) increased a lot.  I suspect there is some cache-line ping-pong or false sharing.
> > > > 
> > > 
> > > Are you sure about this result? I ran similar tests here and found that
> > > there was a major regression introduced near there but it was commit
> > > 05b843012335 ("mm: memcontrol: use root_mem_cgroup res_counter") that
> > > caused the problem, and it was later reverted.  On local tests on a 4-node
> > > machine, commit 3484b2de9499df23c4604a513b36f96326ae81ad was within 1%
> > > of the previous commit and well within the noise.
> > 
> > After applying the below debug patch, the performance was restored.
> > So I think we can root-cause this regression as a cache-line
> > alignment issue?
> > 
> > If my understanding is correct, after 3484b2de94 the lock and the low
> > address area of free_area sit in the same cache line, so that cache
> > line will bounce between the MESI "E" and "S" states, because it is
> > written by one CPU (allocating pages from free_area) while another
> > CPU frequently reads it (spinning on the lock).
> 
> What do you think about this?
> 
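
For illustration, the layout in question looks roughly like this. It is
a minimal sketch, not the actual struct zone definition and not the
debug patch referenced above (which is not included in this excerpt);
the padding placement via ____cacheline_aligned_in_smp is an assumption
about the shape such a fix would take:

#include <linux/mmzone.h>	/* struct free_area, MAX_ORDER */
#include <linux/spinlock.h>	/* spinlock_t */
#include <linux/cache.h>	/* ____cacheline_aligned_in_smp */

/* Illustrative only -- not the real struct zone. */
struct zone_layout_sketch {
	/* ... read-only and page-alloc fields ... */
	spinlock_t		lock;

	/*
	 * One CPU writes these freelists while holding the lock, while
	 * other CPUs spin-read the adjacent lock word, so a line shared
	 * by both bounces between exclusive and shared MESI states.
	 * Forcing the freelists onto their own cache line avoids the
	 * sharing, at the cost of a larger structure:
	 */
	struct free_area	free_area[MAX_ORDER] ____cacheline_aligned_in_smp;
};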

My attention is occupied by the automatic NUMA regression at the moment,
but I haven't forgotten this. Even with the high client count, I was not
able to reproduce the problem, so it appears to depend on having enough
CPUs stressing the allocator to bypass the per-cpu allocator and contend
heavily on the zone lock. I'm hoping for a better alternative than
adding more padding and increasing the cache footprint of the allocator,
but so far I haven't found one. Moving the lock to the end of the
freelists would probably address the problem, but it still increases the
footprint of order-0 allocations by a cache line.
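
For reference, the alternative floated above would look roughly like
this (again a sketch under the same assumptions and includes as the one
earlier, not a real patch). Moving the lock behind the freelists keeps
it off the cache line holding free_area[0], but an order-0 allocation
must still touch both free_area[0] and the lock, which now lie at
opposite ends of the array, so its cache footprint still grows by a
line:

struct zone_alt_sketch {
	/* ... read-only and page-alloc fields ... */
	struct free_area	free_area[MAX_ORDER];
	spinlock_t		lock;	/* no longer shares a line with free_area[0] */
	/* ... statistics and page-reclaim fields ... */
};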

-- 
Mel Gorman
SUSE Labs

Thread overview: 26+ messages
2015-02-27  7:21 [LKP] [mm] 3484b2de949: -46.2% aim7.jobs-per-min Huang Ying
2015-02-27 11:53 ` Mel Gorman
2015-02-28  1:24   ` Huang Ying
2015-02-28  7:57   ` Huang Ying
2015-02-28  1:46 ` Mel Gorman
2015-02-28  2:30   ` Huang Ying
2015-02-28  2:42     ` Huang Ying
2015-02-28  7:30   ` Huang Ying
2015-03-05  5:34     ` Huang Ying
2015-03-05 10:26       ` Mel Gorman [this message]
2015-03-23  8:46         ` Huang Ying
2015-03-25 10:54           ` Mel Gorman
2015-03-27  8:49             ` Huang Ying
