From: Michal Hocko <mhocko@kernel.org>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Ye Xiaolong <xiaolong.ye@intel.com>,
	Vladimir Davydov <vdavydov@virtuozzo.com>,
	linux-mm@kvack.org, cgroups@vger.kernel.org,
	linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: Re: [PATCH rebase] mm: fix vm-scalability regression in cgroup-aware workingset code
Date: Mon, 27 Jun 2016 15:05:28 +0200
Message-ID: <20160627130527.GK31799@dhcp22.suse.cz>
In-Reply-To: <20160624175101.GA3024@cmpxchg.org>

[Sorry for the late reply]

On Fri 24-06-16 13:51:01, Johannes Weiner wrote:
> This is a rebased version on top of mmots sans the nodelru stuff.
> 
> ---
> 
> 23047a96d7cf ("mm: workingset: per-cgroup cache thrash detection")
> added a page->mem_cgroup lookup to the cache eviction, refault, and
> activation paths, as well as locking to the activation path, and the
> vm-scalability tests showed a 23% regression. While the test in
> question is an artificial worst-case scenario that doesn't occur in
> real workloads - reading two sparse files in parallel at full CPU
> speed just to hammer the LRU paths - there are still some optimizations
> that can be done in those paths.
> 
> Inline the lookup functions to eliminate calls. Also, page->mem_cgroup
> doesn't need to be stabilized when counting an activation; we merely
> need to hold the RCU lock to prevent the memcg from being freed.
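Just to spell out the idea for anyone reading along - the activation
path then only needs an RCU read section around the lookup, roughly
like this (a sketch of the idea, not the actual diff; the
workingset_activation()/mem_cgroup_zone_lruvec()/inactive_age names
are taken from the current pre-nodelru tree):

	void workingset_activation(struct page *page)
	{
		struct mem_cgroup *memcg;
		struct lruvec *lruvec;

		rcu_read_lock();
		/*
		 * No need to stabilize page->mem_cgroup here; the RCU
		 * read section keeps the memcg from being freed while
		 * we bump the lruvec's inactive_age.
		 */
		memcg = page_memcg_rcu(page);
		if (mem_cgroup_disabled() || memcg) {
			lruvec = mem_cgroup_zone_lruvec(page_zone(page), memcg);
			atomic_long_inc(&lruvec->inactive_age);
		}
		rcu_read_unlock();
	}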
> 
> This cuts down on overhead quite a bit:
> 
> 23047a96d7cfcfca 063f6715e77a7be5770d6081fe
> ---------------- --------------------------
>          %stddev     %change         %stddev
>              \          |                \
>   21621405 +- 0%     +11.3%   24069657 +- 2%  vm-scalability.throughput
> 
> Reported-by: Ye Xiaolong <xiaolong.ye@intel.com>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>

Acked-by: Michal Hocko <mhocko@suse.com>

Minor note below

> +static inline struct mem_cgroup *page_memcg_rcu(struct page *page)
> +{

I guess an rcu_read_lock_held() check would be appropriate here.

> +	return READ_ONCE(page->mem_cgroup);
> +}
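Something like the below is what I have in mind - just a sketch on top
of the hunk above (assuming a WARN_ON_ONCE is acceptable for the check):

	static inline struct mem_cgroup *page_memcg_rcu(struct page *page)
	{
		/* Callers must hold the RCU read lock; warn if they do not. */
		WARN_ON_ONCE(!rcu_read_lock_held());
		return READ_ONCE(page->mem_cgroup);
	}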
-- 
Michal Hocko
SUSE Labs


Thread overview: 4+ messages
2016-06-22 18:20 [PATCH] mm: fix vm-scalability regression in cgroup-aware workingset code Johannes Weiner
2016-06-24 17:51 ` [PATCH rebase] " Johannes Weiner
2016-06-27 13:05   ` Michal Hocko [this message]
2016-07-07 19:40     ` Johannes Weiner
