From: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
To: Daniel Jordan <daniel.m.jordan@oracle.com>,
	Alex Shi <alex.shi@linux.alibaba.com>,
	cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
	Mel Gorman <mgorman@techsingularity.net>,
	Tejun Heo <tj@kernel.org>, Michal Hocko <mhocko@kernel.org>,
	Hugh Dickins <hughd@google.com>
Subject: Re: [PATCH 00/14] per memcg lru_lock
Date: Mon, 26 Aug 2019 11:39:49 +0300
Message-ID: <d5256ebf-8314-8c24-a7ed-e170b7d39b61@yandex-team.ru>
In-Reply-To: <b776032e-eabb-64ff-8aee-acc2b3711717@oracle.com>

On 22/08/2019 18.20, Daniel Jordan wrote:
> On 8/22/19 7:56 AM, Alex Shi wrote:
>> On 2019/8/22 2:00 AM, Daniel Jordan wrote:
>>>    https://git.kernel.org/pub/scm/linux/kernel/git/wfg/vm-scalability.git/tree/case-lru-file-readtwice
>>> It's also synthetic but it stresses lru_lock more than just anon alloc/free.  It hits the page activate path, which is where we see this 
>>> lock in our database, and if enough memory is configured lru_lock also gets stressed during reclaim, similar to [1].
>>
>> Thanks for sharing. This patchset cannot help the [1] case, since for now it only relieves the per-container lock contention.
> 
> I should've been clearer.  [1] is meant as an example of someone suffering from lru_lock during reclaim.  Wouldn't your series help 
> per-memcg reclaim?
> 
>> Yes, the readtwice case could be more sensitive to these lru_lock changes in containers. I may try it in a container with some tuning.
>> But anyway, aim9 is also pretty good at showing the problem and the solutions. :)
>>>
>>> It'd be better though, as Michal suggests, to use the real workload that's causing problems.  Where are you seeing contention?
>>
>> We repeatedly create and delete a lot of different containers according to server load/usage, so a normal workload can cause lots of page
>> allocations and frees.
> 
> I think numbers from that scenario would help your case.
> 
>> aim9 could reflect part of these scenarios. I don't know the DB scenario yet.
> 
> We see it during DB shutdown when each DB process frees its memory (zap_pte_range -> mark_page_accessed).  But that's a different thing, 
> clearly Not This Series.
> 
>>>> With this patch series, lruvec->lru_lock shows no contention
>>>>           &(&lruvec->lru_l...          8          0               0       0               0               0
>>>>
>>>> and aim9 page_test/brk_test performance increased 5%~50%.
>>>
>>> Where does the 50% number come in?  The numbers below seem to only show ~4% boost.
>> the Stddev/CoeffVar case has about a 50% performance increase. One container's mmtests results follow:
>>
>> Stddev    page_test      245.15 (   0.00%)      189.29 (  22.79%)
>> Stddev    brk_test      1258.60 (   0.00%)      629.16 (  50.01%)
>> CoeffVar  page_test        0.71 (   0.00%)        0.53 (  26.05%)
>> CoeffVar  brk_test         1.32 (   0.00%)        0.64 (  51.14%)
> 
> Aha.  50% decrease in stdev.
> 
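
(For reference, that figure is the brk_test stddev reduction in the
table above: (1258.60 - 629.16) / 1258.60 ≈ 50.01%.)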

After splitting the lru locks, the existing per-cpu page vectors do
not work so well because they mix pages from different cgroups.

pagevec_lru_move_fn and friends need a better implementation:
either sorting pages or splitting the vectors on a per-lruvec basis.
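
Something along these lines, as a rough untested sketch (it assumes
the per-lruvec lru_lock field this series introduces; the move_fn
signature is simplified compared to mainline, and
mem_cgroup_page_lruvec() is the current helper):

/*
 * Walk the pagevec and re-take the lock only when the lruvec changes,
 * so a run of pages from the same cgroup is moved under a single lock
 * acquisition instead of relocking per page.
 */
static void pagevec_lru_move_fn_batched(struct pagevec *pvec,
		void (*move_fn)(struct page *page, struct lruvec *lruvec))
{
	struct lruvec *locked = NULL;
	unsigned long flags = 0;
	int i;

	for (i = 0; i < pagevec_count(pvec); i++) {
		struct page *page = pvec->pages[i];
		struct lruvec *lruvec = mem_cgroup_page_lruvec(page,
							page_pgdat(page));

		/* Drop and re-take the lock only across lruvec boundaries. */
		if (lruvec != locked) {
			if (locked)
				spin_unlock_irqrestore(&locked->lru_lock,
						       flags);
			spin_lock_irqsave(&lruvec->lru_lock, flags);
			locked = lruvec;
		}
		move_fn(page, lruvec);
	}
	if (locked)
		spin_unlock_irqrestore(&locked->lru_lock, flags);

	release_pages(pvec->pages, pagevec_count(pvec));
	pagevec_reinit(pvec);
}

Sorting pvec->pages by lruvec before the walk would maximize the run
lengths; even without sorting, batching helps whenever neighbouring
pages in the vector happen to share a cgroup.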


Thread overview: 43+ messages
2019-08-20  9:48 [PATCH 00/14] per memcg lru_lock Alex Shi
2019-08-20  9:48 ` [PATCH 01/14] mm/lru: move pgdat lru_lock into lruvec Alex Shi
2019-08-20 13:40   ` Matthew Wilcox
2019-08-20 14:11     ` Alex Shi
2019-08-20  9:48 ` [PATCH 02/14] lru/memcg: move the lruvec->pgdat sync out lru_lock Alex Shi
2019-08-20  9:48 ` [PATCH 03/14] lru/memcg: using per lruvec lock in un/lock_page_lru Alex Shi
2019-08-26  8:30   ` Konstantin Khlebnikov
2019-08-26 14:16     ` Alex Shi
2019-08-20  9:48 ` [PATCH 04/14] lru/compaction: use per lruvec lock in isolate_migratepages_block Alex Shi
2019-08-20  9:48 ` [PATCH 05/14] lru/huge_page: use per lruvec lock in __split_huge_page Alex Shi
2019-08-20  9:48 ` [PATCH 06/14] lru/mlock: using per lruvec lock in munlock Alex Shi
2019-08-20  9:48 ` [PATCH 07/14] lru/swap: using per lruvec lock in page_cache_release Alex Shi
2019-08-20  9:48 ` [PATCH 08/14] lru/swap: uer lruvec lock in activate_page Alex Shi
2019-08-20  9:48 ` [PATCH 09/14] lru/swap: uer per lruvec lock in pagevec_lru_move_fn Alex Shi
2019-08-20  9:48 ` [PATCH 10/14] lru/swap: use per lruvec lock in release_pages Alex Shi
2019-08-20  9:48 ` [PATCH 11/14] lru/vmscan: using per lruvec lock in lists shrinking Alex Shi
2019-08-20  9:48 ` [PATCH 12/14] lru/vmscan: use pre lruvec lock in check_move_unevictable_pages Alex Shi
2019-08-20  9:48 ` [PATCH 13/14] lru/vmscan: using per lruvec lru_lock in get_scan_count Alex Shi
2019-08-20  9:48 ` [PATCH 14/14] mm/lru: fix the comments of lru_lock Alex Shi
2019-08-20 14:00   ` Matthew Wilcox
2019-08-20 14:21     ` Alex Shi
2019-08-20 10:45 ` [PATCH 00/14] per memcg lru_lock Michal Hocko
2019-08-20 16:48   ` Shakeel Butt
2019-08-20 18:24     ` Hugh Dickins
2019-08-21  1:21       ` Alex Shi
2019-08-21  2:00       ` Alex Shi
2019-08-24  1:59         ` Hugh Dickins
2019-08-26 14:35           ` Alex Shi
2019-08-21 18:00 ` Daniel Jordan
2019-08-22 11:56   ` Alex Shi
2019-08-22 15:20     ` Daniel Jordan
2019-08-26  8:39       ` Konstantin Khlebnikov [this message]
2019-08-26 14:22         ` Alex Shi
2019-08-26 14:25       ` Alex Shi
