From: Hugh Dickins <hughd@google.com>
To: Alex Shi <alex.shi@linux.alibaba.com>
Cc: akpm@linux-foundation.org, mgorman@techsingularity.net,
	tj@kernel.org, hughd@google.com, khlebnikov@yandex-team.ru,
	daniel.m.jordan@oracle.com, yang.shi@linux.alibaba.com,
	willy@infradead.org, hannes@cmpxchg.org, lkp@intel.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	cgroups@vger.kernel.org, shakeelb@google.com,
	iamjoonsoo.kim@lge.com, richard.weiyang@gmail.com
Subject: Re: [PATCH v11 00/16] per memcg lru lock
Date: Sun, 7 Jun 2020 21:15:21 -0700 (PDT)
Message-ID: <alpine.LSU.2.11.2006072100390.2001@eggly.anvils>
In-Reply-To: <1590663658-184131-1-git-send-email-alex.shi@linux.alibaba.com>

On Thu, 28 May 2020, Alex Shi wrote:

> This is a new version, based on linux-next.
> 
> Johannes Weiner has suggested:
> "So here is a crazy idea that may be worth exploring:
> 
> Right now, pgdat->lru_lock protects both PageLRU *and* the lruvec's
> linked list.
> 
> Can we make PageLRU atomic and use it to stabilize the lru_lock
> instead, and then use the lru_lock only to serialize list operations?
> ..."
> 
> With the new memcg charge path and this solution, we can isolate LRU
> pages for exclusive access in compaction, page migration, reclaim,
> memcg move_account, huge page split and other scenarios, while keeping
> each page's memcg stable. It then becomes possible to move from per
> node lru locking to per memcg lru locking. As for the
> pagevec_lru_move_fn functions, it is safe to let pages remain on the
> lru list: the lru lock guards them for list integrity.
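>
> A rough sketch of the resulting isolation pattern (simplified, using
> the helper names this series introduces; reference taking and failure
> paths are elided):
>
> 	if (TestClearPageLRU(page)) {
> 		struct lruvec *lruvec;
>
> 		/* whoever clears PG_lru owns the page for isolation */
> 		lruvec = lock_page_lruvec_irq(page);
> 		/* lru_lock now only guards the list manipulation */
> 		del_page_from_lru_list(page, lruvec, page_lru(page));
> 		unlock_page_lruvec_irq(lruvec);
> 	}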
> 
> The patchset includes 3 parts:
> 1. some code cleanup and minor optimization, as preparation.
> 2. use TestClearPageLRU as the precondition for page isolation
>    (as in the sketch above).
> 3. replace the per node lru_lock with a per memcg, per node lru_lock.
> 
> The 3rd part moves the per node lru_lock into the lruvec, giving each
> memcg its own lru_lock on each node. So on a large machine, memcgs no
> longer have to contend on the per node pgdat->lru_lock: each can run
> fast under its own lru_lock, as in the sketch below.
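>
> Roughly, code that walks a list of pages from mixed memcgs uses the
> relock helper from patch 13, which drops and retakes the lock only
> when the next page belongs to a different lruvec. A simplified sketch,
> putting isolated pages back on their lru lists:
>
> 	struct lruvec *lruvec = NULL;
>
> 	while (!list_empty(&list)) {
> 		struct page *page = lru_to_page(&list);
>
> 		list_del(&page->lru);
> 		/* switches locks only when the lruvec changes */
> 		lruvec = relock_page_lruvec_irq(page, lruvec);
> 		SetPageLRU(page);
> 		add_page_to_lru_list(page, lruvec, page_lru(page));
> 	}
> 	if (lruvec)
> 		unlock_page_lruvec_irq(lruvec);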
> 
> Following Daniel Jordan's suggestion, I have run 208 'dd' tasks in 104
> containers on a 2-socket * 26-core * HT box, with a modified version
> of this case:
> https://git.kernel.org/pub/scm/linux/kernel/git/wfg/vm-scalability.git/tree/case-lru-file-readtwice
> 
> With this patchset, readtwice performance increased by about 80%
> across the concurrent containers.
> 
> Thanks to Hugh Dickins and Konstantin Khlebnikov, who both proposed
> this idea 8 years ago, and to the others who gave comments as well:
> Daniel Jordan, Mel Gorman, Shakeel Butt, Matthew Wilcox, etc.
> 
> Thanks for the testing support from Intel 0day and from Rong Chen,
> Fengguang Wu, and Yun Wang. Hugh Dickins also shared his kbuild-swap
> test case. Thanks!
> 
> 
> Alex Shi (14):
>   mm/vmscan: remove unnecessary lruvec adding
>   mm/page_idle: no unlikely double check for idle page counting
>   mm/compaction: correct the comments of compact_defer_shift
>   mm/compaction: rename compact_deferred as compact_should_defer
>   mm/thp: move lru_add_page_tail func to huge_memory.c
>   mm/thp: clean up lru_add_page_tail
>   mm/thp: narrow lru locking
>   mm/memcg: add debug checking in lock_page_memcg
>   mm/lru: introduce TestClearPageLRU
>   mm/compaction: do page isolation first in compaction
>   mm/mlock: reorder isolation sequence during munlock
>   mm/lru: replace pgdat lru_lock with lruvec lock
>   mm/lru: introduce the relock_page_lruvec function
>   mm/pgdat: remove pgdat lru_lock
> 
> Hugh Dickins (2):
>   mm/vmscan: use relock for move_pages_to_lru
>   mm/lru: revise the comments of lru_lock
> 
>  Documentation/admin-guide/cgroup-v1/memcg_test.rst |  15 +-
>  Documentation/admin-guide/cgroup-v1/memory.rst     |   8 +-
>  Documentation/trace/events-kmem.rst                |   2 +-
>  Documentation/vm/unevictable-lru.rst               |  22 +--
>  include/linux/compaction.h                         |   4 +-
>  include/linux/memcontrol.h                         |  92 +++++++++++
>  include/linux/mm_types.h                           |   2 +-
>  include/linux/mmzone.h                             |   6 +-
>  include/linux/page-flags.h                         |   1 +
>  include/linux/swap.h                               |   4 +-
>  include/trace/events/compaction.h                  |   2 +-
>  mm/compaction.c                                    | 104 ++++++++-----
>  mm/filemap.c                                       |   4 +-
>  mm/huge_memory.c                                   |  51 +++++--
>  mm/memcontrol.c                                    |  87 ++++++++++-
>  mm/mlock.c                                         |  93 ++++++------
>  mm/mmzone.c                                        |   1 +
>  mm/page_alloc.c                                    |   1 -
>  mm/page_idle.c                                     |   8 -
>  mm/rmap.c                                          |   2 +-
>  mm/swap.c                                          | 112 ++++----------
>  mm/swap_state.c                                    |   6 +-
>  mm/vmscan.c                                        | 168 +++++++++++----------
>  mm/workingset.c                                    |   4 +-
>  24 files changed, 487 insertions(+), 312 deletions(-)

Hi Alex,

I didn't get to try v10 at all, waited until Johannes's preparatory
memcg swap cleanup was in mmotm; but I have spent a while thrashing
this v11, and can happily report that it is much better than v9 etc:
I believe this memcg lru_lock work will soon be ready for v5.9.

I've not yet found any flaw at the swapping end, but fixes are needed
for isolate_migratepages_block() and mem_cgroup_move_account(): I've
got a series of 4 fix patches to send you (I guess two to fold into
existing patches of yours, and two to keep as separate from me).

I haven't yet written the patch descriptions, will return to that
tomorrow.  I expect you will be preparing a v12 rebased on v5.8-rc1
or v5.8-rc2, and will be able to include these fixes in that.

Tomorrow...
Hugh
