linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
From: Vlastimil Babka <vbabka@suse.cz>
To: Daniel Jordan <daniel.m.jordan@oracle.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	cgroups@vger.kernel.org
Cc: aaron.lu@intel.com, ak@linux.intel.com,
	akpm@linux-foundation.org, dave.dice@oracle.com,
	dave.hansen@linux.intel.com, hannes@cmpxchg.org,
	levyossi@icloud.com, ldufour@linux.vnet.ibm.com,
	mgorman@techsingularity.net, mhocko@kernel.org,
	Pavel.Tatashin@microsoft.com, steven.sistare@oracle.com,
	tim.c.chen@intel.com, vdavydov.dev@gmail.com,
	ying.huang@intel.com
Subject: Re: [RFC PATCH v2 0/8] lru_lock scalability and SMP list functions
Date: Fri, 19 Oct 2018 13:35:11 +0200	[thread overview]
Message-ID: <2705c814-a6b8-0b14-7ea8-790325833d95@suse.cz> (raw)
In-Reply-To: <20180911004240.4758-1-daniel.m.jordan@oracle.com>

On 9/11/18 2:42 AM, Daniel Jordan wrote:
> Hi,
> 
> This is a work-in-progress of what I presented at LSF/MM this year[0] to
> greatly reduce contention on lru_lock, allowing it to scale on large systems.
> 
> This is completely different from the lru_lock series posted last January[1].
> 
> I'm hoping for feedback on the overall design and general direction as I do
> more real-world performance testing and polish the code.  Is this a workable
> approach?
> 
>                                         Thanks,
>                                           Daniel
> 
> ---
> 
> Summary:  lru_lock can be one of the hottest locks in the kernel on big
> systems.  It guards too much state, so introduce new SMP-safe list functions to
> allow multiple threads to operate on the LRUs at once.  The SMP list functions
> are provided in a standalone API that can be used in other parts of the kernel.
> When lru_lock and zone->lock are both fixed, the kernel can do up to 73.8% more
> page faults per second on a 44-core machine.
> 
> ---
> 
> On large systems, lru_lock can become heavily contended in memory-intensive
> workloads such as decision support, applications that manage their memory
> manually by allocating and freeing pages directly from the kernel, and
> workloads with short-lived processes that force many munmap and exit
> operations.  lru_lock also inhibits scalability in many of the MM paths that
> could be parallelized, such as freeing pages during exit/munmap and inode
> eviction.

Interesting, I would have expected isolate_lru_pages() to be the main
culprit, as the comment says:

 * For pagecache intensive workloads, this function is the hottest
 * spot in the kernel (apart from copy_*_user functions).

It also says "Some of the functions that shrink the lists perform better
by taking out a batch of pages and working on them outside the LRU
lock." Makes me wonder why isolate_lru_pages() also doesn't cut the list
first instead of doing per-page list_move() (and perhaps also prefetch
batch of struct pages outside the lock first? Could be doable with some
care hopefully).

> The problem is that lru_lock is too big of a hammer.  It guards all the LRUs in
> a pgdat's lruvec, needlessly serializing add-to-front, add-to-tail, and delete
> operations that are done on disjoint parts of an LRU, or even completely
> different LRUs.
> 
> This RFC series, developed in collaboration with Yossi Lev and Dave Dice,
> offers a two-part solution to this problem.
> 
> First, three new list functions are introduced to allow multiple threads to
> operate on the same linked list simultaneously under certain conditions, which
> are spelled out in more detail in code comments and changelogs.  The functions
> are smp_list_del, smp_list_splice, and smp_list_add, and do the same things as
> their non-SMP-safe counterparts.  These primitives may be used elsewhere in the
> kernel as the need arises; for example, in the page allocator free lists to
> scale zone->lock[2], or in file system LRUs[3].
> 
> Second, lru_lock is converted from a spinlock to a rwlock.  The idea is to
> repurpose rwlock as a two-mode lock, where callers take the lock in shared
> (i.e. read) mode for code using the SMP list functions, and exclusive (i.e.
> write) mode for existing code that expects exclusive access to the LRUs.
> Multiple threads are allowed in under the read lock, of course, and they use
> the SMP list functions to synchronize amongst themselves.
> 
> The rwlock is scaffolding to facilitate the transition from big-hammer lru_lock
> as it exists today to just using the list locking primitives and getting rid of
> lru_lock entirely.  Such an approach allows incremental conversion of lru_lock
> writers until everything uses the SMP list functions and takes the lock in
> shared mode, at which point lru_lock can just go away.

Yeah I guess that will need more care, e.g. I think smp_list_del() can
break any thread doing just a read-only traversal as it can end up with
an entry that's been deleted and its next/prev poisoned. It's a bit
counterintuitive that "read lock" is now enough for selected modify
operations, while read-only traversal would need a write lock.

> This RFC series is incomplete.  More, and more realistic, performance
> numbers are needed; for now, I show only will-it-scale/page_fault1.
> Also, there are extensions I'd like to make to the locking scheme to
> handle certain lru_lock paths--in particular, those where multiple
> threads may delete the same node from an LRU.  The SMP list functions
> now handle only removal of _adjacent_ nodes from an LRU.  Finally, the
> diffstat should become more supportive after I remove some of the code
> duplication in patch 6 by converting the rest of the per-CPU pagevec
> code in mm/swap.c to use the SMP list functions.

  parent reply	other threads:[~2018-10-19 11:35 UTC|newest]

Thread overview: 15+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2018-09-11  0:42 [RFC PATCH v2 0/8] lru_lock scalability and SMP list functions Daniel Jordan
2018-09-11  0:42 ` [RFC PATCH v2 1/8] mm, memcontrol.c: make memcg lru stats thread-safe without lru_lock Daniel Jordan
2018-09-11 16:32   ` Laurent Dufour
2018-09-12 13:28     ` Daniel Jordan
2018-09-11  0:42 ` [RFC PATCH v2 2/8] mm: make zone_reclaim_stat updates thread-safe Daniel Jordan
2018-09-11 16:40   ` Laurent Dufour
2018-09-12 13:30     ` Daniel Jordan
2018-09-11  0:42 ` [RFC PATCH v2 3/8] mm: convert lru_lock from a spinlock_t to a rwlock_t Daniel Jordan
2018-09-11  0:59 ` [RFC PATCH v2 4/8] mm: introduce smp_list_del for concurrent list entry removals Daniel Jordan
2018-09-11  0:59 ` [RFC PATCH v2 5/8] mm: enable concurrent LRU removals Daniel Jordan
2018-09-11  0:59 ` [RFC PATCH v2 6/8] mm: splice local lists onto the front of the LRU Daniel Jordan
2018-09-11  0:59 ` [RFC PATCH v2 7/8] mm: introduce smp_list_splice to prepare for concurrent LRU adds Daniel Jordan
2018-09-11  0:59 ` [RFC PATCH v2 8/8] mm: enable " Daniel Jordan
2018-10-19 11:35 ` Vlastimil Babka [this message]
2018-10-19 15:35   ` [RFC PATCH v2 0/8] lru_lock scalability and SMP list functions Daniel Jordan

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=2705c814-a6b8-0b14-7ea8-790325833d95@suse.cz \
    --to=vbabka@suse.cz \
    --cc=Pavel.Tatashin@microsoft.com \
    --cc=aaron.lu@intel.com \
    --cc=ak@linux.intel.com \
    --cc=akpm@linux-foundation.org \
    --cc=cgroups@vger.kernel.org \
    --cc=daniel.m.jordan@oracle.com \
    --cc=dave.dice@oracle.com \
    --cc=dave.hansen@linux.intel.com \
    --cc=hannes@cmpxchg.org \
    --cc=ldufour@linux.vnet.ibm.com \
    --cc=levyossi@icloud.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=mgorman@techsingularity.net \
    --cc=mhocko@kernel.org \
    --cc=steven.sistare@oracle.com \
    --cc=tim.c.chen@intel.com \
    --cc=vdavydov.dev@gmail.com \
    --cc=ying.huang@intel.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).