From: Hao Lee <haolee.swjtu@gmail.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>, Linux MM <linux-mm@kvack.org>,
	Johannes Weiner <hannes@cmpxchg.org>,
	vdavydov.dev@gmail.com, Shakeel Butt <shakeelb@google.com>,
	cgroups@vger.kernel.org, LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] mm: reduce spinlock contention in release_pages()
Date: Thu, 25 Nov 2021 08:02:38 +0000	[thread overview]
Message-ID: <20211125080238.GA7356@haolee.io> (raw)
In-Reply-To: <YZ8DZHERun6Fej2P@casper.infradead.org>

On Thu, Nov 25, 2021 at 03:30:44AM +0000, Matthew Wilcox wrote:
> On Thu, Nov 25, 2021 at 11:24:02AM +0800, Hao Lee wrote:
> > On Thu, Nov 25, 2021 at 12:31 AM Michal Hocko <mhocko@suse.com> wrote:
> > > We do batch currently so no single task should be
> > > able to monopolize the cpu for too long. Why this is not sufficient?
> > 
> > uncharge and unref indeed take advantage of the batch process, but
> > del_from_lru needs more time to complete. Several tasks will contend
> > spinlock in the loop if nr is very large.
> 
> Is SWAP_CLUSTER_MAX too large?  Or does your architecture's spinlock
> implementation need to be fixed?
> 

My test server is x86_64 running 5.16-rc2, so the spinlock
implementation should be the normal queued spinlock.

I think lock_batch is not the point. lock_batch only breaks the
spinning time into smaller pieces; it doesn't reduce the total
spinning time. Things may even get worse if lock_batch is very small.
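
For reference, the batching I mean is the lock_batch logic in
release_pages(). Below is an abridged sketch of that function (written
from memory of mm/swap.c in 5.16, with the refcount and special-page
handling trimmed), just to show where the lock is dropped and
re-taken:

/*
 * Abridged sketch of release_pages() (mm/swap.c, 5.16), from memory;
 * treat it as an illustration, not an exact quote.
 */
void release_pages(struct page **pages, int nr)
{
        int i;
        LIST_HEAD(pages_to_free);
        struct lruvec *lruvec = NULL;
        unsigned long flags;
        unsigned int lock_batch;

        for (i = 0; i < nr; i++) {
                struct page *page = pages[i];
                struct folio *folio = page_folio(page);

                /*
                 * Drop the irq-disabled lru_lock every SWAP_CLUSTER_MAX
                 * pages so a single caller doesn't hold it for too long.
                 */
                if (lruvec && ++lock_batch == SWAP_CLUSTER_MAX) {
                        unlock_page_lruvec_irqrestore(lruvec, flags);
                        lruvec = NULL;
                }

                /* ... refcount and special-page checks trimmed ... */

                if (PageLRU(page)) {
                        struct lruvec *prev_lruvec = lruvec;

                        /* this is where we may spin on a contended lock */
                        lruvec = folio_lruvec_relock_irqsave(folio, lruvec,
                                                             &flags);
                        if (prev_lruvec != lruvec)
                                lock_batch = 0;

                        del_page_from_lru_list(page, lruvec);
                        __clear_page_lru_flags(page);
                }

                list_add(&page->lru, &pages_to_free);
        }
        if (lruvec)
                unlock_page_lruvec_irqrestore(lruvec, flags);

        mem_cgroup_uncharge_list(&pages_to_free);
        free_unref_page_list(&pages_to_free);
}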

Here is an example of two tasks contending for the spinlock. Let's
assume each task needs a total of 4 seconds in the critical section to
complete its work.

Example 1:

lock_batch = x

task A      task B
hold 4s     wait 4s
            hold 4s

total waiting time is 4s.



Example 2:

lock_batch = x/2

task A      task B
hold 2s     wait 2s
wait 2s     hold 2s
hold 2s     wait 2s
            hold 2s

total waiting time is 6s.
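
To make the arithmetic explicit, here is a toy model of the two
examples: n tasks each need total_s seconds under the lock, each
task's work is split into 'chunks' pieces, and a FIFO (queued) lock is
assumed to hand the lock out round-robin. This is plain userspace C
just to check the numbers, nothing to do with the kernel code:

#include <stdio.h>

static double total_wait(int n, double total_s, int chunks)
{
        double piece = total_s / chunks;
        double wait = 0.0;
        int holds = n * chunks;
        int j, t;

        for (j = 0; j < holds; j++) {
                int holder = j % n;     /* round-robin order */

                for (t = 0; t < n; t++) {
                        /* index of task t's final hold in the sequence */
                        int last = (chunks - 1) * n + t;

                        /*
                         * t waits while someone else holds, but only if
                         * it still has work left to do.
                         */
                        if (t != holder && last > j)
                                wait += piece;
                }
        }
        return wait;
}

int main(void)
{
        /* matches Example 1: 4s total waiting time */
        printf("1 chunk : %.0fs\n", total_wait(2, 4.0, 1));
        /* matches Example 2: 6s total waiting time */
        printf("2 chunks: %.0fs\n", total_wait(2, 4.0, 2));
        return 0;
}

The point is that splitting the hold time increases the number of
hold/wait transitions every other task has to sit through, so the
aggregate waiting time grows even though no single hold is long.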


The theoretical example above is also borne out by a test using usemem.

# ./usemem -j 4096 -n 20 10g -s 5

lock_batch=SWAP_CLUSTER_MAX          59.50% native_queued_spin_lock_slowpath
lock_batch=SWAP_CLUSTER_MAX/4        69.95% native_queued_spin_lock_slowpath
lock_batch=SWAP_CLUSTER_MAX/16       82.22% native_queued_spin_lock_slowpath

Nonetheless, enlarging lock_batch doesn't noticeably improve
performance, even though it doesn't make things worse, and it's not a
good idea to hold an irq-disabled spinlock for a long time anyway.


If cond_resched() would break task fairness, the only alternative I
can think of is to do the uncharge and unref when the current task
fails to get the spinlock. This reduces the wasted CPU cycles,
although the performance gain is still small (about 4%). However, this
approach hurts the batch processing in uncharge(). Maybe there is a
better way...

diff --git a/mm/swap.c b/mm/swap.c
index e8c9dc6d0377..8a947f8d0aaa 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -960,8 +960,16 @@ void release_pages(struct page **pages, int nr)
 		if (PageLRU(page)) {
 			struct lruvec *prev_lruvec = lruvec;
 
-			lruvec = folio_lruvec_relock_irqsave(folio, lruvec,
+			lruvec = folio_lruvec_tryrelock_irqsave(folio, lruvec,
 									&flags);
+			if (!lruvec) {
+				mem_cgroup_uncharge_list(&pages_to_free);
+				free_unref_page_list(&pages_to_free);
+				INIT_LIST_HEAD(&pages_to_free);
+				lruvec = folio_lruvec_relock_irqsave(folio,
+							lruvec, &flags);
+			}
+
 			if (prev_lruvec != lruvec)
 				lock_batch = 0;
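
For clarity, folio_lruvec_tryrelock_irqsave() above is a new helper
that doesn't exist yet. Something like the following is what I have in
mind, modeled on folio_lruvec_relock_irqsave() but returning NULL
instead of spinning (a rough sketch only, not the final code):

/*
 * Hypothetical helper, modeled on folio_lruvec_relock_irqsave() but
 * returning NULL instead of spinning when the lru_lock is contended.
 * Rough sketch only; names and details are illustrative.
 */
static inline struct lruvec *folio_lruvec_tryrelock_irqsave(struct folio *folio,
		struct lruvec *locked_lruvec, unsigned long *flags)
{
        struct lruvec *lruvec;

        if (locked_lruvec) {
                if (folio_matches_lruvec(folio, locked_lruvec))
                        return locked_lruvec;

                unlock_page_lruvec_irqrestore(locked_lruvec, *flags);
        }

        lruvec = folio_lruvec(folio);
        if (!spin_trylock_irqsave(&lruvec->lru_lock, *flags))
                return NULL;    /* caller drains its batches, then relocks */

        return lruvec;
}

In the diff above, a NULL return means the lock is busy, so
release_pages() flushes its local pages_to_free batch (losing some
uncharge batching, as mentioned) and then falls back to the normal
blocking relock.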
