From: Hao Lee <haolee.swjtu@gmail.com>
To: Michal Hocko <mhocko@suse.com>
Cc: Linux MM <linux-mm@kvack.org>, Johannes Weiner <hannes@cmpxchg.org>,
	vdavydov.dev@gmail.com, Shakeel Butt <shakeelb@google.com>,
	cgroups@vger.kernel.org, LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] mm: reduce spinlock contention in release_pages()
Date: Thu, 25 Nov 2021 11:24:02 +0800
Message-ID: <CA+PpKPmy-u_BxYMCQOFyz78t2+3uM6nR9mQeX+MPyH6H2tOOHA@mail.gmail.com>
In-Reply-To: <YZ5o/VmU59evp65J@dhcp22.suse.cz>

On Thu, Nov 25, 2021 at 12:31 AM Michal Hocko <mhocko@suse.com> wrote:
>
> On Wed 24-11-21 15:19:15, Hao Lee wrote:
> > When several tasks are terminated simultaneously, lots of pages will be
> > released, which can cause severe spinlock contention. Other tasks which
> > are running on the same core will be seriously affected. We can yield
> > the cpu to fix this problem.
>
> How does this actually address the problem? You are effectively losing
> fairness completely.

Got it. Thanks!

> We do batch currently so no single task should be
> able to monopolize the cpu for too long. Why is this not sufficient?

uncharge and unref do benefit from batching, but del_from_lru needs more
time to complete. Several tasks will contend for the spinlock in the loop
if nr is very large. We can observe a transient peak in sys% reflecting
this, and perf also reports that the spinlock slowpath takes too much
time. This scenario is not rare, especially when containers are destroyed
simultaneously, and other latency-critical tasks may be affected by this
problem. So I want to figure out a way to deal with it.

Thanks.
> > > diff --git a/mm/swap.c b/mm/swap.c
> > > index e8c9dc6d0377..91850d51a5a5 100644
> > > --- a/mm/swap.c
> > > +++ b/mm/swap.c
> > > @@ -960,8 +960,14 @@ void release_pages(struct page **pages, int nr)
> > >  		if (PageLRU(page)) {
> > >  			struct lruvec *prev_lruvec = lruvec;
> > >
> > > -			lruvec = folio_lruvec_relock_irqsave(folio, lruvec,
> > > +retry:
> > > +			lruvec = folio_lruvec_tryrelock_irqsave(folio, lruvec,
> > >  									&flags);
> > > +			if (!lruvec) {
> > > +				cond_resched();
> > > +				goto retry;
> > > +			}
> > > +
> > >  			if (prev_lruvec != lruvec)
> > >  				lock_batch = 0;
> > >
> > > --
> > > 2.31.1
>
> --
> Michal Hocko
> SUSE Labs
Thread overview (32+ messages):

2021-11-24 15:19 [PATCH] mm: reduce spinlock contention in release_pages() Hao Lee
2021-11-24 15:57 ` Matthew Wilcox
2021-11-25  3:13 ` Hao Lee
2021-11-24 16:31 ` Michal Hocko
2021-11-25  3:24 ` Hao Lee [this message]
2021-11-25  3:30 ` Matthew Wilcox
2021-11-25  8:02 ` Hao Lee
2021-11-25 10:01 ` Michal Hocko
2021-11-25 12:31 ` Hao Lee
2021-11-25 14:18 ` Michal Hocko
2021-11-26  6:50 ` Hao Lee
2021-11-26 10:46 ` Michal Hocko
2021-11-26 16:26 ` Hao Lee
2021-11-29  8:39 ` Michal Hocko
2021-11-29 13:23 ` Matthew Wilcox
2021-11-29 13:39 ` Michal Hocko
2021-11-25 18:04 ` Matthew Wilcox
2021-11-26  6:54 ` Hao Lee