From: Suren Baghdasaryan <surenb@google.com>
To: Mel Gorman <mgorman@techsingularity.net>
Cc: Nicolas Saenz Julienne <nsaenzju@redhat.com>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	Michal Hocko <mhocko@kernel.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Linux-MM <linux-mm@kvack.org>
Subject: Re: [RFC PATCH 0/6] Drain remote per-cpu directly
Date: Mon, 25 Apr 2022 23:30:03 -0700
Message-ID: <CAJuCfpG6fU-AfB+_RfRHwq8-i3=H4RF5UmHeuH+70xz8m61k9g@mail.gmail.com>
In-Reply-To: <CAJuCfpH=U9E-46wZgKEYQZVKjw=2qq_KXDRF=tqDBvqpR+BBbg@mail.gmail.com>

On Mon, Apr 25, 2022 at 7:49 PM Suren Baghdasaryan <surenb@google.com> wrote:
>
> On Wed, Apr 20, 2022 at 2:59 AM Mel Gorman <mgorman@techsingularity.net> wrote:
> >
> > This series has the same intent as Nicolas' series "mm/page_alloc: Remote
> > per-cpu lists drain support" -- avoid interference of a high priority
> > task due to a workqueue item draining per-cpu page lists. While many
> > workloads can tolerate a brief interruption, it may cause a real-time
> > task running on a NOHZ_FULL CPU to miss a deadline and, at minimum,
> > the draining is non-deterministic.
> >
> > Currently an IRQ-safe local_lock protects the page allocator per-cpu lists.
> > The local_lock on its own prevents migration and the IRQ disabling protects
> > from corruption due to an interrupt arriving while a page allocation is
> > in progress. The locking is inherently unsafe for remote access unless
> > the CPU is hot-removed.
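> >
> > For illustration, the existing pattern looks roughly like the following
> > fragment (simplified from mm/page_alloc.c of this era; a sketch, not
> > the literal code):
> >
> >	unsigned long flags;
> >	struct per_cpu_pages *pcp;
> >	struct page *page;
> >
> >	/*
> >	 * Pins the task to this CPU and masks IRQs: only the local
> >	 * CPU can safely touch its per-cpu lists.
> >	 */
> >	local_lock_irqsave(&pagesets.lock, flags);
> >	pcp = this_cpu_ptr(zone->per_cpu_pageset);
> >	page = __rmqueue_pcplist(zone, order, migratetype, alloc_flags,
> >				 pcp, list);
> >	local_unlock_irqrestore(&pagesets.lock, flags);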
> >
> > This series adjusts the locking. A spin-lock is added to struct
> > per_cpu_pages to protect the list contents while local_lock_irq continues
> > to prevent migration and IRQ reentry. This allows a remote CPU to safely
> > drain a remote per-cpu list.
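> >
> > Conceptually the result is the following (a sketch of the idea only,
> > not the patches themselves):
> >
> >	struct per_cpu_pages {
> >		spinlock_t lock;	/* new: protects the lists below */
> >		int count;
> >		/* ... */
> >		struct list_head lists[NR_PCP_LISTS];
> >	};
> >
> >	/*
> >	 * Local fast path: local_lock still pins the CPU and prevents
> >	 * IRQ reentry; the new spinlock protects the list contents.
> >	 */
> >	local_lock_irqsave(&pagesets.lock, flags);
> >	pcp = this_cpu_ptr(zone->per_cpu_pageset);
> >	spin_lock(&pcp->lock);
> >	/* ... add or remove pages ... */
> >	spin_unlock(&pcp->lock);
> >	local_unlock_irqrestore(&pagesets.lock, flags);
> >
> >	/* Remote drain: only the target CPU's spinlock is required */
> >	pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
> >	spin_lock(&pcp->lock);
> >	/* ... return the listed pages to the buddy allocator ... */
> >	spin_unlock(&pcp->lock);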
> >
> > This is a partial series. Follow-on work would allow the
> > local_lock_irqsave to be converted to a plain local_lock so that IRQs
> > are not disabled/enabled in most cases. However, there are enough
> > corner cases that it deserves a series of its own, separated by one
> > kernel release, and the priority right now is to avoid interference
> > with high priority tasks.
> >
> > Patch 1 is a cosmetic patch to clarify when page->lru is storing buddy pages
> >         and when it is storing per-cpu pages.
> >
> > Patch 2 shrinks per_cpu_pages to make room for a spin lock. Strictly speaking
> >         this is not necessary but it avoids per_cpu_pages consuming another
> >         cache line.
> >
> > Patch 3 is a preparation patch to avoid code duplication.
> >
> > Patch 4 is a simple micro-optimisation that improves the code flow needed
> >         for a later patch to avoid code duplication.
> >
> > Patch 5 uses a spin_lock to protect the per_cpu_pages contents while still
> >         relying on local_lock to prevent migration, stabilise the pcp
> >         lookup and prevent IRQ reentrancy.
> >
> > Patch 6 remote drains per-cpu pages directly instead of using a workqueue
> >         (a before/after sketch follows).
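> >
> > To make the last step concrete, the before/after for draining is
> > roughly (hedged sketch; setup details such as drain->zone omitted):
> >
> >	/* Before: queue drain work on each CPU with pages, then wait */
> >	for_each_cpu(cpu, &cpus_with_pcps) {
> >		struct pcpu_drain *drain = per_cpu_ptr(&pcpu_drain, cpu);
> >
> >		INIT_WORK(&drain->work, drain_local_pages_wq);
> >		queue_work_on(cpu, mm_percpu_wq, &drain->work);
> >	}
> >	for_each_cpu(cpu, &cpus_with_pcps)
> >		flush_work(&per_cpu_ptr(&pcpu_drain, cpu)->work);
> >
> >	/*
> >	 * After: the draining CPU frees the remote pages itself under
> >	 * pcp->lock, with no workqueue round-trip to be delayed.
> >	 */
> >	for_each_cpu(cpu, &cpus_with_pcps)
> >		drain_pages(cpu);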
>
> This quite possibly solves the issue I was trying to fix in
> https://lore.kernel.org/all/20220225012819.1807147-1-surenb@google.com.
> I will give it a try and see how it looks.

My test shows a sizable improvement in the worst-case drain_all_pages
duration. Before the change I caught cases where a drain_local_pages_wq
item in the workqueue was delayed by 100+ms (not even counting the
drain_local_pages_wq execution time itself). With this patchset the
worst drain_all_pages duration I was able to record was 17ms.
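
One way to take such a measurement is a simple ktime probe around the
call (an illustrative sketch only, not the exact instrumentation used
here):

	ktime_t start = ktime_get();

	drain_all_pages(NULL);	/* NULL: drain the pcp lists of all zones */

	pr_info("drain_all_pages took %lld us\n",
		ktime_us_delta(ktime_get(), start));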

> Thanks!
>
> >
> >  include/linux/mm_types.h |   5 +
> >  include/linux/mmzone.h   |  12 +-
> >  mm/page_alloc.c          | 333 ++++++++++++++++++++++++---------------
> >  3 files changed, 222 insertions(+), 128 deletions(-)
> >
> > --
> > 2.34.1
> >
> >

Thread overview: 21+ messages
2022-04-20  9:59 [RFC PATCH 0/6] Drain remote per-cpu directly Mel Gorman
2022-04-20  9:59 ` [PATCH 1/6] mm/page_alloc: Add page->buddy_list and page->pcp_list Mel Gorman
2022-04-20 20:43   ` Matthew Wilcox
2022-04-21  8:38     ` Mel Gorman
2022-04-20  9:59 ` [PATCH 2/6] mm/page_alloc: Use only one PCP list for THP-sized allocations Mel Gorman
2022-04-20  9:59 ` [PATCH 3/6] mm/page_alloc: Split out buddy removal code from rmqueue into separate helper Mel Gorman
2022-04-20  9:59 ` [PATCH 4/6] mm/page_alloc: Remove unnecessary page == NULL check in rmqueue Mel Gorman
2022-04-20  9:59 ` [PATCH 5/6] mm/page_alloc: Protect PCP lists with a spinlock Mel Gorman
2022-04-20 14:02   ` Hillf Danton
2022-04-20 14:35     ` Nicolas Saenz Julienne
2022-04-26 16:42   ` Nicolas Saenz Julienne
2022-04-26 16:48     ` Vlastimil Babka
2022-04-29  9:13     ` Mel Gorman
2022-04-26 19:24   ` Minchan Kim
2022-04-29  9:05     ` Mel Gorman
2022-04-20  9:59 ` [PATCH 6/6] mm/page_alloc: Remotely drain per-cpu lists Mel Gorman
2022-04-25 22:58 ` [RFC PATCH 0/6] Drain remote per-cpu directly Minchan Kim
2022-04-26 11:06   ` Nicolas Saenz Julienne
2022-04-27 15:21     ` Marcelo Tosatti
2022-04-26  2:49 ` Suren Baghdasaryan
2022-04-26  6:30   ` Suren Baghdasaryan [this message]
