From: Suren Baghdasaryan <surenb@google.com>
To: Minchan Kim <minchan@kernel.org>
Cc: akpm@linux-foundation.org, hannes@cmpxchg.org, mhocko@suse.com,
peterz@infradead.org, guro@fb.com, shakeelb@google.com,
timmurray@google.com, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, kernel-team@android.com
Subject: Re: [PATCH 1/1] mm: count time in drain_all_pages during direct reclaim as memory pressure
Date: Sun, 20 Feb 2022 08:52:38 -0800
Message-ID: <CAJuCfpF6xDzxU7JHva34F_PRwm9qXJa7a98OEuWfwJ21cMJe-Q@mail.gmail.com>
In-Reply-To: <YhGN7nhqRMuEC5Rg@google.com>
On Sat, Feb 19, 2022 at 4:40 PM Minchan Kim <minchan@kernel.org> wrote:
>
> On Sat, Feb 19, 2022 at 09:49:40AM -0800, Suren Baghdasaryan wrote:
> > When page allocation in direct reclaim path fails, the system will
> > make one attempt to shrink per-cpu page lists and free pages from
> > high alloc reserves. Draining per-cpu pages into the buddy allocator can
> > be a very slow operation because it's done using workqueues and the
> > task in direct reclaim waits for all of them to finish before
>
> Yes, drain_all_pages is seriously slow (100ms - 150ms on Android),
> especially when CPUs are fully packed. It was also spotted in CMA
> allocation even when there was no memory pressure.
Thanks for the input, Minchan!
In my tests I've seen 50-60ms delays in a single drain_all_pages call,
but I can imagine there are cases worse than these.
>
> > proceeding. Currently this time is not accounted as psi memory stall.
>
> Good spot.
>
> >
> > While testing mobile devices under extreme memory pressure, when
> > allocations are failing during direct reclaim, we noticed that psi
> > events which would be expected in such conditions were not triggered.
> > After profiling these cases it was determined that the reason for
> > missing psi events was that a big chunk of time spent in direct
> > reclaim is not accounted as memory stall, therefore psi would not
> > reach the levels at which an event is generated. Further investigation
> > revealed that the bulk of that unaccounted time was spent inside
> > the drain_all_pages call.
> >
> > Annotate drain_all_pages and unreserve_highatomic_pageblock during
> > page allocation failure in the direct reclaim path so that delays
> > caused by these calls are accounted as memory stall.
> >
> > Reported-by: Tim Murray <timmurray@google.com>
> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > ---
> > mm/page_alloc.c | 4 ++++
> > 1 file changed, 4 insertions(+)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 3589febc6d31..7fd0d392b39b 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -4639,8 +4639,12 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
> > * Shrink them and try again
> > */
> > if (!page && !drained) {
> > + unsigned long pflags;
> > +
> > + psi_memstall_enter(&pflags);
> > unreserve_highatomic_pageblock(ac, false);
> > drain_all_pages(NULL);
> > + psi_memstall_leave(&pflags);
>
> Instead of annotating the specific drain_all_pages call, how about
> moving the annotation from __perform_reclaim to
> __alloc_pages_direct_reclaim?
I'm fine with that approach too. Let's wait for Johannes' input before
I make any changes.
Thanks,
Suren.