From: Baptiste Lepers <baptiste.lepers@gmail.com>
To: mgorman@techsingularity.net, akpm@linux-foundation.org,
	 dhowells@redhat.com, linux-mm@kvack.org, hannes@cmpxchg.org
Subject: Lock overhead in shrink_inactive_list / Slow page reclamation
Date: Fri, 11 Jan 2019 16:52:17 +1100	[thread overview]
Message-ID: <CABdVr8R2y9B+2zzSAT_Ve=BQCa+F+E9_kVH+C28DGpkeQitiog@mail.gmail.com> (raw)

Hello,

We have a performance issue with the page cache. One of our workloads
spends more than 50% of its time contending on the lru_lock taken by
shrink_inactive_list in mm/vmscan.c.

The workload is simple but stresses the page cache a lot: a big file
is mmaped and multiple threads stream chunks of the file; the chunk
sizes range from a few KB to a few MB. The file is about 1TB and is
stored on a very fast SSD (2.6GB/s bandwidth). Our machine has 64GB of
RAM. We rely on the page cache to cache data, but obviously pages have
to be reclaimed quite often to make room for new data. The workload is
*read only*, so we would expect page reclamation to be fast, but it is
not. In some runs the page cache reclaims pages at only 500-600MB/s.

We have tried to play with fadvise to speed up page reclamation
(e.g., using the POSIX_FADV_DONTNEED flag), but that didn't help.

Increasing the value of SWAP_CLUSTER_MAX to 256UL helped (as suggested
here: https://lkml.org/lkml/2015/7/6/440), but we still spend most of
the time waiting for the page cache to reclaim pages. Increasing the
value beyond 256 doesn't help: shrink_inactive_list never reclaims
more than a few hundred pages at a time. (I don't know why, and I'm
not sure how to profile why this is the case, but I'm willing to spend
time debugging the issue if you have ideas.)
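For completeness, the change we applied is just the constant bump
(SWAP_CLUSTER_MAX is defined in include/linux/swap.h in the kernels we
tried; the exact location may differ across versions):

```diff
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
-#define SWAP_CLUSTER_MAX 32UL
+#define SWAP_CLUSTER_MAX 256UL
```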

Any idea of anything else we could try to speed up page reclamation?

Thanks,
Baptiste.



Thread overview: 10+ messages
2019-01-11  5:52 Baptiste Lepers [this message]
2019-01-11 13:59 ` Lock overhead in shrink_inactive_list / Slow page reclamation Michal Hocko
2019-01-11 17:53   ` Daniel Jordan
2019-01-13 23:12     ` Baptiste Lepers
2019-01-14  7:06       ` Michal Hocko
2019-01-14  7:25         ` Baptiste Lepers
2019-01-14  7:44           ` Michal Hocko
2019-01-14 15:22       ` Kirill Tkhai
