From: Mel Gorman <mgorman@techsingularity.net>
To: Matthew Wilcox <willy@infradead.org>
Cc: Linux-MM <linux-mm@kvack.org>, NeilBrown <neilb@suse.de>,
	Theodore Ts'o <tytso@mit.edu>,
	Andreas Dilger <adilger.kernel@dilger.ca>,
	"Darrick J . Wong" <djwong@kernel.org>,
	Michal Hocko <mhocko@suse.com>,
	Dave Chinner <david@fromorbit.com>,
	Rik van Riel <riel@surriel.com>, Vlastimil Babka <vbabka@suse.cz>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Jonathan Corbet <corbet@lwn.net>,
	Linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [RFC PATCH 0/5] Remove dependency on congestion_wait in mm/
Date: Mon, 20 Sep 2021 13:50:58 +0100
Message-ID: <20210920125058.GI3959@techsingularity.net>
In-Reply-To: <YUhztA8TmplTluyQ@casper.infradead.org>

On Mon, Sep 20, 2021 at 12:42:44PM +0100, Matthew Wilcox wrote:
> On Mon, Sep 20, 2021 at 09:54:31AM +0100, Mel Gorman wrote:
> > This has been lightly tested only and the testing was useless as the
> > relevant code was not executed. The workload configurations I had that
> > used to trigger these corner cases no longer work (yey?) and I'll need
> > to implement a new synthetic workload. If someone is aware of a realistic
> > workload that forces reclaim activity to the point where reclaim stalls
> > then kindly share the details.
> 
> The stereotypical "stalling on I/O" problem is to plug in one of the
> crap USB drives you were given at a trade show and simply
> 	dd if=/dev/zero of=/dev/sdb
> 	sync
> 

The test machines are 1500km away so plugging in a USB stick is not
really an option, but if worst comes to worst I could test it on a
laptop. I considered using the IO controller but I'm not sure that
would throttle background writeback. I dismissed the slow-device
approach for a few reasons though -- the dirtying should be rate-limited
based on the speed of the BDI so it will not necessarily trigger the
condition. It also misses the other interesting cases -- throttling due
to excessive isolation and throttling due to failing to make progress.
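
For completeness, the io.max route I was thinking of boils down to
something like the sketch below. The cgroup name and the 8:0 device
number are made up for illustration, and as noted I am not convinced
this throttles background writeback at all:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        /* Hypothetical cgroup and device; limits write bandwidth for
         * tasks in this cgroup to ~2MB/s via the cgroup v2 io controller.
         */
        const char *path = "/sys/fs/cgroup/reclaim-test/io.max";
        const char *limit = "8:0 wbps=2097152\n";
        int fd = open(path, O_WRONLY);

        if (fd < 0) {
                perror("open io.max");
                return 1;
        }
        if (write(fd, limit, strlen(limit)) < 0)
                perror("write io.max");
        close(fd);
        return 0;
}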

I've prototyped a synthetic case that uses 4..(NR_CPUS*4) workers. One
worker measures mmap/munmap latency. One worker under fio randomly
reads files. The remaining workers are split between fio doing random
write IO on separate files and anonymous memory hogs reading large
mappings every 5 seconds. The aggregate WSS is approximately totalmem*2,
split 60% anon and 40% file-backed (40% chosen to be 2x dirty_ratio).
After a warmup period based on the writeback speed, it runs for 5
minutes per worker count.
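
For reference, the anonymous memory hogs are conceptually something
like the sketch below; it is not the actual implementation and the
mapping size argument is illustrative (the real hogs are sized so the
aggregate WSS works out to roughly totalmem*2):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        /* Mapping size in MB, default 1024; the real test scales this
         * by available memory and the number of hog workers.
         */
        size_t sz = (size_t)(argc > 1 ? atol(argv[1]) : 1024) << 20;
        char *map = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        volatile char sum = 0;
        size_t i;

        if (map == MAP_FAILED) {
                perror("mmap");
                return 1;
        }
        memset(map, 1, sz);             /* fault the mapping in */
        for (;;) {
                /* Re-read every page every 5 seconds to keep it "hot" */
                for (i = 0; i < sz; i += 4096)
                        sum += map[i];
                sleep(5);
        }
}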

The primary metric of "goodness" will be the mmap latency because it
is the smallest worker, it should be able to make quick progress, and
I want to see how much reclaim interferes with it. I'll also be
graphing the throttling times to see which processes get throttled and
for how long.
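
The latency worker is essentially timing mmap+munmap pairs, roughly as
in the sketch below; the 8M mapping size and the sample count are
illustrative rather than the tuned values:

#include <stdio.h>
#include <sys/mman.h>
#include <time.h>

#define MAP_SZ  (8UL << 20)
#define SAMPLES 1000

static long long ts_nsec(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

int main(void)
{
        int i;

        for (i = 0; i < SAMPLES; i++) {
                long long start = ts_nsec();
                void *p = mmap(NULL, MAP_SZ, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

                if (p == MAP_FAILED) {
                        perror("mmap");
                        return 1;
                }
                munmap(p, MAP_SZ);
                /* One latency sample per line for later graphing */
                printf("%d %lld\n", i, ts_nsec() - start);
        }
        return 0;
}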

I was hoping, though, that there was a canonical realistic case that
the FS people use to stress the paths where the allocator fails to
return memory. While my synthetic workload *might* trigger the cases,
I would prefer something that can be used to compare this basic
approach with anything more clever.

Similarly, it would be nice to have a reasonable test case that
phase-changes which memory is hot while there is heavy IO in the
background, to detect whether the hot WSS is being properly protected.
I used to use memcached plus a heavy writer to simulate this, but it's
weak because there is no phase change, so it's poor at evaluating
vmscan.
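
Conceptually, the phase change can be as simple as alternating which
half of a large mapping is hammered, as in the sketch below; the 2G
mapping and 60-second phases are arbitrary values for illustration:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

int main(void)
{
        size_t half = 1UL << 30;        /* two 1G halves */
        char *map = mmap(NULL, 2 * half, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        int phase = 0;

        if (map == MAP_FAILED) {
                perror("mmap");
                return 1;
        }
        for (;;) {
                /* Hammer one half so it becomes the hot WSS, then switch */
                char *hot = map + (size_t)(phase & 1) * half;
                time_t end = time(NULL) + 60;

                while (time(NULL) < end)
                        memset(hot, phase, half);
                phase++;
        }
}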

> You can also set up qemu to have extremely slow I/O performance:
> https://serverfault.com/questions/675704/extremely-slow-qemu-storage-performance-with-qcow2-images
> 

Same problem as the slow USB case -- it only catches one part of the
picture, except now I also have to worry about differences related to
the VM configuration (e.g. pinning virtual CPUs to physical CPUs and
replicating topology). Fine for a functional test, not so fine for
measuring whether the patch is any good performance-wise.

-- 
Mel Gorman
SUSE Labs


