From: Matthew Wilcox <willy@infradead.org>
To: Dave Chinner <david@fromorbit.com>
Cc: Mel Gorman <mgorman@techsingularity.net>, Jan Kara <jack@suse.cz>,
	Michal Hocko <mhocko@kernel.org>,
	lsf-pc@lists.linux-foundation.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, Mel Gorman <mgorman@suse.de>
Subject: Re: [Lsf-pc] [LSF/MM TOPIC] Congestion
Date: Thu, 6 Feb 2020 16:08:53 -0800
Message-ID: <20200207000853.GD8731@bombadil.infradead.org>
In-Reply-To: <20200206231928.GA21953@dread.disaster.area>

On Fri, Feb 07, 2020 at 10:19:28AM +1100, Dave Chinner wrote:
> But detecting an abundance of dirty pages/inodes on the LRU doesn't
> really solve the problem of determining if and/or how long we should
> wait for IO before we try to free more objects. There is no problem
> with having lots of dirty pages/inodes on the LRU as long as the IO
> subsystem keeps up with the rate at which reclaim is asking them to
> be written back via async mechanisms (bdi writeback, metadata
> writeback, etc).
> 
> The problem comes when we cannot make efficient progress cleaning
> pages/inodes on the LRU because the IO subsystem is overloaded and
> cannot clean pages/inodes any faster. At this point, we have to wait
> for the IO subsystem to make progress and without feedback from the
> IO subsystem, we have no idea how fast that progress is made. Hence
> we have no idea how long we need to wait before trying to reclaim
> again. i.e. the answer can be different depending on hardware
> behaviour, not just the current instantaneous reclaim and IO state.
> 
> That's the fundamental problem we need to solve, and realistically
> it can only be done with some level of feedback from the IO
> subsystem.
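
To make the shape of that concrete, here's a toy userspace model of
feedback-driven waiting -- not kernel code, and every name in it is
invented purely for illustration.  The "writeback" thread signals a
condition variable each time it actually cleans a batch, and the
"reclaim" thread waits on that signal rather than sleeping for a
guessed interval, so the wait adapts to however fast the hardware
really is:

/*
 * Toy userspace model of feedback-driven reclaim throttling.
 * Not kernel code -- all names here are hypothetical.
 *
 * The "writeback" thread cleans pages at whatever rate the (simulated)
 * device allows, bumps a progress counter under a lock and signals a
 * condition variable.  The "reclaim" thread, on finding dirty pages,
 * waits for the counter to advance instead of sleeping for a guessed
 * interval.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  io_progress = PTHREAD_COND_INITIALIZER;
static unsigned long   pages_cleaned;   /* feedback from the "IO subsystem" */
static unsigned long   nr_dirty = 1000; /* pages still needing writeback */

static void *writeback_thread(void *arg)
{
	(void)arg;
	while (1) {
		usleep(20000);          /* simulated, variable device latency */
		pthread_mutex_lock(&lock);
		if (nr_dirty == 0) {
			pthread_mutex_unlock(&lock);
			break;
		}
		nr_dirty -= 50;         /* cleaned a batch */
		pages_cleaned += 50;
		pthread_cond_broadcast(&io_progress);   /* the feedback */
		pthread_mutex_unlock(&lock);
	}
	return NULL;
}

/* Reclaim: wait for the IO side to report progress, not for a timer. */
static void reclaim_wait_for_io(void)
{
	pthread_mutex_lock(&lock);
	unsigned long seen = pages_cleaned;
	while (nr_dirty > 0 && pages_cleaned == seen)
		pthread_cond_wait(&io_progress, &lock);
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	pthread_t wb;

	pthread_create(&wb, NULL, writeback_thread, NULL);
	while (1) {
		pthread_mutex_lock(&lock);
		unsigned long dirty = nr_dirty;
		pthread_mutex_unlock(&lock);
		if (dirty == 0)
			break;
		printf("reclaim: %lu dirty, waiting on IO feedback\n", dirty);
		reclaim_wait_for_io();
	}
	pthread_join(wb, NULL);
	printf("reclaim: done, %lu pages cleaned\n", pages_cleaned);
	return 0;
}

Obviously the real question is what the kernel-side signal would be
and who raises it; the point of the toy is only that the wait is tied
to actual progress rather than to a guessed interval.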

Dave's point about needing feedback from the IO subsystem triggered a
memory for me.  Jeremy Kerr presented a paper at LCA 2006 on a different
model, where the device driver pulls dirty things from the VM rather
than having the VM push dirty things to the device driver.  It was
prototyped in K42 rather than Linux, but the idea might be useful.

http://jk.ozlabs.org/projects/k42/
http://jk.ozlabs.org/projects/k42/device-driven-IO-lca06.pdf
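
I haven't re-read the paper, so the sketch below is only my guess at
the shape of the idea, with invented names and a trivial simulation
standing in for real hardware -- it is not the actual K42 interface.
The inversion is that the driver asks the VM for more dirty data
whenever its queue has room, so the device's real throughput sets the
pace and the VM never has to guess how long to wait:

/*
 * Rough sketch of a pull model -- not the K42 interface; every name
 * here is invented for illustration.
 *
 * Instead of the VM pushing writeback requests at the driver, the
 * driver asks the VM for more dirty pages whenever its hardware queue
 * has free slots.
 */
#include <stdio.h>
#include <stddef.h>

#define QUEUE_DEPTH 4

struct dirty_page {
	unsigned long pfn;
};

/* VM side: hand out up to 'max' dirty pages; returns how many it gave. */
static size_t vm_pull_dirty(struct dirty_page *batch, size_t max)
{
	static unsigned long next_pfn = 100, remaining = 10;
	size_t i;

	for (i = 0; i < max && remaining > 0; i++, remaining--)
		batch[i].pfn = next_pfn++;
	return i;
}

int main(void)
{
	struct dirty_page queue[QUEUE_DEPTH];

	/* Driver side: refill the queue only when there is room for it. */
	for (;;) {
		size_t got = vm_pull_dirty(queue, QUEUE_DEPTH);

		if (got == 0)
			break;          /* nothing dirty left to write */
		for (size_t i = 0; i < got; i++)
			printf("driver: writing pfn %lu\n", queue[i].pfn);
	}
	return 0;
}

Whether that maps cleanly onto bdi writeback and the shrinkers is
another question, but it does sidestep the "how long do we sleep"
problem entirely, since nothing on the VM side ever waits on a timer.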

