From: Daniel Phillips <phillips@phunq.net>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org, Peter Zijlstra <peterz@infradead.org>
Subject: Re: [RFC] [PATCH] A clean approach to writeout throttling
Date: Wed, 5 Dec 2007 22:21:44 -0800
Message-ID: <200712052221.45409.phillips@phunq.net>
In-Reply-To: <20071205172446.ea54e4d3.akpm@linux-foundation.org>

On Wednesday 05 December 2007 17:24, Andrew Morton wrote:
> On Wed, 5 Dec 2007 16:03:01 -0800 Daniel Phillips <phillips@phunq.net> wrote:
> > ...a block device these days may not be just a single 
> > device, but may be a stack of devices connected together by a generic 
> > mechanism such as device mapper, or a hardcoded stack such as 
> > multi-disk or network block device.  It is necessary to consider the 
> > resource requirements of the stack as a whole _before_ letting a 
> > transfer proceed into any layer of the stack, otherwise deadlock on 
> > many partially completed transfers becomes a possibility.  For this 
> > reason, the bio throttling is only implemented at the initial, highest 
> > level submission of the bio to the block layer and not for any recursive 
> > submission of the same bio to a lower level block device in a stack.
> > 
> > This in turn has rather far reaching implications: the top level device 
> > in a stack must take care of inspecting the entire stack in order to 
> > determine how to calculate its resource requirements, thus becoming
> > the boss device for the entire stack.  Though this intriguing idea could 
> > easily become the cause of endless design work and many thousands of 
> > lines of fancy code, today I sidestep the question entirely using 
> > the "just provide lots of reserve" strategy.  Horrifying as it may seem 
> > to some, this is precisely the strategy that Linux has used in the 
> > context of resource management in general, from the very beginning and 
> > likely continuing for quite some time into the future.  My strongly held 
> > opinion in this matter is that we need to solve the real, underlying 
> > problems definitively with nice code before declaring the opening of 
> > fancy patch season.  So I am leaving further discussion of automatic 
> > resource discovery algorithms and the like out of this post.
> 
> Rather than asking the stack "how much memory will this request consume"
> you could instead ask "how much memory are you currently using".
> 
> ie: on entry to the stack, do 
> 
> 	current->account_block_allocations = 1;
> 	make_request(...);
> 	rq->used_memory += current->pages_used_for_block_allocations;
> 
> and in the page allocator do
> 
> 	if (!in_interrupt() && current->account_block_allocations)
> 		current->pages_used_for_block_allocations++;
> 
> and then somehow handle deallocation too ;)

Ah, and how do you ensure that you do not deadlock while making this
inquiry?  Perhaps send a dummy transaction down the pipe?  Even so,
deadlock is possible, quite evidently so in the real-life example I have
at hand.

Yours is essentially one of the strategies I had in mind; the other major
one is simply to examine the whole stack, which presupposes some
as-yet-nonexistent kernel-wide method of representing block device
stacks in all their glorious possible topology variations.

> The basic idea being to know in real time how much memory a particular
> block stack is presently using.  Then, on entry to that stack, if the
> stack's current usage is too high, wait for it to subside.

We do not wait for high block device resource usage to subside before
submitting more requests.  The improvement you suggest is aimed at
automatically determining resource requirements by sampling a
running system, rather than requiring a programmer to determine them
arduously by hand.  Something like automatically determining a
workable locking strategy by analyzing running code, wouldn't that be
a treat?  I will hope for one of those under my tree at Christmas.

More practically, I can see a debug mode implemented along the lines
you describe where we automatically detect that a writeout path has
violated its covenant as expressed by its throttle_metric.
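A rough sketch of what that debug check might look like, reusing the
hypothetical per-task counters from your example above and comparing them
against the charge q->metric() computed at submission time (untested, field
names invented):

	/*
	 * Debug-only covenant check (sketch): after the request has been
	 * pushed into the stack, compare the pages it actually caused to be
	 * allocated against the units charged by q->metric() at submit time.
	 */
	current->account_block_allocations = 1;
	current->pages_used_for_block_allocations = 0;
	__generic_make_request(bio);
	current->account_block_allocations = 0;
	if (current->pages_used_for_block_allocations > bio->bi_throttle)
		printk(KERN_WARNING "writeout path used %d pages, "
			"charged only %d throttle units\n",
			current->pages_used_for_block_allocations,
			bio->bi_throttle);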
 
> otoh we already have mechanisms for limiting the number of requests in
> flight.  This is approximately proportional to the amount of memory which
> was allocated to service those requests.  Why not just use that?

Two reasons.  The minor one is that device mapper bypasses that
mechanism (no elevator) and the major one is that the number of requests
does not map well to the amount of resources consumed.  In ddsnap, for
example, the amount of memory used by the userspace ddsnapd is
roughly linear in the number of pages transferred, not in the number of
requests.
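
For concreteness, the sort of metric I have in mind charges by payload
rather than by request; a minimal sketch (names invented, the exact unit is
left to the driver that registers the metric):

	/*
	 * Sketch of a volume-based throttle metric: charge one unit per page
	 * of payload, so a 1 MB bio costs 256 times what a 4 KB bio does,
	 * with a floor of one unit for tiny bios.
	 */
	static int pages_metric(struct bio *bio)
	{
		int pages = bio->bi_size >> PAGE_SHIFT;

		return pages ? pages : 1;
	}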

> > @@ -3221,6 +3221,13 @@ static inline void __generic_make_reques
> >  	if (bio_check_eod(bio, nr_sectors))
> >  		goto end_io;
> >  
> > +	if (q && q->metric && !bio->bi_queue) {
> > +		int need = bio->bi_throttle = q->metric(bio);
> > +		bio->bi_queue = q;
> > +		/* FIXME: potential race if atomic_sub is called in the middle of condition check */
> > +		wait_event_interruptible(q->throttle_wait, atomic_read(&q->available) >= need);
> 
> This will fall straight through if signal_pending() and (I assume) bad
> stuff will happen.  uninterruptible sleep, methinks.

Yes, as a first-order repair.  To be done properly I need to express this
in terms of the guts of wait_event_*() and get rid of that race; maybe that
changes the equation.
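
Roughly what I mean by using the guts of wait_event_*: reserve the charge
with an atomic operation and only sleep when the reservation fails,
rechecking after prepare_to_wait() so a wakeup cannot slip between the test
and the sleep.  Untested sketch:

	/*
	 * Sketch: atomic_sub_return() both tests and reserves, closing the
	 * window the FIXME above worries about.  If we overdraw, give the
	 * units back and sleep until a completion returns some.
	 */
	DEFINE_WAIT(wait);

	while (atomic_sub_return(need, &q->available) < 0) {
		atomic_add(need, &q->available);	/* undo failed reservation */
		prepare_to_wait(&q->throttle_wait, &wait, TASK_UNINTERRUPTIBLE);
		if (atomic_read(&q->available) < need)
			schedule();
		finish_wait(&q->throttle_wait, &wait);
	}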

It would be nice if threads didn't get stuck in D state here, though
_interruptible is probably the wrong idea; we should instead ensure that
whatever is being waited on responds to, e.g., SIGKILL.  This is at the
limits of my scheduler knowledge, so I would appreciate a better
suggestion.  I do detest hanging in D state with SIGKILL immunity,
behavior which unfortunately does not seem all that rare.
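
If SIGKILL responsiveness is the goal, one possibility, assuming a helper
that reports only fatal signals (today's fatal_signal_pending(), or a
hand-rolled check against SIGKILL), is to sleep interruptibly but ignore
everything short of a fatal signal.  Very much a sketch:

	/*
	 * Sketch: interruptible sleep that reacts only to fatal signals.
	 * On SIGKILL, give up the throttle attempt and fail the bio rather
	 * than leave the task unkillable in D state.
	 */
	DEFINE_WAIT(wait);

	for (;;) {
		prepare_to_wait(&q->throttle_wait, &wait, TASK_INTERRUPTIBLE);
		if (atomic_read(&q->available) >= need)
			break;
		if (fatal_signal_pending(current)) {
			finish_wait(&q->throttle_wait, &wait);
			bio_endio(bio, -EIO);
			return;
		}
		schedule();
	}
	finish_wait(&q->throttle_wait, &wait);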

Daniel
