From: Jens Axboe <jens.axboe@oracle.com>
To: Corrado Zoccolo <czoccolo@gmail.com>
Cc: Aaron Carroll <aaronc@cse.unsw.edu.au>,
	Linux-Kernel <linux-kernel@vger.kernel.org>
Subject: Re: Reduce latencies for synchronous writes and high I/O priority requests in deadline IO scheduler
Date: Fri, 24 Apr 2009 08:39:44 +0200
Message-ID: <20090424063944.GA4593@kernel.dk>
In-Reply-To: <4e5e476b0904230910r685e8300oa2323e8985c97a00@mail.gmail.com>

On Thu, Apr 23 2009, Corrado Zoccolo wrote:
> >> * minimum batch timespan (time quantum): partners with fifo_batch to
> >> improve throughput, by sending more consecutive requests together. A
> >> given number of requests will not always take the same time (due to
> >> the amount of seeking needed), therefore fifo_batch must be tuned for
> >> worst cases, while in the best cases longer batches would give a
> >> throughput boost.
> >> * the batch start request is chosen fifo_batch/3 requests before the
> >> expired one, to improve fairness for requests with lower start
> >> sectors, which otherwise have a higher probability of missing a
> >> deadline than mid-sector requests.
> >
> > I don't like the rest of it.  I use deadline because it's a simple,
> > no surprises, no bullshit scheduler with reasonably good performance
> > in all situations.  Is there some reason why CFQ won't work for you?
> 
> I actually like CFQ, and use it almost everywhere, and switch to
> deadline only when submitting a heavy-duty workload (having a SysRq
> combination to switch I/O schedulers could sometimes be very handy).
> 
> However, on SSDs it's not optimal, so I'm developing this to overcome
> those limitations.
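
To illustrate the batch-start heuristic quoted at the top: when a
request's deadline expires, the dispatch batch starts a few positions
before it in sector order rather than exactly at it. A minimal
standalone sketch, not the actual patch (the index arithmetic and the
helper name are made up purely for illustration):

/*
 * Illustrative sketch of the batch-start heuristic described above;
 * not the actual patch. When a request's deadline expires, the batch
 * begins fifo_batch/3 positions before it in the sector-sorted list,
 * so lower-sector requests are not systematically starved.
 */
#include <stdio.h>

static int batch_start_index(int expired_idx, int fifo_batch)
{
	int start = expired_idx - fifo_batch / 3;

	return start < 0 ? 0 : start;
}

int main(void)
{
	/* fifo_batch = 16, expired request at position 10: batch starts at 5. */
	printf("%d\n", batch_start_index(10, 16));
	/* Near the front of the list the start index is clamped to 0. */
	printf("%d\n", batch_start_index(2, 16));
	return 0;
}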

I find your solution quite confusing - the statement is that CFQ
isn't optimal on SSD, so you modify deadline? ;-)

Most of the "CFQ doesn't work well on SSD" statements are wrong. Now,
you seem to have done some testing, so when you say that, you probably
have results that actually show it is the case for you. But let's
attempt to fix that issue, then!

One thing you pointed out is that CFQ doesn't treat the device as a
"real" SSD unless it does queueing. This is very much on purpose, for
two reasons:

1) I have never seen a non-queueing SSD that actually performs well in
   read-vs-write situations, so CFQ still does idling for those (sketched
   below).
2) It's a problem that is going away. SSDs that are coming out today and
   in the future WILL definitely do queueing. We can attribute most of
   the crap behaviour to the deficient jmicron flash controller, which
   also has a crappy SATA interface.
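
To make the first point concrete: non-rotational alone is not enough to
stop idling, the device must also do queueing. That gating can be
sketched roughly as follows (illustrative C only, not the actual
cfq-iosched.c logic; the struct and function names are made up):

/*
 * Illustrative sketch only -- not the actual cfq-iosched.c code.
 * A device reporting itself as non-rotational is not, by itself,
 * enough to skip idling; it must also do command queueing, because
 * non-queueing SSDs still behave poorly on read-vs-write mixes.
 */
#include <stdbool.h>
#include <stdio.h>

struct device_caps {
	bool nonrot;    /* reports itself as non-rotational (SSD) */
	bool queueing;  /* handles multiple in-flight commands (e.g. NCQ) */
};

/* Should the scheduler keep idling between requests for this device? */
static bool should_idle(const struct device_caps *dev)
{
	return !(dev->nonrot && dev->queueing);
}

int main(void)
{
	struct device_caps cheap_ssd = { .nonrot = true, .queueing = false };
	struct device_caps ncq_ssd = { .nonrot = true, .queueing = true };

	printf("non-queueing SSD: idle=%d\n", should_idle(&cheap_ssd)); /* 1 */
	printf("queueing SSD:     idle=%d\n", should_idle(&ncq_ssd));   /* 0 */
	return 0;
}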

What I am worried about in the future is even faster SSD devices. CFQ is
already down a percent or two when we are doing 100k iops and such, and
this problem will only get worse. So I'm very much interested in speeding
up CFQ for such devices, which I think will mainly mean slimming down the
IO path and bypassing much of the (unneeded) complexity for them. The
last thing I want is to have to tell people to use deadline or noop on
SSD devices.

> In the meantime, I wanted to also overcome deadline's limitations, i.e.
> the high latencies on fsync/fdatasync.

This is very much something you could pull out of the patchset and we
could include without much questioning.
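
For reference, the general idea behind that part of the series (as I
read the thread) is to key deadline's two FIFO lists by synchronicity
rather than by data direction, so synchronous writes get read-like
deadlines. The sketch below is an illustration only, not the actual
patch; the struct and helper names are made up, and the kernel's real
classification would rely on something like rq_is_sync():

/*
 * Illustrative sketch only -- not the actual patch. Plain deadline keeps
 * two FIFO lists keyed by data direction (read vs. write); the latency
 * fix discussed here keys them by synchronicity instead, so synchronous
 * writes (fsync/fdatasync) share the shorter deadline that reads get.
 */
#include <stdbool.h>
#include <stdio.h>

enum fifo_class { FIFO_SYNC, FIFO_ASYNC };

struct req {
	bool is_read;
	bool is_sync;   /* roughly what rq_is_sync() reports in the kernel */
};

/* Classic deadline: classify purely by data direction. */
static enum fifo_class classify_by_dir(const struct req *rq)
{
	return rq->is_read ? FIFO_SYNC : FIFO_ASYNC;
}

/* Proposed: synchronous writes join the read (low-latency) FIFO. */
static enum fifo_class classify_by_sync(const struct req *rq)
{
	return (rq->is_read || rq->is_sync) ? FIFO_SYNC : FIFO_ASYNC;
}

int main(void)
{
	struct req sync_write = { .is_read = false, .is_sync = true };

	printf("by direction: %s\n",
	       classify_by_dir(&sync_write) == FIFO_SYNC ? "sync FIFO" : "async FIFO");
	printf("by sync flag: %s\n",
	       classify_by_sync(&sync_write) == FIFO_SYNC ? "sync FIFO" : "async FIFO");
	return 0;
}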

-- 
Jens Axboe



Thread overview: 14+ messages
2009-04-22 21:07 Reduce latencies for synchronous writes and high I/O priority requests in deadline IO scheduler Corrado Zoccolo
2009-04-23 11:18 ` Paolo Ciarrocchi
2009-04-23 11:28 ` Jens Axboe
2009-04-23 15:57   ` Corrado Zoccolo
2009-04-23 11:52 ` Aaron Carroll
2009-04-23 12:13   ` Jens Axboe
2009-04-23 16:10   ` Corrado Zoccolo
2009-04-23 23:30     ` Aaron Carroll
2009-04-24  6:13       ` Corrado Zoccolo
2009-04-24  6:39     ` Jens Axboe [this message]
2009-04-24 16:07       ` Corrado Zoccolo
2009-04-24 21:37         ` Corrado Zoccolo
2009-04-26 12:43           ` Corrado Zoccolo
2009-05-01 19:30             ` Corrado Zoccolo
