From: Jens Axboe <axboe@suse.de>
To: Valdis.Kletnieks@vt.edu
Cc: "Marc E. Fiuczynski" <mef@CS.Princeton.EDU>,
Peter Williams <pwil3058@bigpond.net.au>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Con Kolivas <kernel@kolivas.org>, Chris Han <xiphux@gmail.com>
Subject: Re: [ANNOUNCE][RFC] plugsched-2.0 patches ...
Date: Fri, 21 Jan 2005 15:11:36 +0100 [thread overview]
Message-ID: <20050121141136.GG2790@suse.de> (raw)
In-Reply-To: <200501201751.j0KHpvdQ030760@turing-police.cc.vt.edu>

On Thu, Jan 20 2005, Valdis.Kletnieks@vt.edu wrote:
> On Thu, 20 Jan 2005 11:14:48 EST, "Marc E. Fiuczynski" said:
> > Peter, thank you for maintaining Con's plugsched code in light of Linus' and
> > Ingo's prior objections to this idea. On the one hand, I partially agree
> > with Linus&Ingo's prior views that when there is only one scheduler that the
> > rest of the world + dog will focus on making it better. On the other hand,
> > having a clean framework that lets developers in a clean way plug in new
> > schedulers is quite useful.
> >
> > Linus & Ingo, it would be good to have an in-depth discussion on this topic.
> > I'd argue that the Linux kernel NEEDS a clean pluggable scheduling
> > framework.
>
> Is this something that would benefit from several trips around the -mm
> series?
>
> ISTR that we started with one disk elevator, and now we have 3 or 4
> that are selectable on the fly after some banging around in -mm. (And
> yes, I realize that the only reason we can change the elevator on the
> fly is because it can switch from the current to the 'stupid FIFO
> none' elevator and thence to the new one, which wouldn't really work
> for the CPU scheduler....)

I don't think you can compare the two. Yes, they are both schedulers, but
that's about where the 'similarity' stops. The CPU scheduler must be
really fast; overhead must be kept to a minimum. For a disk scheduler,
we can afford to burn CPU cycles to increase I/O performance. The
extra abstraction required to fully modularize the CPU scheduler would
come at a non-zero cost as well, but I bet it would have a larger impact
there. I doubt you could measure the difference in the disk scheduler.

There are vast differences between I/O storage devices; that is why we
have different I/O schedulers. I made those modular so that the desktop
user didn't have to incur the cost of having 4 schedulers when he only
really needs one.
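
As an aside for archive readers: on 2.6-era kernels, the elevator Jens describes can be inspected and switched per-device at runtime through sysfs, with the active scheduler shown in brackets. The device name and scheduler list below are illustrative examples, not taken from this thread.

```shell
# Inspect the available I/O schedulers for a device; the active one is bracketed:
#   cat /sys/block/sda/queue/scheduler
#   -> noop [anticipatory] deadline cfq
# Switch the elevator at runtime (root required):
#   echo deadline > /sys/block/sda/queue/scheduler

# Small helper to extract the active (bracketed) scheduler from such a line:
active_sched() {
    printf '%s\n' "$1" | sed 's/.*\[\([^]]*\)\].*/\1/'
}

active_sched 'noop [anticipatory] deadline cfq'   # prints: anticipatory
```

This per-device selection is exactly what lets the desktop user Jens mentions build in only the scheduler they need while servers pick a different one.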
> All the arguments that support having more than one elevator apply
> equally well to the CPU scheduler....

Not at all, imho. They are two completely different problems.
--
Jens Axboe
Thread overview: 11+ messages
2005-01-20 1:23 [ANNOUNCE][RFC] plugsched-2.0 patches Peter Williams
2005-01-20 1:58 ` Kasper Sandberg
2005-01-20 16:14 ` Marc E. Fiuczynski
2005-01-20 17:51 ` Valdis.Kletnieks
2005-01-21 14:11 ` Jens Axboe [this message]
2005-01-21 16:29 ` Marc E. Fiuczynski
2005-01-21 16:43 ` Con Kolivas
2005-01-21 21:20 ` Peter Williams
2005-01-21 2:38 ` Peter Williams
2005-01-21 2:50 ` Marc E. Fiuczynski
2005-01-21 15:16 ` [ckrm-tech] " Shailabh Nagar