From: Martin Dalecki <dalecki@evision-ventures.com>
To: Neil Conway <nconway.list@ukaea.org.uk>
Cc: Anton Altaparmakov <aia21@cantab.net>,
	Alan Cox <alan@lxorguk.ukuu.org.uk>,
	Russell King <rmk@arm.linux.org.uk>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] 2.5.15 IDE 61
Date: Wed, 15 May 2002 13:02:24 +0200
Message-ID: <3CE24040.4050001@evision-ventures.com>
In-Reply-To: <E177dYp-00083c-00@the-village.bc.nu> <5.1.0.14.2.20020514202811.01fcc1d0@pop.cus.cam.ac.uk> <3CE22B2B.5080506@evision-ventures.com> <3CE24A34.CEA0CAE0@ukaea.org.uk>

Neil Conway wrote:
> Martin Dalecki wrote:
> 
>>The only problem is that having a shared lock between two queues apparently
>>doesn't imply that the queues behave atomically at the request level
>>with respect to each other.
> 
> 
> Correct - both queues can be active with I/O in flight at the same
> time.  But think about it: if this weren't the case, then the older
> kernels (using global io_request_lock) would have had to serialize ALL
> I/O, one request-queue active at a time, for every single
> block-device...
> 
> 
>>Apparently the "sublimation" of the hwgroup and the overall cleanup of
>>data structures just made many people aware of problems which
>>were there already for a very, very long time...
> 
> 
> I'm not quite sure which problems you mean.  The busy flag prevents any
> clash. (But sure, if you change to per-device queues AND you ditch the
> busy flag you're screwed.) Where is the problem?

The problem is that with the busy flag we are wasting quite
a significant amount of CPU time spinning on it for no good reason...

Right now I'm quite confident about the idea of just having
two queues, ata_queue and atapi_queue, on the channel, possibly
shared among two channels. The wasted CPU cycles would then
occur only in the case of channels where ATA and ATAPI devices
are mixed as master and slave. This will work because the
only really crucial queue property which has to be device-type
specific is the hardsect size.
Quite the same design as in, well, FreeBSD.
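
Just to make it concrete, here is a rough userspace sketch of the layout
I have in mind - plain C only, not the actual 2.5 driver code, and all
the struct and helper names are made up for illustration:

#include <stddef.h>

/* The only property that really has to differ between the two
 * per-channel queues is the hard sector size. */
struct sketch_queue {
	unsigned int hardsect_size;	/* 512 for ATA disks, 2048 for ATAPI CD-ROMs */
	int blocked;			/* used by the second sketch below */
};

struct sketch_channel {
	struct sketch_queue ata_queue;		/* shared by ATA master and slave */
	struct sketch_queue atapi_queue;	/* shared by ATAPI master and slave */
};

struct sketch_device {
	int is_atapi;
	struct sketch_channel *channel;
};

/* Pick the queue a device feeds its requests into. */
static struct sketch_queue *sketch_queue_for(struct sketch_device *dev)
{
	return dev->is_atapi ? &dev->channel->atapi_queue
			     : &dev->channel->ata_queue;
}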
As a second step we could do just the following - block one
of the queues while the other one is active... This should
be simpler than doing it on the per-device level with the busy flag
and wasting CPU time spinning on it.
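
And a rough sketch of that second step, building on the structs above -
again the helpers are made up; in the real driver this would map onto
plugging/stopping the sibling queue rather than a plain flag:

/* While one of the two queues is being serviced, block the other one
 * instead of spinning on a per-device busy flag. */
static void sketch_channel_start(struct sketch_channel *ch,
				 struct sketch_queue *active)
{
	struct sketch_queue *other =
		(active == &ch->ata_queue) ? &ch->atapi_queue : &ch->ata_queue;

	other->blocked = 1;	/* no requests handed to us from this queue */
}

static void sketch_channel_done(struct sketch_channel *ch)
{
	/* Request finished: both queues may be serviced again. */
	ch->ata_queue.blocked = 0;
	ch->atapi_queue.blocked = 0;
}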



Thread overview: 46+ messages
2002-05-14  9:49 [PATCH] 2.5.15 IDE 61 Neil Conway
2002-05-14  8:52 ` Martin Dalecki
2002-05-14 10:12   ` Neil Conway
2002-05-14  9:30     ` Martin Dalecki
2002-05-14 11:10       ` Neil Conway
2002-05-14 10:21         ` Martin Dalecki
2002-05-14 11:38           ` Russell King
2002-05-14 10:49             ` Martin Dalecki
2002-05-14 12:10             ` Alan Cox
2002-05-14 11:11               ` Martin Dalecki
2002-05-14 12:47                 ` Alan Cox
2002-05-14 12:30                   ` Martin Dalecki
2002-05-15 14:43                 ` Pavel Machek
2002-05-14 12:00               ` Russell King
2002-05-14 11:03                 ` Martin Dalecki
2002-05-14 13:03               ` Neil Conway
2002-05-14 13:27                 ` Andre Hedrick
2002-05-14 14:45                 ` Alan Cox
2002-05-14 14:30                   ` Martin Dalecki
2002-05-14 16:20                     ` Neil Conway
2002-05-14 16:32                       ` Jens Axboe
2002-05-14 16:47                         ` Neil Conway
2002-05-14 16:51                           ` Jens Axboe
2002-05-15 11:37                             ` Neil Conway
2002-05-14 22:51                           ` Mike Fedyk
2002-05-14 16:26                     ` Jens Axboe
2002-05-14 19:34                     ` Anton Altaparmakov
2002-05-15  6:16                       ` Jens Axboe
2002-05-15  8:32                         ` Anton Altaparmakov
2002-05-15  9:42                           ` Martin Dalecki
2002-05-15  9:32                       ` Martin Dalecki
2002-05-15 11:44                         ` Neil Conway
2002-05-15 11:02                           ` Martin Dalecki [this message]
2002-05-15 13:10                             ` Alan Cox
2002-05-15 13:34                               ` Neil Conway
2002-05-15 13:04                                 ` Martin Dalecki
2002-05-15 14:08                               ` benh
2002-05-15 16:40                         ` Denis Vlasenko
2002-05-15 11:55                           ` Neil Conway
2002-05-17  7:07                             ` Mike Fedyk
2002-05-17 11:06                               ` Neil Conway
2002-05-17 10:12                                 ` Martin Dalecki
2002-05-14 16:03                   ` Neil Conway
2002-05-14 16:46                     ` Alan Cox
2002-05-14 12:52       ` Daniela Engert
  -- strict thread matches above, loose matches on Subject: below --
2002-05-06  3:53 Linux-2.5.14 Linus Torvalds
2002-05-13  9:48 ` [PATCH] 2.5.15 IDE 61 Martin Dalecki
