linux-kernel.vger.kernel.org archive mirror
* Re: [Kiobuf-io-devel] RFC: Kernel mechanism: Compound event wait  /notify + callback chains
From: bsuparna @ 2001-02-01 14:44 UTC
  To: Stephen C. Tweedie; +Cc: linux-kernel, kiobuf-io-devel


>Hi,
>
>On Thu, Feb 01, 2001 at 10:25:22AM +0530, bsuparna@in.ibm.com wrote:
>>
>> >We _do_ need the ability to stack completion events, but as far as the
>> >kiobuf work goes, my current thoughts are to do that by stacking
>> >lightweight "clone" kiobufs.
>>
>> Would that work with stackable filesystems ?
>
>Only if the filesystems were using VFS interfaces which used kiobufs.
>Right now, the only filesystem using kiobufs is XFS, and it only
>passes them down to the block device layer, not to other filesystems.

That would require the VFS interfaces themselves (the address space
readpage/writepage ops) to take kiobufs as arguments, instead of struct
page *. That's not the case right now, is it ?
To take this example, a filter filesystem would be layered over XFS.
So right now a filter filesystem only sees the struct page * and passes
it along; any completion event stacking has to be applied with reference
to that.


>> Being able to track the children of a kiobuf would help with I/O
>> cancellation (e.g. to pull sub-ios off their request queues if I/O
>> cancellation for the parent kiobuf was issued). Not essential, I guess, in
>> general, but useful in some situations.
>
>What exactly is the justification for IO cancellation?  It really
>upsets the normal flow of control through the IO stack to have
>voluntary cancellation semantics.

One reason that I saw is that if the results of an i/o are no longer
required due to some condition (e.g. aio cancellation, or the process that
issued the i/o getting killed), then cancellation avoids the unnecessary
disk i/o, provided the request hasn't been scheduled as yet.

Too remote a requirement ? If the capability/support doesn't exist at the
driver level, I guess it's difficult.

>--Stephen

* Re: [Kiobuf-io-devel] RFC: Kernel mechanism: Compound event wait  /notify + callback chains
From: bsuparna @ 2001-02-06 13:50 UTC
  To: Stephen C. Tweedie
  Cc: linux-kernel, kiobuf-io-devel, Alan Cox, Christoph Hellwig, Andi Kleen


>Hi,
>
>On Mon, Feb 05, 2001 at 08:01:45PM +0530, bsuparna@in.ibm.com wrote:
>>
>> >It's the very essence of readahead that we wake up the earlier buffers
>> >as soon as they become available, without waiting for the later ones
>> >to complete, so we _need_ this multiple completion concept.
>>
>> I can understand this in principle, but when we have a single request going
>> down to the device that actually fills in multiple buffers, do we get
>> notified (interrupted) by the device before all the data in that request
>> got transferred ?
>
>It depends on the device driver.  Different controllers will have
>different maximum transfer size.  For IDE, for example, we get wakeups
>all over the place.  For SCSI, it depends on how many scatter-gather
>entries the driver can push into a single on-the-wire request.  Exceed
>that limit and the driver is forced to open a new scsi mailbox, and
>you get independent completion signals for each such chunk.

I see. I remember Jens Axboe mentioning something like this with IDE.
So, in this case, you want every such chunk to check whether it has
completed filling up a buffer, and then trigger a wakeup on that ?
But does this also mean that combining requests beyond this limit doesn't
really help ? (Reordering requests to get contiguity would of course help
in terms of seek times, I guess, but not merging beyond this limit.)

>> >Which is exactly why we have one kiobuf per higher-level buffer, and
>> >we chain together kiobufs when we need to for a long request, but we
>> >still get the independent completion notifiers.
>>
>> As I mentioned above, the alternative is to have the i/o completion related
>> linkage information within the wakeup structures instead. That way, it
>> doesn't matter to the lower level driver what higher level structure we
>> have above (maybe buffer heads, maybe page cache structures, maybe
>> kiobufs). We only chain together memory descriptors for the buffers during
>> the io.
>
>You forgot IO failures: it is essential, once the IO completes, to
>know exactly which higher-level structures completed successfully and
>which did not.  The low-level drivers have to have access to the
>independent completion notifications for this to work.
>
No, I didn't forget IO failures; it's just that I expect the wait structure
containing the wakeup function to be embedded in a cev structure that
contains a pointer to the wait_queue_head field in the higher level
structure. The rest is for the wakeup function to interpret (it can always
access the other fields in the higher level structure - just as
list_entry() does).
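As a rough sketch (field names are just illustrative, along the lines of
the cev_wait structure from my earlier posting), the wakeup function can
get back to its embedding structure just as list_entry() would:

struct cev_wait {
     wait_queue_t        wait;     /* embedded wait queue entry          */
     wait_queue_head_t  *parent;   /* waitq of the higher level object   */
};

static void cev_wakeup(wait_queue_t *wait)
{
     /* wait is the first member, so the cast works like list_entry() */
     struct cev_wait *cev = (struct cev_wait *)wait;

     /* ... interpret/update fields of the higher level object here ... */
     wake_up(cev->parent);
}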

Later I realized that instead of having multiple wakeup functions queued on
the low level structure's wait queue, it's perhaps better to just turn the
cev_wait structure upside down (an entry on the lower level structure's
queue should link to the parent entries instead).




* Re: [Kiobuf-io-devel] RFC: Kernel mechanism: Compound event wait  /notify + callback chains
From: bsuparna @ 2001-02-05 14:31 UTC
  To: Stephen C. Tweedie
  Cc: linux-kernel, kiobuf-io-devel, Alan Cox, Christoph Hellwig, Andi Kleen



>Hi,
>
>On Sun, Feb 04, 2001 at 06:54:58PM +0530, bsuparna@in.ibm.com wrote:
>>
>> Can't we define a kiobuf structure as just this ? A combination of a
>> frag_list and a page_list ?
>

>Then all code which needs to accept an arbitrary kiobuf needs to be
>able to parse both --- ugh.
>

Making this a little more explicit to help analyse tradeoffs:

/* Memory descriptor portion of a kiobuf - this is something that may get
passed around between layers and subsystems */
struct kio_mdesc {
     int nr_frags;
     struct frag *frag_list;
     int nr_pages;
     struct page **page_list;
     /* list follows */
};

For block i/o requiring #1 type descriptors, the list could have allocated
extra space for:
struct kio_type1_ext {
     struct frag frag;
     struct page *pages[NUM_STATIC_PAGES];
};

For n/w i/o or cases requiring  #2 type descriptors, the list could have
allocated extra space for:

struct kio_type2_ext {
     struct frag frags[NUM_STATIC_FRAGS];
     struct page *page[NUM_STATIC_FRAGS];
};


struct kiobuf {
     int                 status;
     wait_queue_head_t   waitq;
     struct kio_mdesc    mdesc;
     /* list follows - leaves room for allocation for mem descs,
        completion sub structs etc */
};

Code that accepts an arbitrary kiobuf needs to do the following:
     process the fragments one by one
          - type #1 case: only one fragment would typically be there, but
processing it would involve crossing all the pages in the page list.
               So the extra processing vs a kiobuf with a single <offset,
len> pair involves:
                    dereferencing the frag_list pointer
                    checking the nr_frags field
          - type #2 case: the number of fragments would be equal to or
greater than the number of pages, so processing would typically go over the
fragments and thus cross each page in the list one by one.
               So the extra processing vs a kiobuf with per-page <offset,
len> pairs involves:
                    dereferencing the page list entry (which involves
computing the page index in the page_list from the offset value)
                    checking that offset+len doesn't fall outside the page

This boils down to approximately one extra dereference and one comparison
per kiobuf for the common cases (have I missed something critical ?) vs the
most optimized choice of descriptors for those cases.

In terms of resource consumption (extra bytes taken up), it costs two extra
fields per kiobuf chain (e.g. the nr_frags and frag_list pointer in the #1
case), i.e. a total of 8 bytes, for the common cases vs the most optimized
choice of structures for those cases.

This seems more uniformly balanced across the #1 and #2 cases than either
an <offset, len> for every page or a single overall <offset, len>. But
then, come to think of it, since the need for lightweight structures is
greater in case #2, should the point of balance (if we want to find one at
all) be tilted towards #2 ?

On the other hand, since having a common structure does involve extra bytes
and cycles, if there are very few situations where we need both #1 and #2,
then converting only at subsystem boundaries, as i2o does, may turn out to
be better.

Oh well ...


>> BTW, We could have a higher level io container that includes a <status>
>> field and a <wait_queue_head> to take care of i/o completion
>
>IO completion requirements are much more complex.  Think of disk
>readahead: we can create a single request struct for an IO of a
>hundred buffer heads, and as the device driver satisfies that request,
>it wakes up the buffer heads as it goes.  There is a separate
>completion notification for every single buffer head in the chain.
>
I understand the requirement of independent completion notifiers for higher
level buffers/other structures, since they are indeed independently usable
structures. That was one aspect I thought I was able to address in the
cev_wait design based on wait_queue wakeup functions.
The way it would work is that there would be multiple wakeup functions
registered on the container for the big request, each wakeup function being
responsible for waking up a higher level buffer. This way, the linkage
information is actually external to the buffer structures (which seems
reasonable, since it is only required while the i/o is happening, unless
there is another reason to keep a more lasting association).

>It's the very essence of readahead that we wake up the earlier buffers
>as soon as they become available, without waiting for the later ones
>to complete, so we _need_ this multiple completion concept.
>

I can understand this in principle, but when we have a single request going
down to the device that actually fills in multiple buffers, do we get
notified (interrupted) by the device before all the data in that request
has been transferred ? I mean, how do we know that some buffers have become
available before the overall device request has completed (unless of course
the request actually gets broken up at this level and completed bit by
bit) ?


>Which is exactly why we have one kiobuf per higher-level buffer, and
>we chain together kiobufs when we need to for a long request, but we
>still get the independent completion notifiers.

As I mentioned above, the alternative is to have the i/o completion related
linkage information within the wakeup structures instead. That way, it
doesn't matter to the lower level driver what higher level structure we
have above (maybe buffer heads, maybe page cache structures, maybe
kiobufs). We only chain together memory descriptors for the buffers during
the io.

>
>Cheers,
> Stephen



* Re: [Kiobuf-io-devel] RFC: Kernel mechanism: Compound event wait  /notify + callback chains
From: bsuparna @ 2001-02-04 13:24 UTC
  To: Stephen C. Tweedie
  Cc: linux-kernel, kiobuf-io-devel, Alan Cox, Christoph Hellwig, Andi Kleen


>Hi,
>
>On Fri, Feb 02, 2001 at 12:51:35PM +0100, Christoph Hellwig wrote:
>> >
>> > If I have a page vector with a single offset/length pair, I can build
>> > a new header with the same vector and modified offset/length to split
>> > the vector in two without copying it.
>>
>> You just say in the higher-level structure ignore from x to y even if
>> they have an offset in their own vector.
>
>Exactly --- and so you end up with something _much_ uglier, because
>you end up with all sorts of combinations of length/offset fields all
>over the place.
>
>This is _precisely_ the mess I want to avoid.
>
>Cheers,
> Stephen

It appears that we are coming across 2 kinds of requirements for kiobuf
vectors - and quite a bit of debate centering around that.

1. In the block device i/o world, where large i/os may be involved, we'd
like to be able to describe chunks/fragments that contain multiple pages;
which is why it makes sense to have a single <offset, length> pair for the
entire set of pages in a kiobuf, rather than having to deal with per page
offset/len fields.

2. In the networking world, we deal with smaller fragments (for protocol
headers and stuff, and small packets) ideally chained together, typically
not page aligned, with the ability to extend the list at least at the head
and tail (and maybe some reshuffling in case of ip fragmentation?); so I
guess that's why it seems good to have an <offset, length> pair per
page/fragment. (If there can be multiple fragments in a page, even this
might not be frugal enough ... )

Looks like there are 2 kinds of entities that we are looking for in the kio
descriptor:
     - A collection of physical memory pages (call it say, a page_list)
     - A collection of fragments of memory described as <offset, len>
tuples w.r.t this collection
     (offset in turn could be <index in page-list, offset-in-page> if it
helps) (call this collection a frag_list)

Can't we define a kiobuf structure as just this ? A combination of a
frag_list and a page_list ? (Clone kiobufs might share the original
kiobuf's page_list, but just split parts of the frag_list)
How hard is it to maintain and to manipulate such a structure ?
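As a rough sketch of the clone idea (the mdesc field names and the
alloc_kiobuf_header() helper here are purely illustrative):

extern struct kiobuf *alloc_kiobuf_header(void);   /* assumed helper */

struct kiobuf *kiobuf_clone_range(struct kiobuf *orig, int first_frag,
                                  int nr_frags)
{
     struct kiobuf *clone = alloc_kiobuf_header();

     if (!clone)
          return NULL;
     clone->mdesc.nr_pages  = orig->mdesc.nr_pages;
     clone->mdesc.page_list = orig->mdesc.page_list;  /* shared, not copied */
     clone->mdesc.nr_frags  = nr_frags;
     clone->mdesc.frag_list = orig->mdesc.frag_list + first_frag;
     return clone;
}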

BTW, we could have a higher level io container that includes a <status>
field and a <wait_queue_head> to take care of i/o completion (If we have a
wait queue head, then I don't think we need a separate callback function if
we have Ben's wakeup functions in place).

Or, is this going in the direction of a cross between an elephant and a
bicycle :-) ?

Regards
Suparna


* Re: [Kiobuf-io-devel] RFC: Kernel mechanism: Compound event wait  /notify + callback chains
From: bsuparna @ 2001-02-02 15:31 UTC
  To: Stephen C. Tweedie; +Cc: Ben LaHaise, linux-kernel, kiobuf-io-devel


>Hi,
>
>On Thu, Feb 01, 2001 at 01:28:33PM +0530, bsuparna@in.ibm.com wrote:
>>
>> Here's a second pass attempt, based on Ben's wait queue extensions:
> Does this sound any better ?
>
>It's a mechanism, all right, but you haven't described what problems
>it is trying to solve, and where it is likely to be used, so it's hard
>to judge it. :)

Hmm .. I thought I had done that in my first posting, but obviously, I
mustn't have done a good job at expressing it, so let me take another stab
at trying to convey why I started on this.

There are certain specific situations that I have in mind right now, but to
me it looks like the very nature of the abstraction is such that it is
quite likely that there would be uses in some other situations which I may
not have thought of yet, or just do not understand well enough to vouch for
at this point. What those situations could be, and the associated issues
involved (especially performance related) is something that I hope other
people on this forum would be able to help pinpoint, based on their
experiences and areas of expertise.

I do realize that something generic, yet simple and performance-optimal in
all kinds of situations, is a really difficult (if not impossible :-) )
thing to achieve, but even then, wouldn't it be nice to at least abstract
out the uniformity in patterns across situations in a way that can be
tweaked/tuned for each specific class of situations ?

And the nice thing I see about Ben's wait queue extensions is that they
give us a route to try to do that ...

Some needs considered (and associated problems):

a. Stacking of completion events - asynchronously, through multiple layers
     - layered drivers  (encryption, conversion)
     - filter filesystems
    Key aspects:
     1. It should be possible to pass the same (original) i/o container
structure all the way down (no copies/clones should need to happen, unless
actual i/o splitting, or extra buffer space or multiple sub-ios are
involved)
     2. Transparency: Neither the upper layer nor the layer below it should
need to have any specific knowledge about the existence/absence of an
intermediate filter layer (the mechanism should hide all that)
     3. LIFO ordering of completion actions
     4. The i/o structure should be marked as up-to-date only after all the
completion actions are done.
     5. Preferably have waiters on the i/o structure woken up only after
all completion actions are through (to avoid spurious/redundant wakeups
since the data won't be ready for use)
     6. Possible to have completion actions execute later in task context

b. Co-relation between multiple completion events and their associated
operations and data structures
     -  (bottom up aspect) merging results of split i/o requests, and
marking the completion of the compound i/o through multiple such layers
(tree), e.g
          - lvm
          - md / raid
          - evms aggregator features
     - (top down aspect) cascading down i/o cancellation requests /
sub-event waits , monitoring sub-io status etc
      Some aspects:
     1. Result of collation of sub-i/os may be driver specific  (In some
situations like lvm  - each sub i/o maps to a particular portion of a
buffer; with software raid or some other kind of scheme the collation may
involve actually interpreting the data read)
     2. Re-start/retries of sub-ios (in case of errors) can be handled.
     3. Transparency : Neither the upper layer nor the layer below it
should need to have any specific knowledge about the existence/absence of
an intermediate layer (that sends out multiple sub i/os)
     4. The system should be devised to avoid extra logic/fields in the
generic i/o structures being passed around, in situations where no compound
i/o is involved (i.e. in the simple i/o cases and most common situations).
As far as possible it is desirable to keep the linkage information outside
of the i/o structure for this reason.
     5. Possible to have collation/completion actions execute later in task
context


Ben LaHaise's wait queue extensions take care of most of the aspects of
(a), if used with a little care to ensure a(4).
[This just means that the function which marks the i/o structure as
up-to-date should be put in the completion queue first, as sketched below.]
With this, we don't even need an explicit end_io() in bh/kiobufs etc. Just
the wait queue would do.
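For instance, a rough sketch of that ordering trick, assuming the fifo
add_wait_queue() / add_wait_queue_lifo() changes I suggested earlier (the
two wakeup function parameters are just illustrative stand-ins for the
layer-specific callbacks):

static void setup_completion_chain(wait_queue_head_t *q,
                                   wait_queue_t *uptodate_ev,
                                   wait_queue_t *decrypt_ev,
                                   wait_queue_func_t mark_uptodate_wakeup,
                                   wait_queue_func_t decrypt_wakeup)
{
     /* Registered first, so every later LIFO addition runs before it */
     init_waitqueue_func_entry(uptodate_ev, mark_uptodate_wakeup);
     add_wait_queue_lifo(q, uptodate_ev);

     /* A filter layer (e.g. decryption) pushes its callback in front;
      * on completion the order becomes: decrypt, then mark up-to-date,
      * then wake the sleeping waiters appended (fifo) by the plain
      * add_wait_queue().                                               */
     init_waitqueue_func_entry(decrypt_ev, decrypt_wakeup);
     add_wait_queue_lifo(q, decrypt_ev);
}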

Only a(5) needs some thought since cache efficiency is upset by changing
the ordering of waits.

But, (b) needs a little more work as a higher level construct/mechanism
that latches on to the wait queue extensions. That is what the cev_wait
structure was designed for.
It keeps the chaining information outside of the i/o structures by default
(They can be allocated together, if desired anyway)

Is this still too much in the air ? Maybe I should describe the flow in a
specific scenario to illustrate ?

Regards
Suparna


* Re: [Kiobuf-io-devel] RFC: Kernel mechanism: Compound event wait  /notify + callback chains
From: bsuparna @ 2001-02-01 13:20 UTC
  To: mjacob, dank; +Cc: linux-kernel, kiobuf-io-devel


sct wrote:
>> >
>> > Thanks for mentioning this. I didn't know about it earlier. I've been
>> > going through the 4/00 kqueue patch on freebsd ...
>>
>> Linus has already denounced them as massively over-engineered...
>
>That shouldn't stop anyone from looking at them and learning, though.
>There might be a good idea or two hiding in there somewhere.
>- Dan
>

There is always scope to learn from a different approach to tackling a
problem of a similar nature - both good ideas as well as over-engineered
ones - sometimes more from the latter :-)

As far as I have understood so far from looking at the original kevent
patch and notes (which perhaps isn't enough, and may be out of date as
well), the concept of knotes and filter ops, and the event queuing
mechanism in itself, is interesting and generic, but most of it seems to
have been designed with linkage to user-mode issuable event waits in mind -
like poll/select/aio/signal etc, at least as it appears from the way it's
been used in the kernel. That's a little different from what I had in mind,
though it's perhaps possible to use it otherwise. But maybe I've just not
thought about it enough or understood it.

Regards
Suparna

  Suparna Bhattacharya
  Systems Software Group, IBM Global Services, India
  E-mail : bsuparna@in.ibm.com
  Phone : 91-80-5267117, Extn : 2525


* Re: [Kiobuf-io-devel] RFC: Kernel mechanism: Compound event wait  /notify + callback chains
From: bsuparna @ 2001-02-01  7:58 UTC
  To: Stephen C. Tweedie; +Cc: Ben LaHaise, linux-kernel, kiobuf-io-devel


Here's a second pass attempt, based on Ben's wait queue extensions:
Does this sound any better ?

[This doesn't require any changes to the existing wait_queue_head based i/o
structures or to existing drivers, and the constructs mentioned come into
the picture only when compound events are actually required]

The key aspects are:
1. Just using an extended wait queue now instead of the callbackq for
completion (this can take care of layered callbacks, and aggregation via
wakeup functions)
2. The io structures don't need to change - they already have a
wait_queue_head embedded anyway (e.g. b_wait); in fact io completion happens
simply by waking up the waiters in the wait queue, just as it happens now.
3. Instead, all co-relation information is maintained in the wait_queue
entries that involve compound events
4. No cancel callback queue any more.

(a) For simple layered callbacks (as in encryption filesystems/drivers):
     Intermediate layers simply use add_wait_queue(_lifo) to add their
callbacks to the object's wait queue as wakeup functions. The wakeup
function can access fields in the object associated with the wait queue,
using the wait_queue_head address since the wait_queue_head is embedded in
the object.
     If the wakeup function has to be associated with any other private
data, then an embedding structure is required, e.g:
/* Layered event structure */
 struct lev {
     wait_queue_t        wait;
     void           *data;
};

or, maybe something like the work_todo structure that Ben had stated as an
example (if callback actions have to be delayed to task context). Actually
in that case, we might like to have the wakeup function return 1 if it
needs to do some work later, and that work needs to be completed before the
remaining waiters are woken up.

(b) For compound events:

/* Compound event structure */
 struct cev_wait {
     wait_queue_t        wait;
     wait_queue_head_t  *parent;
     unsigned int        flags;      /* optional */
     struct list_head    cev_list;   /* links to siblings or child
                                        cev_waits as applicable */
     wait_queue_head_t  *head;       /* head of wait queue on which this
                                        is/was queued - optional ? */
 };

In this case, for each child:
 wait.func() is set to a routine that performs any necessary
transfer/status/count updates from the child to the parent object and
issues a wakeup on the parent's wait queue (it also removes itself from the
child's wait queue, and optionally from the parent's cev_list too).
It is this update step that will be situation/subsystem specific; it also
has a return value to indicate whether to detach from the parent or not.
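As a rough sketch (cev_update() is just an illustrative name for that
subsystem specific update step, not something defined here):

extern int cev_update(struct cev_wait *cev);    /* subsystem specific */

static void cev_child_wakeup(wait_queue_t *wait)
{
     struct cev_wait *cev = (struct cev_wait *)wait; /* wait is 1st member */
     int detach;

     /* Fold this child's status/count into the object owning
      * cev->parent; non-zero return means detach from the parent.   */
     detach = cev_update(cev);

     __remove_wait_queue(cev->head, &cev->wait);  /* off the child's queue */
     if (detach)
          list_del(&cev->cev_list);               /* off the parent's list */

     wake_up(cev->parent);                        /* let the parent collate */
}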

And for the parent queue, a cev_wait would be registered at the beginning,
with its wait.func() set up to collate the sub-ios and let completion
proceed if the relevant criteria are met. It can reach all the child
cev_waits through the cev_list links, which is useful for aggregating data
from all children.
During i/o cancellation, the status of the parent object is set to indicate
cancellation and a wakeup is issued on its wait queue. The parent
cev_wait's wakeup function, if it recognizes the cancel, would then cancel
all the sub-events.
(Is there a nice way to access the object's status from the wakeup function
that doesn't involve subsystem specific code ? )
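The parent side might look roughly like this (parent_cancelled(), collate()
and cancel_child() are illustrative stand-ins for the non-generic pieces
discussed just below):

extern int parent_cancelled(struct cev_wait *cev);  /* checks object status */
extern int collate(struct cev_wait *cev);           /* subsystem specific   */
extern void cancel_child(struct cev_wait *child);   /* cancels one sub-io   */

static void cev_parent_wakeup(wait_queue_t *wait)
{
     struct cev_wait *pcev = (struct cev_wait *)wait;
     struct list_head *p;

     if (parent_cancelled(pcev)) {
          /* cascade the cancel down to all the sub-events */
          list_for_each(p, &pcev->cev_list)
               cancel_child(list_entry(p, struct cev_wait, cev_list));
          return;
     }
     if (!collate(pcev))       /* not all sub-ios accounted for yet */
          return;
     /* criteria met - completion of the parent object can proceed */
}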

So, it is the step of collating ios and deciding whether to proceed  which
is situation/subsystem specific. Similarly, the actual operation
cancellation logic (e.g cancelling the underlying io request) is also
non-generic.

For this reason, I was toying with the option of introducing two function
pointers - complete() and cancel() - in the cev_wait structure, so that the
rest of the logic in the wakeup function can be kept common. Does that make
sense ?

Need to define routines for initializing and setting up parent-child
cev_waits.

Right now this assumes that the changes suggested in my last posting can be
made. So I still need to think about whether there is a way to address the
cache efficiency issue (that's a little hard).

Regards
Suparna

  Suparna Bhattacharya
  Systems Software Group, IBM Global Services, India
  E-mail : bsuparna@in.ibm.com
  Phone : 91-80-5267117, Extn : 2525


* Re: [Kiobuf-io-devel] RFC: Kernel mechanism: Compound event wait /notify + callback chains
From: bsuparna @ 2001-02-01  4:55 UTC
  To: Stephen C. Tweedie; +Cc: linux-kernel, kiobuf-io-devel, Stephen Tweedie



>My first comment is that this looks very heavyweight indeed.  Isn't it
>just over-engineered?

Yes, I know it is, in its current form (sigh !).

But at the same time, I do not want to give up (not yet, at least) on
trying to arrive at something that can serve the objectives, and yet be
simple in principle and lightweight too. I feel the need may  grow as we
have more filter layers coming in, and as async i/o and even i/o
cancellation usage increases. And it may not be just with kiobufs ...

I took a second pass at it last night based on Ben's wait queue extensions.
I will write that up in a separate note after this. Do let me know if it
seems like any improvement at all.

>We _do_ need the ability to stack completion events, but as far as the
>kiobuf work goes, my current thoughts are to do that by stacking
>lightweight "clone" kiobufs.

Would that work with stackable filesystems ?

>
>The idea is that completion needs to pass upwards (a)
>bytes-transferred, and (b) errno, to satisfy the caller: everything
>else, including any private data, can be hooked by the caller off the
>kiobuf private data (or in fact the caller's private data can embed
>the clone kiobuf).
>
>A clone kiobuf is a simple header, nothing more, nothing less: it
>shares the same page vector as its parent kiobuf.  It has private
>length/offset fields, so (for example) a LVM driver can carve the
>parent kiobuf into multiple non-overlapping children, all sharing the
>same page list but each one actually referencing only a small region
>of the whole.
>
>That ought to clean up a great deal of the problems of passing kiobufs
>through soft raid, LVM or loop drivers.
>
>I am tempted to add fields to allow the children of a kiobuf to be
>tracked and identified, but I'm really not sure it's necessary so I'll
>hold off for now.  We already have the "io-count" field which
>enumerates sub-ios, so we can define each child to count as one such
>sub-io; and adding a parent kiobuf reference to each kiobuf makes a
>lot of sense if we want to make it easy to pass callbacks up the
>stack.  More than that seems unnecessary for now.

Being able to track the children of a kiobuf would help with I/O
cancellation (e.g. to pull sub-ios off their request queues if I/O
cancellation for the parent kiobuf was issued). Not essential, I guess, in
general, but useful in some situations.
With clone kiobufs, there is no direct way to reach a clone given the
original kiobuf (without adding some indexing scheme).

>
>--Stephen



* Re: [Kiobuf-io-devel] RFC: Kernel mechanism: Compound event wait  /notify + callback chains
From: bsuparna @ 2001-02-01  3:59 UTC
  To: Stephen C. Tweedie; +Cc: Ben LaHaise, linux-kernel, kiobuf-io-devel



>Hi,
>
>On Wed, Jan 31, 2001 at 07:28:01PM +0530, bsuparna@in.ibm.com wrote:
>>
>> Do the following modifications to your wait queue extension sound
>> reasonable ?
>>
>> 1. Change add_wait_queue to add elements to the end of queue (fifo, by
>> default) and instead have an add_wait_queue_lifo() routine that adds to
the
>> head of the queue ?
>
>Cache efficiency: you wake up the task whose data set is most likely
>to be in L1 cache by waking it before its triggering event is flushed
>from cache.
>
>--Stephen

Valid point.


* Re: [Kiobuf-io-devel] RFC: Kernel mechanism: Compound event wait /notify + callback chains
From: bsuparna @ 2001-01-31 13:58 UTC
  To: Ben LaHaise; +Cc: Stephen C. Tweedie, linux-kernel, kiobuf-io-devel


>The waitqueue extension below is a minimalist approach for providing
>kernel support for fully asynchronous io.  The basic idea is that a
>function pointer is added to the wait queue structure that is called
>during wake_up on a wait queue head.  (The patch below also includes
>support for exclusive lifo wakeups, which isn't crucial/perfect, but just
>happened to be part of the code.)  No function pointer or other data is
>added to the wait queue structure.  Rather, users are expected to make use
>of it by embedding the wait queue structure within their own data
>structure that contains all needed info for running the state machine.

>I suspect that chaining of events should be built on top of the
>primitives, which should be kept as simple as possible.  Comments?

Do the following modifications to your wait queue extension sound
reasonable ?

1. Change add_wait_queue to add elements to the end of the queue (fifo, by
default) and instead have an add_wait_queue_lifo() routine that adds to the
head of the queue ?
  [This will help avoid the problem of waiters getting woken up before LIFO
wakeup functions have run, just because the wait happened to have been
issued after the LIFO callbacks were registered, for example, while an IO
is going on]
   Or is there a reason why add_wait_queue adds elements to the head by
default ?

2. Pass the wait_queue_head pointer as a parameter to the wakeup function
(in addition to wait queue entry pointer).
[This will make it easier for the wakeup function to access the structure
in which the wait queue is embedded, i.e. the object which the wait queue
is associated with. Without this, we might have to store a pointer to this
object in each element linked in the wait queue. This never was a problem
with sleeping waiters, because a reference to the object being waited for
would have been on the waiter's stack/context, but with wakeup functions
there is no such context]

3. Have __wake_up_common break out of the loop if the wakeup function
returns 1 (or some other value) ?
[This makes it possible to abort the loop based on conditional logic in the
wakeup function]
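To illustrate 2 and 3 together, the changes might look roughly like this
against Ben's patch (just a sketch, not a tested diff):

typedef int (*wait_queue_func_t)(wait_queue_head_t *head, wait_queue_t *wait);

     /* ... inside __wake_up_common()'s walk of the queue ... */
     func = curr->func;
     if (func) {
          unsigned flags = curr->flags;
          if (func(q, curr))       /* 3: non-zero return aborts the walk */
               break;
          if (flags & WQ_FLAG_EXCLUSIVE && !--nr_exclusive)
               break;
          continue;
     }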


Regards
Suparna


  Suparna Bhattacharya
  Systems Software Group, IBM Global Services, India
  E-mail : bsuparna@in.ibm.com
  Phone : 91-80-5267117, Extn : 2525




* Re: [Kiobuf-io-devel] RFC: Kernel mechanism: Compound event wait /notify + callback chains
From: bsuparna @ 2001-01-30 14:09 UTC
  To: Ben LaHaise; +Cc: linux-kernel, kiobuf-io-devel


Ben,

This indeed looks neat and simple !
I'd avoided touching the wait queue structure as I suspected that you might
already have something like this in place :-)
and was hoping that you'd see this posting and comment.
I agree entirely that it makes sense to have chaining of events built over
simple minimalist primitives. That's what was making me uncomfortable with
the cev design I had.

So now I'm thinking about how to do this using the wait queue extension you
have. Some things to consider:
     - Since non-exclusive waiters are always added to the head of the
queue (unless we use a tq in a wtd kind of structure), ordering of
layered callbacks might still be a problem. (e.g. with an encryption filter
fs, we want the decrypt callback to run before any waiter gets woken up,
irrespective of whether the wait was issued before or after the decrypt
callback was added by the filter layer)
     - The wait_queue_func gets only a pointer to the wait structure as an
argument, with no other means to pass any state about the sub-event that
caused it (could that be a problem with event chaining ... ? every
encapsulating structure will have to maintain a pointer to the related
sub-event ... ?)

Regards
Suparna


  Suparna Bhattacharya
  Systems Software Group, IBM Global Services, India
  E-mail : bsuparna@in.ibm.com
  Phone : 91-80-5267117, Extn : 2525


Ben LaHaise <bcrl@redhat.com> on 01/30/2001 10:59:46 AM

Please respond to Ben LaHaise <bcrl@redhat.com>

To:   Suparna Bhattacharya/India/IBM@IBMIN
cc:   linux-kernel@vger.kernel.org, kiobuf-io-devel@lists.sourceforge.net
Subject:  Re: [Kiobuf-io-devel] RFC:  Kernel mechanism: Compound event
      wait/notify + callback chains




On Tue, 30 Jan 2001 bsuparna@in.ibm.com wrote:

>
> Comments, suggestions, advise, feedback solicited !
>
> If this seems like something that might (after some refinements) be a
> useful abstraction to have, then I need some help in straightening out
the
> design. I am not very satisfied with it in its current form.

Here's my first bit of feedback from the point of "this is what my code
currently does and why".

The waitqueue extension below is a minimalist approach for providing
kernel support for fully asynchronous io.  The basic idea is that a
function pointer is added to the wait queue structure that is called
during wake_up on a wait queue head.  (The patch below also includes
support for exclusive lifo wakeups, which isn't crucial/perfect, but just
happened to be part of the code.)  No function pointer or other data is
added to the wait queue structure.  Rather, users are expected to make use
of it by embedding the wait queue structure within their own data
structure that contains all needed info for running the state machine.

Here's a snippet of code which demonstrates a non blocking lock of a page
cache page:

struct worktodo {
     wait_queue_t            wait;
     struct tq_struct        tq;
     void *data;
};

static void __wtd_lock_page_waiter(wait_queue_t *wait)
{
        struct worktodo *wtd = (struct worktodo *)wait;
        struct page *page = (struct page *)wtd->data;

        if (!TryLockPage(page)) {
                __remove_wait_queue(&page->wait, &wtd->wait);
                wtd_queue(wtd);
        } else {
                schedule_task(&run_disk_tq);
        }
}

void wtd_lock_page(struct worktodo *wtd, struct page *page)
{
        if (TryLockPage(page)) {
                int raced = 0;
                wtd->data = page;
                init_waitqueue_func_entry(&wtd->wait,
                                          __wtd_lock_page_waiter);
                add_wait_queue_cond(&page->wait, &wtd->wait,
                                    TryLockPage(page), raced = 1);

                if (!raced) {
                        run_task_queue(&tq_disk);
                        return;
                }
        }

        wtd->tq.routine(wtd->tq.data);
}


The use of wakeup functions is also useful for waking a specific reader or
writer in the rw_sems, making semaphores avoid spurious wakeups, etc.

I suspect that chaining of events should be built on top of the
primitives, which should be kept as simple as possible.  Comments?

          -ben


diff -urN v2.4.1pre10/include/linux/mm.h work/include/linux/mm.h
--- v2.4.1pre10/include/linux/mm.h Fri Jan 26 19:03:05 2001
+++ work/include/linux/mm.h   Fri Jan 26 19:14:07 2001
@@ -198,10 +198,11 @@
  */
 #define UnlockPage(page)     do { \
                         smp_mb__before_clear_bit(); \
+                        if (!test_bit(PG_locked, &(page)->flags)) { printk("last: %p\n", (page)->last_unlock); BUG(); } \
+                        (page)->last_unlock = current_text_addr(); \
                         if (!test_and_clear_bit(PG_locked, &(page)->flags)) BUG(); \
                         smp_mb__after_clear_bit(); \
-                        if (waitqueue_active(&page->wait)) \
-                             wake_up(&page->wait); \
+                        wake_up(&page->wait); \
                    } while (0)
 #define PageError(page)      test_bit(PG_error, &(page)->flags)
 #define SetPageError(page)   set_bit(PG_error, &(page)->flags)
diff -urN v2.4.1pre10/include/linux/sched.h work/include/linux/sched.h
--- v2.4.1pre10/include/linux/sched.h    Fri Jan 26 19:03:05 2001
+++ work/include/linux/sched.h     Fri Jan 26 19:14:07 2001
@@ -751,6 +751,7 @@

 extern void FASTCALL(add_wait_queue(wait_queue_head_t *q, wait_queue_t * wait));
 extern void FASTCALL(add_wait_queue_exclusive(wait_queue_head_t *q, wait_queue_t * wait));
+extern void FASTCALL(add_wait_queue_exclusive_lifo(wait_queue_head_t *q, wait_queue_t * wait));
 extern void FASTCALL(remove_wait_queue(wait_queue_head_t *q, wait_queue_t * wait));

 #define __wait_event(wq, condition)                         \
diff -urN v2.4.1pre10/include/linux/wait.h work/include/linux/wait.h
--- v2.4.1pre10/include/linux/wait.h     Thu Jan  4 17:50:46 2001
+++ work/include/linux/wait.h Fri Jan 26 19:14:06 2001
@@ -43,17 +43,20 @@
 } while (0)
 #endif

+typedef struct __wait_queue wait_queue_t;
+typedef void (*wait_queue_func_t)(wait_queue_t *wait);
+
 struct __wait_queue {
     unsigned int flags;
 #define WQ_FLAG_EXCLUSIVE    0x01
     struct task_struct * task;
     struct list_head task_list;
+    wait_queue_func_t func;
 #if WAITQUEUE_DEBUG
     long __magic;
     long __waker;
 #endif
 };
-typedef struct __wait_queue wait_queue_t;

 /*
  * 'dual' spinlock architecture. Can be switched between spinlock_t and
@@ -110,7 +113,7 @@
 #endif

 #define __WAITQUEUE_INITIALIZER(name,task) \
-    { 0x0, task, { NULL, NULL } __WAITQUEUE_DEBUG_INIT(name)}
+    { 0x0, task, { NULL, NULL }, NULL __WAITQUEUE_DEBUG_INIT(name)}
 #define DECLARE_WAITQUEUE(name,task) \
     wait_queue_t name = __WAITQUEUE_INITIALIZER(name,task)

@@ -144,6 +147,22 @@
 #endif
     q->flags = 0;
     q->task = p;
+    q->func = NULL;
+#if WAITQUEUE_DEBUG
+    q->__magic = (long)&q->__magic;
+#endif
+}
+
+static inline void init_waitqueue_func_entry(wait_queue_t *q,
+                        wait_queue_func_t func)
+{
+#if WAITQUEUE_DEBUG
+    if (!q || !func)
+         WQ_BUG();
+#endif
+    q->flags = 0;
+    q->task = NULL;
+    q->func = func;
 #if WAITQUEUE_DEBUG
     q->__magic = (long)&q->__magic;
 #endif
@@ -200,6 +219,19 @@
 #endif
     list_del(&old->task_list);
 }
+
+#define add_wait_queue_cond(q, wait, cond, fail) \
+    do {                                \
+         unsigned long flags;                     \
+         wq_write_lock_irqsave(&(q)->lock, flags);     \
+         (wait)->flags = 0;                  \
+         if (cond)                      \
+              __add_wait_queue((q), (wait));           \
+         else {                              \
+              fail;                          \
+         }                              \
+         wq_write_unlock_irqrestore(&(q)->lock, flags);     \
+    } while (0)

 #endif /* __KERNEL__ */

diff -urN v2.4.1pre10/kernel/fork.c work/kernel/fork.c
--- v2.4.1pre10/kernel/fork.c Fri Jan 26 19:03:05 2001
+++ work/kernel/fork.c   Fri Jan 26 19:06:29 2001
@@ -44,6 +44,16 @@
     wq_write_unlock_irqrestore(&q->lock, flags);
 }

+void add_wait_queue_exclusive_lifo(wait_queue_head_t *q, wait_queue_t * wait)
+{
+    unsigned long flags;
+
+    wq_write_lock_irqsave(&q->lock, flags);
+    wait->flags = WQ_FLAG_EXCLUSIVE;
+    __add_wait_queue(q, wait);
+    wq_write_unlock_irqrestore(&q->lock, flags);
+}
+
 void add_wait_queue_exclusive(wait_queue_head_t *q, wait_queue_t * wait)
 {
     unsigned long flags;
diff -urN v2.4.1pre10/kernel/sched.c work/kernel/sched.c
--- v2.4.1pre10/kernel/sched.c     Fri Jan 26 19:03:05 2001
+++ work/kernel/sched.c  Fri Jan 26 19:10:19 2001
@@ -714,12 +714,22 @@
     while (tmp != head) {
          unsigned int state;
                wait_queue_t *curr = list_entry(tmp, wait_queue_t, task_list);
+         wait_queue_func_t func;

          tmp = tmp->next;

 #if WAITQUEUE_DEBUG
          CHECK_MAGIC(curr->__magic);
 #endif
+         func = curr->func;
+         if (func) {
+              unsigned flags = curr->flags;
+              func(curr);
+              if (flags & WQ_FLAG_EXCLUSIVE && !--nr_exclusive)
+                   break;
+              continue;
+         }
+
          p = curr->task;
          state = p->state;
          if (state & mode) {

