From: bsuparna@in.ibm.com
To: "Stephen C. Tweedie" <sct@redhat.com>
Cc: linux-kernel@vger.kernel.org,
	kiobuf-io-devel@lists.sourceforge.net,
	Alan Cox <alan@lxorguk.ukuu.org.uk>,
	Christoph Hellwig <hch@caldera.de>, Andi Kleen <ak@suse.de>
Subject: Re: [Kiobuf-io-devel] RFC: Kernel mechanism: Compound event wait  /notify + callback chains
Date: Mon, 5 Feb 2001 20:01:45 +0530	[thread overview]
Message-ID: <CA2569EA.00506354.00@d73mta03.au.ibm.com> (raw)



>Hi,
>
>On Sun, Feb 04, 2001 at 06:54:58PM +0530, bsuparna@in.ibm.com wrote:
>>
>> Can't we define a kiobuf structure as just this ? A combination of a
>> frag_list and a page_list ?
>

>Then all code which needs to accept an arbitrary kiobuf needs to be
>able to parse both --- ugh.
>

Making this a little more explicit to help analyse tradeoffs:

/* Memory descriptor portion of a kiobuf - this is something that may get
 * passed around between layers and subsystems */
struct kio_mdesc {
     int nr_frags;
     struct frag *frag_list;
     int nr_pages;
     struct page **page_list;
     /* variable-length list area follows */
};
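
(Here I'm assuming struct frag is essentially just an <offset, len> pair,
something like the following - the exact field types are only illustrative:)

struct frag {
     unsigned long offset;   /* byte offset of this fragment in the page run */
     unsigned long len;      /* length of this fragment in bytes */
};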

For block i/o requiring #1 type descriptors, the list area could be
allocated with extra space for:

struct kio_type1_ext {
     struct frag frag;
     struct page *pages[NUM_STATIC_PAGES];
};

For n/w i/o or other cases requiring #2 type descriptors, the list area
could be allocated with extra space for:

struct kio_type2_ext {
     struct frag frags[NUM_STATIC_FRAGS];
     struct page *pages[NUM_STATIC_FRAGS];
};


struct kiobuf {
     int                 status;
     wait_queue_head_t   waitq;
     struct kio_mdesc    mdesc;
     /* variable-length area follows - leaves room for mem descs,
      * completion sub-structs, etc. */
};
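
So, for instance, a block i/o kiobuf could be allocated in one shot with
room for its #1 extension tacked on at the end. Just a rough sketch to show
what I mean (kio_alloc_type1 is a made-up helper, not an existing routine):

/* Illustrative only: allocate a kiobuf plus its type #1 extension in one
 * kmalloc, and point the mdesc fields into the trailing space. */
struct kiobuf *kio_alloc_type1(void)
{
     struct kiobuf *kiobuf;
     struct kio_type1_ext *ext;

     kiobuf = kmalloc(sizeof(*kiobuf) + sizeof(*ext), GFP_KERNEL);
     if (!kiobuf)
          return NULL;
     ext = (struct kio_type1_ext *)(kiobuf + 1);

     kiobuf->status = 0;
     init_waitqueue_head(&kiobuf->waitq);
     kiobuf->mdesc.nr_frags = 1;
     kiobuf->mdesc.frag_list = &ext->frag;
     kiobuf->mdesc.nr_pages = 0;
     kiobuf->mdesc.page_list = ext->pages;
     return kiobuf;
}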

Code that accepts an arbitrary kiobuf needs to do the following:
     process the fragments one by one
          - type #1 case: typically only one fragment is present, but
processing it involves walking every page in the page list.
               The extra processing, vs a kiobuf with a single <offset, len>
pair, involves:
                    dereferencing the frag_list pointer
                    checking the nr_frags field
          - type #2 case: the number of fragments is equal to or greater
than the number of pages, so processing typically goes over each fragment
and thus crosses each page in the list one by one.
               The extra processing, vs a kiobuf with per-page <offset, len>
pairs, involves:
                    dereferencing the page list entry (which requires
computing the page index in page_list from the offset value)
                    checking that offset+len does not fall outside the page
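
In other words, the common traversal code could look roughly like the
sketch below (process_page() is just a placeholder for whatever per-piece
work the layer actually does, and the helper name is made up):

/* Rough sketch: walk an arbitrary kio_mdesc fragment by fragment,
 * resolving each piece to a (page, offset, len) tuple. */
static int kio_for_each_piece(struct kio_mdesc *md)
{
     int f;

     for (f = 0; f < md->nr_frags; f++) {
          unsigned long offset = md->frag_list[f].offset;
          unsigned long remaining = md->frag_list[f].len;

          while (remaining) {
               unsigned long idx = offset >> PAGE_SHIFT;
               unsigned long page_off = offset & ~PAGE_MASK;
               unsigned long count = PAGE_SIZE - page_off;
               struct page *page;

               /* check that the fragment doesn't run off the page list */
               if (idx >= md->nr_pages)
                    return -EINVAL;
               page = md->page_list[idx];
               if (count > remaining)
                    count = remaining;

               process_page(page, page_off, count);

               offset += count;
               remaining -= count;
          }
     }
     return 0;
}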


This boils down to approximately one extra dereference and one extra
comparison per kiobuf for the common cases (have I missed something
critical?), compared with the most optimized choice of descriptor for each
of those cases.

In terms of resource consumption (extra bytes taken up), it costs two extra
fields per kiobuf chain (e.g. nr_frags and the frag_list pointer in the #1
case), i.e. 4 + 4 = 8 bytes on a 32-bit machine, for the common cases vs
the most optimized choice of structures for those cases.

This seems more uniformly balanced across the #1 and #2 cases than either
an <offset, len> for every page or a single overall <offset, len>. But
then, come to think of it, since the need for lightweight structures is
greater in the #2 case, should the point of balance (if we want to find one
at all) be tilted towards #2?

On the other hand, since having a common structure does involve extra bytes
and cycles, if there are very few situations where we need both #1 and #2,
then converting only at subsystem boundaries (the way i2o does) may turn
out to be better.

Oh well ...


>> BTW, We could have a higher level io container that includes a <status>
>> field and a <wait_queue_head> to take care of i/o completion
>
>IO completion requirements are much more complex.  Think of disk
>readahead: we can create a single request struct for an IO of a
>hundred buffer heads, and as the device driver satisfies that request,
>it wakes up the buffer heads as it goes.  There is a separate
>completion notification for every single buffer head in the chain.
>
I understand the requirement of independent completion notifiers for higher
level buffers/other structures, since they are indeed independently usable
structures. That was one aspect that I thought I was able to address in the
cev_wait design based on wait_queue wakeup functions.
The way it would work is that there would be multiple wakeup functions
registered on the container for the big request, each wakeup function being
responsible for waking up one higher level buffer. This way, the linkage
information is actually external to the buffer structures (which seems
reasonable, since it is only required while the i/o is happening, unless
there is another reason to keep a more lasting association).
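
Roughly, the picture I have in mind is something like the following (purely
illustrative pseudo-C - the cev_* names are hypothetical, not an existing
interface):

/* One entry per higher level buffer covered by the big request; the
 * request <-> buffer linkage lives only in these external entries. */
struct cev_waiter {
     struct list_head list;
     void (*wakeup)(struct cev_waiter *w, int status);
     void *private;      /* e.g. a buffer_head or page, opaque to the driver */
};

struct cev_wait {
     spinlock_t lock;
     struct list_head waiters;  /* wakeup functions registered on the container */
};

/* Higher layer registers one wakeup per buffer before issuing the i/o. */
static void cev_add_waiter(struct cev_wait *cev, struct cev_waiter *w)
{
     spin_lock(&cev->lock);
     list_add_tail(&w->list, &cev->waiters);
     spin_unlock(&cev->lock);
}

/* Lower layer fires the registered wakeups as completion happens; it never
 * needs to know what kind of higher level structure sits behind each one. */
static void cev_wakeup_all(struct cev_wait *cev, int status)
{
     struct list_head *p, *n;

     spin_lock(&cev->lock);
     list_for_each_safe(p, n, &cev->waiters) {
          struct cev_waiter *w = list_entry(p, struct cev_waiter, list);

          list_del(p);
          w->wakeup(w, status);
     }
     spin_unlock(&cev->lock);
}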

>It's the very essence of readahead that we wake up the earlier buffers
>as soon as they become available, without waiting for the later ones
>to complete, so we _need_ this multiple completion concept.
>

I can understand this in principle, but when a single request that actually
fills in multiple buffers goes down to the device, do we get notified
(interrupted) by the device before all the data in that request has been
transferred? I mean, how do we know that some buffers have become available
until the overall device request has completed (unless, of course, the
request actually gets broken up at this level and completed bit by bit)?


>Which is exactly why we have one kiobuf per higher-level buffer, and
>we chain together kiobufs when we need to for a long request, but we
>still get the independent completion notifiers.

As I mentioned above, the alternative is to keep the i/o completion related
linkage information within the wakeup structures instead. That way, it
doesn't matter to the lower level driver what higher level structure we
have above (maybe buffer heads, maybe page cache structures, maybe
kiobufs). We only chain together memory descriptors for the buffers during
the i/o.

>
>Cheers,
> Stephen



-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
Please read the FAQ at http://www.tux.org/lkml/
