* splice methods in character device driver
@ 2009-05-11 14:40 Steve Rottinger
  2009-05-11 19:22 ` Jens Axboe
  0 siblings, 1 reply; 19+ messages in thread
From: Steve Rottinger @ 2009-05-11 14:40 UTC (permalink / raw)
  To: linux-kernel

Hi,

Has anyone successfully implemented the splice() methods in a character
device driver?
I'm having a tough time finding any existing drivers that implement these
methods which I can use as an example. Specifically, it is unclear to me
how I need to set up .ops in the splice_pipe_desc when using splice_to_pipe().
My ultimate goal is to use splice to move data from a high-speed data
acquisition device, which has a buffer in PCI space, to disk without the
need for going through block memory.

Thanks,

-Steve



* Re: splice methods in character device driver
  2009-05-11 14:40 splice methods in character device driver Steve Rottinger
@ 2009-05-11 19:22 ` Jens Axboe
  2009-05-13 16:59   ` Steve Rottinger
  2009-06-06 21:25   ` Leon Woestenberg
  0 siblings, 2 replies; 19+ messages in thread
From: Jens Axboe @ 2009-05-11 19:22 UTC (permalink / raw)
  To: Steve Rottinger; +Cc: linux-kernel

On Mon, May 11 2009, Steve Rottinger wrote:
> Hi,
> 
> Has anyone successfully implemented the splice() methods in a character
> device driver?
> I'm having a tough time finding any existing drivers that implement these
> methods which I can use as an example. Specifically, it is unclear to me
> how I need to set up .ops in the splice_pipe_desc when using splice_to_pipe().
> My ultimate goal is to use splice to move data from a high-speed data
> acquisition device, which has a buffer in PCI space, to disk without the
> need for going through block memory.

I implemented ->splice_write() for /dev/null for testing purposes, but I
doubt that you'll find much inspiration there.

To use splice_to_pipe(), basically all you need to do is provide some
way of stuffing the data pages in question into a struct page *pages[].
See fs/splice.c:vmsplice_to_pipe(), for instance. Then you need to
provide a way to ensure that these pages can be settled if they need to
be accessed. Splice doesn't require that the IO is completed on the
pages before they are put in the pipe; that's part of the power of the
design. So if your design is allocating the pages in the ->splice_read()
handler and initiating IO to these pages, then you need to provide a
suitable ->confirm() hook that can wait on this IO to complete if
needed. ->map() and ->unmap() can typically use the generic functions,
ditto ->release(). You can implement ->steal() easily if you use the
method of dynamically allocating pages for each IO instead of reusing
them.

So it should not be very hard, your best inspiration is likely to be
found in fs/splice.c itself.
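
For reference, a rough sketch of the shape this takes on a kernel of this
vintage -- everything prefixed "my_" is a placeholder for driver-specific
code, error handling is trimmed, and the ops reuse the generic helpers from
fs/splice.c and fs/pipe.c where possible:

#include <linux/fs.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/pipe_fs_i.h>
#include <linux/splice.h>

/* wait here for the device IO filling this page, if it hasn't finished yet */
static int my_pipe_buf_confirm(struct pipe_inode_info *pipe,
			       struct pipe_buffer *buf)
{
	/* e.g. wait_event_interruptible(my_dev->wq, page_is_ready(buf)); */
	return 0;
}

static void my_pipe_buf_release(struct pipe_inode_info *pipe,
				struct pipe_buffer *buf)
{
	put_page(buf->page);
}

static const struct pipe_buf_operations my_pipe_buf_ops = {
	.can_merge	= 0,
	.map		= generic_pipe_buf_map,
	.unmap		= generic_pipe_buf_unmap,
	.confirm	= my_pipe_buf_confirm,
	.release	= my_pipe_buf_release,
	.steal		= generic_pipe_buf_steal,
	.get		= generic_pipe_buf_get,
};

static void my_spd_release(struct splice_pipe_desc *spd, unsigned int i)
{
	put_page(spd->pages[i]);
}

static ssize_t my_splice_read(struct file *in, loff_t *ppos,
			      struct pipe_inode_info *pipe, size_t len,
			      unsigned int flags)
{
	struct page *pages[PIPE_BUFFERS];
	struct partial_page partial[PIPE_BUFFERS];
	struct splice_pipe_desc spd = {
		.pages		= pages,
		.partial	= partial,
		.flags		= flags,
		.ops		= &my_pipe_buf_ops,
		.spd_release	= my_spd_release,
	};
	int i;

	for (i = 0; i < PIPE_BUFFERS && len; i++) {
		pages[i] = alloc_page(GFP_KERNEL);
		if (!pages[i])
			break;
		partial[i].offset = 0;
		partial[i].len = min_t(size_t, len, PAGE_SIZE);
		len -= partial[i].len;
		/* kick off device IO into pages[i]; ->confirm() waits for it */
	}
	spd.nr_pages = i;

	return splice_to_pipe(pipe, &spd);
}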

-- 
Jens Axboe



* Re: splice methods in character device driver
  2009-05-11 19:22 ` Jens Axboe
@ 2009-05-13 16:59   ` Steve Rottinger
  2009-06-03 21:32     ` Leon Woestenberg
  2009-06-06 21:25   ` Leon Woestenberg
  1 sibling, 1 reply; 19+ messages in thread
From: Steve Rottinger @ 2009-05-13 16:59 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel

Thanks!  This really helped me to progress using this interface.  My
current problem is passing the pages into splice_to_pipe().  The pages are
associated with a PCI BAR, not main memory.  I'm wondering if this could be
a problem?  Within my driver, I use ioremap() to map in the desired PCI BAR
space, then use virt_to_page() to generate the page array based on the
address returned from ioremap().  I can see the correct pages being received
on the other end (by instrumenting the splice_to_file() routine).  However,
the data in the resulting outfile does not match the data contained within
my PCI BAR.  I see that splice_to_file() uses kmap_atomic() to map in the
page before performing a memcpy.  Is this going to be valid on a page that
doesn't reside in main memory?


Jens Axboe wrote:
> On Mon, May 11 2009, Steve Rottinger wrote:
>   
>> Hi,
>>
>> Has anyone successfully implemented the splice() methods in a character
>> device driver?
>> I'm having a tough time finding any existing drivers that implement these
>> methods which I can use as an example. Specifically, it is unclear to me
>> how I need to set up .ops in the splice_pipe_desc when using splice_to_pipe().
>> My ultimate goal is to use splice to move data from a high-speed data
>> acquisition device, which has a buffer in PCI space, to disk without the
>> need for going through block memory.
>>     
>
> I implemented ->splice_write() for /dev/null for testing purposes, but I
> doubt that you'll find much inspiration there.
>
> To use splice_to_pipe(), basically all you need to do is provide some
> way of stuffing the data pages in question into a struct page *pages[].
> See fs/splice.c:vmsplice_to_pipe(), for instance. Then you need to
> provide a way to ensure that these pages can be settled if they need to
> be accessed. Splice doesn't require that the IO is completed on the
> pages before they are put in the pipe, that's part of the power of the
> design. So if your design is allocating the pages in the ->splice_read()
> handler and initiating IO to these pages, then you need to provide a
> suitable ->confirm() hook that can wait on this IO to complete if
> needed. ->map() and ->unmap() can typically use the generic functions,
> ditto ->release(). You can implement ->steal() easily if you use the
> method of dynamically allocating pages for each IO instead of reusing
> them.
>
> So it should not be very hard, your best inspiration is likely to be
> found in fs/splice.c itself.
>
>   



* Re: splice methods in character device driver
  2009-05-13 16:59   ` Steve Rottinger
@ 2009-06-03 21:32     ` Leon Woestenberg
  2009-06-04  7:32       ` Jens Axboe
  0 siblings, 1 reply; 19+ messages in thread
From: Leon Woestenberg @ 2009-06-03 21:32 UTC (permalink / raw)
  To: Steve Rottinger; +Cc: Jens Axboe, linux-kernel

Hello all,

On Wed, May 13, 2009 at 6:59 PM, Steve Rottinger <steve@pentek.com> wrote:
> is passing in the pages into splice_to_pipe.  The pages are associated
> with a PCI BAR, not main memory.  I'm wondering if this could be a problem?
>
Good question; my newbie answer would be that the pages need to be mapped
in kernel space.

I have a similar use case but with memory being DMA'd to host main
memory (instead of the data sitting in your PCI device) in a character
device driver. The driver is a complete rewrite from scratch from
what's currently sitting-butt-ugly in staging/altpcichdma.c
so-please-don't-look-there.

I have already implemented zero-latency overlapping transfers in the
DMA engine (i.e. it never sits idle if async I/O is performed through
threads); now it would be really cool to add zero-copy.

What is it my driver is expected to do?

.splice_read:

- Allocate a bunch of single pages
- Create a scatter-gather list
- "stuff the data pages in question into a struct page *pages[]." a la
"fs/splice.c:vmsplice_to_pipe()"
- Start the DMA from the device to the pages (i.e. the transfer)
- Return.

.splice_write:

- Create a scatter-gather list

interrupt handler / DMA service routine:
- device bookkeeping
- wake_up_interruptible(transfer_queue)

.confirm():

"then you need to provide a suitable ->confirm() hook that can wait on
this IO to complete if needed."
- wait_event_interruptible(transfer_queue, ...)

.release():

- release the pages

.steal():

unsure

.map

unsure

Regards,
-- 
Leon


* Re: splice methods in character device driver
  2009-06-03 21:32     ` Leon Woestenberg
@ 2009-06-04  7:32       ` Jens Axboe
  2009-06-04 13:20         ` Steve Rottinger
  0 siblings, 1 reply; 19+ messages in thread
From: Jens Axboe @ 2009-06-04  7:32 UTC (permalink / raw)
  To: Leon Woestenberg; +Cc: Steve Rottinger, linux-kernel

On Wed, Jun 03 2009, Leon Woestenberg wrote:
> Hello all,
> 
> On Wed, May 13, 2009 at 6:59 PM, Steve Rottinger <steve@pentek.com> wrote:
> > is passing in the pages into splice_to_pipe.  The pages are associated
> > with a PCI BAR, not main memory.  I'm wondering if this could be a problem?
> >
> Good question; my newbie answer would be the pages need to be mapped
> in kernel space.

That is what the ->map() hook is for.

> I have a similar use case but with memory being DMA'd to host main
> memory (instead of the data sitting in your PCI device) in a character
> device driver. The driver is a complete rewrite from scratch from
> what's currently sitting-butt-ugly in staging/altpcichdma.c
> so-please-don't-look-there.
> 
> I have already implemented zero-latency overlapping transfers in the
> DMA engine (i.e. it never sits idle if async I/O is performed through
> threads), now it would be really cool to add zero-copy.
> 
> What is it my driver is expected to do?
> 
> .splice_read:
> 
> - Allocate a bunch of single pages
> - Create a scatter-gather list
> - "stuff the data pages in question into a struct page *pages[]." a la
> "fs/splice.c:vmsplice_to_pipe()"
> - Start the DMA from the device to the pages (i.e. the transfer)
> - Return.
> 
> .splice_write:
> 
> - Create a scatter-gather list
> 
> interrupt handler / DMA service routine:
> - device book keeping
> - wake_up_interruptible(transfer_queue)
> 
> .confirm():
> 
> "then you need to provide a suitable ->confirm() hook that can wait on
> this IO to complete if needed."
> - wait_on_event_interruptibe(transfer_queue)
> 
> .release():
> 
> - release the pages
> 
> .steal():
> 
> unsure

This is what allows zero copy throughout the pipeline. ->steal(), if
successful, should pass ownership of that page to the caller. The previous
owner must no longer modify it.
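
For pages the driver allocated per IO, a minimal ->steal() could look
roughly like the sketch below, modelled on fs/splice.c's
generic_pipe_buf_steal() of this era (illustrative only, not the one true
implementation):

static int my_pipe_buf_steal(struct pipe_inode_info *pipe,
			     struct pipe_buffer *buf)
{
	struct page *page = buf->page;

	/*
	 * Only hand over ownership if nobody else holds a reference;
	 * once we return 0 the driver must not touch the page again.
	 */
	if (page_count(page) == 1) {
		lock_page(page);
		return 0;
	}
	return 1;
}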

> .map
> 
> unsure

See above :-)

-- 
Jens Axboe



* Re: splice methods in character device driver
  2009-06-04  7:32       ` Jens Axboe
@ 2009-06-04 13:20         ` Steve Rottinger
  2009-06-12 19:21           ` Leon Woestenberg
  0 siblings, 1 reply; 19+ messages in thread
From: Steve Rottinger @ 2009-06-04 13:20 UTC (permalink / raw)
  To: linux-kernel; +Cc: Jens Axboe, Leon Woestenberg

Since I was working with a memory region that didn't have any "struct pages"
associated with it (or at least I wasn't able to find a way to retrieve them
for this space), I took the approach of generating fake struct pages, which
I passed through the pipe.  Unfortunately, this also required me to make
some rather heinous hacks to various kernel macros to get them to handle the
fake pages, e.g. page_to_phys().  I'm not sure if this was the best way to
do it, but it was the only way that I could come up with.  ->map didn't
help, since I am in O_DIRECT mode -- I wanted the disk controller's DMA to
transfer directly from PCI memory.

At this point, I have a proof of concept, since I am now able to transfer
some data directly from PCI space to disk; however, I am still wrestling
with some issues:

- I'm not sure at what point it is safe to free up the pages that I am
passing through the pipe.  I tried doing it in the "release" method;
however, this is apparently too soon, since it results in a crash.  How do
I know when the system is done with them?

- The performance is poor, and much slower than transferring directly from
main memory with O_DIRECT.  I suspect that this has a lot to do with the
large number of system calls required to move the data, since each call
moves only 64K.  Maybe I'll try increasing the pipe size next.

Once I get past these issues, and I get the code into a better state, I'll
be happy to share what I can.

-Steve



Jens Axboe wrote:
> On Wed, Jun 03 2009, Leon Woestenberg wrote:
>   
>> Hello all,
>>
>> On Wed, May 13, 2009 at 6:59 PM, Steve Rottinger <steve@pentek.com> wrote:
>>     
>>> is passing in the pages into splice_to_pipe.  The pages are associated
>>> with a PCI BAR, not main memory.  I'm wondering if this could be a problem?
>>>
>>>       
>> Good question; my newbie answer would be the pages need to be mapped
>> in kernel space.
>>     
>
> That is what the ->map() hook is for.
>
>   
>> I have a similar use case but with memory being DMA'd to host main
>> memory (instead of the data sitting in your PCI device) in a character
>> device driver. The driver is a complete rewrite from scratch from
>> what's currently sitting-butt-ugly in staging/altpcichdma.c
>> so-please-don't-look-there.
>>
>> I have already implemented zero-latency overlapping transfers in the
>> DMA engine (i.e. it never sits idle if async I/O is performed through
>> threads), now it would be really cool to add zero-copy.
>>
>> What is it my driver is expected to do?
>>
>> .splice_read:
>>
>> - Allocate a bunch of single pages
>> - Create a scatter-gather list
>> - "stuff the data pages in question into a struct page *pages[]." a la
>> "fs/splice.c:vmsplice_to_pipe()"
>> - Start the DMA from the device to the pages (i.e. the transfer)
>> - Return.
>>
>> .splice_write:
>>
>> - Create a scatter-gather list
>>
>> interrupt handler / DMA service routine:
>> - device book keeping
>> - wake_up_interruptible(transfer_queue)
>>
>> .confirm():
>>
>> "then you need to provide a suitable ->confirm() hook that can wait on
>> this IO to complete if needed."
>> - wait_on_event_interruptibe(transfer_queue)
>>
>> .release():
>>
>> - release the pages
>>
>> .steal():
>>
>> unsure
>>     
>
> This is what allows zero copy throughout the pipeline. ->steal(), if
> successful, should pass ownership of that page to the caller. The previous
> owner must no longer modify it.
>
>   
>> .map
>>
>> unsure
>>     
>
> See above :-)
>
>   



* Re: splice methods in character device driver
  2009-05-11 19:22 ` Jens Axboe
  2009-05-13 16:59   ` Steve Rottinger
@ 2009-06-06 21:25   ` Leon Woestenberg
  2009-06-08  7:05     ` Jens Axboe
  1 sibling, 1 reply; 19+ messages in thread
From: Leon Woestenberg @ 2009-06-06 21:25 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Steve Rottinger, linux-kernel

Hello Jens,

in the spirit of splice-and-dice I continue asking questions until I grasp this.

On Mon, May 11, 2009 at 9:22 PM, Jens Axboe<jens.axboe@oracle.com> wrote:
> So if your design is allocating the pages in the ->splice_read()
> handler and initiating IO to these pages, then you need to provide a
> suitable ->confirm() hook that can wait on this IO to complete if
> needed.

When the driver's splice_read() is called, the kernel wants the driver to
allocate pages and later check that they are filled with data through the
confirm() hook. Is that correct?

How can I pass information from the splice_read(), which spawns a hardware
DMA to the pages in my case, to the confirm() hook which is called at some
(random) time in the future?

static int splice_read(struct file *in, loff_t *ppos,
struct pipe_inode_info *pipe, size_t len, unsigned int flags)

static int alt_pipe_buf_confirm(struct pipe_inode_info *pipe,
struct pipe_buffer *buf)


Regards,
-- 
Leon


* Re: splice methods in character device driver
  2009-06-06 21:25   ` Leon Woestenberg
@ 2009-06-08  7:05     ` Jens Axboe
  2009-06-12 22:05       ` Leon Woestenberg
  0 siblings, 1 reply; 19+ messages in thread
From: Jens Axboe @ 2009-06-08  7:05 UTC (permalink / raw)
  To: Leon Woestenberg; +Cc: Steve Rottinger, linux-kernel

On Sat, Jun 06 2009, Leon Woestenberg wrote:
> Hello Jens,
> 
> in the spirit of splice-and-dice I continue asking questions until I grasp this.
> 
> On Mon, May 11, 2009 at 9:22 PM, Jens Axboe<jens.axboe@oracle.com> wrote:
> > So if your design is allocating the pages in the ->splice_read()
> > handler and initiating IO to these pages, then you need to provide a
> > suitable ->confirm() hook that can wait on this IO to complete if
> > needed.
> 
> When the driver's splice_read() is called, the kernel wants the driver to
> allocate pages and later check that they are filled with data through the
> confirm() hook. Is that correct?

Correct.

> How can I pass information from the splice_read(), which spawns a hardware
> DMA to the pages in my case, to the confirm() hook which is called at some
> (random) time in the future?

There's a ->private for each pipe_buffer, so you can pass along info on
a per-page granularity.
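
Concretely, splice_to_pipe() copies each splice_pipe_desc partial[] entry's
private value into the corresponding pipe_buffer, so a driver can stash a
pointer to its per-IO state there and pick it up again in ->confirm().  A
hedged sketch -- struct my_transfer and its wait/done members are made-up
driver state:

/* in ->splice_read(), while filling the splice_pipe_desc: */
	spd.partial[i].offset  = 0;
	spd.partial[i].len     = PAGE_SIZE;
	spd.partial[i].private = (unsigned long)transfer;	/* per-IO state */

/* later, in the pipe_buf ->confirm() hook: */
static int my_pipe_buf_confirm(struct pipe_inode_info *pipe,
			       struct pipe_buffer *buf)
{
	struct my_transfer *transfer = (struct my_transfer *)buf->private;

	/* block until the DMA that fills this page has completed */
	return wait_event_interruptible(transfer->wait, transfer->done);
}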

-- 
Jens Axboe



* Re: splice methods in character device driver
  2009-06-04 13:20         ` Steve Rottinger
@ 2009-06-12 19:21           ` Leon Woestenberg
  2009-06-12 19:59             ` Jens Axboe
  2009-06-12 20:45             ` Steve Rottinger
  0 siblings, 2 replies; 19+ messages in thread
From: Leon Woestenberg @ 2009-06-12 19:21 UTC (permalink / raw)
  To: Steve Rottinger; +Cc: linux-kernel, Jens Axboe

Steve, Jens,

another few questions:

On Thu, Jun 4, 2009 at 3:20 PM, Steve Rottinger<steve@pentek.com> wrote:
> ...
> - The performance is poor, and much slower than transferring directly from
> main memory with O_DIRECT.  I suspect that this has a lot to do with
> large amount of systems calls required to move the data, since each call moves only
> 64K.  Maybe I'll try increasing the pipe size, next.
>
> Once I get past these issues, and I get the code in a better state, I'll
> be happy to share what
> I can.
>
I've been experimenting a bit using mostly-empty functions to learn and
understand the function call flow:

splice_from_pipe(pipe, out, ppos, len, flags, pipe_to_device);
pipe_to_device(struct pipe_inode_info *pipe, struct pipe_buffer *buf,
struct splice_desc *sd)

So some back-of-a-coaster calculations:

If I understand correctly, a pipe_buffer never spans more than one
page (typically 4kB).

PIPE_BUFFERS is 16, thus splice_from_pipe() is called every 64kB.

The actor "pipe_to_device" is called on each pipe_buffer, so for every 4kB.

For my case, I have a DMA engine that does say 200 MB/s, resulting in
50000 actor calls per second.

As my use case would be to splice from an acquisition card to disk,
splice() seemed like an interesting approach.

However, if the above is correct, I assume splice() is not meant for
my use-case?


Regards,

Leon.






/* the actor which takes pages from the pipe to the device
 *
 * it must move a single struct pipe_buffer to the desired destination
 * Existing implementations are pipe_to_file, pipe_to_sendpage, pipe_to_user.
 */
static int pipe_to_device(struct pipe_inode_info *pipe, struct pipe_buffer *buf,
		 struct splice_desc *sd)
{
	int rc;
	printk(KERN_DEBUG "pipe_to_device(buf->offset=%d, sd->len=%d)\n",
		buf->offset, sd->len);
	/* make sure the data in this buffer is up-to-date */
	rc = buf->ops->confirm(pipe, buf);
	if (unlikely(rc))
		return rc;
	/* create a transfer for this buffer */

	/* report how many bytes of this buffer were consumed */
	return sd->len;
}
/* kernel wants to write from the pipe into our file at ppos */
ssize_t splice_write(struct pipe_inode_info *pipe, struct file *out,
loff_t *ppos, size_t len, unsigned int flags)
{
	int ret;
	printk(KERN_DEBUG "splice_write(len=%d)\n", len);
	ret = splice_from_pipe(pipe, out, ppos, len, flags, pipe_to_device);
	return ret;
}

-- 
Leon


* Re: splice methods in character device driver
  2009-06-12 19:21           ` Leon Woestenberg
@ 2009-06-12 19:59             ` Jens Axboe
  2009-06-12 20:45             ` Steve Rottinger
  1 sibling, 0 replies; 19+ messages in thread
From: Jens Axboe @ 2009-06-12 19:59 UTC (permalink / raw)
  To: Leon Woestenberg; +Cc: Steve Rottinger, linux-kernel

On Fri, Jun 12 2009, Leon Woestenberg wrote:
> Steve, Jens,
> 
> another few questions:
> 
> On Thu, Jun 4, 2009 at 3:20 PM, Steve Rottinger<steve@pentek.com> wrote:
> > ...
> > - The performance is poor, and much slower than transferring directly from
> > main memory with O_DIRECT.  I suspect that this has a lot to do with
> > large amount of systems calls required to move the data, since each call moves only
> > 64K.  Maybe I'll try increasing the pipe size, next.
> >
> > Once I get past these issues, and I get the code in a better state, I'll
> > be happy to share what
> > I can.
> >
> I've been experimenting a bit using mostly-empty functions to learn
> understand the function call flow:
> 
> splice_from_pipe(pipe, out, ppos, len, flags, pipe_to_device);
> pipe_to_device(struct pipe_inode_info *pipe, struct pipe_buffer *buf,
> struct splice_desc *sd)
> 
> So some back-of-a-coaster calculations:
> 
> If I understand correctly, a pipe_buffer never spans more than one
> page (typically 4kB).

Correct

> The SPLICE_BUFFERS is 16, thus splice_from_pipe() is called every 64kB.

Also correct.

> The actor "pipe_to_device" is called on each pipe_buffer, so for every 4kB.

Ditto.

> For my case, I have a DMA engine that does say 200 MB/s, resulting in
> 50000 actor calls per second.
> 
> As my use case would be to splice from an acquisition card to disk,
> splice() made an interesting approach.
> 
> However, if the above is correct, I assume splice() is not meant for
> my use-case?

50000 function calls per second is not a lot. We do lots of things on a
per-page basis in the kernel. Batching would of course speed things up,
but it's not been a problem thus far. So I would not worry about 50k
function calls per second to begin with.

-- 
Jens Axboe



* Re: splice methods in character device driver
  2009-06-12 19:21           ` Leon Woestenberg
  2009-06-12 19:59             ` Jens Axboe
@ 2009-06-12 20:45             ` Steve Rottinger
  2009-06-16 11:59               ` Jens Axboe
  1 sibling, 1 reply; 19+ messages in thread
From: Steve Rottinger @ 2009-06-12 20:45 UTC (permalink / raw)
  To: Leon Woestenberg; +Cc: linux-kernel, Jens Axboe

Hi Leon,

It does seem like a lot of code needs to be executed to move a small chunk
of data.  Although, I think that most of the overhead that I was
experiencing came from the cumulative overhead of each splice system call.
I increased my pipe size from 16 to 256 pages using Jens' pipe size patch,
and this had a huge effect -- the speed of my transfers more than doubled.
Pipe sizes larger than 256 pages cause my kernel to crash.

I'm doing about 300MB/s to my hardware RAID, running two instances of my
splice() copy application (one on each RAID channel).  I would like to
combine the two RAID channels using a software RAID 0; however, splice,
even from /dev/zero, runs horribly slowly to a software RAID device.  I'd
be curious to know if anyone else has tried this.

-Steve

Leon Woestenberg wrote:
> Steve, Jens,
>
> another few questions:
>
> On Thu, Jun 4, 2009 at 3:20 PM, Steve Rottinger<steve@pentek.com> wrote:
>   
>> ...
>> - The performance is poor, and much slower than transferring directly from
>> main memory with O_DIRECT.  I suspect that this has a lot to do with
>> large amount of systems calls required to move the data, since each call moves only
>> 64K.  Maybe I'll try increasing the pipe size, next.
>>
>> Once I get past these issues, and I get the code in a better state, I'll
>> be happy to share what
>> I can.
>>
>>     
> I've been experimenting a bit using mostly-empty functions to learn
> understand the function call flow:
>
> splice_from_pipe(pipe, out, ppos, len, flags, pipe_to_device);
> pipe_to_device(struct pipe_inode_info *pipe, struct pipe_buffer *buf,
> struct splice_desc *sd)
>
> So some back-of-a-coaster calculations:
>
> If I understand correctly, a pipe_buffer never spans more than one
> page (typically 4kB).
>
> The SPLICE_BUFFERS is 16, thus splice_from_pipe() is called every 64kB.
>
> The actor "pipe_to_device" is called on each pipe_buffer, so for every 4kB.
>
> For my case, I have a DMA engine that does say 200 MB/s, resulting in
> 50000 actor calls per second.
>
> As my use case would be to splice from an acquisition card to disk,
> splice() made an interesting approach.
>
> However, if the above is correct, I assume splice() is not meant for
> my use-case?
>
>
> Regards,
>
> Leon.
>
>
>
>
>
>
> /* the actor which takes pages from the pipe to the device
>  *
>  * it must move a single struct pipe_buffer to the desired destination
>  * Existing implementations are pipe_to_file, pipe_to_sendpage, pipe_to_user.
>  */
> static int pipe_to_device(struct pipe_inode_info *pipe, struct pipe_buffer *buf,
> 		 struct splice_desc *sd)
> {
> 	int rc;
> 	printk(KERN_DEBUG "pipe_to_device(buf->offset=%d, sd->len=%d)\n",
> 		buf->offset, sd->len);
> 	/* make sure the data in this buffer is up-to-date */
> 	rc = buf->ops->confirm(pipe, buf);
> 	if (unlikely(rc))
> 		return rc;
> 	/* create a transfer for this buffer */
> 	
> }
> /* kernel wants to write from the pipe into our file at ppos */
> ssize_t splice_write(struct pipe_inode_info *pipe, struct file *out,
> loff_t *ppos, size_t len, unsigned int flags)
> {
> 	int ret;
> 	printk(KERN_DEBUG "splice_write(len=%d)\n", len);
> 	ret = splice_from_pipe(pipe, out, ppos, len, flags, pipe_to_device);
> 	return 0;
> }
>
>   



* Re: splice methods in character device driver
  2009-06-08  7:05     ` Jens Axboe
@ 2009-06-12 22:05       ` Leon Woestenberg
  2009-06-13  7:26         ` Jens Axboe
  0 siblings, 1 reply; 19+ messages in thread
From: Leon Woestenberg @ 2009-06-12 22:05 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Steve Rottinger, linux-kernel

Hello Jens,

On Mon, Jun 8, 2009 at 9:05 AM, Jens Axboe<jens.axboe@oracle.com> wrote:
> On Sat, Jun 06 2009, Leon Woestenberg wrote:
>> How can I pass information from the splice_read(), which spawns a hardware
>> DMA to the pages in my case, to the confirm() hook which is called at some
>> (random) time in the future?
>
> There's a ->private for each pipe_buffer, so you can pass along info on
> a per-page granularity.
>
So, this means in my driver's splice_read(), I must set
pipe->bufs[i]->private for each 0 <= i < PIPE_BUFFERS?


struct pipe_buffer {
   ...
   unsigned long private;
};
struct pipe_inode_info {
   ...
   struct pipe_buffer bufs[PIPE_BUFFERS];
};
static int splice_read(..., struct pipe_inode_info *pipe, ...)

Regards,
-- 
Leon


* Re: splice methods in character device driver
  2009-06-12 22:05       ` Leon Woestenberg
@ 2009-06-13  7:26         ` Jens Axboe
  2009-06-13 20:04           ` Leon Woestenberg
  0 siblings, 1 reply; 19+ messages in thread
From: Jens Axboe @ 2009-06-13  7:26 UTC (permalink / raw)
  To: Leon Woestenberg; +Cc: Steve Rottinger, linux-kernel

On Sat, Jun 13 2009, Leon Woestenberg wrote:
> Hello Jens,
> 
> On Mon, Jun 8, 2009 at 9:05 AM, Jens Axboe<jens.axboe@oracle.com> wrote:
> > On Sat, Jun 06 2009, Leon Woestenberg wrote:
> >> How can I pass information from the splice_read(), which spawns a hardware
> >> DMA to the pages in my case, to the confirm() hook which is called at some
> >> (random) time in the future?
> >
> > There's a ->private for each pipe_buffer, so you can pass along info on
> > a per-page granularity.
> >
> So, this means in my driver's splice_read(), I must set
> pipe->bufs[i]->private for each 0 <= i < PIPE_BUFFERS?

Yes. There's no way to use a larger granularity, since you could have
a mix of source pages in the pipe.

-- 
Jens Axboe



* Re: splice methods in character device driver
  2009-06-13  7:26         ` Jens Axboe
@ 2009-06-13 20:04           ` Leon Woestenberg
  2009-06-16 11:57             ` Jens Axboe
  0 siblings, 1 reply; 19+ messages in thread
From: Leon Woestenberg @ 2009-06-13 20:04 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Steve Rottinger, linux-kernel

Hello Jens, Steve,

On Sat, Jun 13, 2009 at 9:26 AM, Jens Axboe<jens.axboe@oracle.com> wrote:
> On Sat, Jun 13 2009, Leon Woestenberg wrote:
>> On Mon, Jun 8, 2009 at 9:05 AM, Jens Axboe<jens.axboe@oracle.com> wrote:
>> > On Sat, Jun 06 2009, Leon Woestenberg wrote:
>> >> How can I pass information from the splice_read(), which spawns a hardware
>> >> DMA to the pages in my case, to the confirm() hook which is called at some
>> >> (random) time in the future?
>> >
>> > There's a ->private for each pipe_buffer, so you can pass along info on
>> > a per-page granularity.
>> >
>> So, this means in my driver's splice_read(), I must set
>> pipe->bufs[i]->private for each 0 <= i < PIPE_BUFFERS?
>
> Yes. There's no way to make it bigger granularity, since you could have
> a mix of source pages in the pipe.
>

My current splice support code is copied at the end of this email.

I would like to batch up some pages before I start the DMA transfer,
as starting a device-driven DMA on page granularity (with a
corresponding interrupt)
looks like too much overhead to me.

I allocate a device transfer in splice_write(), which I would like to
fill in within my write actor pipe_to_device(). At some point, I have to
start a transfer.

(sd->len == sd->total_len) is not a strong enough condition, and I
find that SPLICE_F_MORE is never set:

root@mpc8315e-rdb:~# /splice-in /7000-bytes.bin  | /splice-out -s8192 /dev/alt
altpciesgdma_open(0xc74fc368, 0xc7be7000)

splice_write(len=8192)

transfer = 0xc7114140

pipe_to_device(buf->offset=0, sd->len/total_len=4096/8192, sd->data =
0xc7114140)

pipe_to_device() expect no more

pipe_to_device(buf->offset=0, sd->len/total_len=2904/4096, sd->data =
0xc7114140)

pipe_to_device() expect no more

splice_write(len=8192)

transfer = 0xc7114ac0

altpciesgdma_close(0xc74fc368, 0xc7be7000)

Is total_len <= PAGE_SIZE a sensible and robust (always occurring)
condition that indicates the last buffer?


Regards, Leon.



#if SPLICE

static void *alt_pipe_buf_map(struct pipe_inode_info *pipe,
struct pipe_buffer *buf, int atomic)
{
	printk(KERN_DEBUG "alt_pipe_buf_map(buf->page=0x%p)\n", buf->page);
	if (atomic) {
		buf->flags |= PIPE_BUF_FLAG_ATOMIC;
		return kmap_atomic(buf->page, KM_USER0);
	}
	return kmap(buf->page);
}

static void alt_pipe_buf_unmap(struct pipe_inode_info *pipe,
struct pipe_buffer *buf, void *map_data)
{
	printk(KERN_DEBUG "alt_pipe_buf_unmap()\n");
	if (buf->flags & PIPE_BUF_FLAG_ATOMIC) {
		buf->flags &= ~PIPE_BUF_FLAG_ATOMIC;
		kunmap_atomic(map_data, KM_USER0);
	} else
	kunmap(buf->page);
}

/*
 * Check whether the contents of the pipe buffer may be accessed.
 * XXX to be implemented, see page_cache_pipe_buf_confirm
 */
static int alt_pipe_buf_confirm(struct pipe_inode_info *pipe,
struct pipe_buffer *buf)
{
	printk(KERN_DEBUG "alt_pipe_buf_confirm()\n");
	/* 0 seems ok */
	return 0;
}

/* XXX to be implemented, see page_cache_pipe_buf_release */
static void alt_pipe_buf_release(struct pipe_inode_info *pipe,
struct pipe_buffer *buf)
{
	printk(KERN_DEBUG "alt_pipe_buf_release()\n");
	put_page(buf->page);
	buf->flags &= ~PIPE_BUF_FLAG_LRU;
}

/* XXX to be implemented, see page_cache_pipe_buf_steal */
static int alt_pipe_buf_steal(struct pipe_inode_info *pipe,
struct pipe_buffer *buf)
{
	printk(KERN_DEBUG "alt_pipe_buf_steal()\n");
	return 1;
}

static void alt_pipe_buf_get(struct pipe_inode_info *pipe, struct
pipe_buffer *buf)
{
	printk(KERN_DEBUG "alt_pipe_buf_get()\n");
	page_cache_get(buf->page);
}

static void alt_spd_release_page(struct splice_pipe_desc *spd, unsigned int i)
{
	printk(KERN_DEBUG "alt_spd_release_page()\n");
	put_page(spd->pages[i]);
}

static const struct pipe_buf_operations alt_pipe_buf_ops = {
	.can_merge = 0,
	.map = alt_pipe_buf_map,
	.unmap = alt_pipe_buf_unmap,
	.confirm = alt_pipe_buf_confirm,
	.release = alt_pipe_buf_release,
	.steal = alt_pipe_buf_steal,
	.get = alt_pipe_buf_get,
};

/* kernel wants to read from our file in at ppos to the pipe */
static int splice_read(struct file *in, loff_t *ppos,
struct pipe_inode_info *pipe, size_t len, unsigned int flags)
{
	int i = 0;
	struct page *pages[PIPE_BUFFERS];
	struct partial_page partial[PIPE_BUFFERS];
	struct splice_pipe_desc spd = {
		/* pointer to an array of page pointers */
		.pages = pages,
		.partial = partial,
		.flags = flags,
		.ops = &alt_pipe_buf_ops,
		.spd_release = alt_spd_release_page,
	};

	printk(KERN_DEBUG "splice_read(len=%d)\n", len);
	printk(KERN_DEBUG "pipe_info() = 0x%p\n", pipe);

	while (i < PIPE_BUFFERS) {
		pages[i] = alloc_page(GFP_KERNEL);
		printk(KERN_DEBUG "spd.pages[%d] = 0x%p\n", i, spd.pages[i]);
		if (!pages[i]) break;
		i++;
	}
	BUG_ON(i < PIPE_BUFFERS);
	/* allocate pages */
	spd.nr_pages = PIPE_BUFFERS;
	if (spd.nr_pages <= 0)
		return spd.nr_pages;

	/* @todo somehow, fill a private field that we can use during confirm() */

	/* @todo now, start a transfer on the hardware */
	
	return splice_to_pipe(pipe, &spd);
}

/* the write actor which takes a page from the pipe to the device
 *
 * it must move a single struct pipe_buffer to the desired destination
 * Existing implementations are pipe_to_file, pipe_to_sendpage, pipe_to_user.
 */
static int pipe_to_device(struct pipe_inode_info *pipe, struct pipe_buffer *buf,
		 struct splice_desc *sd)
{
	int rc;
	dma_addr_t dma_addr;
	struct ape_sgdma_transfer *transfer = sd->u.data;
	printk(KERN_DEBUG "pipe_to_device(buf->offset=%d, sd->len=%d/%d, sd->data = 0x%p)\n",
		buf->offset, sd->len, sd->total_len, sd->u.data);
	/* have pipe source confirm that the data in this buffer is up-to-date */
	rc = buf->ops->confirm(pipe, buf);
	/* not up-to-date? */
	if (unlikely(rc))
		return rc;
	
	/* map page into PCI address space so the device can DMA from it */
	dma_addr = pci_map_page(transfer->ape->pci_dev, buf->page, buf->offset,
		sd->len, DMA_TO_DEVICE);
	/* XXX rewind/bailout if that failed */
	/* XXX pci_unmap_page must be called later */

	/* create a transfer descriptor for this buffer */
	transfer_add(transfer, dma_addr, sd->len, buf->offset, 1/*dir_to_dev*/);
	
	printk(KERN_DEBUG "pipe_to_device(): expect %s more data\n",
		sd->flags & SPLICE_F_MORE ? "" : "no");

	/* splice complete, now start the transfer */
	if (sd->len == sd->total_len) {
		/* terminate transfer list */
		transfer_end(transfer);
		/* dump the descriptor list for debugging purposes */
		dump_transfer(transfer);
		/* start the transfer on the device */
		queue_transfer(transfer->ape->read_engine, transfer);
	}
	return sd->len;
}

/* kernel wants to write from the pipe into our file at ppos */
ssize_t splice_write(struct pipe_inode_info *pipe, struct file *out,
loff_t *ppos, size_t len, unsigned int flags)
{
	struct ape_dev *ape = (struct ape_dev *)out->private_data;
	struct ape_sgdma_transfer *transfer;
	int ret;
	struct splice_desc sd = {
		.total_len = len,
		.flags = flags,
		.pos = *ppos,
	};
	printk(KERN_DEBUG "splice_write(len=%d)\n", len);
	/* allocate a new transfer request */
	transfer = alloc_transfer(ape, PIPE_BUFFERS, 1/*dir_to_dev*/);
	/* remember transfer in the splice descriptor */
	sd.u.data = transfer;
	printk(KERN_DEBUG "transfer = 0x%p\n", sd.u.data);
#if 1
	ret = __splice_from_pipe(pipe, &sd, pipe_to_device);
#else
	ret = splice_from_pipe(pipe, out, ppos, len, flags, pipe_to_device);
#endif
	return ret;
}

#endif /* SPLICE */

/*
 * character device file operations
 */
static struct file_operations sg_fops = {
	.owner = THIS_MODULE,
	.open = sg_open,
	.release = sg_close,
	.read = sg_read,
	.write = sg_write,
	/* asynchronous */
	.aio_read = sg_aio_read,
	.aio_write = sg_aio_write,
#if SPLICE
	/* splice */
	.splice_read = splice_read,
	.splice_write = splice_write,
#endif /* SPLICE */
};


* Re: splice methods in character device driver
  2009-06-13 20:04           ` Leon Woestenberg
@ 2009-06-16 11:57             ` Jens Axboe
  0 siblings, 0 replies; 19+ messages in thread
From: Jens Axboe @ 2009-06-16 11:57 UTC (permalink / raw)
  To: Leon Woestenberg; +Cc: Steve Rottinger, linux-kernel

On Sat, Jun 13 2009, Leon Woestenberg wrote:
> Hello Jens, Steve,
> 
> On Sat, Jun 13, 2009 at 9:26 AM, Jens Axboe<jens.axboe@oracle.com> wrote:
> > On Sat, Jun 13 2009, Leon Woestenberg wrote:
> >> On Mon, Jun 8, 2009 at 9:05 AM, Jens Axboe<jens.axboe@oracle.com> wrote:
> >> > On Sat, Jun 06 2009, Leon Woestenberg wrote:
> >> >> How can I pass information from the splice_read(), which spawns a hardware
> >> >> DMA to the pages in my case, to the confirm() hook which is called at some
> >> >> (random) time in the future?
> >> >
> >> > There's a ->private for each pipe_buffer, so you can pass along info on
> >> > a per-page granularity.
> >> >
> >> So, this means in my driver's splice_read(), I must set
> >> pipe->bufs[i]->private for each 0 <= i < PIPE_BUFFERS?
> >
> > Yes. There's no way to make it bigger granularity, since you could have
> > a mix of source pages in the pipe.
> >
> 
> My current splice support code is copied at the end of this email.
> 
> I would like to batch up some pages before I start the DMA transfer,
> as starting a device-driven DMA on page granularity (with a
> corresponding interrupt)
> looks like too much overhead to me.
> 
> I allocate a device transfer in splice_write(), which I would like to
> fill-in in my write actor pipe_to_device(). At some point, I have to
> start a transfer.
> 
> (sd-> len == sd->total_len) is not a strong enough condition, and I
> find that SPLICE_F_MORE is never set:
> 
> root@mpc8315e-rdb:~# /splice-in /7000-bytes.bin  | /splice-out -s8192 /dev/alt
> altpciesgdma_open(0xc74fc368, 0xc7be7000)
> 
> splice_write(len=8192)
> 
> transfer = 0xc7114140
> 
> pipe_to_device(buf->offset=0, sd->len/total_len=4096/8192, sd->data =
> 0xc7114140)
> 
> pipe_to_device() expect no more
> 
> pipe_to_device(buf->offset=0, sd->len/total_len=2904/4096, sd->data =
> 0xc7114140)
> 
> pipe_to_device() expect no more
> 
> splice_write(len=8192)
> 
> transfer = 0xc7114ac0
> 
> altpciesgdma_close(0xc74fc368, 0xc7be7000)
> 
> Is total_len <= PAGE_SIZE a sensible and robust (always occuring)
> condition that indicates the last buffer?

It's probably good enough, I think. For best results, you want the caller
to pass you that information (since he knows). splice-in is just a dummy
test app; you could easily modify it to pass in SPLICE_F_MORE.
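
From user space that amounts to something like the loop below -- purely
illustrative, with simplified error and short-transfer handling:

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* move 'total' bytes from fd_in to fd_out via a pipe, flagging every chunk
 * except the last with SPLICE_F_MORE so the receiver knows more follows */
static int copy_with_more(int fd_in, int fd_out, size_t total, size_t chunk)
{
	int pfd[2];

	if (pipe(pfd) < 0)
		return -1;

	while (total) {
		size_t n = total < chunk ? total : chunk;
		unsigned int more = (total > n) ? SPLICE_F_MORE : 0;
		ssize_t in;

		in = splice(fd_in, NULL, pfd[1], NULL, n, more | SPLICE_F_MOVE);
		if (in <= 0)
			return -1;
		total -= in;
		while (in > 0) {
			ssize_t out = splice(pfd[0], NULL, fd_out, NULL, in,
					     more | SPLICE_F_MOVE);
			if (out <= 0)
				return -1;
			in -= out;
		}
	}
	close(pfd[0]);
	close(pfd[1]);
	return 0;
}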

-- 
Jens Axboe



* Re: splice methods in character device driver
  2009-06-12 20:45             ` Steve Rottinger
@ 2009-06-16 11:59               ` Jens Axboe
  2009-06-16 15:06                 ` Steve Rottinger
  0 siblings, 1 reply; 19+ messages in thread
From: Jens Axboe @ 2009-06-16 11:59 UTC (permalink / raw)
  To: Steve Rottinger; +Cc: Leon Woestenberg, linux-kernel

On Fri, Jun 12 2009, Steve Rottinger wrote:
> Hi Leon,
> 
> It does seem like a lot of code needs to be executed to move a small
> chunk of data.

It's really not; you should try and benchmark the function call overhead
:-).

> Although,  I think that most of the overhead that I was experiencing
> came from the cumulative
> overhead of each splice system call.   I increased my pipe size using
> Jens' pipe size patch,
> from 16 to 256 pages, and this had a huge effect -- the speed of my
> transfers more than doubled.
> Pipe sizes larger that 256 pages, cause my kernel to crash.

Yes, the system call is more expensive. Increasing the pipe size can
definitely help there.

> I'm doing about 300MB/s to my hardware RAID, running two instances of my
> splice() copy application
> (One on each RAID channel).  I would like to combine the two RAID
> channels using a software RAID 0;
> however, splice, even from /dev/zero runs horribly slow to a software
> RAID device.  I'd be curious
> to know if anyone else has tried this?

Did you trace it and find out why it was slow? It should not be. Moving
300MB/sec should not be making any machine sweat.

-- 
Jens Axboe



* Re: splice methods in character device driver
  2009-06-16 11:59               ` Jens Axboe
@ 2009-06-16 15:06                 ` Steve Rottinger
  2009-06-16 18:24                   ` [RFC][PATCH] add support for shrinking/growing a pipe (Was "Re: splice methods in character device driver") Jens Axboe
  2009-06-16 18:28                   ` splice methods in character device driver Jens Axboe
  0 siblings, 2 replies; 19+ messages in thread
From: Steve Rottinger @ 2009-06-16 15:06 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Leon Woestenberg, linux-kernel

Hi Jens,

Jens Axboe wrote:
>
>> Although,  I think that most of the overhead that I was experiencing
>> came from the cumulative
>> overhead of each splice system call.   I increased my pipe size using
>> Jens' pipe size patch,
>> from 16 to 256 pages, and this had a huge effect -- the speed of my
>> transfers more than doubled.
>> Pipe sizes larger that 256 pages, cause my kernel to crash.
>>     
>
> Yes, the system call is more expensive. Increasing the pipe size can
> definitely help there.
>
>   
I know that you have been asked this before, but is there any chance that
we can get the pipe size patch into the kernel mainline?  It seems like it
is essential to moving data fast using the splice interface.

>> I'm doing about 300MB/s to my hardware RAID, running two instances of my
>> splice() copy application
>> (One on each RAID channel).  I would like to combine the two RAID
>> channels using a software RAID 0;
>> however, splice, even from /dev/zero runs horribly slow to a software
>> RAID device.  I'd be curious
>> to know if anyone else has tried this?
>>     
>
> Did you trace it and find out why it was slow? It should not be. Moving
> 300MB/sec should not be making any machine sweat.
>
>   
I haven't dug into this too deeply yet; however, I did discover something
interesting: the splice runs much faster using the software RAID if I
transfer to a file on a mounted filesystem instead of to the raw md block
device.

-Steve



* [RFC][PATCH] add support for shrinking/growing a pipe (Was "Re: splice methods in character device driver")
  2009-06-16 15:06                 ` Steve Rottinger
@ 2009-06-16 18:24                   ` Jens Axboe
  2009-06-16 18:28                   ` splice methods in character device driver Jens Axboe
  1 sibling, 0 replies; 19+ messages in thread
From: Jens Axboe @ 2009-06-16 18:24 UTC (permalink / raw)
  To: Steve Rottinger; +Cc: Leon Woestenberg, linux-kernel, Linus Torvalds

On Tue, Jun 16 2009, Steve Rottinger wrote:
> >> Although,  I think that most of the overhead that I was experiencing
> >> came from the cumulative
> >> overhead of each splice system call.   I increased my pipe size using
> >> Jens' pipe size patch,
> >> from 16 to 256 pages, and this had a huge effect -- the speed of my
> >> transfers more than doubled.
> >> Pipe sizes larger that 256 pages, cause my kernel to crash.
> >>     
> >
> > Yes, the system call is more expensive. Increasing the pipe size can
> > definitely help there.
> >
> >   
> I know that you have been asked this before, but is there any chance
> that we can
> get the pipe size patch into the kernel mainline?  It seems like it is
> essential to
> moving data fast using the splice interface.

Sure, the only unresolved issue with it is what sort of interface to
export for changing the pipe size. I went with fcntl().

Linus, I think we discussed this years ago. The patch in question is
here:

http://git.kernel.dk/?p=linux-2.6-block.git;a=commit;h=24547ac4d97bebb58caf9ce58bd507a95c812a3f

I'd like to get it in now; there have been several requests for this in
the past. But I didn't want to push it before this was resolved.

I don't know whether other operating systems allow this functionality,
and if they do what interface they use. I suspect that our need is
somewhat special, since we have splice.
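
For what it's worth, with an fcntl()-based interface the user-space side
would look roughly like the snippet below; the F_SETPIPE_SZ/F_GETPIPE_SZ
command names and the byte-count argument are assumptions about the final
interface, so check the patch or whatever eventually gets merged for the
real constants:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int pfd[2];

	if (pipe(pfd) < 0)
		return 1;

	/* grow the pipe to 1 MiB before splicing through it; the kernel may
	 * round the size and will cap it for unprivileged users */
	if (fcntl(pfd[1], F_SETPIPE_SZ, 1024 * 1024) < 0)
		perror("F_SETPIPE_SZ");

	printf("pipe buffer is now %d bytes\n", fcntl(pfd[1], F_GETPIPE_SZ));
	return 0;
}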

-- 
Jens Axboe



* Re: splice methods in character device driver
  2009-06-16 15:06                 ` Steve Rottinger
  2009-06-16 18:24                   ` [RFC][PATCH] add support for shrinking/growing a pipe (Was "Re: splice methods in character device driver") Jens Axboe
@ 2009-06-16 18:28                   ` Jens Axboe
  1 sibling, 0 replies; 19+ messages in thread
From: Jens Axboe @ 2009-06-16 18:28 UTC (permalink / raw)
  To: Steve Rottinger; +Cc: Leon Woestenberg, linux-kernel

On Tue, Jun 16 2009, Steve Rottinger wrote:
> >> I'm doing about 300MB/s to my hardware RAID, running two instances of my
> >> splice() copy application
> >> (One on each RAID channel).  I would like to combine the two RAID
> >> channels using a software RAID 0;
> >> however, splice, even from /dev/zero runs horribly slow to a software
> >> RAID device.  I'd be curious
> >> to know if anyone else has tried this?
> >>     
> >
> > Did you trace it and find out why it was slow? It should not be. Moving
> > 300MB/sec should not be making any machine sweat.
> >
> I haven't dug into this too deeply, yet; however, I did discover
> something interesting: The splice runs much faster using the software
> raid, if I transfer to a file on a mounted filesystem, instead of the
> raw md block device. 

OK, that's at least a starting point. I'll try this tomorrow (raw block
device vs fs file).

-- 
Jens Axboe



Thread overview: 19+ messages
2009-05-11 14:40 splice methods in character device driver Steve Rottinger
2009-05-11 19:22 ` Jens Axboe
2009-05-13 16:59   ` Steve Rottinger
2009-06-03 21:32     ` Leon Woestenberg
2009-06-04  7:32       ` Jens Axboe
2009-06-04 13:20         ` Steve Rottinger
2009-06-12 19:21           ` Leon Woestenberg
2009-06-12 19:59             ` Jens Axboe
2009-06-12 20:45             ` Steve Rottinger
2009-06-16 11:59               ` Jens Axboe
2009-06-16 15:06                 ` Steve Rottinger
2009-06-16 18:24                   ` [RFC][PATCH] add support for shrinking/growing a pipe (Was "Re: splice methods in character device driver") Jens Axboe
2009-06-16 18:28                   ` splice methods in character device driver Jens Axboe
2009-06-06 21:25   ` Leon Woestenberg
2009-06-08  7:05     ` Jens Axboe
2009-06-12 22:05       ` Leon Woestenberg
2009-06-13  7:26         ` Jens Axboe
2009-06-13 20:04           ` Leon Woestenberg
2009-06-16 11:57             ` Jens Axboe
