From: Linus Torvalds <torvalds@osdl.org>
To: Ingo Oeser <ioe-lkml@axxeo.de>
Cc: linux@horizon.com, Kernel Mailing List <linux-kernel@vger.kernel.org>
Subject: Re: Make pipe data structure be a circular list of pages, rather
Date: Fri, 14 Jan 2005 16:16:00 -0800 (PST)
Message-ID: <Pine.LNX.4.58.0501141550000.2310@ppc970.osdl.org>
In-Reply-To: <200501150034.31880.ioe-lkml@axxeo.de>



On Sat, 15 Jan 2005, Ingo Oeser wrote:
> 
> Now imagine this (ASCII art in a monospace font):
> 
> [ chip A ] ------(1)------ [ chip B ]   [ CPU ]   [memory]  [disk]
>       |                        |           |         |        |
>       +----------(2)-----------+---(2)-----+----(2)--+--------+
> 
> Yes, I understand and support your vision. Now I would like to use path (1),
> the direct one, for this. Possible?

Yes, if I understand what you're thinking right. I'm just assuming (for 
the sake of simplicity) that "A" generates all data, and "B" is the one 
that uses it. If both A and B can generate/use data, it still works, but 
you'd need two pipes per endpoint, just because pipes are fundamentally 
uni-directional.

Also, for this particular example, I'll assume that whatever buffers
chips A/B use are "mappable" by the CPU too: they could be either
regular memory pages (that A/B DMA to/from), or they could be IO buffers on
the card itself that can be mapped into the CPU address space and
memcpy'd.

That second simplification is something that at least my current example
code (which I don't think I've sent to you) kind of assumes. It's not
really _forced_ onto the design, but it avoids the need for a separate 
"accessor" function to copy things around.

[ If you cannot map them into CPU space, then each "struct pipe_buffer" op
  structure would need operations to copy them to/from kernel memory and
  to copy them to/from user memory, so the second simplification basically
  avoids having to have four extra "accessor" functions. ]
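
To make that a bit more concrete, here is a minimal sketch of what such a
per-buffer operations structure could look like under the "mappable"
simplification. All names and fields below are purely illustrative, not a
final interface:

	struct pipe_buffer;

	struct pipe_buf_operations {
		/* map the buffer so the CPU can get at it: a real memory
		 * page, or on-card memory the driver knows how to map */
		void *(*map)(struct pipe_buffer *buf);
		void (*unmap)(struct pipe_buffer *buf, void *addr);
		/* let the producer free the page/IO buffer once the data
		 * has been consumed */
		void (*release)(struct pipe_buffer *buf);
	};

	struct pipe_buffer {
		struct page *page;	/* real page, or a "fake" one for card memory */
		unsigned int offset, len;
		struct pipe_buf_operations *ops;
	};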

What you'd do to get your topology above is:
 - create a "pipe" for both A and B (let's call them "fdA" and "fdB" 
   respectively).
 - the driver is responsible for creating the "struct pipe_buffer" for any
   data generated by chip A. If it's regular memory, it's a page
   allocation and the appropriate DMA setup, and if it's IO memory on the
   card, you'd need to generate fake "struct page *" entries and a mapping
   function for them.
 - if you want to both send the data that A generates to fdB _and_ write
   it to disk (file X), you'd also need one anonymous pipe (fdC), and open
   an fdX for writing to the file, and then, in your process, effectively
   do:

	for (;;) {
		n = tee(fdA, fdB, fdC);
		splice(fdC, fdX, n);
	}

   and you'd get what I think you are after. The reason you need the "fdC"
   is simply that the "tee()" operation would only work on pipes: you
   cannot duplicate anything other than a pipe buffer with "tee()", so you
   couldn't do a direct "tee(fdA, fdB, fdX)" in my world.

(The above example is _extremely_ simplified: in real life you'd have to
take care of partial results from "splice()", error conditions, etc., of
course; see the sketch below.)
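
A slightly less simplified version of that loop, still using the
hypothetical tee()/splice() calls sketched above (not any existing system
call interface), might handle partial results and errors roughly like
this:

	for (;;) {
		int n = tee(fdA, fdB, fdC);	/* duplicate A's buffers into fdB and fdC */
		if (n <= 0)
			break;			/* 0: chip A is done, <0: error */
		while (n > 0) {
			int done = splice(fdC, fdX, n);	/* push one copy out to the file */
			if (done <= 0) {
				n = -1;		/* short write or error: give up */
				break;
			}
			n -= done;
		}
		if (n < 0)
			break;			/* real code would report the error here */
	}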

NOTE! My first cut will _assume_ that all buffers are in RAM, so it
wouldn't support the notion of on-card memory. On-card memory does
complicate things - not just the accessor functions, but also the notion
of how to "splice()" (or "tee()" from one to the other). If you assume
buffers-in-RAM, then all pointers are the same, and a tee/splice really
just moves a pointer around. But if the pointer can have magic meaning
that depends on the source it came from, then splicing such a pointer to
another device must inevitably imply at least the possibility of some kind
of memory copy.
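
In terms of the illustrative structures sketched earlier, the
buffers-in-RAM case really does make a pipe-to-pipe splice nothing more
than a pointer move. A toy example (the ring layout and helper below are
made up purely to show the point, with no full/empty checking):

	#define PIPE_BUFFERS 16			/* illustrative ring size */

	struct pipe {
		struct pipe_buffer bufs[PIPE_BUFFERS];
		unsigned int head, tail;	/* producer/consumer indices */
	};

	/* hand one buffer over from one pipe to another: the page pointer
	 * and ops go across as-is, and no data is copied anywhere */
	static void splice_one_buffer(struct pipe *from, struct pipe *to)
	{
		struct pipe_buffer *src = &from->bufs[from->tail % PIPE_BUFFERS];
		struct pipe_buffer *dst = &to->bufs[to->head % PIPE_BUFFERS];

		*dst = *src;
		to->head++;
		from->tail++;
	}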

So I don't think you'll get _exactly_ what you want for a while, since
you'd have to go through in-memory buffers. But there's no huge conceptual
problem (just enough implementation issues to make it inconvenient) with
the concept of actually keeping the buffer on an external controller.

		Linus
