linux-kernel.vger.kernel.org archive mirror
From: Karim Yaghmour <karim@opersys.com>
To: "Perez-Gonzalez, Inaky" <inaky.perez-gonzalez@intel.com>
Cc: "'Martin Hicks'" <mort@wildopensource.com>,
	"'Daniel Stekloff'" <dsteklof@us.ibm.com>,
	"'Patrick Mochel'" <mochel@osdl.org>,
	"'Randy.Dunlap'" <rddunlap@osdl.org>,
	"'hpa@zytor.com'" <hpa@zytor.com>,
	"'pavel@ucw.cz'" <pavel@ucw.cz>,
	"'jes@wildopensource.com'" <jes@wildopensource.com>,
	"'linux-kernel@vger.kernel.org'" <linux-kernel@vger.kernel.org>,
	"'wildos@sgi.com'" <wildos@sgi.com>,
	"'Tom Zanussi'" <zanussi@us.ibm.com>
Subject: Re: [patch] printk subsystems
Date: Mon, 21 Apr 2003 11:56:14 -0400	[thread overview]
Message-ID: <3EA4149E.776C2FA7@opersys.com> (raw)
In-Reply-To: A46BBDB345A7D5118EC90002A5072C780C2630D5@orsmsx116.jf.intel.com


Others have addressed several points already; I just want to come
back to the scalability issues to make my point clear:

"Perez-Gonzalez, Inaky" wrote:
> Well, the total overhead for queuing an event is strictly O(1),
> bar the acquisition of the queue's semaphore in the middle [I
> still hadn't time to finish this and post it, btw]. I think it
> is pretty scalable assuming you don't have the whole system
> delivering to a single queue.

Consider the following:
1) kue_read() contains a while(1) loop that delivers messages
one by one (to the best of my understanding of the code you posted).
Hence, delivery time increases with the number of queued events. In
contrast, relayfs can deliver tens of thousands of events in a single shot.
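To illustrate the cost difference, here is a minimal userspace sketch
(hypothetical types and names, not the actual kue or relayfs code): a
linked-list queue is drained with one copy per node, so n messages cost
n copies, while a contiguous buffer is handed over in a single copy:

```c
#include <stddef.h>
#include <string.h>

/* Illustrative only: hypothetical structures, not the real kue/relayfs code. */
struct msg {
	struct msg *next;
	char data[16];
};

/* per-message delivery: cost grows with the number of queued events */
static size_t deliver_list(const struct msg *head, char *out)
{
	size_t total = 0;
	const struct msg *m;

	for (m = head; m != NULL; m = m->next) {
		memcpy(out + total, m->data, sizeof m->data);
		total += sizeof m->data;
	}
	return total;
}

/* bulk delivery: one copy, no matter how many messages the buffer holds */
static size_t deliver_buf(const char *buf, size_t len, char *out)
{
	memcpy(out, buf, len);
	return len;
}
```

In the kernel the per-node copy would be a copy_to_user() each time
around the loop, which is exactly where the per-event overhead adds up.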

2) by having to maintain next and prev pointers, kue consumes more
memory than relayfs (at least 8 bytes/message more, on a
32-bit machine.) For large messages, the impact is negligible, but
the smaller the messages, the bigger the relative overhead.
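The overhead is easy to see from the layouts themselves. These structs
are illustrative stand-ins, not the real kue or relayfs definitions: two
pointers cost 8 bytes/message with 4-byte pointers on 32-bit, and twice
that on a 64-bit machine:

```c
#include <stddef.h>

/* Hypothetical layouts showing the per-message pointer overhead. */
struct kue_node {		/* doubly linked queue node */
	struct kue_node *next;
	struct kue_node *prev;
	char payload[16];
};

struct relay_record {		/* record in a contiguous buffer */
	char payload[16];
};
```

With a 16-byte payload that is 50% extra memory per message on 32-bit,
and 100% extra on 64-bit.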

3) by having to go through the next/prev pointers, accessing message
X requires reading all messages before it. This can be simplified
with relayfs if: a) equal-sized messages are used, b) sub-buffers
are used. [Other kue calls are also handicapped by similar problems,
such as the deletion of the entire list.]
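The equal-sized-message case is worth spelling out: with fixed-size
records packed into a flat buffer, message X is found by arithmetic
instead of walking X next-pointers. A sketch (MSG_SIZE is an
illustrative constant, not a relayfs parameter):

```c
#include <stddef.h>

/* Illustrative fixed message size; real channels would pick their own. */
#define MSG_SIZE 32

/* O(1) lookup, independent of X -- no pointer chasing */
static char *msg_at(char *buf, size_t x)
{
	return buf + x * MSG_SIZE;
}
```

A linked list has no equivalent shortcut; reaching message X is always
O(X), and whole-list operations such as deletion pay the same traversal.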

> > Also, at that rate, you simply can't wait on the reader to read
> > events one-by-one until you can reuse the structure where you
> > stored the data to be read.
> 
> That's the difference. I don't intend to have that. The data
> storage can be reused or not, that is up to the client of the
> kernel API. They still can reuse it if needed by reclaiming the
> event (recall_event), refilling the data and re-sending it.

Right, but by reusing the event, older data is thereby destroyed
(undelivered). Which comes back to what I (and others) have been
saying: kue requires the sender's data structures to exist until
their content is delivered.

> That's where the send-and-forget method helps: provide a
> destructor [will replace the 'flags' field - have it cooking
> on my CVS] that will be called once the event is delivered
> to all parties [if not NULL]. Then you can implement your
> own recovery system using a circular buffer, or kmalloc or
> whatever you wish.

Right, but then you have 2 layers of buffering/queuing instead
of a single one.
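To make the two-layer point concrete, a destructor-based scheme along
the lines you describe might look like this (the names are hypothetical;
your CVS code may well differ). Note that the sender still has to run
its own recycling buffer underneath the queue, which is the second layer:

```c
#include <stddef.h>

/* Hypothetical send-and-forget event: once delivered to all readers,
 * the destructor fires and the sender may recycle the storage. */
struct kue_event {
	void  *data;
	size_t len;
	void (*destructor)(struct kue_event *ev);	/* NULL = no callback */
};

static int recycled;		/* stands in for a sender-side circular buffer */

static void recycle(struct kue_event *ev)
{
	(void)ev;
	recycled = 1;		/* slot may now be refilled and re-sent */
}

/* simulate the queue finishing delivery to every reader */
static void delivery_complete(struct kue_event *ev)
{
	if (ev->destructor)
		ev->destructor(ev);
}
```

The queue buffers the events, and the destructor hands storage back to
the sender's own allocator: two pieces of buffer management where
relayfs needs one.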

> > relayfs) and the reader has to read events by the thousand every
> > time.
> 
> The reader can do that, in user space; as many events as
> fit into the reader-provided buffer will be delivered.

Right, but kue has to loop through the queue to deliver the messages
one by one. The more messages there are, the longer the delivery time.
Not to mention that you first have to copy the messages to user space
before the reader can write() them to permanent storage. With relayfs,
you just do write() and you're done.

Cheers,

Karim

===================================================
                 Karim Yaghmour
               karim@opersys.com
      Embedded and Real-Time Linux Expert
===================================================

Thread overview: 52+ messages
2003-04-17 19:58 [patch] printk subsystems Perez-Gonzalez, Inaky
2003-04-17 20:34 ` Karim Yaghmour
2003-04-17 21:03   ` Perez-Gonzalez, Inaky
2003-04-17 21:37     ` Tom Zanussi
2003-04-18  7:21     ` Tom Zanussi
2003-04-18  7:42     ` Greg KH
2003-04-21 15:56     ` Karim Yaghmour [this message]
  -- strict thread matches above, loose matches on Subject: below --
2003-04-24 18:56 Manfred Spraul
2003-04-24 19:10 ` bob
2003-04-23  0:28 Perez-Gonzalez, Inaky
2003-04-22 22:53 Perez-Gonzalez, Inaky
2003-04-23  3:58 ` Tom Zanussi
2003-04-22 19:02 Perez-Gonzalez, Inaky
2003-04-22 19:03 ` H. Peter Anvin
2003-04-22 21:52 ` Tom Zanussi
2003-04-22 18:46 Perez-Gonzalez, Inaky
2003-04-22 23:28 ` Karim Yaghmour
2003-04-22  5:09 Perez-Gonzalez, Inaky
2003-04-24 18:22 ` bob
2003-04-22  4:02 Perez-Gonzalez, Inaky
2003-04-22  5:52 ` Karim Yaghmour
2003-04-22  6:04 ` Tom Zanussi
2003-04-22  3:04 Perez-Gonzalez, Inaky
2003-04-22  6:00 ` Tom Zanussi
2003-04-22  2:49 Perez-Gonzalez, Inaky
2003-04-22  4:34 ` Karim Yaghmour
2003-04-21 18:42 Perez-Gonzalez, Inaky
2003-04-21 18:23 Perez-Gonzalez, Inaky
2003-04-21 18:30 ` H. Peter Anvin
2003-04-08 23:15 Chuck Ebbert
2003-04-07 20:13 Martin Hicks
2003-04-08 18:41 ` Pavel Machek
2003-04-08 20:02   ` Jes Sorensen
2003-04-08 21:02     ` Pavel Machek
2003-04-08 21:10       ` H. Peter Anvin
2003-04-08 21:57         ` Pavel Machek
2003-04-08 22:02           ` Jes Sorensen
2003-04-08 22:05           ` H. Peter Anvin
2003-04-08 22:55             ` Martin Hicks
2003-04-08 23:10               ` Randy.Dunlap
2003-04-14 18:33                 ` Patrick Mochel
2003-04-14 22:33                   ` Daniel Stekloff
2003-04-16 18:42                     ` Patrick Mochel
2003-04-16 12:35                       ` Daniel Stekloff
2003-04-16 19:16                       ` Martin Hicks
2003-04-16 12:43                         ` Daniel Stekloff
2003-04-17 15:56                           ` Martin Hicks
2003-04-17 13:58                             ` Karim Yaghmour
2003-04-15 13:27                   ` Martin Hicks
2003-04-15 14:40                     ` Karim Yaghmour
2003-04-08 22:00       ` Jes Sorensen
2003-04-11 19:21 ` Martin Hicks
