* can we reuse an skb
@ 2009-06-19  6:46 Radha Mohan
  2009-06-19  6:51 ` jon_zhou
                   ` (2 more replies)
  0 siblings, 3 replies; 19+ messages in thread
From: Radha Mohan @ 2009-06-19  6:46 UTC (permalink / raw)
  To: netdev


Hi,

For an Ethernet driver, we need to allocate a pool of SKBs for receiving packets. Is there any way we can reuse the same SKBs, without needing to re-allocate in atomic context every time one has been consumed by netif_rx()?

Any pointers will be helpful.

-- Mohan




^ permalink raw reply	[flat|nested] 19+ messages in thread

* RE: can we reuse an skb
  2009-06-19  6:46 can we reuse an skb Radha Mohan
@ 2009-06-19  6:51 ` jon_zhou
  2009-06-19  7:10   ` Radha Mohan
  2009-06-19  7:21   ` Peter Chacko
  2009-06-19 10:37 ` Saikiran Madugula
  2009-06-19 16:56 ` Rick Jones
  2 siblings, 2 replies; 19+ messages in thread
From: jon_zhou @ 2009-06-19  6:51 UTC (permalink / raw)
  To: radhamohan_ch, netdev

I am also thinking about this...

e.g. pcnet32.c:
it seems the skb is marked as no longer in use (some bits are set) in the device driver, and then it is recycled in the softirq handler.

That means it cannot be reused unless the driver is modified.

Regards,
zhou rui

-----Original Message-----
From: netdev-owner@vger.kernel.org [mailto:netdev-owner@vger.kernel.org] On Behalf Of Radha Mohan
Sent: Friday, June 19, 2009 2:47 PM
To: netdev@vger.kernel.org
Subject: can we reuse an skb


Hi,

For an Ethernet driver, we need to allocate a pool of SKBs for receiving packets. Is there any way we can reuse the same SKBs, without needing to re-allocate in atomic context every time one has been consumed by netif_rx()?

Any pointers will be helpful.

-- Mohan





^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: can we reuse an skb
  2009-06-19  6:51 ` jon_zhou
@ 2009-06-19  7:10   ` Radha Mohan
  2009-06-19  7:21   ` Peter Chacko
  1 sibling, 0 replies; 19+ messages in thread
From: Radha Mohan @ 2009-06-19  7:10 UTC (permalink / raw)
  To: jon_zhou, netdev



We may need to modify the part of the kernel where kfree_skb() is called, somewhere after copying to user space. Right?

I understand that Windows NDIS has this type of facility: NDIS gives the buffer back after copying to the application.

I came across an old patch for 2.4.19 implementing the same concept:
http://www.candelatech.com/oss/napi_tune_2.4.19.patch

-- Mohan



----- Original Message ----
From: "jon_zhou@agilent.com" <jon_zhou@agilent.com>
To: radhamohan_ch@yahoo.com; netdev@vger.kernel.org
Sent: Friday, 19 June, 2009 12:21:45 PM
Subject: RE: can we reuse an skb

I am also thinking about this...

e.g. pcnet32.c:
it seems the skb is marked as no longer in use (some bits are set) in the device driver, and then it is recycled in the softirq handler.

That means it cannot be reused unless the driver is modified.

Regards,
zhou rui

-----Original Message-----
From: netdev-owner@vger.kernel.org [mailto:netdev-owner@vger.kernel.org] On Behalf Of Radha Mohan
Sent: Friday, June 19, 2009 2:47 PM
To: netdev@vger.kernel.org
Subject: can we reuse an skb


Hi,

For an Ethernet driver, we need to allocate a pool of SKBs for receiving packets. Is there any way we can reuse the same SKBs, without needing to re-allocate in atomic context every time one has been consumed by netif_rx()?

Any pointers will be helpful.

-- Mohan




^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: can we reuse an skb
  2009-06-19  6:51 ` jon_zhou
  2009-06-19  7:10   ` Radha Mohan
@ 2009-06-19  7:21   ` Peter Chacko
  1 sibling, 0 replies; 19+ messages in thread
From: Peter Chacko @ 2009-06-19  7:21 UTC (permalink / raw)
  To: jon_zhou; +Cc: radhamohan_ch, netdev

Radha,

skb memory comes from the slab allocator, whose object caches are
themselves re-usable pools. kmalloc(GFP_ATOMIC) from these object
caches doesn't incur much penalty compared to the case where it really
does memory allocation/de-allocation. So the intelligence you want to
put in the driver is already provided by the slab layer. But if you
want to add something like per-flow ring buffers, optimized for a
point-to-point link or a similar purpose, you can keep a driver-level
cache.
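
(For context, this is roughly what __alloc_skb() does in kernels of
this era - the skb head comes from a dedicated kmem_cache and the data
area from the kmalloc caches, both of which hand back recycled objects
from warm per-CPU slabs; simplified from net/core/skbuff.c:)

	/* skb head: dedicated slab cache, so a free is usually just a
	 * cheap refill of the per-CPU slab, not a page-allocator trip */
	skb = kmem_cache_alloc_node(skbuff_head_cache,
				    gfp_mask & ~__GFP_DMA, node);

	/* data area: kmalloc cache sized for the payload + shared info */
	size = SKB_DATA_ALIGN(size);
	data = kmalloc_node_track_caller(size + sizeof(struct skb_shared_info),
					 gfp_mask, node);

So a driver-level cache only pays off if you can beat this (already
cheap) slab round-trip, or if you want sizing/placement policies the
generic caches don't give you.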

thanks

On Fri, Jun 19, 2009 at 12:21 PM, <jon_zhou@agilent.com> wrote:
> I am also thinking about this...
>
> e.g. pcnet32.c:
> it seems the skb is marked as no longer in use (some bits are set) in the device driver, and then it is recycled in the softirq handler.
>
> That means it cannot be reused unless the driver is modified.
>
> Regards,
> zhou rui
>
> -----Original Message-----
> From: netdev-owner@vger.kernel.org [mailto:netdev-owner@vger.kernel.org] On Behalf Of Radha Mohan
> Sent: Friday, June 19, 2009 2:47 PM
> To: netdev@vger.kernel.org
> Subject: can we reuse an skb
>
>
> Hi,
>
> For an Ethernet driver, we need to allocate a pool of SKBs for receiving packets. Is there any way we can reuse the same SKBs, without needing to re-allocate in atomic context every time one has been consumed by netif_rx()?
>
> Any pointers will be helpful.
>
> -- Mohan
>
>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: can we reuse an skb
  2009-06-19  6:46 can we reuse an skb Radha Mohan
  2009-06-19  6:51 ` jon_zhou
@ 2009-06-19 10:37 ` Saikiran Madugula
  2009-06-19 18:41   ` Neil Horman
  2009-06-19 16:56 ` Rick Jones
  2 siblings, 1 reply; 19+ messages in thread
From: Saikiran Madugula @ 2009-06-19 10:37 UTC (permalink / raw)
  To: Radha Mohan; +Cc: netdev

Radha Mohan wrote:
> Hi,
> 
> For an Ethernet driver, we need to allocate a pool of SKBs for receiving packets. Is there any way we can reuse the same SKBs, without needing to re-allocate in atomic context every time one has been consumed by netif_rx()?
> 
> Any pointers will be helpful.
http://kerneltrap.org/mailarchive/linux-netdev/2008/9/28/3433504

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: can we reuse an skb
  2009-06-19  6:46 can we reuse an skb Radha Mohan
  2009-06-19  6:51 ` jon_zhou
  2009-06-19 10:37 ` Saikiran Madugula
@ 2009-06-19 16:56 ` Rick Jones
  2009-06-19 23:29   ` David Miller
  2 siblings, 1 reply; 19+ messages in thread
From: Rick Jones @ 2009-06-19 16:56 UTC (permalink / raw)
  To: Radha Mohan; +Cc: netdev

Radha Mohan wrote:
> Hi,
> 
> For an ethernet driver, we need to allocate some pool of SKBs for
> receiving packets. Is there any way we can reuse the same SKBs
> without the need to re-allocate in atomic every time one has been
> used up for netif_rx().

Assuming a driver did have its own "pool" and didn't rely on the
pool(s) from which skbs are drawn, doesn't that mean you now have to
have another configurable?  There are no good guarantees on when the
upper layers will be finished with the skb, right?  Which means you
would be requiring the admin(s) to have an idea of how long their
applications wait to pull data from their sockets, and to configure
your driver accordingly.

It would seem there would have to be a considerable performance gain 
demonstrated for that kind of thing?

rick jones

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: can we reuse an skb
  2009-06-19 10:37 ` Saikiran Madugula
@ 2009-06-19 18:41   ` Neil Horman
  0 siblings, 0 replies; 19+ messages in thread
From: Neil Horman @ 2009-06-19 18:41 UTC (permalink / raw)
  To: Saikiran Madugula; +Cc: Radha Mohan, netdev

On Fri, Jun 19, 2009 at 11:37:37AM +0100, Saikiran Madugula wrote:
> Radha Mohan wrote:
> > Hi,
> > 
> > For an Ethernet driver, we need to allocate a pool of SKBs for receiving packets. Is there any way we can reuse the same SKBs, without needing to re-allocate in atomic context every time one has been consumed by netif_rx()?
> > 
> > Any pointers will be helpful.
> http://kerneltrap.org/mailarchive/linux-netdev/2008/9/28/3433504
That's useful on the transmit side; this appears to be in relation to
receive-side buffering.

The answer to the question, I think, is: yes, you probably could come
up with some method to recycle skbs on the receive side, but in all
honesty you're not likely to come up with a solution that doesn't kill
your performance.  If you want to recycle the same skbs to your
hardware, you either have to detect when an application has received
that data at the top of the stack, or you have to copy the data out of
the skb you just got from your hardware so that you can reuse it
immediately.  There's not much point in doing the latter, as it
doesn't avoid the skb allocation you're trying not to make.  The
former is technically feasible, but it requires your driver to wait an
indefinite amount of time while an application (or applications, if
you consider the multicast case) all get on the scheduler and receive
from their open sockets, so there's no easy way to size your receive
ring, and if you run out, you have to drop frames until someone
releases an skb to you.

It's far easier, and more efficient, to just allocate a new skb and
insert that into your hardware's ring buffer.  And it's really not
that expensive, considering the caching effect that our various
allocators have.

Neil


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: can we reuse an skb
  2009-06-19 16:56 ` Rick Jones
@ 2009-06-19 23:29   ` David Miller
  2009-06-20  3:54     ` Peter Chacko
  2009-06-21  5:41     ` Peter Chacko
  0 siblings, 2 replies; 19+ messages in thread
From: David Miller @ 2009-06-19 23:29 UTC (permalink / raw)
  To: rick.jones2; +Cc: radhamohan_ch, netdev

From: Rick Jones <rick.jones2@hp.com>
Date: Fri, 19 Jun 2009 09:56:06 -0700

> Assuming a driver did have its own "pool" and didn't rely on the
> pool(s) from which skbs are drawn, doesn't that mean you now have to
> have another configurable?  There are no good guarantees on when the
> upper layers will be finished with the skb, right?  Which means you
> would be requiring the admin(s) to have an idea of how long their
> applications wait to pull data from their sockets, and to configure
> your driver accordingly.
> 
> It would seem there would have to be a considerable performance gain
> demonstrated for that kind of thing?

Applications can hold onto such data "forever" if they want to.

Any scheme which doesn't allow dynamically increasing the pool
is prone to trivial DoS.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: can we reuse an skb
  2009-06-19 23:29   ` David Miller
@ 2009-06-20  3:54     ` Peter Chacko
  2009-06-20  8:00       ` Evgeniy Polyakov
  2009-06-20 11:51       ` Ben Hutchings
  2009-06-21  5:41     ` Peter Chacko
  1 sibling, 2 replies; 19+ messages in thread
From: Peter Chacko @ 2009-06-20  3:54 UTC (permalink / raw)
  To: David Miller; +Cc: rick.jones2, radhamohan_ch, netdev

But if a network application is holding on to NIC-driver-level pooled
buffers, we also have architectural issues in violating layered
software design. The application plays at a stateful protocol level,
while the driver should be stateless and flow-unaware.

Another thought along the lines of Radha's original idea:

Assume that we have n cores capable of processing packets at the same
time. If we trade off memory for computation, why don't we
pre-allocate "n" dedicated skb buffers regardless of the size of each
packet, each just as big as the MTU itself (forget jumbo packets for
now)? (Today, dev_alloc_skb() allocates based on the packet length,
which is optimized for memory usage.)

Each dedicated memory buffer is now a per-CPU/per-thread data
structure.

In the IO stack world we have the buffer cache, identified by buffer
headers. That design was conceived early on, when each block-level
write was typically 512 bytes, the same for all blocks. Why don't we
adapt that into the network stack?

Could you share with me the problems with this approach?
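
(To make the idea concrete, a minimal sketch follows; the names are
made up for illustration, and it assumes each pool is only ever
touched from its own core, e.g. from NAPI context, so no locking is
shown:)

	#include <linux/netdevice.h>
	#include <linux/percpu.h>
	#include <linux/skbuff.h>

	#define RX_POOL_SIZE 64	/* arbitrary choice, per core */

	struct rx_pool {
		struct sk_buff *bufs[RX_POOL_SIZE];
		unsigned int top;
	};

	static DEFINE_PER_CPU(struct rx_pool, rx_pools);

	/* Pre-fill every core's pool with MTU-sized skbs at init time. */
	static int rx_pools_init(struct net_device *dev)
	{
		unsigned int len = dev->mtu + ETH_HLEN + NET_IP_ALIGN;
		int cpu, i;

		for_each_possible_cpu(cpu) {
			struct rx_pool *pool = &per_cpu(rx_pools, cpu);

			for (i = 0; i < RX_POOL_SIZE; i++) {
				pool->bufs[i] = dev_alloc_skb(len);
				if (!pool->bufs[i])
					return -ENOMEM;
			}
			pool->top = RX_POOL_SIZE;
		}
		return 0;
	}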


thanks
peter.
On Sat, Jun 20, 2009 at 4:59 AM, David Miller<davem@davemloft.net> wrote:
> From: Rick Jones <rick.jones2@hp.com>
> Date: Fri, 19 Jun 2009 09:56:06 -0700
>
>> Assuming a driver did have its own "pool" and didn't rely on the
>> pool(s) from which skbs are drawn, doesn't that mean you now have to
>> have another configurable?  There are no good guarantees on when the
>> upper layers will be finished with the skb, right?  Which means you
>> would be requiring the admin(s) to have an idea of how long their
>> applications wait to pull data from their sockets, and to configure
>> your driver accordingly.
>>
>> It would seem there would have to be a considerable performance gain
>> demonstrated for that kind of thing?
>
> Applications can hold onto such data "forever" if they want to.
>
> Any scheme which doesn't allow dynamically increasing the pool
> is prone to trivial DoS.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: can we reuse an skb
  2009-06-20  3:54     ` Peter Chacko
@ 2009-06-20  8:00       ` Evgeniy Polyakov
  2009-06-20 11:51       ` Ben Hutchings
  1 sibling, 0 replies; 19+ messages in thread
From: Evgeniy Polyakov @ 2009-06-20  8:00 UTC (permalink / raw)
  To: Peter Chacko; +Cc: David Miller, rick.jones2, radhamohan_ch, netdev

On Sat, Jun 20, 2009 at 09:24:52AM +0530, Peter Chacko (peterchacko35@gmail.com) wrote:
> Assume that we have n cores capable of processing packets at the same
> time. If we trade off memory for computation, why don't we
> pre-allocate "n" dedicated skb buffers regardless of the size of each
> packet, each just as big as the MTU itself (forget jumbo packets for
> now)? (Today, dev_alloc_skb() allocates based on the packet length,
> which is optimized for memory usage.)
> 
> Each dedicated memory buffer is now a per-CPU/per-thread data
> structure.
> 
> In the IO stack world we have the buffer cache, identified by buffer
> headers. That design was conceived early on, when each block-level
> write was typically 512 bytes, the same for all blocks. Why don't we
> adapt that into the network stack?

And the page cache has ugly buffer_head structures to point at
partially updated blocks within a page, while block IO has to perform
read-modify-write cycles when it updates less than a single block,
which is rather costly, especially with big block sizes.

Consider 100-byte writes and a 1500-byte MTU (or a 9k MTU for a
moment) - this is a huge overhead: if the socket queue is 10 MB, for
example, you will waste 140 MB of RAM, or have miserable performance,
if the whole 1500 bytes are accounted into the socket buffer while
only 100 bytes are used.

For those who want to play with skb recycling - just implement your
own pool of skbs and provide a private destructor which requeues the
object into the private pool. I'm not sure this will be any faster
than the existing scheme, though, if proper memory accounting is
implemented.
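
(For reference, a few drivers of this period do something close to
this at TX-completion time, where the driver itself owns the free; a
rough sketch, assuming a driver-private "rx_recycle" queue and the
skb_recycle_check() helper available since 2.6.28. A purely
destructor-based pool needs more care, since kfree_skb() still frees
the skb after calling skb->destructor, so the pool would have to hold
its own reference:)

	/* TX completion: park still-clean skbs instead of freeing them. */
	if (skb_recycle_check(skb, priv->rx_buffer_size))
		skb_queue_head(&priv->rx_recycle, skb);
	else
		dev_kfree_skb_any(skb);

	/* RX refill: prefer a recycled skb over a fresh allocation. */
	skb = skb_dequeue(&priv->rx_recycle);
	if (!skb)
		skb = dev_alloc_skb(priv->rx_buffer_size);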

-- 
	Evgeniy Polyakov

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: can we reuse an skb
  2009-06-20  3:54     ` Peter Chacko
  2009-06-20  8:00       ` Evgeniy Polyakov
@ 2009-06-20 11:51       ` Ben Hutchings
  1 sibling, 0 replies; 19+ messages in thread
From: Ben Hutchings @ 2009-06-20 11:51 UTC (permalink / raw)
  To: Peter Chacko; +Cc: David Miller, rick.jones2, radhamohan_ch, netdev

On Sat, 2009-06-20 at 09:24 +0530, Peter Chacko wrote:
> But if a network application is holding on to NIC-driver-level
> pooled buffers, we also have architectural issues in violating
> layered software design. The application plays at a stateful protocol
> level, while the driver should be stateless and flow-unaware.

Meanwhile, in the real world, we want to avoid copying data, so an skb
doesn't belong to any specific protocol layer.

> Another thought in the lines of radha's original idea:
> 
> Assume that we have n cores capable of processing packets at the same
> time. If we trade off memory for computation, why don't we
> pre-allocate "n" dedicated skb buffers regardless of the size of each
> packet, each just as big as the MTU itself (forget jumbo packets for
> now)?

MTU is interface-specific, and the MTU tells us whether jumbos are
allowed or not.

Anyway, where is this pre-allocation to be done?

> (Today, dev_alloc_skb() allocates based on the packet length, which
> is optimized for memory usage.)
[...]

skbs for received packets are normally allocated in advance anyway,
which means they are already MTU-sized.  However, LRO or GRO can make it
worthwhile to defer skb allocation.

Ben.

-- 
Ben Hutchings, Senior Software Engineer, Solarflare Communications
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: can we reuse an skb
  2009-06-19 23:29   ` David Miller
  2009-06-20  3:54     ` Peter Chacko
@ 2009-06-21  5:41     ` Peter Chacko
  2009-06-21  5:49       ` David Miller
                         ` (2 more replies)
  1 sibling, 3 replies; 19+ messages in thread
From: Peter Chacko @ 2009-06-21  5:41 UTC (permalink / raw)
  To: David Miller; +Cc: rick.jones2, radhamohan_ch, netdev

Hi Dave,

Here I am considering a special case where the Linux stack is not used
in a host environment: it is a dedicated packet processor. Application
data is not expected (and is discarded if it arrives).

In the case of a host, yes, we cannot pre-allocate buffers, as that
would require us to create another copy at the L4 junction. The
current memory allocation for packet buffers is only meant for that
case, not for the special case where the Linux box is a 100% packet
processor. I wish to argue that in this special case we don't need
skb_alloc()- or skb_free()-style interfaces.

What I am considering here is the super-optimization of memory
buffers for a multi-layer packet processor, without needing to move
packets into user space. In that case, I am optimizing my custom
network stack with pre-allocated MTU-sized buffers and a few
jumbo-sized buffers. And no interrupts, as I do NAPI at all times,
since this is a dedicated appliance. I keep all these buffers in the
L1 cache, and hence I have different sets of pools for different
cores. I am currently guiding my engineers to implement the code
changes now.

I am seeking your advice on whether anybody has done this already or a
patch is available, or any advice against this...

I would greatly appreciate it if any of you could share your
experience with related work.



Best regards,
Peter Chacko
On Sat, Jun 20, 2009 at 4:59 AM, David Miller<davem@davemloft.net> wrote:
> From: Rick Jones <rick.jones2@hp.com>
> Date: Fri, 19 Jun 2009 09:56:06 -0700
>
>> Assuming a driver did have its own "pool" and didn't rely on the
>> pool(s) from which skbs are drawn, doesn't that mean you now have to
>> have another configurable?  There are no good guarantees on when the
>> upper layers will be finished with the skb, right?  Which means you
>> would be requiring the admin(s) to have an idea of how long their
>> applications wait to pull data from their sockets, and to configure
>> your driver accordingly.
>>
>> It would seem there would have to be a considerable performance gain
>> demonstrated for that kind of thing?
>
> Applications can hold onto such data "forever" if they want to.
>
> Any scheme which doesn't allow dynamically increasing the pool
> is prone to trivial DoS.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: can we reuse an skb
  2009-06-21  5:41     ` Peter Chacko
@ 2009-06-21  5:49       ` David Miller
  2009-06-21 11:46       ` [RFD] Pluggable code design (was: can we reuse an skb) Al Boldi
  2009-06-21 11:46       ` Al Boldi
  2 siblings, 0 replies; 19+ messages in thread
From: David Miller @ 2009-06-21  5:49 UTC (permalink / raw)
  To: peterchacko35; +Cc: rick.jones2, radhamohan_ch, netdev

From: Peter Chacko <peterchacko35@gmail.com>
Date: Sun, 21 Jun 2009 11:11:16 +0530

> Here I am considering a special case where the Linux stack is not
> used in a host environment: it is a dedicated packet processor.
> Application data is not expected (and is discarded if it arrives).

Linux is a general purpose operating system.

You can modify it for your own needs, locally, however you want.  But
upstream, we have to consider all use cases, not just yours.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [RFD] Pluggable code design (was: can we reuse an skb)
  2009-06-21  5:41     ` Peter Chacko
  2009-06-21  5:49       ` David Miller
@ 2009-06-21 11:46       ` Al Boldi
  2009-06-21 11:46       ` Al Boldi
  2 siblings, 0 replies; 19+ messages in thread
From: Al Boldi @ 2009-06-21 11:46 UTC (permalink / raw)
  To: Peter Chacko, David Miller
  Cc: rick.jones2, radhamohan_ch, netdev, linux-kernel

Peter Chacko wrote:
> What I am considering here is the super-optimization of memory
> buffers for a multi-layer packet processor, without needing to move
> packets into user space. In that case, I am optimizing my custom
> network stack with pre-allocated MTU-sized buffers and a few
> jumbo-sized buffers. And no interrupts, as I do NAPI at all times,
> since this is a dedicated appliance. I keep all these buffers in the
> L1 cache, and hence I have different sets of pools for different
> cores. I am currently guiding my engineers to implement the code
> changes now.

Yes, having a customizable/pluggable network stack sounds very useful.

In general, open-source projects like Linux don't give much incentive
to pluggable designs, because the source, being open, represents a
weird form of pluggability.  Unfortunately, this "hack it up / code it
hard" design style usually inhibits healthy development.

A rethink is probably in order...


Thanks!

--
Al

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [RFD] Pluggable code design (was: can we reuse an skb)
  2009-06-21  5:41     ` Peter Chacko
  2009-06-21  5:49       ` David Miller
  2009-06-21 11:46       ` [RFD] Pluggable code design (was: can we reuse an skb) Al Boldi
@ 2009-06-21 11:46       ` Al Boldi
  2 siblings, 0 replies; 19+ messages in thread
From: Al Boldi @ 2009-06-21 11:46 UTC (permalink / raw)
  To: Peter Chacko, David Miller
  Cc: rick.jones2, radhamohan_ch, netdev, linux-kernel

Peter Chacko wrote:
> What I am considering here is the super-optimization of memory
> buffers for a multi-layer packet processor, without needing to move
> packets into user space. In that case, I am optimizing my custom
> network stack with pre-allocated MTU-sized buffers and a few
> jumbo-sized buffers. And no interrupts, as I do NAPI at all times,
> since this is a dedicated appliance. I keep all these buffers in the
> L1 cache, and hence I have different sets of pools for different
> cores. I am currently guiding my engineers to implement the code
> changes now.

Yes, having a customizable/pluggable network stack sounds very useful.

In general, open-source projects like Linux don't give much incentive
to pluggable designs, because the source, being open, represents a
weird form of pluggability.  Unfortunately, this "hack it up / code it
hard" design style usually inhibits healthy development.

A rethink is probably in order...


Thanks!

--
Al

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: can we reuse an skb
  2009-06-22 13:56   ` Peter Chacko
@ 2009-06-22 14:33     ` Philby John
  0 siblings, 0 replies; 19+ messages in thread
From: Philby John @ 2009-06-22 14:33 UTC (permalink / raw)
  To: Peter Chacko; +Cc: Nicholas Van Orton, jon_zhou, radhamohan_ch, netdev

On Mon, 2009-06-22 at 19:26 +0530, Peter Chacko wrote:
> Philby,
> 
> Thank you very much for your time in helping me out and answering me.
> 
> My intention here is to customize packet buffer allocation for a
> special case, when the Linux box in question is just a packet
> processor. That is, I don't want to allocate memory from a common
> pool for a common purpose, like slab-cached, re-usable objects such
> as skbs. I want finer control of memory access time (by allocating
> objects from the L1 cache, and keeping them around as a fixed number
> of packet buffers, as in a typical router).
> 
I am not aware of a method that can use the L1 cache in a predictable
manner; either that, or the task at hand is very specific to your line
of work. In that case, you are on the right track.

> I just want to know whether I can re-use anybody's work or an
> available patch toward this goal, before I embark on writing custom
> code.
> 
Not that I know of. Sorry :(

Regards,
Philby




> As you said, I will download the most up-to-date code and correct
> myself if there are enough optimizations available already.
> 
> Thanks,
> Peter Chacko
> 
> 
> On Mon, Jun 22, 2009 at 7:04 PM, Philby John<pjohn@in.mvista.com> wrote:
> > On Fri, 2009-06-19 at 15:41 +0530, Nicholas Van Orton wrote:
> >> Does this mean that when an skb has been allocated using
> >> dev_alloc_skb(), filled with received data, and passed to the upper
> >> layers, the kernel will automatically release this buffer without
> >> the driver calling dev_kfree_skb()?
> >
> > Yes, I think that is the case. Except when the user calls an ioctl
> > that closes your ethernet device, say by using $ifconfig eth0 down,
> > in which case you must free the allocated ring skb buffers using
> > dev_kfree_skb().
> >
> >> I once got "KERNEL: assertion (!atomic_read(&skb->users)) failed
> >> at net/core/dev.c" errors when trying to free them using
> >> dev_kfree_skb().
> >>
> >> Could this be because I did not wait until netif_rx_complete() was
> >> called?
> >
> > You are using an old version of the kernel; I can't see such code
> > in 2.6.30. From what I know, this usually happens if skb->users is
> > not equal to one, which means the buffer is still in use by some
> > user. Like I said, you needn't call dev_kfree_skb() explicitly; it
> > will be freed after use by the upper network layers.
> >
> > netif_receive_skb() -> deliver_skb() -> pt_prev->func() ->
> > ip_rcv() -> ip_rcv_finish()
> >
> > ip_rcv_finish() would finally free it as per the specified protocol.
> > This, I think, is the flow, but I am sure the experts here will
> > correct me if I am wrong.
> >
> > -Philby
> >
> >


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: can we reuse an skb
  2009-06-22 13:34 ` Philby John
@ 2009-06-22 13:56   ` Peter Chacko
  2009-06-22 14:33     ` Philby John
  0 siblings, 1 reply; 19+ messages in thread
From: Peter Chacko @ 2009-06-22 13:56 UTC (permalink / raw)
  To: Philby John; +Cc: Nicholas Van Orton, jon_zhou, radhamohan_ch, netdev

Philby,

Thank you very much for your time in helping me out and answering me.

My intention here is to customize packet buffer allocation for a
special case, when the Linux box in question is just a packet
processor. That is, I don't want to allocate memory from a common
pool for a common purpose, like slab-cached, re-usable objects such
as skbs. I want finer control of memory access time (by allocating
objects from the L1 cache, and keeping them around as a fixed number
of packet buffers, as in a typical router).

I just want to know whether I can re-use anybody's work or an
available patch toward this goal, before I embark on writing custom
code.

As you said, I will download the most up-to-date code and correct
myself if there are enough optimizations available already.

Thanks,
Peter Chacko


On Mon, Jun 22, 2009 at 7:04 PM, Philby John<pjohn@in.mvista.com> wrote:
> On Fri, 2009-06-19 at 15:41 +0530, Nicholas Van Orton wrote:
>> Does this mean that when an skb has been allocated using
>> dev_alloc_skb(), filled with received data, and passed to the upper
>> layers, the kernel will automatically release this buffer without
>> the driver calling dev_kfree_skb()?
>
> Yes, I think that is the case. Except when the user calls an ioctl
> that closes your ethernet device, say by using $ifconfig eth0 down,
> in which case you must free the allocated ring skb buffers using
> dev_kfree_skb().
>
>> I once got "KERNEL: assertion (!atomic_read(&skb->users)) failed at
>> net/core/dev.c" errors when trying to free them using
>> dev_kfree_skb().
>>
>> Could this be because I did not wait until netif_rx_complete() was
>> called?
>
> You are using an old version of the kernel; I can't see such code in
> 2.6.30. From what I know, this usually happens if skb->users is not
> equal to one, which means the buffer is still in use by some user.
> Like I said, you needn't call dev_kfree_skb() explicitly; it will be
> freed after use by the upper network layers.
>
> netif_receive_skb() -> deliver_skb() -> pt_prev->func() ->
> ip_rcv() -> ip_rcv_finish()
>
> ip_rcv_finish() would finally free it as per the specified protocol.
> This, I think, is the flow, but I am sure the experts here will
> correct me if I am wrong.
>
> -Philby
>
>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: can we reuse an skb
  2009-06-19 10:11 can we reuse an skb Nicholas Van Orton
@ 2009-06-22 13:34 ` Philby John
  2009-06-22 13:56   ` Peter Chacko
  0 siblings, 1 reply; 19+ messages in thread
From: Philby John @ 2009-06-22 13:34 UTC (permalink / raw)
  To: Nicholas Van Orton; +Cc: Peter Chacko, jon_zhou, radhamohan_ch, netdev

On Fri, 2009-06-19 at 15:41 +0530, Nicholas Van Orton wrote:
> Does this mean that when an skb has been allocated using
> dev_alloc_skb(), filled with received data, and passed to the upper
> layers, the kernel will automatically release this buffer without the
> driver calling dev_kfree_skb()?

Yes, I think that is the case. Except when the user calls an ioctl
that closes your ethernet device, say by using $ifconfig eth0 down, in
which case you must free the allocated ring skb buffers using
dev_kfree_skb().
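
(A minimal sketch of that cleanup path, with hypothetical field
names:)

	/* Called from the driver's close/stop path: skbs still sitting
	 * in the RX ring were never handed to the stack, so the driver
	 * must free them itself. */
	static void my_free_rx_ring(struct my_priv *priv)
	{
		int i;

		for (i = 0; i < RX_RING_SIZE; i++) {
			if (priv->rx_skbs[i]) {
				dev_kfree_skb(priv->rx_skbs[i]);
				priv->rx_skbs[i] = NULL;
			}
		}
	}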

> I once got "KERNEL: assertion (!atomic_read(&skb->users)) failed at
> net/core/dev.c" errors when trying to free them using
> dev_kfree_skb().
> 
> Could this be because I did not wait until netif_rx_complete() was
> called?

You are using an old version of the kernel; I can't see such code in
2.6.30. From what I know, this usually happens if skb->users is not
equal to one, which means the buffer is still in use by some user.
Like I said, you needn't call dev_kfree_skb() explicitly; it will be
freed after use by the upper network layers.

netif_receive_skb() -> deliver_skb() -> pt_prev->func() ->
ip_rcv() -> ip_rcv_finish()

ip_rcv_finish() would finally free it as per the specified protocol.
This, I think, is the flow, but I am sure the experts here will
correct me if I am wrong.

-Philby


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: can we reuse an skb
@ 2009-06-19 10:11 Nicholas Van Orton
  2009-06-22 13:34 ` Philby John
  0 siblings, 1 reply; 19+ messages in thread
From: Nicholas Van Orton @ 2009-06-19 10:11 UTC (permalink / raw)
  To: Peter Chacko; +Cc: jon_zhou, radhamohan_ch, netdev

On Fri, 2009-06-19 at 12:51 +0530, Peter Chacko wrote:
> Radha,
>
> skb memory comes from the slab allocator, whose object caches are
> themselves re-usable pools. kmalloc(GFP_ATOMIC) from these object
> caches doesn't incur much penalty compared to the case where it
> really does memory allocation/de-allocation. So the intelligence you
> want to put in the driver is already provided by the slab layer. But
> if you want to add something like per-flow ring buffers, optimized
> for a point-to-point link or a similar purpose, you can keep a
> driver-level cache.

Does this mean that when an skb has been allocated using
dev_alloc_skb(), filled with received data, and passed to the upper
layers, the kernel will automatically release this buffer without the
driver calling dev_kfree_skb()? I once got
"KERNEL: assertion (!atomic_read(&skb->users)) failed at net/core/dev.c"
errors when trying to free them using dev_kfree_skb().

Could this be because I did not wait until netif_rx_complete() was
called?

Regards,
Nicholas




>
> thanks
>
> On Fri, Jun 19, 2009 at 12:21 PM, <jon_zhou@agilent.com> wrote:
> > I am also thinking about this...
> >
> > e.g. pcnet32.c:
> > it seems the skb is marked as no longer in use (some bits are set) in the device driver, and then it is recycled in the softirq handler.
> >
> > That means it cannot be reused unless the driver is modified.
> >
> > Regards,
> > zhou rui
> >
> > -----Original Message-----
> > From: netdev-owner@vger.kernel.org [mailto:netdev-owner@vger.kernel.org] On Behalf Of Radha Mohan
> > Sent: Friday, June 19, 2009 2:47 PM
> > To: netdev@vger.kernel.org
> > Subject: can we reuse an skb
> >
> >
> > Hi,
> >
> > For an Ethernet driver, we need to allocate a pool of SKBs for receiving packets. Is there any way we can reuse the same SKBs, without needing to re-allocate in atomic context every time one has been consumed by netif_rx()?
> >
> > Any pointers will be helpful.
> >
> > -- Mohan
> >
> >

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2009-06-22 14:33 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-06-19  6:46 can we reuse an skb Radha Mohan
2009-06-19  6:51 ` jon_zhou
2009-06-19  7:10   ` Radha Mohan
2009-06-19  7:21   ` Peter Chacko
2009-06-19 10:37 ` Saikiran Madugula
2009-06-19 18:41   ` Neil Horman
2009-06-19 16:56 ` Rick Jones
2009-06-19 23:29   ` David Miller
2009-06-20  3:54     ` Peter Chacko
2009-06-20  8:00       ` Evgeniy Polyakov
2009-06-20 11:51       ` Ben Hutchings
2009-06-21  5:41     ` Peter Chacko
2009-06-21  5:49       ` David Miller
2009-06-21 11:46       ` [RFD] Pluggable code design (was: can we reuse an skb) Al Boldi
2009-06-21 11:46       ` Al Boldi
2009-06-19 10:11 can we reuse an skb Nicholas Van Orton
2009-06-22 13:34 ` Philby John
2009-06-22 13:56   ` Peter Chacko
2009-06-22 14:33     ` Philby John

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.