linux-kernel.vger.kernel.org archive mirror
* Linux 2.6.9 pktgen module causes INIT process respawning and sickness
@ 2004-11-19 21:53 Jeff V. Merkey
  2004-11-19 22:06 ` Jeff V. Merkey
  0 siblings, 1 reply; 17+ messages in thread
From: Jeff V. Merkey @ 2004-11-19 21:53 UTC (permalink / raw)
  To: linux-kernel, jmerkey


With pktgen.o configured to send 123 MB/s on a gigabit link, on a system with
pktgen set to the following parms:

pgset "odev eth1"
pgset "pkt_size 1500"
pgset "count 0"
pgset "ipg 5000"
pgset "src_min 10.0.0.1"
pgset "src_max 10.0.0.254"
pgset "dst_min 192.168.0.1"
pgset "dst_max 192.168.0.254"

After 37 hours of continual packet generation into a gigabit regeneration tap
device, the server system console will start to respawn the INIT process about
every 10-12 hours of continuous packet generation.

As a side note, this module in Linux is extremely useful and the "USE WITH
CAUTION" warnings are certainly well stated.  The performance of this tool is
excellent.

Jeff



* Re: Linux 2.6.9 pktgen module causes INIT process respawning and sickness
  2004-11-19 21:53 Linux 2.6.9 pktgen module causes INIT process respawning and sickness Jeff V. Merkey
@ 2004-11-19 22:06 ` Jeff V. Merkey
  2004-11-22  3:44   ` Lincoln Dale
  2004-11-22 17:19   ` Martin Josefsson
  0 siblings, 2 replies; 17+ messages in thread
From: Jeff V. Merkey @ 2004-11-19 22:06 UTC (permalink / raw)
  To: Jeff V. Merkey; +Cc: linux-kernel, jmerkey


Additionally, when packet sizes of 64, 128, and 256 bytes are selected, pktgen
is unable to achieve > 500,000 pps (only 349,000 on my system).  A Smartbits
generator can achieve over 1 million pps with 64 byte packets on gigabit.  This
is one performance issue for this app.  However, at the 1500 and 1048 byte
sizes, gigabit saturation is achievable.

Jeff

Jeff V. Merkey wrote:

>
> With pktgen.o configured to send 123MB/S on a gigabit on a system 
> using pktgen set to the following parms:
>
> pgset "odev eth1"
> pgset "pkt_size 1500"
> pgset "count 0"
> pgset "ipg 5000"
> pgset "src_min 10.0.0.1"
> pgset "src_max 10.0.0.254"
> pgset "dst_min 192.168.0.1"
> pgset "dst_max 192.168.0.254"
>
> After 37 hours of continual packet generation into a gigabit 
> regeneration tap device,
> the server system console will start to respawn the INIT process about 
> every 10-12
> hours of continuous packet generation.
>
> As a side note, this module in Linux is extremely useful and the "USE 
> WITH CAUTION" warnings
> are certainly will stated.  The performance of this tool is excellent.
>
> Jeff
>



* Re: Linux 2.6.9 pktgen module causes INIT process respawning and sickness
  2004-11-19 22:06 ` Jeff V. Merkey
@ 2004-11-22  3:44   ` Lincoln Dale
  2004-11-22 17:06     ` Jeff V. Merkey
  2004-11-22 17:19   ` Martin Josefsson
  1 sibling, 1 reply; 17+ messages in thread
From: Lincoln Dale @ 2004-11-22  3:44 UTC (permalink / raw)
  To: Jeff V. Merkey; +Cc: Jeff V. Merkey, linux-kernel, jmerkey

Jeff,

you're using commodity x86 hardware.  what do you expect?

while the speed of PCs has increased significantly, there are still 
significant bottlenecks when it comes to PCI bandwidth, PCI arbitration 
efficiency & # of interrupts/second.
linux ain't bad -- but there are other OSes which still do slightly better 
given equivalent hardware.

with a PC comes flexibility.
that won't match the speed of the FPGAs in a Spirent Smartbits, Agilent 
RouterTester, IXIA et al ...


cheers,

lincoln.

At 09:06 AM 20/11/2004, Jeff V. Merkey wrote:

>Additionally, when packets sizes 64, 128, and 256 are selected, pktgen is 
>unable to achieve > 500,000 pps (349,000 only on my system).
>A Smartbits generator can achieve over 1 million pps with 64 byte packets 
>on gigabit.  This is one performance
>issue for this app.  However, at 1500 and 1048 sizes, gigabit saturation 
>is achievable.
>Jeff
>
>Jeff V. Merkey wrote:
>
>>
>>With pktgen.o configured to send 123MB/S on a gigabit on a system using 
>>pktgen set to the following parms:
>>
>>pgset "odev eth1"
>>pgset "pkt_size 1500"
>>pgset "count 0"
>>pgset "ipg 5000"
>>pgset "src_min 10.0.0.1"
>>pgset "src_max 10.0.0.254"
>>pgset "dst_min 192.168.0.1"
>>pgset "dst_max 192.168.0.254"
>>
>>After 37 hours of continual packet generation into a gigabit regeneration 
>>tap device,
>>the server system console will start to respawn the INIT process about 
>>every 10-12
>>hours of continuous packet generation.
>>
>>As a side note, this module in Linux is extremely useful and the "USE 
>>WITH CAUTION" warnings
>>are certainly will stated.  The performance of this tool is excellent.
>>
>>Jeff



* Re: Linux 2.6.9 pktgen module causes INIT process respawning  and sickness
  2004-11-22  3:44   ` Lincoln Dale
@ 2004-11-22 17:06     ` Jeff V. Merkey
  2004-11-22 22:50       ` Lincoln Dale
  0 siblings, 1 reply; 17+ messages in thread
From: Jeff V. Merkey @ 2004-11-22 17:06 UTC (permalink / raw)
  To: Lincoln Dale; +Cc: linux-kernel


Lincoln,

I've studied these types of problems for years, and I think it's possible even
for Linux.  The problem with small packet sizes on x86 hardware is related to
non-cacheable writes to the adapter ring buffer for preloading of addresses.
From my measurements, I have observed that the increased memory write traffic
increases latency to the point that the OS is unable to receive data off the
card at high enough rates.  With testing against Linux with a Spirent
Smartbits, at 300,000 packets per second for 64 byte packets about 80% of the
packets get dropped at 1000 Mb/s rates.  It's true that Linux is simply
incapable of generating at these rates, but the reason in Linux is due to poor
design at the xmit layer.  You see a lot better behavior at 1500 byte packet
sizes, but this is because the card doesn't have to preload as many addresses
into the ring buffer, since you are only dealing with 150,000 packets per
second in the 1500 byte case, not in the millions for the 64 byte case.

Linux uses polling (bad) and the tx queue does not feed packets back to the
adapter on tx cleaning of the queue via tx complete (or terminal dma count)
interrupts directly; instead they go through a semaphore to trigger the next
send -- horribly broken for high speed communications.  They should just post
the packets and allow tx complete interrupts to feed them off the queues.  The
queue depths in qdisc are far too short before Linux starts dropping packets
internally.  I've had to increase the depth of tx_queue_len for some apps to
work properly without dropping all the skbs on the floor.
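
(For illustration, the tx_queue_len increase mentioned above is a one-line
change from userspace; eth1 and the value 10000 are only examples, and the
useful depth depends on the workload:)

  # raise the device transmit queue length (qdisc depth)
  ifconfig eth1 txqueuelen 10000
  # equivalently, with iproute2:
  ip link set eth1 txqueuelen 10000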

So how do we get around this problem?  At present, the design of the Intel
drivers allows all the ripe ring buffers to be reaped at once from a single
interrupt.  This is very efficient on the RX side and in fact, with static
tests, I have been able to program the Intel card to accept 64 byte packets at
the maximum rate for gigabit saturation on Linux provided the ring buffers are
loaded with static addresses.  This indicates the problem in the design is
related to the preloading and serializing memory behavior of Intel's
architecture at the ring buffer level on the card.  This also means that Linux
on current PC architecture (and most OSes for that matter) will not be able to
sustain 10 gigabit rates unless the packet sizes get larger and larger, due to
the nature of this problem.  The solution for the card vendors is to instrument
the ability to load a descriptor to the card once which contains the addresses
of all the ring buffers for a session of the card and reap them in A/B lists,
i.e. two active preload memory tables which contain a listing of preload
addresses for receive; when the card fills one list, it switches to the second
for receives, sends an interrupt, and the ISR loads the next table into the
card.

I see no other way for an OS to sustain high packet loading above 500,000
packets per second on Linux, or even come close to dealing with small packets
or full 10 gigabit ethernet, without such a model.  The bus speeds are actually
fine for dealing with this on current hardware.  The problem is related to the
serializing behavior of non-cacheable memory references on IO mapped card
memory, and this suggestion could be implemented in Intel Gigabit and 10 GbE
hardware with microcode and minor changes to the DMA designs of their chipsets.
It would allow all OSes to reach performance levels of a Smartbits or even a
Cisco router without the need for custom hardware design.

My 2 cents.

Jeff






Lincoln Dale wrote:

> Jeff,
>
> you're using commodity x86 hardware. what do you expect?
>
> while the speed of PCs has increased significantly, there are still 
> significant bottlenecks when it comes to PCI bandwidth, PCI 
> arbitration efficiency & # of interrupts/second.
> linux ain't bad -- but there are other OSes which still do slightly 
> better given equivalent hardware.
>
> with a PC comes flexibility.
> that won't match the speed of the FPGAs in a Spirent Smartbits, 
> Agilent RouterTester, IXIA et al ...
>
> cheers,
>
> lincoln.
>
> At 09:06 AM 20/11/2004, Jeff V. Merkey wrote:
>
>> Additionally, when packets sizes 64, 128, and 256 are selected, 
>> pktgen is unable to achieve > 500,000 pps (349,000 only on my system).
>> A Smartbits generator can achieve over 1 million pps with 64 byte 
>> packets on gigabit. This is one performance
>> issue for this app. However, at 1500 and 1048 sizes, gigabit 
>> saturation is achievable.
>> Jeff
>>
>> Jeff V. Merkey wrote:
>>
>>>
>>> With pktgen.o configured to send 123MB/S on a gigabit on a system 
>>> using pktgen set to the following parms:
>>>
>>> pgset "odev eth1"
>>> pgset "pkt_size 1500"
>>> pgset "count 0"
>>> pgset "ipg 5000"
>>> pgset "src_min 10.0.0.1"
>>> pgset "src_max 10.0.0.254"
>>> pgset "dst_min 192.168.0.1"
>>> pgset "dst_max 192.168.0.254"
>>>
>>> After 37 hours of continual packet generation into a gigabit 
>>> regeneration tap device,
>>> the server system console will start to respawn the INIT process 
>>> about every 10-12
>>> hours of continuous packet generation.
>>>
>>> As a side note, this module in Linux is extremely useful and the 
>>> "USE WITH CAUTION" warnings
>>> are certainly will stated. The performance of this tool is excellent.
>>>
>>> Jeff
>>
>
>



* Re: Linux 2.6.9 pktgen module causes INIT process respawning and sickness
  2004-11-19 22:06 ` Jeff V. Merkey
  2004-11-22  3:44   ` Lincoln Dale
@ 2004-11-22 17:19   ` Martin Josefsson
  2004-11-22 18:33     ` Jeff V. Merkey
  1 sibling, 1 reply; 17+ messages in thread
From: Martin Josefsson @ 2004-11-22 17:19 UTC (permalink / raw)
  To: Jeff V. Merkey; +Cc: linux-kernel, jmerkey

[-- Attachment #1: Type: text/plain, Size: 866 bytes --]

On Fri, 2004-11-19 at 23:06, Jeff V. Merkey wrote:
> Additionally, when packets sizes 64, 128, and 256 are selected, pktgen 
> is unable to achieve > 500,000 pps (349,000 only on my system).
> A Smartbits generator can achieve over 1 million pps with 64 byte 
> packets on gigabit.  This is one performance
> issue for this app.  However, at 1500 and 1048 sizes, gigabit saturation 
> is achievable. 

What hardware are you using? 349kpps is _low_ performance at 64byte
packets.

Here you can see Robert's (the pktgen author) results when testing different
e1000 NICs at different bus speeds.  He also tested 2port and 4port e1000
cards; the 4port NICs have a PCI-X bridge...

http://robur.slu.se/Linux/net-development/experiments/2004/040808-pktgen

I get a lot higher than 349kpps with an e1000 desktop adapter running at
32bit/66MHz.
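
(For comparison purposes, a minimal small-packet run with the usual pgset
helper looks roughly like the sketch below; the device name, the clone_skb
value, and the way a run is started are assumptions that differ between
pktgen versions:)

  pgset "odev eth1"          # example device
  pgset "pkt_size 64"
  pgset "count 10000000"     # stop after 10M packets
  pgset "ipg 0"              # no added inter-packet gap
  pgset "clone_skb 1000"     # amortize skb allocation, if supported
  # start the run ("inject" on older pktgen, "start" written to
  # /proc/net/pktgen/pgctrl on newer trees), then read the reported rate:
  cat /proc/net/pktgen/eth1 | fgrep pps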
 
-- 
/Martin



* Re: Linux 2.6.9 pktgen module causes INIT process respawning and sickness
  2004-11-22 17:19   ` Martin Josefsson
@ 2004-11-22 18:33     ` Jeff V. Merkey
  0 siblings, 0 replies; 17+ messages in thread
From: Jeff V. Merkey @ 2004-11-22 18:33 UTC (permalink / raw)
  To: Martin Josefsson; +Cc: linux-kernel, jmerkey

Martin,

See the comments below.  This test uses dual and quad adapters, but doesn't
get around the poor design of dev_queue_xmit or the driver layer for xmit
packets.  The reasons are explained below:

Jeff


Lincoln,

I've studied these types of problems for years, and I think it's possible even
for Linux.  The problem with small packet sizes on x86 hardware is related to
non-cacheable writes to the adapter ring buffer for preloading of addresses.
From my measurements, I have observed that the increased memory write traffic
increases latency to the point that the OS is unable to receive data off the
card at high enough rates.  With testing against Linux with a Spirent
Smartbits, at 300,000 packets per second for 64 byte packets about 80% of the
packets get dropped at 1000 Mb/s rates.  It's true that Linux is simply
incapable of generating at these rates, but the reason in Linux is due to poor
design at the xmit layer.  You see a lot better behavior at 1500 byte packet
sizes, but this is because the card doesn't have to preload as many addresses
into the ring buffer, since you are only dealing with 150,000 packets per
second in the 1500 byte case, not in the millions for the 64 byte case.

Linux uses polling (bad) and the tx queue does not feed packets back to the
adapter on tx cleaning of the queue via tx complete (or terminal dma count)
interrupts directly; instead they go through a semaphore to trigger the next
send -- horribly broken for high speed communications.  They should just post
the packets and allow tx complete interrupts to feed them off the queues.  The
queue depths in qdisc are far too short before Linux starts dropping packets
internally.  I've had to increase the depth of tx_queue_len for some apps to
work properly without dropping all the skbs on the floor.

So how do we get around this problem?  At present, the design of the Intel
drivers allows all the ripe ring buffers to be reaped at once from a single
interrupt.  This is very efficient on the RX side and in fact, with static
tests, I have been able to program the Intel card to accept 64 byte packets at
the maximum rate for gigabit saturation on Linux provided the ring buffers are
loaded with static addresses.  This indicates the problem in the design is
related to the preloading and serializing memory behavior of Intel's
architecture at the ring buffer level on the card.  This also means that Linux
on current PC architecture (and most OSes for that matter) will not be able to
sustain 10 gigabit rates unless the packet sizes get larger and larger, due to
the nature of this problem.  The solution for the card vendors is to instrument
the ability to load a descriptor to the card once which contains the addresses
of all the ring buffers for a session of the card and reap them in A/B lists,
i.e. two active preload memory tables which contain a listing of preload
addresses for receive; when the card fills one list, it switches to the second
for receives, sends an interrupt, and the ISR loads the next table into the
card.

I see no other way for an OS to sustain high packet loading above 500,000
packets per second on Linux, or even come close to dealing with small packets
or full 10 gigabit ethernet, without such a model.  The bus speeds are actually
fine for dealing with this on current hardware.  The problem is related to the
serializing behavior of non-cacheable memory references on IO mapped card
memory, and this suggestion could be implemented in Intel Gigabit and 10 GbE
hardware with microcode and minor changes to the DMA designs of their chipsets.
It would allow all OSes to reach performance levels of a Smartbits or even a
Cisco router without the need for custom hardware design.

My 2 cents.

Jeff






Lincoln Dale wrote:

 > Jeff,
 >
 > you're using commodity x86 hardware. what do you expect?
 >
 > while the speed of PCs has increased significantly, there are still 
significant bottlenecks when it comes to PCI bandwidth, PCI arbitration 
efficiency & # of interrupts/second.
 > linux ain't bad -- but there are other OSes which still do slightly 
better given equivalent hardware.
 >
 > with a PC comes flexibility.
 > that won't match the speed of the FPGAs in a Spirent Smartbits, 
Agilent RouterTester, IXIA et al ...
 >
 > cheers,
 >
 > lincoln.



Martin Josefsson wrote:

>On Fri, 2004-11-19 at 23:06, Jeff V. Merkey wrote:
>  
>
>>Additionally, when packets sizes 64, 128, and 256 are selected, pktgen 
>>is unable to achieve > 500,000 pps (349,000 only on my system).
>>A Smartbits generator can achieve over 1 million pps with 64 byte 
>>packets on gigabit.  This is one performance
>>issue for this app.  However, at 1500 and 1048 sizes, gigabit saturation 
>>is achievable. 
>>    
>>
>
>What hardware are you using? 349kpps is _low_ performance at 64byte
>packets.
>
>Here you can see Roberts (pktgen author) results when testing diffrent
>e1000 nics at diffrent bus speeds. He also tested 2port and 4port e1000
>cards, the 4port nics have an pci-x bridge...
>
>http://robur.slu.se/Linux/net-development/experiments/2004/040808-pktgen
>
>I get a lot higher than 349kpps with an e1000 desktop adapter running at
>32bit/66MHz.
> 
>  
>



* Re: Linux 2.6.9 pktgen module causes INIT process respawning  and sickness
  2004-11-22 17:06     ` Jeff V. Merkey
@ 2004-11-22 22:50       ` Lincoln Dale
  2004-11-23  0:36         ` Jeff V. Merkey
  0 siblings, 1 reply; 17+ messages in thread
From: Lincoln Dale @ 2004-11-22 22:50 UTC (permalink / raw)
  To: Jeff V. Merkey; +Cc: linux-kernel

Jeff,

At 04:06 AM 23/11/2004, Jeff V. Merkey wrote:
>I've studied these types of problems for years, and I think it's possible 
>even for Linux.

so you have the source code -- if it's such a big deal for you, how about you 
contribute the work to make this possible?

the fact is, large-packet-per-second generation fits into two categories:
  (a) script kiddies / haxors who are interested in building DoS tools
  (b) folks that spend too much time benchmarking.

for the (b) case, typically the PPS-generation is only part of it.  getting 
meaningful statistics on reordering (if any) as well as accurate latency 
and ideally real-world traffic flows is important.  there are specialized 
tools out there to do this: Spirent, Ixia, Agilent et al make them.

>[..]
>I see no other way for OS to sustain high packet loading about 500,000 
>packets per second on Linux
>or even come close to dealing with small packets or full 10 gigabite 
>ethernet without such a model.

10GbE NICs are an entirely different beast from 1GbE.
as you pointed out, with real-world packet sizes today, one can sustain 
wire-rate 1GbE today (same holds true for 2Gbps Fibre Channel also).

i wouldn't call pushing minimum-packet-size @ 1GbE (which is 46 bytes of 
payload, 72 bytes on the wire btw) "real world".  and it's 1.488M packets/second.
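
(That figure follows from the framing overhead: a minimum 64-byte frame plus
the 8-byte preamble and 12-byte inter-frame gap occupies 84 bytes, i.e. 672
bit times, on the wire -- a quick sanity check in shell:)

  # wire-rate pps for minimum-size frames
  echo $(( 1000000000 / ((64 + 8 + 12) * 8) ))    # 1GbE  -> 1488095  (~1.49M pps)
  echo $(( 10000000000 / ((64 + 8 + 12) * 8) ))   # 10GbE -> 14880952 (~14.88M pps)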

>The bus speeds are actually fine for dealing with this on current hardware.

it's fine when you have meaningful interrupt coalescing going on & large 
packets to DMA.
it fails when you have inefficient DMA (small) with the overhead of setting 
up & tearing down the DMA and associated arbitration overhead.



cheers,

lincoln.



* Re: Linux 2.6.9 pktgen module causes INIT process respawning   and sickness
  2004-11-22 22:50       ` Lincoln Dale
@ 2004-11-23  0:36         ` Jeff V. Merkey
  2004-11-23  1:06           ` Jeff V. Merkey
  2004-11-23  1:23           ` Lincoln Dale
  0 siblings, 2 replies; 17+ messages in thread
From: Jeff V. Merkey @ 2004-11-23  0:36 UTC (permalink / raw)
  To: Lincoln Dale; +Cc: linux-kernel

Lincoln Dale wrote:

> Jeff,
>
> At 04:06 AM 23/11/2004, Jeff V. Merkey wrote:
>
>> I've studied these types of problems for years, and I think it's 
>> possible even for Linux.
>
>
> so you have the source code --if its such a big deal for you, how 
> about you contribute the work to make this possible ?


Bryan Sparks says no to open sourcing this code in Linux. Sorry -- I 
asked. I am allowed to open source any modifications
to public kernel sources like dev.c since we have an obligation to do 
so. I will provide source code enhancements for the kernel
for anyone who purchases our Linux based appliances and asks for the 
source code (so says Bryan Sparks). You can issue a purchase
request to Bryan Sparks (bryan@devicelogics.com) if you want any source 
code changes for the Linux kernel.

>
> the fact is, large-packet-per-second generation fits into two categories:
> (a) script kiddies / haxors who are interested in building DoS tools
> (b) folks that spend too much time benchmarking.
>
> for the (b) case, typically the PPS-generation is only part of it. 
> getting meaningful statistics on reordering (if any) as well as 
> accurate latency and ideally real-world traffic flows is important. 
> there are specialized tools out there to do this: Spirent, Ixia, 
> Agilent et al make them.


There are about four pages of listings of open source tools and scripts 
that do this -- we support all of them.

>> [..]
>> I see no other way for OS to sustain high packet loading about 
>> 500,000 packets per second on Linux
>> or even come close to dealing with small packets or full 10 gigabite 
>> ethernet without such a model.
>
>
> 10GbE NICs are an entirely different beast from 1GbE.
> as you pointed out, with real-world packet sizes today, one can 
> sustain wire-rate 1GbE today (same holds true for 2Gbps Fibre Channel 
> also).
>
> i wouldn't call pushing minimum-packet-size @ 1GbE (which is 46 
> payload, 72 bytes on the wire btw) "real world". and its 1.488M 
> packets/second.
>
I agree. I have also noticed that Cisco routers are not even able to withstand
these rates with 64 byte packets without dropping them, so I agree this is not
real world. It is useful testing, however, to determine the limits and
bottlenecks of where things break.

>> The bus speeds are actually fine for dealing with this on current 
>> hardware.
>
>
> its fine when you have meaningful interrupt coalescing going on & 
> large packets to DMA.
> it fails when you have inefficient DMA (small) with the overhead of 
> setting up & tearing down the DMA and associated arbitration overhead.
>
>

I can sustain full line rate gigabit on two adapters at the same time with a
12 CLK interpacket gap time and 0 dropped packets at 64 byte sizes from a
Smartbits to Linux, provided the adapter ring buffer is loaded with static
addresses. This demonstrates that it is possible to sustain 64 byte packet
rates at full line rate with current DMA architectures on 400 Mhz buses with
Linux (which means it will handle any network loading scenario). The
bottleneck from my measurements appears to be the overhead of serializing
writes to the adapter ring buffer IO memory. The current drivers also perform
interrupt coalescing very well with Linux. What's needed is a method for
submission of ring buffer entries that can be sent in large scatter gather
listings rather than one at a time. Ring buffers exhibit sequential behavior,
so this method should work well to support 1GbE and 10GbE at full line rate
with small packet sizes.
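
(As a concrete handle on the two knobs discussed here, the descriptor ring
sizes and the interrupt coalescing parameters are visible from userspace via
ethtool; eth1 and the values below are only examples, and not every driver of
that era exposes them:)

  ethtool -g eth1                  # show current RX/TX ring sizes
  ethtool -G eth1 rx 4096 tx 4096  # enlarge the descriptor rings
  ethtool -c eth1                  # show interrupt coalescing settings
  ethtool -C eth1 rx-usecs 100     # e.g. coalesce RX interrupts to ~100us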

Jeff


>
> cheers,
>
> lincoln.
>



* Re: Linux 2.6.9 pktgen module causes INIT process respawning   and sickness
  2004-11-23  0:36         ` Jeff V. Merkey
@ 2004-11-23  1:06           ` Jeff V. Merkey
  2004-11-23  1:25             ` Lincoln Dale
  2004-11-23  1:23           ` Lincoln Dale
  1 sibling, 1 reply; 17+ messages in thread
From: Jeff V. Merkey @ 2004-11-23  1:06 UTC (permalink / raw)
  To: Jeff V. Merkey; +Cc: Lincoln Dale, linux-kernel

Jeff V. Merkey wrote:

>
> Bryan Sparks says no to open sourcing this code in Linux. Sorry -- I 
> asked. I am allowed to open source any modifications
> to public kernel sources like dev.c since we have an obligation to do 
> so. I will provide source code enhancements for the kernel
> for anyone who purchases our Linux based appliances and asks for the 
> source code (so says Bryan Sparks). You can issue a purchase
> request to Bryan Sparks (bryan@devicelogics.com) if you want any 
> source code changes for the Linux kernel.
>
Lincoln,

Just to clarify: needless to say, we are not open sourcing any of our
proprietary technology in the appliances, only the changes to the core Linux
kernel files as required by the GPL. It comes as a patch to linux-2.6.9 and
does not include the appliance core systems.

Jeff


* Re: Linux 2.6.9 pktgen module causes INIT process respawning   and sickness
  2004-11-23  0:36         ` Jeff V. Merkey
  2004-11-23  1:06           ` Jeff V. Merkey
@ 2004-11-23  1:23           ` Lincoln Dale
  1 sibling, 0 replies; 17+ messages in thread
From: Lincoln Dale @ 2004-11-23  1:23 UTC (permalink / raw)
  To: Jeff V. Merkey; +Cc: linux-kernel

At 11:36 AM 23/11/2004, Jeff V. Merkey wrote:
>>>I've studied these types of problems for years, and I think it's 
>>>possible even for Linux.
>>
>>so you have the source code --if its such a big deal for you, how about 
>>you contribute the work to make this possible ?
>
>Bryan Sparks says no to open sourcing this code in Linux. Sorry -- I 
>asked. I am allowed to open source any modifications
>to public kernel sources like dev.c since we have an obligation to do so. 
>I will provide source code enhancements for the kernel
>for anyone who purchases our Linux based appliances and asks for the 
>source code (so says Bryan Sparks). You can issue a purchase
>request to Bryan Sparks (bryan@devicelogics.com) if you want any source 
>code changes for the Linux kernel.

LOL.  in wonderland again?

>>the fact is, large-packet-per-second generation fits into two categories:
>>(a) script kiddies / haxors who are interested in building DoS tools
>>(b) folks that spend too much time benchmarking.
>>
>>for the (b) case, typically the PPS-generation is only part of it. 
>>getting meaningful statistics on reordering (if any) as well as accurate 
>>latency and ideally real-world traffic flows is important. there are 
>>specialized tools out there to do this: Spirent, Ixia, Agilent et al make them.
>
>There are about four pages of listings of open source tools and scripts 
>that do this -- we support all of them.

so you're creating a packet-generation tool?
you mentioned already that you had to increase receive-buffers up to some 
large number.  doesn't sound like a very useful packet-generation tool if 
you're internally having to buffer >60K packets . . .
LOL.

>>i wouldn't call pushing minimum-packet-size @ 1GbE (which is 46 payload, 
>>72 bytes on the wire btw) "real world". and its 1.488M packets/second.
>I agree. I have also noticed that CISCO routers are not even able to 
>withstand these rates with 64 byte packets without dropping them,
>so I agree this is not real world. It is useful testing howevr, to 
>determine the limits and bottlenecks of where things break.

Cisco software-based routers?  sure ...
however, if you had an application which required wire-rate minimum-sized 
frames, then a software-based router wouldn't really be your platform of 
choice.

hint: go look at EANTC's testing of GbE and 10GbE L3 switches.

there's public test data of 10GbE with 10,000-line ACLs for both IPv4 & 
IPv6-based L3 switching.



cheers,

lincoln.



* Re: Linux 2.6.9 pktgen module causes INIT process respawning   and sickness
  2004-11-23  1:06           ` Jeff V. Merkey
@ 2004-11-23  1:25             ` Lincoln Dale
  0 siblings, 0 replies; 17+ messages in thread
From: Lincoln Dale @ 2004-11-23  1:25 UTC (permalink / raw)
  To: Jeff V. Merkey; +Cc: Jeff V. Merkey, linux-kernel

At 12:06 PM 23/11/2004, Jeff V. Merkey wrote:
>>Bryan Sparks says no to open sourcing this code in Linux. Sorry -- I 
>>asked. I am allowed to open source any modifications
>>to public kernel sources like dev.c since we have an obligation to do so. 
>>I will provide source code enhancements for the kernel
>>for anyone who purchases our Linux based appliances and asks for the 
>>source code (so says Bryan Sparks). You can issue a purchase
>>request to Bryan Sparks (bryan@devicelogics.com) if you want any source 
>>code changes for the Linux kernel.
>
>Needless to say, we are not open sourcing any of our proprietary 
>technology with the appliances, just the changes to the core
>Linux kernel files as required by the GPL, just to clarify. It comes as a 
>patch to linux-2.6.9 and does not include the appliance
>core systems.

got it - much clearer.

fair enough.


cheers,

lincoln.



* Re: Linux 2.6.9 pktgen module causes INIT process respawning   and sickness
  2004-11-23 22:54             ` Jeff V. Merkey
@ 2004-11-25  2:17               ` Lincoln Dale
  0 siblings, 0 replies; 17+ messages in thread
From: Lincoln Dale @ 2004-11-25  2:17 UTC (permalink / raw)
  To: Jeff V. Merkey; +Cc: Andi Kleen, linux-kernel

At 09:54 AM 24/11/2004, Jeff V. Merkey wrote:
[..]
>True. Without the proposed hardware change to the 1 GbE abd 10GbE adapter,
>I doubt this could be eliminated. There would still be the need to free 
>the descriptor
>from the ring buffer and this does require touching this memory. Scrap 
>that idea.
>The long term solution is for the card vendors to enable a batch mode for 
>submission
[..]

Jeff,

so the fact still remains: what is so bad about the current approach?
sure -- it can't do wire-rate 1GbE with minimal-sized frames -- but even if 
it could -- would it be able to do bidirectional 1GbE with minimal-sized 
frames?

even if you could, can you name a real-world application that would 
actually need that?


you make the point of "these things are necessary for 10GbE".
sure, but -- again -- 10GbE NICs are typically an entirely different beast, 
with far more offload, RAM, DMA & on-board firmware capabilities.

take a look at any of the 10GbE adapters, either already released, 
announced, or in development.  they all go well beyond 1GbE NICs for 
embedded smarts; they have to.

the ability to wire-rate minimum-packet-size 10GbE is still not going to be 
something that any real-world app (that i can think of) requires.
10GbE wire-rate is in the order of ~14.88 million packets/second.  that 
works out to approximately 1 packet every 67 nanoseconds.



cheers,

lincoln.



* Re: Linux 2.6.9 pktgen module causes INIT process respawning   and sickness
  2004-11-23 22:27           ` Andi Kleen
@ 2004-11-23 22:54             ` Jeff V. Merkey
  2004-11-25  2:17               ` Lincoln Dale
  0 siblings, 1 reply; 17+ messages in thread
From: Jeff V. Merkey @ 2004-11-23 22:54 UTC (permalink / raw)
  To: Andi Kleen; +Cc: ltd, linux-kernel

Andi Kleen wrote:

>On Tue, Nov 23, 2004 at 02:57:16PM -0700, Jeff V. Merkey wrote:
>  
>
>>Implementation of this with skb's would not be trivial. M$ in their 
>>network drivers did this sort of circular list of pages
>>structure per adapter for receives and use it "pinned" to some of their 
>>proprietary drivers in W2K and would use their
>>version of an skb as a "pointer" of sorts that could dynamically assign 
>>a filled page from this list as a "receive" then perform
>>the user space copy from the page and release it back to the adapter. 
>>This allowed them to fill the ring buffers with static
>>addresses and copy into user space as fast as they could allocate 
>>control blocks.
>>    
>>
>
>The point is to eliminate the writes for the address and buffer
>fields in the ring descriptor right? I don't really see the point
>because you have to twiggle at least the owner bit, so you
>always have a cacheline sized transaction on the bus.
>And that would likely include the ring descriptor anyways, just
>implicitely in the read-modify-write cycle.
>  
>

True. Without the proposed hardware change to the 1 GbE and 10GbE adapter, I
doubt this could be eliminated. There would still be the need to free the
descriptor from the ring buffer, and this does require touching this memory.
Scrap that idea. The long term solution is for the card vendors to enable a
batch mode for submission of ring buffer entries that does not require
clearing any fields, but would simply take an entire slate of newly allocated
s/g entries and swap them between tables.

For sparse conditions, an interrupt when packet(s) are pending is already
instrumented in these adapters, so adding this capability would not be
difficult. I've raised these discussions with some of these vendors, and for
the Intel adapters it would require a change to the chipset, but not a major
one. It's doable.

>If you're worried about the latencies of the separate writes
>you could always use write combining to combine the writes.
>
>If you write the full cache line you could possibly even
>avoid the read in this cae.
>
>On x86-64 it can be enabled for writel/writeq with CONFIG_UNORDERED_IO.
>You just have to be careful to add all the required memory
>barriers, but the driver should have that already if it works
>on IA64/sparc64/alpha/ppc64. 
>
>It's an experimental option not enabled by default on x86-64 because
>the performance implications haven't been really investigated well.
>You could probably do it on i386 too by setting the right MSR
>or adding a ioremap_wc() 
>  
>

I will look at this feature and see how much it helps. Long term, folks should
ask the board vendors if they would be willing to instrument something like
this. Then the OSes could actually use 10GbE. The buses support the bandwidth
today, and I have measured it.

Jeff

>-Andi
>
>  
>



* Re: Linux 2.6.9 pktgen module causes INIT process respawning   and sickness
  2004-11-23 21:57         ` Jeff V. Merkey
@ 2004-11-23 22:27           ` Andi Kleen
  2004-11-23 22:54             ` Jeff V. Merkey
  0 siblings, 1 reply; 17+ messages in thread
From: Andi Kleen @ 2004-11-23 22:27 UTC (permalink / raw)
  To: Jeff V. Merkey; +Cc: Andi Kleen, ltd, linux-kernel

On Tue, Nov 23, 2004 at 02:57:16PM -0700, Jeff V. Merkey wrote:
> Implementation of this with skb's would not be trivial. M$ in their 
> network drivers did this sort of circular list of pages
> structure per adapter for receives and use it "pinned" to some of their 
> proprietary drivers in W2K and would use their
> version of an skb as a "pointer" of sorts that could dynamically assign 
> a filled page from this list as a "receive" then perform
> the user space copy from the page and release it back to the adapter. 
> This allowed them to fill the ring buffers with static
> addresses and copy into user space as fast as they could allocate 
> control blocks.

The point is to eliminate the writes for the address and buffer
fields in the ring descriptor, right? I don't really see the point
because you have to twiddle at least the owner bit, so you
always have a cacheline-sized transaction on the bus.
And that would likely include the ring descriptor anyways, just
implicitly in the read-modify-write cycle.

If you're worried about the latencies of the separate writes
you could always use write combining to combine the writes.

If you write the full cache line you could possibly even
avoid the read in this case.

On x86-64 it can be enabled for writel/writeq with CONFIG_UNORDERED_IO.
You just have to be careful to add all the required memory
barriers, but the driver should have that already if it works
on IA64/sparc64/alpha/ppc64. 

It's an experimental option not enabled by default on x86-64 because
the performance implications haven't really been investigated well.
You could probably do it on i386 too by setting the right MSR
or adding an ioremap_wc().

-Andi


* Re: Linux 2.6.9 pktgen module causes INIT process respawning   and sickness
  2004-11-23 20:40       ` Andi Kleen
@ 2004-11-23 21:57         ` Jeff V. Merkey
  2004-11-23 22:27           ` Andi Kleen
  0 siblings, 1 reply; 17+ messages in thread
From: Jeff V. Merkey @ 2004-11-23 21:57 UTC (permalink / raw)
  To: Andi Kleen; +Cc: ltd, linux-kernel


Andi,

For network forensics and analysis, it is almost a requirement if you are
using Linux. The bus speeds on these systems also support 450 MB/s throughput
for disk and network I/O. I agree it's not that interesting if you are
deploying file servers that are remotely attached over PPPoE and PPP as a
network server or workstation, given that NFS and userspace servers like
Samba are predominant in Linux for file service. High performance real time
network analysis is a different story. High performance I/O file service and
storage are also interesting, and I can see folks wanting it.

I guess I have a hard time understanding the following statement:

" ... perhaps [supporting 10 GbE and 1GbE for high performance beyond remote
internet access] is not that interesting ... "

Hope it's not too wet in Germany this time of year. I am heading back to
Stolberg and Heinsberg to show off our new baby boy, born Oct 11, 2004, to his
O-ma and O-O-ma (I guess this is how you spell this) at the end of January (I
hope). I might even make it to Nurnberg while I'm at it. :-)

Implementation of this with skb's would not be trivial. M$ in their network
drivers did this sort of circular list-of-pages structure per adapter for
receives and used it "pinned" in some of their proprietary drivers in W2K;
they would use their version of an skb as a "pointer" of sorts that could
dynamically assign a filled page from this list as a "receive", then perform
the user space copy from the page and release it back to the adapter. This
allowed them to fill the ring buffers with static addresses and copy into
user space as fast as they could allocate control blocks.

For Linux, I would guess the easiest way to do this same sort of thing would
be to allocate a page per ring buffer entry, pin the entries, and use
allocated skb buffers to point into the buffer long enough to copy out the
data. This would **HELP** currently but not fix the problem completely; the
approach would, however, allow Linux to easily move to a table-driven method,
since it would switch from a ring of pinned pages to tables of pinned pages
that could be swapped in and out.

We would need to logically detach the memory from the skb and make the skb a
pointer block into the skb->data area of the list. M$ does something similar
to what I described. It does make the whole skb_clone thing a lot more
complicated, but for those apps that need to "hold" skb's, which is infrequent
for most cases, someone could just call skb_clone() when they needed a private
copy of an skb->data pair.

Jeff

Andi Kleen wrote:

>"Jeff V. Merkey" <jmerkey@devicelogics.com> writes:
>  
>
>>I can sustain full line rate gigabit on two adapters at the tsame time
>>with a 12 CLK interpacket gap time and 0 dropped packets at 64
>>byte sizes from a Smartbits to Linux provided the adapter ring buffer
>>is loaded with static addresses. This demonstrates that it is
>>possible to sustain 64 byte packet rates at full line rate with
>>current DMA architectures on 400 Mhz buses with Linux.
>>(which means it will handle any network loading scenario). The
>>bottleneck from my measurements appears to be the
>>overhead of serializing writes to the adapter ring buffer IO
>>memory. The current drivers also perform interrupt
>>coalescing very well with Linux. What's needed is a method for
>>submission of ring buffer entries that can be sent in large
>>scatter gather listings rather than one at a time. Ring buffers
>>    
>>
>
>Batching would also decrease locking overhead on the Linux side (less
>spinlocks taken)
>
>We do it already for TCP using TSO for upto 64K packets when
>the hardware supports it. There were some ideas some time back
>to do it also for routing and other protocols - basically passing 
>lists of skbs to hard_start_xmit instead of always single ones - 
>but nobody implemented it so far.
>
>It was one entry in the "ideas to speed up the network stack" 
>list i posted some time back.
>
>With TSO working fine it doesn't seem to be that pressing.
>
>One problem with the TSO implementation is that TSO only works for a
>single connection. If you have hundreds that chatter in small packets
>it won't help batching that up. Problem is that batching data from
>separate sockets up would need more global lists and add possible SMP
>scalability problems from more locks and more shared state. This 
>is a real concern on Linux now - 512 CPU machines are really unforgiving.
>
>However in practice it doesn't seem to be that big a problem because
>it's extremly unlikely that you'll sustain even a gigabit ethernet
>with such a multi process load. It has far more non network CPU
>overhead than a simple packet generator or pktgen.
>
>So overall I agree with Lincoln that the small packet case is not
>that interesting except perhaps for DOS testing.
>
>-Andi



* Re: Linux 2.6.9 pktgen module causes INIT process respawning   and  sickness
       [not found]     ` <41A2862A.2000602@devicelogics.com.suse.lists.linux.kernel>
@ 2004-11-23 20:40       ` Andi Kleen
  2004-11-23 21:57         ` Jeff V. Merkey
  0 siblings, 1 reply; 17+ messages in thread
From: Andi Kleen @ 2004-11-23 20:40 UTC (permalink / raw)
  To: Jeff V. Merkey; +Cc: ltd, linux-kernel

"Jeff V. Merkey" <jmerkey@devicelogics.com> writes:
> I can sustain full line rate gigabit on two adapters at the tsame time
> with a 12 CLK interpacket gap time and 0 dropped packets at 64
> byte sizes from a Smartbits to Linux provided the adapter ring buffer
> is loaded with static addresses. This demonstrates that it is
> possible to sustain 64 byte packet rates at full line rate with
> current DMA architectures on 400 Mhz buses with Linux.
> (which means it will handle any network loading scenario). The
> bottleneck from my measurements appears to be the
> overhead of serializing writes to the adapter ring buffer IO
> memory. The current drivers also perform interrupt
> coalescing very well with Linux. What's needed is a method for
> submission of ring buffer entries that can be sent in large
> scatter gather listings rather than one at a time. Ring buffers

Batching would also decrease locking overhead on the Linux side (fewer
spinlocks taken).

We do it already for TCP using TSO for up to 64K packets when
the hardware supports it. There were some ideas some time back
to do it also for routing and other protocols - basically passing
lists of skbs to hard_start_xmit instead of always single ones -
but nobody has implemented it so far.

It was one entry in the "ideas to speed up the network stack"
list I posted some time back.

With TSO working fine it doesn't seem to be that pressing.
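
(As a userspace check, whether TSO is actually enabled on a given NIC can be
seen and toggled with ethtool; eth0 below is just a placeholder device:)

  ethtool -k eth0          # lists offloads, e.g. "tcp segmentation offload: on"
  ethtool -K eth0 tso on   # enable TSO if the driver and hardware support it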

One problem with the TSO implementation is that TSO only works for a
single connection. If you have hundreds that chatter in small packets
it won't help batching that up. Problem is that batching data from
separate sockets up would need more global lists and add possible SMP
scalability problems from more locks and more shared state. This 
is a real concern on Linux now - 512 CPU machines are really unforgiving.

However in practice it doesn't seem to be that big a problem because
it's extremely unlikely that you'll sustain even a gigabit ethernet
with such a multi process load. It has far more non network CPU
overhead than a simple packet generator or pktgen.

So overall I agree with Lincoln that the small packet case is not
that interesting except perhaps for DoS testing.

-Andi


* Re: Linux 2.6.9 pktgen module causes INIT process respawning  and sickness
@ 2004-11-22 18:30 Jeff V. Merkey
  0 siblings, 0 replies; 17+ messages in thread
From: Jeff V. Merkey @ 2004-11-22 18:30 UTC (permalink / raw)
  To: linux-kernel

[-- Attachment #1: Type: text/plain, Size: 1 bytes --]



[-- Attachment #2: Re: Linux 2.6.9 pktgen module causes INIT process respawning  and sickness --]
[-- Type: message/rfc822, Size: 6018 bytes --]

From: "Jeff V. Merkey" <jmerkey@devicelogics.com>
To: Lincoln Dale <ltd@cisco.com>
Cc: linux-kernel@vger.kernel.org
Subject: Re: Linux 2.6.9 pktgen module causes INIT process respawning  and sickness
Date: Mon, 22 Nov 2004 10:06:10 -0700
Message-ID: <41A21C82.3060105@devicelogics.com>


Lincoln,

I've studied these types of problems for years, and I think it's possible even
for Linux.  The problem with small packet sizes on x86 hardware is related to
non-cacheable writes to the adapter ring buffer for preloading of addresses.
From my measurements, I have observed that the increased memory write traffic
increases latency to the point that the OS is unable to receive data off the
card at high enough rates.  With testing against Linux with a Spirent
Smartbits, at 300,000 packets per second for 64 byte packets about 80% of the
packets get dropped at 1000 Mb/s rates.  It's true that Linux is simply
incapable of generating at these rates, but the reason in Linux is due to poor
design at the xmit layer.  You see a lot better behavior at 1500 byte packet
sizes, but this is because the card doesn't have to preload as many addresses
into the ring buffer, since you are only dealing with 150,000 packets per
second in the 1500 byte case, not in the millions for the 64 byte case.

Linux uses polling (bad) and the tx queue does not feed packets back to the
adapter on tx cleaning of the queue via tx complete (or terminal dma count)
interrupts directly; instead they go through a semaphore to trigger the next
send -- horribly broken for high speed communications.  They should just post
the packets and allow tx complete interrupts to feed them off the queues.  The
queue depths in qdisc are far too short before Linux starts dropping packets
internally.  I've had to increase the depth of tx_queue_len for some apps to
work properly without dropping all the skbs on the floor.

So how do we get around this problem?  At present, the design of the Intel
drivers allows all the ripe ring buffers to be reaped at once from a single
interrupt.  This is very efficient on the RX side and in fact, with static
tests, I have been able to program the Intel card to accept 64 byte packets at
the maximum rate for gigabit saturation on Linux provided the ring buffers are
loaded with static addresses.  This indicates the problem in the design is
related to the preloading and serializing memory behavior of Intel's
architecture at the ring buffer level on the card.  This also means that Linux
on current PC architecture (and most OSes for that matter) will not be able to
sustain 10 gigabit rates unless the packet sizes get larger and larger, due to
the nature of this problem.  The solution for the card vendors is to instrument
the ability to load a descriptor to the card once which contains the addresses
of all the ring buffers for a session of the card and reap them in A/B lists,
i.e. two active preload memory tables which contain a listing of preload
addresses for receive; when the card fills one list, it switches to the second
for receives, sends an interrupt, and the ISR loads the next table into the
card.

I see no other way for an OS to sustain high packet loading above 500,000
packets per second on Linux, or even come close to dealing with small packets
or full 10 gigabit ethernet, without such a model.  The bus speeds are actually
fine for dealing with this on current hardware.  The problem is related to the
serializing behavior of non-cacheable memory references on IO mapped card
memory, and this suggestion could be implemented in Intel Gigabit and 10 GbE
hardware with microcode and minor changes to the DMA designs of their chipsets.
It would allow all OSes to reach performance levels of a Smartbits or even a
Cisco router without the need for custom hardware design.

My 2 cents.

Jeff






Lincoln Dale wrote:

> Jeff,
>
> you're using commodity x86 hardware. what do you expect?
>
> while the speed of PCs has increased significantly, there are still 
> significant bottlenecks when it comes to PCI bandwidth, PCI 
> arbitration efficiency & # of interrupts/second.
> linux ain't bad -- but there are other OSes which still do slightly 
> better given equivalent hardware.
>
> with a PC comes flexibility.
> that won't match the speed of the FPGAs in a Spirent Smartbits, 
> Agilent RouterTester, IXIA et al ...
>
> cheers,
>
> lincoln.
>
> At 09:06 AM 20/11/2004, Jeff V. Merkey wrote:
>
>> Additionally, when packets sizes 64, 128, and 256 are selected, 
>> pktgen is unable to achieve > 500,000 pps (349,000 only on my system).
>> A Smartbits generator can achieve over 1 million pps with 64 byte 
>> packets on gigabit. This is one performance
>> issue for this app. However, at 1500 and 1048 sizes, gigabit 
>> saturation is achievable.
>> Jeff
>>
>> Jeff V. Merkey wrote:
>>
>>>
>>> With pktgen.o configured to send 123MB/S on a gigabit on a system 
>>> using pktgen set to the following parms:
>>>
>>> pgset "odev eth1"
>>> pgset "pkt_size 1500"
>>> pgset "count 0"
>>> pgset "ipg 5000"
>>> pgset "src_min 10.0.0.1"
>>> pgset "src_max 10.0.0.254"
>>> pgset "dst_min 192.168.0.1"
>>> pgset "dst_max 192.168.0.254"
>>>
>>> After 37 hours of continual packet generation into a gigabit 
>>> regeneration tap device,
>>> the server system console will start to respawn the INIT process 
>>> about every 10-12
>>> hours of continuous packet generation.
>>>
>>> As a side note, this module in Linux is extremely useful and the 
>>> "USE WITH CAUTION" warnings
>>> are certainly will stated. The performance of this tool is excellent.
>>>
>>> Jeff
>>
>
>




end of thread, other threads:[~2004-11-25  2:20 UTC | newest]

Thread overview: 17+ messages
2004-11-19 21:53 Linux 2.6.9 pktgen module causes INIT process respawning and sickness Jeff V. Merkey
2004-11-19 22:06 ` Jeff V. Merkey
2004-11-22  3:44   ` Lincoln Dale
2004-11-22 17:06     ` Jeff V. Merkey
2004-11-22 22:50       ` Lincoln Dale
2004-11-23  0:36         ` Jeff V. Merkey
2004-11-23  1:06           ` Jeff V. Merkey
2004-11-23  1:25             ` Lincoln Dale
2004-11-23  1:23           ` Lincoln Dale
2004-11-22 17:19   ` Martin Josefsson
2004-11-22 18:33     ` Jeff V. Merkey
2004-11-22 18:30 Jeff V. Merkey
     [not found] <5.1.0.14.2.20041122144144.04e3d9f0@171.71.163.14.suse.lists.linux.kernel>
     [not found] ` <419E6B44.8050505@devicelogics.com.suse.lists.linux.kernel>
     [not found]   ` <5.1.0.14.2.20041123094109.04003720@171.71.163.14.suse.lists.linux.kernel>
     [not found]     ` <41A2862A.2000602@devicelogics.com.suse.lists.linux.kernel>
2004-11-23 20:40       ` Andi Kleen
2004-11-23 21:57         ` Jeff V. Merkey
2004-11-23 22:27           ` Andi Kleen
2004-11-23 22:54             ` Jeff V. Merkey
2004-11-25  2:17               ` Lincoln Dale
