linux-kernel.vger.kernel.org archive mirror
* Re: [announce] [patch] limiting IRQ load, irq-rewrite-2.4.11-B5
@ 2001-10-02  0:41 jamal
  2001-10-02  1:04 ` Benjamin LaHaise
  0 siblings, 1 reply; 151+ messages in thread
From: jamal @ 2001-10-02  0:41 UTC (permalink / raw)
  To: linux-kernel; +Cc: kuznet, Robert Olsson, Ingo Molnar, netdev


>The new mechanism:
>
>- the irq handling code has been extended to support 'soft mitigation',
>  ie. to mitigate the rate of hardware interrupts, without support from
>  the actual hardware. There is a reasonable default, but the value can
>  also be decreased/increased on a per-irq basis via
> /proc/irq/NR/max_rate.

I am sorry, but this is bogus. There is no _reasonable value_. A
reasonable value depends on system load, and system load has never been
and never will be measured by interrupt rates, even in
non-work-conserving schemes.
There is already a feedback system built into 2.4 that measures system
load by the rate at which the system processes the backlog queue: look
at the netif_rx() return values. The only driver that currently uses
this is the tulip; look at the tulip code. (A minimal sketch of this
feedback path follows below.)
This, in conjunction with hardware flow control, should give you a
sustainable system.
[Granted, mitigation is a hardware-specific solution; the scheme we
presented at the kernel summit is the next level beyond this and will
not depend on the hardware.]
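
For concreteness, a minimal sketch (not from the thread) of that feedback
path, assuming the 2.4-era netif_rx() congestion return codes; the
my_adjust_rx_mitigation() helper is a hypothetical driver hook, not an
existing interface:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

static void my_adjust_rx_mitigation(struct net_device *dev, int level);

static void my_rx_packet(struct net_device *dev, struct sk_buff *skb)
{
	switch (netif_rx(skb)) {
	case NET_RX_SUCCESS:
	case NET_RX_CN_LOW:
		/* backlog queue is draining fine: run at full speed */
		my_adjust_rx_mitigation(dev, 0);
		break;
	case NET_RX_CN_MOD:
	case NET_RX_CN_HIGH:
		/* backlog is filling up: slow the rx interrupt rate */
		my_adjust_rx_mitigation(dev, 1);
		break;
	case NET_RX_DROP:
		/* backlog full, packet was dropped: back off hard */
		my_adjust_rx_mitigation(dev, 2);
		break;
	}
}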

>(note that in case of shared interrupts, another 'innocent' device might
>stay disabled for some short amount of time as well - but this is not an
>issue because this mitigation does not make that device inoperable, it
>just delays its interrupt by up to 10 msecs. Plus, modern systems have
>properly distributed interrupts.)

This is a _really bad_ idea, and not just because you are punishing other
devices.
Let's take network devices as an example: we don't want to disable
interrupts; we want to disable the offending actions within the device.
For example, it is OK to disable/mitigate receive interrupts when they
are overloading the system, but not transmit-completion interrupts,
because that will add to the overall latency.
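
As an illustration of masking the offending cause inside the device rather
than the whole IRQ line (the register offset and bit names here are made up
for the example; real NICs differ):

#include <linux/types.h>
#include <asm/io.h>

#define MY_IMR		0x40	/* hypothetical interrupt-mask register */
#define MY_IRQ_RX	0x01	/* hypothetical rx-packet interrupt bit */
#define MY_IRQ_TX_DONE	0x02	/* hypothetical tx-completion bit */
#define MY_IRQ_ERR	0x04	/* hypothetical error-condition bit */

/* throttle receive processing, keep tx completion and errors live */
static void my_nic_mitigate_rx(unsigned long ioaddr, int mitigate)
{
	u32 imr = readl(ioaddr + MY_IMR);

	if (mitigate)
		imr &= ~MY_IRQ_RX;		/* stop rx-packet interrupts */
	else
		imr |= MY_IRQ_RX;		/* restore them */
	imr |= MY_IRQ_TX_DONE | MY_IRQ_ERR;	/* never mask these */
	writel(imr, ioaddr + MY_IMR);
}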

cheers,
jamal


PS: we have been testing what was presented at the kernel summit for the
last few months with very promising results, both on live setups and on
experimental setups where data is generated at very high rates with
hardware traffic generators.


* Re: [announce] [patch] limiting IRQ load, irq-rewrite-2.4.11-B5
@ 2001-10-08 14:45 jamal
  2001-10-09  0:36 ` Scott Laird
  2001-10-09  4:04 ` Werner Almesberger
  0 siblings, 2 replies; 151+ messages in thread
From: jamal @ 2001-10-08 14:45 UTC (permalink / raw)
  To: linux-kernel, netdev; +Cc: Bernd Eckenfels



>Yes, have a look at the work of the Click Modular Router PPL from MIT,
>having a Polling Router Module Implementation which outperforms Linux
>Kernel Routing by far (according to their paper :)

I have read the Click paper; I also just looked at the code and it seems
the tulip driver they use has the same roots as ours (based on Alexey's
initial HFC driver).

Several things to note/observe:
- They use a very specialized piece of hardware (with two PCI buses).
- Robert's results on single-PCI-bus hardware showed ~360 Kpps routing
vs. Click's 435 Kpps. This is not "far off" given the differences in
hardware. What would be really interesting is to have the Click folks
post their latency results. I am curious what their purely polling
scheme would achieve (as opposed to NAPI, which is a mixture of
interrupts and polls).
- Linux is already "very modular" as a router, with both the traffic
control framework and netfilter. I like their language specification etc.;
ours is a little more primitive in comparison.
- Click seems to only run on a system that is designated as a router (as
you seem to point out).

Linux has a few other perks, but the above was meant to compare the two.

> You can find the Link to Click somewhere on my Page:
> http://www.freefire.org/tools/index.en.php3 in the Operating System
> section (i think)

Nice web page and collection, btw. The right web page seems to be:
http://www.freefire.org/tools/index.en.php3

I looked at the latest Click paper on SMP. It would help if they were
aware of what's happening in Linux (since it seems to be their primary OS).
softnet does what they are asking for, sans the scheduling (which in Linux
proper is done via the IRQ scheduling). They also have a way for the
admin to specify the scheduling scheme, which is nice, but I am not sure
it is very valuable; I'll read the paper again to avoid hasty judgement.
It would be nice to work with the Click people (at least to avoid
redundant work, and maybe to get Linux mentioned in their paper -- they
even mention ALTQ but forget Linux, which is more advanced ;->).

cheers,
jamal


* Re: [announce] [patch] limiting IRQ load, irq-rewrite-2.4.11-B5
@ 2001-10-04  8:25 Magnus Redin
  2001-10-04 11:39 ` Trever L. Adams
  0 siblings, 1 reply; 151+ messages in thread
From: Magnus Redin @ 2001-10-04  8:25 UTC (permalink / raw)
  To: linux-kernel


Linus writes:
> Note that the big question here is WHO CARES?

Everybody building firewalls, routers, high-performance web servers
and broadband content servers with a Linux kernel.
Everybody with a 100 Mbit/s external connection.

100 Mbit/s access is not uncommon for broadband, at least in
Sweden. There are right now a few hundred thousand twisted-pair Cat 5
and Cat 5E installations into people's homes with 100 Mbit/s
equipment. Most of them are currently throttled to 10 Mbit/s to save
upstream bandwidth, but that will change as soon as we get more TV
channels on the broadband nets. Cat 5E cabling is specified to be able
to carry gigabit into the home, to minimise the risk of the cabling
becoming worthless in 10 or 20 years.

A 100 Mbit/s untrusted connection is a reality for quite a few people,
and it's not unreasonable for Linux users when it costs $20-$30 per
month. The peering connection will probably be too weak at that price,
but you still get thousands of untrusted neighbours with a full
100 Mbit/s path to your computer.

Btw, I work in production and customer support at a company building
Linux-based firewalls. I am unfortunately not a developer, but it is
great fun to read the kernel mailing list and watch misfeatures and
bugs being discovered, discussed and eradicated. Who needs to watch
football when there is the Linux VM battle of wits and engineering?

Best regards,
---
Magnus Redin  <redin@ingate.com>   Ingate - Firewall with SIP & NAT
Ingate System AB  +46 13 214600    http://www.ingate.com/


[parent not found: <200110031811.f93IBoN10026@penguin.transmeta.com>]
* Re: [announce] [patch] limiting IRQ load, irq-rewrite-2.4.11-B5
@ 2001-10-03 14:15 Manfred Spraul
  2001-10-03 15:09 ` jamal
  0 siblings, 1 reply; 151+ messages in thread
From: Manfred Spraul @ 2001-10-03 14:15 UTC (permalink / raw)
  To: jamal; +Cc: linux-kernel, Ingo Molnar, Andreas Dilger, linux-netdev

> On Wed, 3 Oct 2001, jamal wrote:
> > On Wed, 3 Oct 2001, Ingo Molnar wrote:
> > >
> > > but the objectives, judging from the description you gave, are i
> > > think largely orthogonal,  with some overlapping in the polling
> > > part.
> >
> > yes. We've done a lot of thoroughly-thought-out work in that area and I
> > think it will be a sin to throw it out.
> >
>
> I hit the send button too fast..
> The dynamic irq limiting (it must not be set by a system admin, so as
> to preserve the work-conserving principle) could be used as a last resort.
> The point is, if you are not generating a lot of interrupts to begin
> with (as is the case with NAPI), I don't see the irq rate limiting
> kicking in at all.

A few notes, as seen from low-end NICs:

Forcing an irq limit without asking the driver is bad - it must be the
other way around.
E.g. the winbond NIC contains a bug that forces it to one interrupt per
transmitted packet, but I can switch to rx polling/mitigation.
I'm sure the ne2k-pci users would also complain if a fixed irq limit were
added - I bet the majority of drivers perform worse with a fixed limit,
only some perform better, and most perform best if they are given notice
that they should reduce their irq rate (e.g. disable the rx_packet and
tx_packet interrupts, leave the error interrupts on, and do the rx_packet
and tx_packet work in the poll handler; see the sketch below).

But a hint for the driver ("now switch mitigation on/off") seems to be a
good idea. And that hint should not be the return value of netif_rx -
what if the driver is only sending packets?
What if it's not even a network driver?
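
A sketch of what such a hint could look like at the driver level: the
driver is told to switch mitigation on or off, masks its rx_packet and
tx_packet interrupts, and drains the rings from a timer-driven poll routine
instead, keeping error interrupts enabled. The callback and the my_*()
helpers are hypothetical; no such interface exists in the stock 2.4 tree:

#include <linux/sched.h>
#include <linux/netdevice.h>
#include <linux/timer.h>

struct my_nic_priv {
	struct timer_list poll_timer;	/* drives polling while mitigated */
};

static void my_nic_mask_pkt_irqs(struct net_device *dev);
static void my_nic_unmask_pkt_irqs(struct net_device *dev);

static void my_nic_mitigation_hint(struct net_device *dev, int mitigate)
{
	struct my_nic_priv *np = (struct my_nic_priv *) dev->priv;

	if (mitigate) {
		/* rx_packet/tx_packet off; error interrupts stay enabled */
		my_nic_mask_pkt_irqs(dev);
		/* drain the rx/tx rings from a poll, once per jiffy */
		mod_timer(&np->poll_timer, jiffies + 1);
	} else {
		del_timer_sync(&np->poll_timer);
		my_nic_unmask_pkt_irqs(dev);
	}
}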

NAPI seems very promising for fixing the total-system-overload case
(so many packets arrive that, despite irq mitigation, the system is still
overloaded).

But the implementation of irq mitigation is driver-specific, and a 10
millisecond stop is far too long.

--
    Manfred




* [announce] [patch] limiting IRQ load, irq-rewrite-2.4.11-B5
@ 2001-10-01 22:16 Ingo Molnar
  2001-10-01 22:26 ` Tim Hockin
                   ` (4 more replies)
  0 siblings, 5 replies; 151+ messages in thread
From: Ingo Molnar @ 2001-10-01 22:16 UTC (permalink / raw)
  To: linux-kernel
  Cc: Linus Torvalds, Alan Cox, Alexey Kuznetsov, Andrea Arcangeli,
	Simon Kirby

[-- Attachment #1: Type: TEXT/PLAIN, Size: 6202 bytes --]


to sum things up, we have three main problem areas that are connected to
hardirq and softirq processing:

- a little utility written by Simon Kirby proved that no matter how much
  softirq throttling is done, it's easy to lock up a pretty powerful Linux
  box via a high rate of network interrupts, even from relatively
  low-powered clients. 2.4.6, 2.4.7 and 2.4.10 all lock up. Alexey has
  said as well that it's still easy to lock up low-powered Linux routers
  via more or less normal traffic.

- prior to 2.4.7 we used to 'leak' softirq handling => we ended up missing
  softirqs in a number of circumstances. Stock 2.4.10 still has a number
  of places that do this too.

- a number of people have reported gigabit performance problems (some
  people reported a 10-20% drop in performance under load) since
  ksoftirqd was added - which was added to fix some of the 2.4.6
  softirq-handling latency problems.

we also have another problem that often pops up when the BIOS goes bad or
a device driver makes a mistake:

- Linux often 'locks up' if it gets into an 'interrupt storm' - when an
  interrupt source sends a very high rate of interrupts. This can show up
  as boot-time hangs and module-insert-time hangs as well.

the attached patch, while a bit radical, is, i believe, a robust solution
to all four problems. It gives gigabit performance back, avoids the lockups
and attempts to reach as short a softirq-processing latency as possible.

the new mechanism:

- the irq handling code has been extended to support 'soft mitigation',
  ie. to mitigate the rate of hardware interrupts, without support from
  the actual hardware. There is a reasonable default, but the value can
  also be decreased/increased on a per-irq basis via /proc/irq/NR/max_rate.

the method is the following. We count the number of interrupts serviced,
and if within a jiffy there are more than max_rate/HZ interrupts, the code
disables the IRQ source and marks it as IRQ_MITIGATED. On the next timer
interrupt the irq_rate_check() function is called, which makes sure that
'blocked' irqs are restarted & handled properly. The interrupt is disabled
in the interrupt controller, which has the nice side-effect of fixing and
blocking interrupt storms. (The support code for 'soft mitigation' is
designed to be very lightweight, it's a decrement and a test in the IRQ
handling hot path.)

(note that in case of shared interrupts, another 'innocent' device might
stay disabled for some short amount of time as well - but this is not an
issue because this mitigation does not make that device inoperable, it
just delays its interrupt by up to 10 msecs. Plus, modern systems have
properly distributed interrupts.)

- softirq code got simplified significantly. The concept is to 'handle all
  pending softirqs' - just as the hardware IRQ code 'handles all hardware
  interrupts that were passed to it'. Since most of the time there is a
  direct relationship between softirq work and hardirq work, the
  mitigation of hardirqs mitigates softirq load as well.

- ksoftirqd is gone, there is never any softirq pending while
  softirq-unaware code is executing.

- the tasklet code needed some cleanup along the way, and it also gained
  some restart-on-enable and restart-on-unlock properties that it lacked
  before (but which are desirable).

due to these changes, the line count in softirq.c got smaller by 25%.
[i dropped the unwakeup change - but that one could be useful in the VM,
to e.g. unwakeup bdflush or kswapd.]

- drivers can optionally use the set_irq_rate(irq, new_rate) call to
  change the current IRQ rate. Drivers are the ones who know best what
  kind of loads to expect from the hardware, so they might want to
  influence this value. Also, drivers that implement IRQ mitigation
  themselves in hardware can effectively disable the soft-mitigation code
  by using a very high rate value.
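
As a small usage illustration (assuming the patch above is applied;
my_nic_open(), my_nic_interrupt() and the 200000 figure are made-up
example values, not part of the patch):

#include <linux/sched.h>
#include <linux/netdevice.h>
#include <linux/interrupt.h>
#include <asm/irq.h>	/* the patch declares set_irq_rate() here */

static void my_nic_interrupt(int irq, void *dev_id, struct pt_regs *regs);

static int my_nic_open(struct net_device *dev)
{
	int err = request_irq(dev->irq, my_nic_interrupt, SA_SHIRQ,
			      dev->name, dev);
	if (err)
		return err;

	/* the hardware mitigates interrupts itself, so allow a very high
	 * soft rate: the in-kernel limiter never kicks in for this line */
	set_irq_rate(dev->irq, 200000);

	netif_start_queue(dev);
	return 0;
}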

what is the concept behind all this? Simplicity, and conceptual clarity.
We were clearly heading in the wrong direction: putting more complexity
into the core softirq code to handle some really extreme and unusual
cases. Also, softirqs were slowly morphing into something process-ish -
but in Linux we already have a concept of processes, so we'd have two
duelling concepts. (We still have tasklets, which are not really
processes - they are single-threaded paths of execution.)

with this patch, softirqs can again be what they should be: lightweight
'interrupt code' that processes hard-IRQ events but still does this with
interrupts enabled, to allow for low hard-IRQ latencies. Anything that is
conceptually heavyweight IMO does not belong in softirqs; it should be
moved into process context. That will take care of CPU-time usage
accounting, CPU-time limiting and priority issues as well.

(the patch also imports the latency and softirq-restart fixes from my
previous softirq patches.)

i've tested the patch on both UP and SMP, XT-PIC and APIC systems; it
correctly limits network interrupt rates (and other device interrupt
rates) to the given limit. I've done stress-testing as well. The patch is
against 2.4.11-pre1, but it applies just fine to the -ac tree as well.

with a high irq-rate limit set, ping flooding has this effect on the
test-system:

 [root@mars /root]# vmstat 1
    procs                      memory    swap          io
  r  b  w   swpd   free   buff  cache  si  so    bi    bo   in
  0  0  0      0 877024   1140  11364   0   0    12     0 30960
  0  0  0      0 877024   1140  11364   0   0     0     0 30950
  0  0  0      0 877024   1140  11364   0   0     0     0 30520

ie. 30k interrupts/sec. With the max_rate set to 1000 interrupts/sec:

 [root@mars /root]# echo 1000 > /proc/irq/21/max_rate
 [root@mars /root]# vmstat 1
    procs                      memory    swap          io
  r  b  w   swpd   free   buff  cache  si  so    bi    bo   in
  0  0  0      0 877004   1144  11372   0   0     0     0 1112
  0  0  0      0 877004   1144  11372   0   0     0     0 1111
  0  0  0      0 877004   1144  11372   0   0     0     0 1111

so it works just fine here. Interactive tasks are still snappy over the
same interface.

Comments, reports, suggestions and testing feedback are more than welcome,

	Ingo

[-- Attachment #2: Type: TEXT/PLAIN, Size: 26601 bytes --]

--- linux/kernel/ksyms.c.orig	Mon Oct  1 21:52:32 2001
+++ linux/kernel/ksyms.c	Mon Oct  1 21:52:43 2001
@@ -538,8 +538,6 @@
 EXPORT_SYMBOL(tasklet_kill);
 EXPORT_SYMBOL(__run_task_queue);
 EXPORT_SYMBOL(do_softirq);
-EXPORT_SYMBOL(raise_softirq);
-EXPORT_SYMBOL(cpu_raise_softirq);
 EXPORT_SYMBOL(__tasklet_schedule);
 EXPORT_SYMBOL(__tasklet_hi_schedule);
 
--- linux/kernel/softirq.c.orig	Mon Oct  1 21:52:32 2001
+++ linux/kernel/softirq.c	Mon Oct  1 21:53:52 2001
@@ -44,26 +44,11 @@
 
 static struct softirq_action softirq_vec[32] __cacheline_aligned;
 
-/*
- * we cannot loop indefinitely here to avoid userspace starvation,
- * but we also don't want to introduce a worst case 1/HZ latency
- * to the pending events, so lets the scheduler to balance
- * the softirq load for us.
- */
-static inline void wakeup_softirqd(unsigned cpu)
-{
-	struct task_struct * tsk = ksoftirqd_task(cpu);
-
-	if (tsk && tsk->state != TASK_RUNNING)
-		wake_up_process(tsk);
-}
-
 asmlinkage void do_softirq()
 {
 	int cpu = smp_processor_id();
 	__u32 pending;
 	long flags;
-	__u32 mask;
 
 	if (in_interrupt())
 		return;
@@ -75,7 +60,6 @@
 	if (pending) {
 		struct softirq_action *h;
 
-		mask = ~pending;
 		local_bh_disable();
 restart:
 		/* Reset the pending bitmask before enabling irqs */
@@ -95,152 +79,130 @@
 		local_irq_disable();
 
 		pending = softirq_pending(cpu);
-		if (pending & mask) {
-			mask &= ~pending;
+		if (pending)
 			goto restart;
-		}
 		__local_bh_enable();
-
-		if (pending)
-			wakeup_softirqd(cpu);
 	}
 
 	local_irq_restore(flags);
 }
 
-/*
- * This function must run with irq disabled!
- */
-inline void cpu_raise_softirq(unsigned int cpu, unsigned int nr)
-{
-	__cpu_raise_softirq(cpu, nr);
-
-	/*
-	 * If we're in an interrupt or bh, we're done
-	 * (this also catches bh-disabled code). We will
-	 * actually run the softirq once we return from
-	 * the irq or bh.
-	 *
-	 * Otherwise we wake up ksoftirqd to make sure we
-	 * schedule the softirq soon.
-	 */
-	if (!(local_irq_count(cpu) | local_bh_count(cpu)))
-		wakeup_softirqd(cpu);
-}
-
-void raise_softirq(unsigned int nr)
-{
-	long flags;
-
-	local_irq_save(flags);
-	cpu_raise_softirq(smp_processor_id(), nr);
-	local_irq_restore(flags);
-}
-
 void open_softirq(int nr, void (*action)(struct softirq_action*), void *data)
 {
 	softirq_vec[nr].data = data;
 	softirq_vec[nr].action = action;
 }
 
-
 /* Tasklets */
 
 struct tasklet_head tasklet_vec[NR_CPUS] __cacheline_aligned;
 struct tasklet_head tasklet_hi_vec[NR_CPUS] __cacheline_aligned;
 
-void __tasklet_schedule(struct tasklet_struct *t)
+static inline void __tasklet_enable(struct tasklet_struct *t,
+					struct tasklet_head *vec, int softirq)
 {
 	int cpu = smp_processor_id();
-	unsigned long flags;
 
-	local_irq_save(flags);
-	t->next = tasklet_vec[cpu].list;
-	tasklet_vec[cpu].list = t;
-	cpu_raise_softirq(cpu, TASKLET_SOFTIRQ);
-	local_irq_restore(flags);
+	smp_mb__before_atomic_dec();
+	if (!atomic_dec_and_test(&t->count))
+		return;
+
+	local_irq_disable();
+	/*
+	 * Being able to clear the SCHED bit from 1 to 0 means
+	 * we got the right to handle this tasklet.
+	 * Setting it from 0 to 1 means we can queue it.
+	 */
+	if (test_and_clear_bit(TASKLET_STATE_SCHED, &t->state) && !t->next) {
+		if (!test_and_set_bit(TASKLET_STATE_SCHED, &t->state)) {
+
+			t->next = (vec + cpu)->list;
+			(vec + cpu)->list = t;
+			__cpu_raise_softirq(cpu, softirq);
+		}
+	}
+	local_irq_enable();
+	rerun_softirqs(cpu);
 }
 
-void __tasklet_hi_schedule(struct tasklet_struct *t)
+void tasklet_enable(struct tasklet_struct *t)
+{
+	__tasklet_enable(t, tasklet_vec, TASKLET_SOFTIRQ);
+}
+
+void tasklet_hi_enable(struct tasklet_struct *t)
+{
+	__tasklet_enable(t, tasklet_hi_vec, HI_SOFTIRQ);
+}
+
+static inline void __tasklet_sched(struct tasklet_struct *t,
+					struct tasklet_head *vec, int softirq)
 {
 	int cpu = smp_processor_id();
 	unsigned long flags;
 
 	local_irq_save(flags);
-	t->next = tasklet_hi_vec[cpu].list;
-	tasklet_hi_vec[cpu].list = t;
-	cpu_raise_softirq(cpu, HI_SOFTIRQ);
+	t->next = (vec + cpu)->list;
+	(vec + cpu)->list = t;
+	__cpu_raise_softirq(cpu, softirq);
 	local_irq_restore(flags);
+	rerun_softirqs(cpu);
 }
 
-static void tasklet_action(struct softirq_action *a)
+void __tasklet_schedule(struct tasklet_struct *t)
 {
-	int cpu = smp_processor_id();
-	struct tasklet_struct *list;
-
-	local_irq_disable();
-	list = tasklet_vec[cpu].list;
-	tasklet_vec[cpu].list = NULL;
-	local_irq_enable();
-
-	while (list) {
-		struct tasklet_struct *t = list;
-
-		list = list->next;
-
-		if (tasklet_trylock(t)) {
-			if (!atomic_read(&t->count)) {
-				if (!test_and_clear_bit(TASKLET_STATE_SCHED, &t->state))
-					BUG();
-				t->func(t->data);
-				tasklet_unlock(t);
-				continue;
-			}
-			tasklet_unlock(t);
-		}
+	__tasklet_sched(t, tasklet_vec, TASKLET_SOFTIRQ);
+}
 
-		local_irq_disable();
-		t->next = tasklet_vec[cpu].list;
-		tasklet_vec[cpu].list = t;
-		__cpu_raise_softirq(cpu, TASKLET_SOFTIRQ);
-		local_irq_enable();
-	}
+void __tasklet_hi_schedule(struct tasklet_struct *t)
+{
+	__tasklet_sched(t, tasklet_hi_vec, HI_SOFTIRQ);
 }
 
-static void tasklet_hi_action(struct softirq_action *a)
+static inline void __tasklet_action(struct softirq_action *a,
+					struct tasklet_head *vec)
 {
 	int cpu = smp_processor_id();
 	struct tasklet_struct *list;
 
 	local_irq_disable();
-	list = tasklet_hi_vec[cpu].list;
-	tasklet_hi_vec[cpu].list = NULL;
+	list = (vec + cpu)->list;
+	(vec + cpu)->list = NULL;
 	local_irq_enable();
 
 	while (list) {
 		struct tasklet_struct *t = list;
 
 		list = list->next;
+		t->next = NULL;
 
-		if (tasklet_trylock(t)) {
-			if (!atomic_read(&t->count)) {
-				if (!test_and_clear_bit(TASKLET_STATE_SCHED, &t->state))
-					BUG();
-				t->func(t->data);
-				tasklet_unlock(t);
-				continue;
-			}
+repeat:
+		if (!tasklet_trylock(t))
+			continue;
+		if (atomic_read(&t->count)) {
 			tasklet_unlock(t);
+			continue;
 		}
-
-		local_irq_disable();
-		t->next = tasklet_hi_vec[cpu].list;
-		tasklet_hi_vec[cpu].list = t;
-		__cpu_raise_softirq(cpu, HI_SOFTIRQ);
-		local_irq_enable();
+		if (test_and_clear_bit(TASKLET_STATE_SCHED, &t->state)) {
+			t->func(t->data);
+			tasklet_unlock(t);
+			if (test_bit(TASKLET_STATE_SCHED, &t->state))
+				goto repeat;
+			continue;
+		}
+		tasklet_unlock(t);
 	}
 }
 
+static void tasklet_action(struct softirq_action *a)
+{
+	__tasklet_action(a, tasklet_vec);
+}
+
+static void tasklet_hi_action(struct softirq_action *a)
+{
+	__tasklet_action(a, tasklet_hi_vec);
+}
 
 void tasklet_init(struct tasklet_struct *t,
 		  void (*func)(unsigned long), unsigned long data)
@@ -268,8 +230,6 @@
 	clear_bit(TASKLET_STATE_SCHED, &t->state);
 }
 
-
-
 /* Old style BHs */
 
 static void (*bh_base[32])(void);
@@ -325,7 +285,7 @@
 {
 	int i;
 
-	for (i=0; i<32; i++)
+	for (i = 0; i < 32; i++)
 		tasklet_init(bh_task_vec+i, bh_action, i);
 
 	open_softirq(TASKLET_SOFTIRQ, tasklet_action, NULL);
@@ -358,61 +318,3 @@
 			f(data);
 	}
 }
-
-static int ksoftirqd(void * __bind_cpu)
-{
-	int bind_cpu = *(int *) __bind_cpu;
-	int cpu = cpu_logical_map(bind_cpu);
-
-	daemonize();
-	current->nice = 19;
-	sigfillset(&current->blocked);
-
-	/* Migrate to the right CPU */
-	current->cpus_allowed = 1UL << cpu;
-	while (smp_processor_id() != cpu)
-		schedule();
-
-	sprintf(current->comm, "ksoftirqd_CPU%d", bind_cpu);
-
-	__set_current_state(TASK_INTERRUPTIBLE);
-	mb();
-
-	ksoftirqd_task(cpu) = current;
-
-	for (;;) {
-		if (!softirq_pending(cpu))
-			schedule();
-
-		__set_current_state(TASK_RUNNING);
-
-		while (softirq_pending(cpu)) {
-			do_softirq();
-			if (current->need_resched)
-				schedule();
-		}
-
-		__set_current_state(TASK_INTERRUPTIBLE);
-	}
-}
-
-static __init int spawn_ksoftirqd(void)
-{
-	int cpu;
-
-	for (cpu = 0; cpu < smp_num_cpus; cpu++) {
-		if (kernel_thread(ksoftirqd, (void *) &cpu,
-				  CLONE_FS | CLONE_FILES | CLONE_SIGNAL) < 0)
-			printk("spawn_ksoftirqd() failed for cpu %d\n", cpu);
-		else {
-			while (!ksoftirqd_task(cpu_logical_map(cpu))) {
-				current->policy |= SCHED_YIELD;
-				schedule();
-			}
-		}
-	}
-
-	return 0;
-}
-
-__initcall(spawn_ksoftirqd);
--- linux/kernel/timer.c.orig	Tue Aug 21 14:26:19 2001
+++ linux/kernel/timer.c	Mon Oct  1 21:52:43 2001
@@ -674,6 +674,7 @@
 void do_timer(struct pt_regs *regs)
 {
 	(*(unsigned long *)&jiffies)++;
+	irq_rate_check();
 #ifndef CONFIG_SMP
 	/* SMP process accounting uses the local APIC timer */
 
--- linux/include/linux/netdevice.h.orig	Mon Oct  1 21:52:28 2001
+++ linux/include/linux/netdevice.h	Mon Oct  1 23:07:44 2001
@@ -486,8 +486,9 @@
 		local_irq_save(flags);
 		dev->next_sched = softnet_data[cpu].output_queue;
 		softnet_data[cpu].output_queue = dev;
-		cpu_raise_softirq(cpu, NET_TX_SOFTIRQ);
+		__cpu_raise_softirq(cpu, NET_TX_SOFTIRQ);
 		local_irq_restore(flags);
+		rerun_softirqs(cpu);
 	}
 }
 
@@ -535,8 +536,9 @@
 		local_irq_save(flags);
 		skb->next = softnet_data[cpu].completion_queue;
 		softnet_data[cpu].completion_queue = skb;
-		cpu_raise_softirq(cpu, NET_TX_SOFTIRQ);
+		__cpu_raise_softirq(cpu, NET_TX_SOFTIRQ);
 		local_irq_restore(flags);
+		rerun_softirqs(cpu);
 	}
 }
 
--- linux/include/linux/interrupt.h.orig	Mon Oct  1 21:52:32 2001
+++ linux/include/linux/interrupt.h	Mon Oct  1 23:07:33 2001
@@ -74,9 +74,15 @@
 asmlinkage void do_softirq(void);
 extern void open_softirq(int nr, void (*action)(struct softirq_action*), void *data);
 extern void softirq_init(void);
-#define __cpu_raise_softirq(cpu, nr) do { softirq_pending(cpu) |= 1UL << (nr); } while (0)
-extern void FASTCALL(cpu_raise_softirq(unsigned int cpu, unsigned int nr));
-extern void FASTCALL(raise_softirq(unsigned int nr));
+extern void show_stack(unsigned long* esp);
+#define __cpu_raise_softirq(cpu, nr) \
+		do { softirq_pending(cpu) |= 1UL << (nr); } while (0)
+
+#define rerun_softirqs(cpu) 					\
+do {								\
+	if (!(local_irq_count(cpu) | local_bh_count(cpu)))	\
+		do_softirq();					\
+} while (0);
 
 
 
@@ -182,18 +188,8 @@
 	smp_mb();
 }
 
-static inline void tasklet_enable(struct tasklet_struct *t)
-{
-	smp_mb__before_atomic_dec();
-	atomic_dec(&t->count);
-}
-
-static inline void tasklet_hi_enable(struct tasklet_struct *t)
-{
-	smp_mb__before_atomic_dec();
-	atomic_dec(&t->count);
-}
-
+extern void tasklet_enable(struct tasklet_struct *t);
+extern void tasklet_hi_enable(struct tasklet_struct *t);
 extern void tasklet_kill(struct tasklet_struct *t);
 extern void tasklet_init(struct tasklet_struct *t,
 			 void (*func)(unsigned long), unsigned long data);
@@ -263,5 +259,6 @@
 extern unsigned long probe_irq_on(void);	/* returns 0 on failure */
 extern int probe_irq_off(unsigned long);	/* returns 0 or negative on failure */
 extern unsigned int probe_irq_mask(unsigned long);	/* returns mask of ISA interrupts */
+extern void irq_rate_check(void);
 
 #endif
--- linux/include/linux/irq.h.orig	Mon Oct  1 21:52:32 2001
+++ linux/include/linux/irq.h	Mon Oct  1 23:07:19 2001
@@ -31,6 +31,7 @@
 #define IRQ_LEVEL	64	/* IRQ level triggered */
 #define IRQ_MASKED	128	/* IRQ masked - shouldn't be seen again */
 #define IRQ_PER_CPU	256	/* IRQ is per CPU */
+#define IRQ_MITIGATED	512	/* IRQ got rate-limited */
 
 /*
  * Interrupt controller descriptor. This is all we need
@@ -62,6 +63,7 @@
 	struct irqaction *action;	/* IRQ action list */
 	unsigned int depth;		/* nested irq disables */
 	spinlock_t lock;
+	unsigned int count;
 } ____cacheline_aligned irq_desc_t;
 
 extern irq_desc_t irq_desc [NR_IRQS];
--- linux/include/asm-i386/irq.h.orig	Mon Oct  1 23:06:53 2001
+++ linux/include/asm-i386/irq.h	Mon Oct  1 23:07:06 2001
@@ -33,6 +33,7 @@
 extern void disable_irq(unsigned int);
 extern void disable_irq_nosync(unsigned int);
 extern void enable_irq(unsigned int);
+extern void set_irq_rate(unsigned int irq, unsigned int rate);
 
 #ifdef CONFIG_X86_LOCAL_APIC
 #define ARCH_HAS_NMI_WATCHDOG		/* See include/linux/nmi.h */
--- linux/include/asm-mips/softirq.h.orig	Mon Oct  1 21:52:32 2001
+++ linux/include/asm-mips/softirq.h	Mon Oct  1 21:52:43 2001
@@ -40,6 +40,4 @@
 
 #define in_softirq() (local_bh_count(smp_processor_id()) != 0)
 
-#define __cpu_raise_softirq(cpu, nr)	set_bit(nr, &softirq_pending(cpu))
-
 #endif /* _ASM_SOFTIRQ_H */
--- linux/include/asm-mips64/softirq.h.orig	Mon Oct  1 21:52:32 2001
+++ linux/include/asm-mips64/softirq.h	Mon Oct  1 21:52:43 2001
@@ -39,19 +39,4 @@
 
 #define in_softirq() (local_bh_count(smp_processor_id()) != 0)
 
-extern inline void __cpu_raise_softirq(int cpu, int nr)
-{
-	unsigned int *m = (unsigned int *) &softirq_pending(cpu);
-	unsigned int temp;
-
-	__asm__ __volatile__(
-		"1:\tll\t%0, %1\t\t\t# __cpu_raise_softirq\n\t"
-		"or\t%0, %2\n\t"
-		"sc\t%0, %1\n\t"
-		"beqz\t%0, 1b"
-		: "=&r" (temp), "=m" (*m)
-		: "ir" (1UL << nr), "m" (*m)
-		: "memory");
-}
-
 #endif /* _ASM_SOFTIRQ_H */
--- linux/net/core/dev.c.orig	Mon Oct  1 21:52:32 2001
+++ linux/net/core/dev.c	Mon Oct  1 21:52:43 2001
@@ -1218,8 +1218,9 @@
 			dev_hold(skb->dev);
 			__skb_queue_tail(&queue->input_pkt_queue,skb);
 			/* Runs from irqs or BH's, no need to wake BH */
-			cpu_raise_softirq(this_cpu, NET_RX_SOFTIRQ);
+			__cpu_raise_softirq(this_cpu, NET_RX_SOFTIRQ);
 			local_irq_restore(flags);
+			rerun_softirqs(this_cpu);
 #ifndef OFFLINE_SAMPLE
 			get_sample_stats(this_cpu);
 #endif
@@ -1529,8 +1530,9 @@
 	local_irq_disable();
 	netdev_rx_stat[this_cpu].time_squeeze++;
 	/* This already runs in BH context, no need to wake up BH's */
-	cpu_raise_softirq(this_cpu, NET_RX_SOFTIRQ);
+	__cpu_raise_softirq(this_cpu, NET_RX_SOFTIRQ);
 	local_irq_enable();
+	rerun_softirqs(this_cpu);
 
 	NET_PROFILE_LEAVE(softnet_process);
 	return;
--- linux/arch/i386/kernel/irq.c.orig	Mon Oct  1 21:52:28 2001
+++ linux/arch/i386/kernel/irq.c	Mon Oct  1 23:06:26 2001
@@ -18,6 +18,7 @@
  */
 
 #include <linux/config.h>
+#include <linux/compiler.h>
 #include <linux/ptrace.h>
 #include <linux/errno.h>
 #include <linux/signal.h>
@@ -68,7 +69,24 @@
 irq_desc_t irq_desc[NR_IRQS] __cacheline_aligned =
 	{ [0 ... NR_IRQS-1] = { 0, &no_irq_type, NULL, 0, SPIN_LOCK_UNLOCKED}};
 
-static void register_irq_proc (unsigned int irq);
+#define DEFAULT_IRQ_RATE 20000
+
+/*
+ * Maximum number of interrupts allowed, per second.
+ * Individual values can be set via echoing the new
+ * decimal value into /proc/irq/IRQ/max_rate.
+ */
+static unsigned int irq_rate [NR_IRQS] =
+		{ [0 ... NR_IRQS-1] = DEFAULT_IRQ_RATE };
+
+/*
+ * Print warnings only once. We reset it to 1 if rate
+ * limit has been changed.
+ */
+static unsigned int rate_warning [NR_IRQS] =
+		{ [0 ... NR_IRQS-1] = 1 };
+
+static void register_irq_proc(unsigned int irq);
 
 /*
  * Special irq handlers.
@@ -230,35 +248,8 @@
 	show_stack(NULL);
 	printk("\n");
 }
-	
-#define MAXCOUNT 100000000
 
-/*
- * I had a lockup scenario where a tight loop doing
- * spin_unlock()/spin_lock() on CPU#1 was racing with
- * spin_lock() on CPU#0. CPU#0 should have noticed spin_unlock(), but
- * apparently the spin_unlock() information did not make it
- * through to CPU#0 ... nasty, is this by design, do we have to limit
- * 'memory update oscillation frequency' artificially like here?
- *
- * Such 'high frequency update' races can be avoided by careful design, but
- * some of our major constructs like spinlocks use similar techniques,
- * it would be nice to clarify this issue. Set this define to 0 if you
- * want to check whether your system freezes.  I suspect the delay done
- * by SYNC_OTHER_CORES() is in correlation with 'snooping latency', but
- * i thought that such things are guaranteed by design, since we use
- * the 'LOCK' prefix.
- */
-#define SUSPECTED_CPU_OR_CHIPSET_BUG_WORKAROUND 0
-
-#if SUSPECTED_CPU_OR_CHIPSET_BUG_WORKAROUND
-# define SYNC_OTHER_CORES(x) udelay(x+1)
-#else
-/*
- * We have to allow irqs to arrive between __sti and __cli
- */
-# define SYNC_OTHER_CORES(x) __asm__ __volatile__ ("nop")
-#endif
+#define MAXCOUNT 100000000
 
 static inline void wait_on_irq(int cpu)
 {
@@ -276,7 +267,7 @@
 				break;
 
 		/* Duh, we have to loop. Release the lock to avoid deadlocks */
-		clear_bit(0,&global_irq_lock);
+		clear_bit(0, &global_irq_lock);
 
 		for (;;) {
 			if (!--count) {
@@ -284,7 +275,8 @@
 				count = ~0;
 			}
 			__sti();
-			SYNC_OTHER_CORES(cpu);
+			/* Allow irqs to arrive */
+			__asm__ __volatile__ ("nop");
 			__cli();
 			if (irqs_running())
 				continue;
@@ -467,6 +459,13 @@
  * controller lock. 
  */
  
+inline void __disable_irq(irq_desc_t *desc, unsigned int irq)
+{
+	if (!desc->depth++) {
+		desc->status |= IRQ_DISABLED;
+		desc->handler->disable(irq);
+	}
+}
 /**
  *	disable_irq_nosync - disable an irq without waiting
  *	@irq: Interrupt to disable
@@ -485,10 +484,7 @@
 	unsigned long flags;
 
 	spin_lock_irqsave(&desc->lock, flags);
-	if (!desc->depth++) {
-		desc->status |= IRQ_DISABLED;
-		desc->handler->disable(irq);
-	}
+	__disable_irq(desc, irq);
 	spin_unlock_irqrestore(&desc->lock, flags);
 }
 
@@ -516,23 +512,8 @@
 	}
 }
 
-/**
- *	enable_irq - enable handling of an irq
- *	@irq: Interrupt to enable
- *
- *	Undoes the effect of one call to disable_irq().  If this
- *	matches the last disable, processing of interrupts on this
- *	IRQ line is re-enabled.
- *
- *	This function may be called from IRQ context.
- */
- 
-void enable_irq(unsigned int irq)
+static inline void __enable_irq(irq_desc_t *desc, unsigned int irq)
 {
-	irq_desc_t *desc = irq_desc + irq;
-	unsigned long flags;
-
-	spin_lock_irqsave(&desc->lock, flags);
 	switch (desc->depth) {
 	case 1: {
 		unsigned int status = desc->status & ~IRQ_DISABLED;
@@ -551,9 +532,69 @@
 		printk("enable_irq(%u) unbalanced from %p\n", irq,
 		       __builtin_return_address(0));
 	}
+}
+
+/**
+ *	enable_irq - enable handling of an irq
+ *	@irq: Interrupt to enable
+ *
+ *	Undoes the effect of one call to disable_irq().  If this
+ *	matches the last disable, processing of interrupts on this
+ *	IRQ line is re-enabled.
+ *
+ *	This function may be called from IRQ context.
+ */
+ 
+void enable_irq(unsigned int irq)
+{
+	irq_desc_t *desc = irq_desc + irq;
+	unsigned long flags;
+
+	spin_lock_irqsave(&desc->lock, flags);
+	__enable_irq(desc, irq);
 	spin_unlock_irqrestore(&desc->lock, flags);
 }
 
+void set_irq_rate(unsigned int irq, unsigned int rate)
+{
+	if (rate < 2*HZ)
+		rate = 2*HZ;
+	if (irq_rate[irq] != rate)
+		rate_warning[irq] = 1;
+	irq_rate[irq] = rate;
+}
+
+static inline void __handle_mitigated(irq_desc_t *desc, unsigned int irq)
+{
+	desc->status &= ~IRQ_MITIGATED;
+	__enable_irq(desc, irq);
+}
+
+/*
+ * This function, provided by every architecture, resets
+ * the irq-limit counters in every jiffy. Overhead is
+ * fairly small, since it gets the spinlock only if the IRQ
+ * got mitigated.
+ */
+
+void irq_rate_check(void)
+{
+	unsigned long flags;
+	irq_desc_t *desc;
+	int i;
+
+	for (i = 0; i < NR_IRQS; i++) {
+		desc = irq_desc + i;
+		if (desc->count <= 1) {
+			spin_lock_irqsave(&desc->lock, flags);
+			if (desc->status & IRQ_MITIGATED)
+				__handle_mitigated(desc, i);
+			spin_unlock_irqrestore(&desc->lock, flags);
+		}
+		desc->count = irq_rate[i] / HZ;
+	}
+}
+
 /*
  * do_IRQ handles all normal device IRQ's (the special
  * SMP cross-CPU interrupts have their own specific
@@ -585,6 +626,13 @@
 	   WAITING is used by probe to mark irqs that are being tested
 	   */
 	status = desc->status & ~(IRQ_REPLAY | IRQ_WAITING);
+	/*
+	 * One decrement and one branch (test for zero) into
+	 * an unlikely-predicted branch. It cannot be cheaper
+	 * than this.
+	 */
+	if (unlikely(!--desc->count))
+		goto mitigate;
 	status |= IRQ_PENDING; /* we _want_ to handle it */
 
 	/*
@@ -639,6 +687,27 @@
 	if (softirq_pending(cpu))
 		do_softirq();
 	return 1;
+
+mitigate:
+	/*
+	 * We take a slightly longer path here to not put
+	 * overhead into the IRQ hotpath:
+	 */
+	desc->count = 1;
+	if (status & IRQ_MITIGATED)
+		goto out;
+	/*
+	 * Disable interrupt source. It will be re-enabled
+	 * by the next timer interrupt - and possibly be
+	 * restarted if needed.
+	 */
+	desc->status |= IRQ_MITIGATED | IRQ_PENDING;
+	__disable_irq(desc, irq);
+	if (rate_warning[irq]) {
+		printk(KERN_WARNING "Rate limit of %d irqs/sec exceeded for IRQ%d! Throttling irq source.\n", irq_rate[irq], irq);
+		rate_warning[irq] = 0;
+	}
+	goto out;
 }
 
 /**
@@ -809,7 +878,7 @@
 	 * something may have generated an irq long ago and we want to
 	 * flush such a longstanding irq before considering it as spurious. 
 	 */
-	for (i = NR_IRQS-1; i > 0; i--)  {
+	for (i = NR_IRQS-1; i > 0; i--) {
 		desc = irq_desc + i;
 
 		spin_lock_irq(&desc->lock);
@@ -1030,9 +1099,49 @@
 static struct proc_dir_entry * root_irq_dir;
 static struct proc_dir_entry * irq_dir [NR_IRQS];
 
+#define DEC_DIGITS 9
+
+/*
+ * Parses from 0 to 999999999. More than enough for IRQ purposes.
+ */
+static unsigned int parse_dec_value(const char *buffer,
+		unsigned long count, unsigned long *ret)
+{
+	unsigned char decnum [DEC_DIGITS];
+	unsigned long value;
+	int i;
+
+	if (!count)
+		return -EINVAL;
+	if (count > DEC_DIGITS)
+		count = DEC_DIGITS;
+	if (copy_from_user(decnum, buffer, count))
+		return -EFAULT;
+
+	/*
+	 * Parse the first 9 characters as a decimal string,
+	 * any non-decimal char is end-of-string.
+	 */
+	value = 0;
+
+	for (i = 0; i < count; i++) {
+		unsigned int c = decnum[i];
+
+		switch (c) {
+			case '0' ... '9': c -= '0'; break;
+		default:
+			goto out;
+		}
+		value = value * 10 + c;
+	}
+out:
+	*ret = value;
+	return 0;
+}
+
 #define HEX_DIGITS 8
 
-static unsigned int parse_hex_value (const char *buffer,
+static unsigned int parse_hex_value(const char *buffer,
 		unsigned long count, unsigned long *ret)
 {
 	unsigned char hexnum [HEX_DIGITS];
@@ -1071,18 +1180,17 @@
 
 #if CONFIG_SMP
 
-static struct proc_dir_entry * smp_affinity_entry [NR_IRQS];
-
 static unsigned long irq_affinity [NR_IRQS] = { [0 ... NR_IRQS-1] = ~0UL };
-static int irq_affinity_read_proc (char *page, char **start, off_t off,
+
+static int irq_affinity_read_proc(char *page, char **start, off_t off,
 			int count, int *eof, void *data)
 {
-	if (count < HEX_DIGITS+1)
+	if (count <= HEX_DIGITS)
 		return -EINVAL;
 	return sprintf (page, "%08lx\n", irq_affinity[(long)data]);
 }
 
-static int irq_affinity_write_proc (struct file *file, const char *buffer,
+static int irq_affinity_write_proc(struct file *file, const char *buffer,
 					unsigned long count, void *data)
 {
 	int irq = (long) data, full_count = count, err;
@@ -1109,16 +1217,16 @@
 
 #endif
 
-static int prof_cpu_mask_read_proc (char *page, char **start, off_t off,
+static int prof_cpu_mask_read_proc(char *page, char **start, off_t off,
 			int count, int *eof, void *data)
 {
 	unsigned long *mask = (unsigned long *) data;
-	if (count < HEX_DIGITS+1)
+	if (count <= HEX_DIGITS)
 		return -EINVAL;
 	return sprintf (page, "%08lx\n", *mask);
 }
 
-static int prof_cpu_mask_write_proc (struct file *file, const char *buffer,
+static int prof_cpu_mask_write_proc(struct file *file, const char *buffer,
 					unsigned long count, void *data)
 {
 	unsigned long *mask = (unsigned long *) data, full_count = count, err;
@@ -1132,10 +1240,45 @@
 	return full_count;
 }
 
+static int irq_rate_read_proc(char *page, char **start, off_t off,
+			int count, int *eof, void *data)
+{
+	int irq = (int) data;
+	if (count <= DEC_DIGITS)
+		return -EINVAL;
+	return sprintf (page, "%d\n", irq_rate[irq]);
+}
+
+static int irq_rate_write_proc(struct file *file, const char *buffer,
+					unsigned long count, void *data)
+{
+	int irq = (int) data;
+	unsigned long full_count = count, err;
+	unsigned long new_value;
+
+	/* do not allow the timer interrupt to be rate-limited ... :-| */
+	if (!irq)
+		return -EINVAL;
+	err = parse_dec_value(buffer, count, &new_value);
+	if (err)
+		return err;
+
+	/*
+	 * Do not allow a frequency to be lower than 1 interrupt
+	 * per jiffy.
+	 */
+	if (!new_value)
+		return -EINVAL;
+
+	set_irq_rate(irq, new_value);
+	return full_count;
+}
+
 #define MAX_NAMELEN 10
 
-static void register_irq_proc (unsigned int irq)
+static void register_irq_proc(unsigned int irq)
 {
+	struct proc_dir_entry *entry;
 	char name [MAX_NAMELEN];
 
 	if (!root_irq_dir || (irq_desc[irq].handler == &no_irq_type) ||
@@ -1148,28 +1291,32 @@
 	/* create /proc/irq/1234 */
 	irq_dir[irq] = proc_mkdir(name, root_irq_dir);
 
-#if CONFIG_SMP
-	{
-		struct proc_dir_entry *entry;
+	/* create /proc/irq/1234/max_rate */
+	entry = create_proc_entry("max_rate", 0600, irq_dir[irq]);
 
-		/* create /proc/irq/1234/smp_affinity */
-		entry = create_proc_entry("smp_affinity", 0600, irq_dir[irq]);
+	if (entry) {
+		entry->nlink = 1;
+		entry->data = (void *)irq;
+		entry->read_proc = irq_rate_read_proc;
+		entry->write_proc = irq_rate_write_proc;
+	}
 
-		if (entry) {
-			entry->nlink = 1;
-			entry->data = (void *)(long)irq;
-			entry->read_proc = irq_affinity_read_proc;
-			entry->write_proc = irq_affinity_write_proc;
-		}
+#if CONFIG_SMP
+	/* create /proc/irq/1234/smp_affinity */
+	entry = create_proc_entry("smp_affinity", 0600, irq_dir[irq]);
 
-		smp_affinity_entry[irq] = entry;
+	if (entry) {
+		entry->nlink = 1;
+		entry->data = (void *)(long)irq;
+		entry->read_proc = irq_affinity_read_proc;
+		entry->write_proc = irq_affinity_write_proc;
 	}
 #endif
 }
 
 unsigned long prof_cpu_mask = -1;
 
-void init_irq_proc (void)
+void init_irq_proc(void)
 {
 	struct proc_dir_entry *entry;
 	int i;
@@ -1181,7 +1328,7 @@
 	entry = create_proc_entry("prof_cpu_mask", 0600, root_irq_dir);
 
 	if (!entry)
-	    return;
+		return;
 
 	entry->nlink = 1;
 	entry->data = (void *)&prof_cpu_mask;


end of thread

Thread overview: 151+ messages
2001-10-02  0:41 [announce] [patch] limiting IRQ load, irq-rewrite-2.4.11-B5 jamal
2001-10-02  1:04 ` Benjamin LaHaise
2001-10-02  1:54   ` jamal
2001-10-02  5:13     ` Benjamin LaHaise
2001-10-02  5:55       ` Ben Greear
2001-10-03  9:22         ` Ingo Molnar
2001-10-03 14:06           ` David Brownell
2001-10-02 12:10       ` jamal
2001-10-02 22:00         ` jamal
2001-10-03  8:34           ` Ingo Molnar
2001-10-03  9:29             ` Helge Hafting
2001-10-03 12:49             ` jamal
2001-10-03 14:51               ` Ingo Molnar
2001-10-03 15:14                 ` jamal
2001-10-03 17:28                   ` Ingo Molnar
2001-10-04  0:53                     ` jamal
2001-10-04  6:28                       ` Ingo Molnar
2001-10-04 11:34                         ` jamal
2001-10-04 17:40                           ` Andreas Dilger
2001-10-04 18:33                             ` jamal
2001-10-04  6:50                       ` Ben Greear
2001-10-04  6:52                         ` Ingo Molnar
2001-10-04 11:50                           ` jamal
2001-10-04  6:55                         ` Jeff Garzik
2001-10-04  6:56                           ` Ingo Molnar
2001-10-04 21:28                 ` Alex Bligh - linux-kernel
2001-10-04 21:49                   ` Benjamin LaHaise
2001-10-04 23:20                     ` Alex Bligh - linux-kernel
2001-10-04 23:26                       ` Benjamin LaHaise
2001-10-04 23:47                       ` Robert Love
2001-10-04 23:51                         ` Linus Torvalds
2001-10-05  0:00                           ` Ben Greear
2001-10-05  0:18                             ` Davide Libenzi
2001-10-05  2:01                             ` jamal
2001-10-04 22:01                   ` Simon Kirby
2001-10-04 23:25                     ` Alex Bligh - linux-kernel
2001-10-04 23:34                       ` Simon Kirby
2001-10-04 22:10                   ` Alan Cox
2001-10-04 23:28                     ` Alex Bligh - linux-kernel
2001-10-05 15:22                   ` Robert Olsson
2001-10-03  9:38           ` Ingo Molnar
2001-10-03 13:03             ` jamal
2001-10-03 13:25               ` jamal
2001-10-03 15:28               ` Ingo Molnar
2001-10-03 15:56                 ` jamal
2001-10-03 16:51                   ` Ingo Molnar
2001-10-04  0:46                     ` jamal
2001-10-08  0:31                     ` Andrea Arcangeli
2001-10-08  4:58                       ` Bernd Eckenfels
2001-10-08 15:00                       ` Alan Cox
2001-10-08 15:03                         ` Jeff Garzik
2001-10-08 15:12                           ` Alan Cox
2001-10-08 15:09                             ` jamal
2001-10-08 15:22                               ` Alan Cox
2001-10-08 15:20                                 ` jamal
2001-10-08 15:35                                   ` Alan Cox
2001-10-08 15:57                                     ` jamal
2001-10-08 16:11                                       ` Alan Cox
2001-10-08 16:11                                         ` jamal
2001-10-10 16:26                                         ` Pavel Machek
2001-10-10 16:25                                     ` Pavel Machek
2001-10-08 15:24                             ` Andrea Arcangeli
2001-10-08 15:35                               ` Alan Cox
2001-10-08 15:19                         ` Andrea Arcangeli
2001-10-08 15:10                       ` bill davidsen
2001-10-03 21:08                   ` Robert Olsson
2001-10-03 22:22                     ` Andreas Dilger
2001-10-04 17:32                       ` Davide Libenzi
2001-10-05 14:52                     ` Robert Olsson
2001-10-05 18:48                       ` Andreas Dilger
2001-10-05 19:07                         ` Davide Libenzi
2001-10-05 19:17                         ` kuznet
2001-10-08 13:58                           ` jamal
2001-10-08 17:42                           ` Robert Olsson
2001-10-08 17:39                             ` jamal
2001-10-07  6:11                         ` Robert Olsson
2001-10-03 16:53                 ` kuznet
2001-10-03 17:06                   ` Ingo Molnar
2001-10-04  0:44                     ` jamal
2001-10-04  6:35                       ` Ingo Molnar
2001-10-04 11:41                         ` jamal
2001-10-05 16:42                         ` kuznet
2001-10-04 13:05                       ` Robert Olsson
2001-10-03 19:03                   ` Benjamin LaHaise
2001-10-04  1:10                     ` jamal
2001-10-04  1:30                       ` Benjamin LaHaise
2001-10-03 22:31                         ` Rob Landley
2001-10-04  1:39                         ` jamal
2001-10-03 15:42               ` Ben Greear
2001-10-03 15:58                 ` jamal
2001-10-03 16:09                   ` Ben Greear
2001-10-03 16:14                     ` Ingo Molnar
2001-10-03 16:20                     ` Jeff Garzik
2001-10-03 16:33                 ` Linus Torvalds
2001-10-03 17:25                   ` Ingo Molnar
2001-10-03 18:11                     ` Linus Torvalds
2001-10-03 20:41                       ` Jeremy Hansen
2001-10-03 20:02                   ` Simon Kirby
2001-10-04  1:04                     ` jamal
2001-10-04  6:47                       ` Ben Greear
2001-10-04  7:41                         ` Henning P. Schmiedehausen
2001-10-04 16:09                           ` Ben Greear
2001-10-04 17:32                             ` Henning P. Schmiedehausen
2001-10-04 18:03                               ` Ben Greear
2001-10-04 18:30                           ` Christopher E. Brown
2001-10-04 11:47                         ` jamal
2001-10-04 15:56                           ` Ben Greear
2001-10-04 18:23                             ` jamal
2001-10-04  6:50                       ` Ingo Molnar
2001-10-04 11:49                         ` jamal
2001-10-04  8:45                       ` Simon Kirby
2001-10-04 11:54                         ` jamal
2001-10-04 15:03                           ` Tim Hockin
2001-10-04 18:55                           ` Ion Badulescu
2001-10-04 19:00                             ` jamal
2001-10-04 21:16                               ` Ion Badulescu
2001-10-04  4:12                   ` bill davidsen
2001-10-04 18:16                   ` Alan Cox
2001-10-03 13:38             ` Robert Olsson
2001-10-04 21:22               ` Alex Bligh - linux-kernel
2001-10-05 14:32               ` Robert Olsson
2001-10-03  8:38         ` Ingo Molnar
2001-10-04  3:50           ` bill davidsen
2001-10-02 17:03       ` Robert Olsson
2001-10-02 17:37         ` jamal
2001-10-02 19:46         ` Andreas Dilger
  -- strict thread matches above, loose matches on Subject: below --
2001-10-08 14:45 jamal
2001-10-09  0:36 ` Scott Laird
2001-10-09  3:17   ` jamal
2001-10-09  4:04 ` Werner Almesberger
2001-10-04  8:25 Magnus Redin
2001-10-04 11:39 ` Trever L. Adams
     [not found] <200110031811.f93IBoN10026@penguin.transmeta.com>
2001-10-03 18:23 ` Ingo Molnar
2001-10-04  9:19   ` BALBIR SINGH
2001-10-04  9:22     ` Ingo Molnar
2001-10-04  9:49       ` BALBIR SINGH
2001-10-04 10:25         ` Ingo Molnar
2001-10-07 20:37           ` Andrea Arcangeli
2001-10-03 14:15 Manfred Spraul
2001-10-03 15:09 ` jamal
2001-10-03 18:37   ` Davide Libenzi
2001-10-01 22:16 Ingo Molnar
2001-10-01 22:26 ` Tim Hockin
2001-10-01 22:50   ` Ingo Molnar
2001-10-01 22:36 ` Andreas Dilger
2001-10-01 22:50 ` Ben Greear
2001-10-02 14:30   ` Alan Cox
2001-10-02 20:51     ` Ingo Molnar
2001-10-01 23:03 ` Linus Torvalds
2001-10-02  6:50 ` Marcus Sundberg
2001-10-03  8:47   ` Ingo Molnar
