From: Mike Galbraith <efault@gmx.de>
To: linux-kernel@vger.kernel.org
Cc: Linus Torvalds <torvalds@transmeta.com>
Subject: Houston, I think we have a problem
Date: Sun, 27 Apr 2003 12:52:49 +0200	[thread overview]
Message-ID: <5.2.0.9.2.20030427090009.01f89870@pop.gmx.net> (raw)
In-Reply-To: <Pine.LNX.4.44.0304232012400.19176-100000@home.transmeta.com>

[-- Attachment #1: Type: text/plain, Size: 3408 bytes --]

<SQUEAK!  SQUEAK!  SQUEAK!>

Hi Folks,

I don't generally squeak unless I'm pretty darn sure I see a genuine 
problem.  I think I see one right now, so here I am squeaking my little 
lungs out ;-)  Perhaps I'm being stupid, and if that's the case, someone 
please apply a size 15EE boot vigorously to my tail-feathers (jump-start 
brain), and I'll shut up.

The problem I see is terrible, terrible semaphore starvation.  It comes in 
two varieties, and might apply to other locks as well [1].  Variety 1 is 
owners of semaphores being sent off to the expired array, which happens 
with remarkable frequency.  This variant is the lesser of the two evils, 
because here you at least have _some_ protection via EXPIRED_STARVING(), 
even if you have interactive tasks doing round-robin.  The worse variant is 
when you have a steady stream of tasks being upgraded to TASK_INTERACTIVE() 
while someone of low/modest priority has a semaphore downed... the poor guy 
can (seemingly) wait for _ages_ to get a chance to release it, and will 
starve all comers in the meantime.  I regularly see a SCHED_RR, mlockall()'d 
vmstat stall for several seconds, and _sometimes_ my poor little box goes 
utterly insane and stalls vmstat for over a MINUTE [2].
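
For reference, the only throttle on variety 1 is the expired-array 
starvation check, and variety 2 sidesteps even that.  Roughly, paraphrased 
from 2.5-era kernel/sched.c (a sketch, not the verbatim 2.5.68 text):

	/* A task counts as interactive once its sleep_avg bonus lifts
	 * its dynamic priority far enough above its static priority. */
	#define TASK_INTERACTIVE(p) \
		((p)->prio <= (p)->static_prio - DELTA(p))

	/* The expired array is only rescued from round-robining
	 * interactive tasks after somebody has been parked there for
	 * STARVATION_LIMIT ticks per runnable task. */
	#define EXPIRED_STARVING(rq) \
		((rq)->expired_timestamp && \
		 (jiffies - (rq)->expired_timestamp >= \
		  STARVATION_LIMIT * ((rq)->nr_running) + 1))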

To reproduce this 100% of the time, simply compile virgin 2.5.68 
(UP/preempt), reduce your RAM to 128MB, and, using gcc-2.95.3 so as not to 
overload the VM, run make -j30 bzImage in an ext3 partition on a P3/500 
box with a single IDE disk.  No, you don't really need to meet all of those 
restrictions... you'll see the problem on a big hairy-chested box as well, 
just not as badly as I see it on my little box.  The first symptom of the 
problem you will notice is a complete lack of swap activity, along with 
quantities of unused RAM that would be highly improbable were all those 
hungry cc1's getting regular CPU feedings.

If the huge increase in hold time (induced by a stream of elevated-priority 
tasks, who may even achieve their elevated status via _one_ wakeup) is the 
desired behavior now, so be it.  If that's the case, someone please say so, 
that I may cease and desist fighting with the dang thing.  I'm having lots 
of fun, mind you, but testing is supposed to be mind-numbingly boring ;-)
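
To see how _one_ wakeup can do it: the priority bonus is computed directly 
from sleep_avg, so a single sufficiently long sleep maxes the bonus out in 
one go.  Again paraphrased from 2.5-era kernel/sched.c (sketch, not 
verbatim):

	static int effective_prio(task_t *p)
	{
		int bonus, prio;

		if (rt_task(p))
			return p->prio;

		/* scale sleep_avg [0..MAX_SLEEP_AVG] into a -5..+5 bonus */
		bonus = MAX_USER_PRIO*PRIO_BONUS_RATIO*p->sleep_avg/MAX_SLEEP_AVG/100 -
				MAX_USER_PRIO*PRIO_BONUS_RATIO/100/2;

		prio = p->static_prio - bonus;
		if (prio < MAX_RT_PRIO)
			prio = MAX_RT_PRIO;
		if (prio > MAX_PRIO-1)
			prio = MAX_PRIO-1;
		return prio;
	}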

Anyway, grep for pid:prio pair 301:-2 in the attached log to see vmstat 
being nailed for over 8 seconds.  Then, grep for pid:prio pair 1119:23 to 
see a task holding up a parade for 7 seconds.  The patch I used to generate 
this log is also attached for idiot-reproachment purposes.

(um, don't anyone try running it on an SMP or NUMA beast [those folks would 
surely know better, but...] as it's highly likely to explode violently)

	halbaderi,

	-Mike

1.  I'm pretty sure it does... might really be that Heisenberg fellow 
messing with me again.

2.  The 100% simple and effective way to "fix" this problem for this 
workload is to "just say no" to coughing up more than HZ worth of CPU time 
in activate_task().  This seems perfectly obvious and correct to me... 
though I'll admit it would seem much _more_ perfectly obvious and correct 
if MAX_SLEEP_AVG were 11000 instead of 10000... or maybe even 40000.  
Whatever.  I posted one X-patch that worked pretty darn well, but nobody 
tried it.  Not even the folks who were _griping_ about interactivity, 
fairness, and whatnot.  How boring.
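
Concretely, the clamp I mean looks like this, sketched against the 2.5.68 
activate_task() shape (the HZ cap is my proposal, not anything in the tree):

	static inline void activate_task(task_t *p, runqueue_t *rq)
	{
		long sleep_time = jiffies - p->last_run - 1;

		if (sleep_time > 0) {
			/* "just say no": never credit more than HZ ticks
			 * of sleep, however long the task actually slept. */
			if (sleep_time > HZ)
				sleep_time = HZ;
			p->sleep_avg += sleep_time;
			if (p->sleep_avg > MAX_SLEEP_AVG)
				p->sleep_avg = MAX_SLEEP_AVG;
			p->prio = effective_prio(p);
		}
		__activate_task(p, rq);
	}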

BTW, what happens when kjournald yields and goes off to expired land?  See 
log2.txt.

[-- Attachment #2: log.txt --]
[-- Type: text/plain, Size: 14389 bytes --]

1131:18 starved 5 secs by 1119:23 who last ran 5152 ticks ago.
sh            R C5D95E10 92758188  1119   1118                     (NOTLB)
Call Trace:
 [<c011677a>] io_schedule+0xe/0x18
 [<c0143bee>] __wait_on_buffer+0xa2/0xbc
 [<c0116e34>] autoremove_wake_function+0x0/0x38
 [<c0116e34>] autoremove_wake_function+0x0/0x38
 [<c0144c83>] __bread_slow+0x73/0x94
 [<c0144ebe>] __bread+0x2a/0x34
 [<c0177c31>] ext3_get_inode_loc+0xe1/0x140
 [<c0178619>] ext3_reserve_inode_write+0x1d/0x94
 [<c01786aa>] ext3_mark_inode_dirty+0x1a/0x34
 [<c0174f74>] ext3_new_inode+0x714/0x7f0
 [<c017ac95>] ext3_create+0x8d/0x180
 [<c014f46a>] vfs_create+0xae/0xd4
 [<c014f82d>] open_namei+0x1ad/0x4a8
 [<c0142073>] filp_open+0x3b/0x5c
 [<c014249b>] sys_open+0x37/0x70
 [<c0108cbf>] syscall_call+0x7/0xb

semaphore downed at fs/namei.c:1249.

1132:18 starved 5 secs by 1119:23 who last ran 5178 ticks ago.
sh            R C5D95E10 92758188  1119   1118                     (NOTLB)
Call Trace:
 [<c011677a>] io_schedule+0xe/0x18
 [<c0143bee>] __wait_on_buffer+0xa2/0xbc
 [<c0116e34>] autoremove_wake_function+0x0/0x38
 [<c0116e34>] autoremove_wake_function+0x0/0x38
 [<c0144c83>] __bread_slow+0x73/0x94
 [<c0144ebe>] __bread+0x2a/0x34
 [<c0177c31>] ext3_get_inode_loc+0xe1/0x140
 [<c0178619>] ext3_reserve_inode_write+0x1d/0x94
 [<c01786aa>] ext3_mark_inode_dirty+0x1a/0x34
 [<c0174f74>] ext3_new_inode+0x714/0x7f0
 [<c017ac95>] ext3_create+0x8d/0x180
 [<c014f46a>] vfs_create+0xae/0xd4
 [<c014f82d>] open_namei+0x1ad/0x4a8
 [<c0142073>] filp_open+0x3b/0x5c
 [<c014249b>] sys_open+0x37/0x70
 [<c0108cbf>] syscall_call+0x7/0xb

semaphore downed at fs/namei.c:1249.

1093:18 starved 5 secs by 1119:23 who last ran 5204 ticks ago.
sh            R C5D95E10 92758188  1119   1118                     (NOTLB)
Call Trace:
 [<c011677a>] io_schedule+0xe/0x18
 [<c0143bee>] __wait_on_buffer+0xa2/0xbc
 [<c0116e34>] autoremove_wake_function+0x0/0x38
 [<c0116e34>] autoremove_wake_function+0x0/0x38
 [<c0144c83>] __bread_slow+0x73/0x94
 [<c0144ebe>] __bread+0x2a/0x34
 [<c0177c31>] ext3_get_inode_loc+0xe1/0x140
 [<c0178619>] ext3_reserve_inode_write+0x1d/0x94
 [<c01786aa>] ext3_mark_inode_dirty+0x1a/0x34
 [<c0174f74>] ext3_new_inode+0x714/0x7f0
 [<c017ac95>] ext3_create+0x8d/0x180
 [<c014f46a>] vfs_create+0xae/0xd4
 [<c014f82d>] open_namei+0x1ad/0x4a8
 [<c0142073>] filp_open+0x3b/0x5c
 [<c014249b>] sys_open+0x37/0x70
 [<c0108cbf>] syscall_call+0x7/0xb

semaphore downed at fs/namei.c:1249.

1067:18 starved 5 secs by 1119:23 who last ran 5230 ticks ago.
sh            R C5D95E10 92758188  1119   1118                     (NOTLB)
Call Trace:
 [<c011677a>] io_schedule+0xe/0x18
 [<c0143bee>] __wait_on_buffer+0xa2/0xbc
 [<c0116e34>] autoremove_wake_function+0x0/0x38
 [<c0116e34>] autoremove_wake_function+0x0/0x38
 [<c0144c83>] __bread_slow+0x73/0x94
 [<c0144ebe>] __bread+0x2a/0x34
 [<c0177c31>] ext3_get_inode_loc+0xe1/0x140
 [<c0178619>] ext3_reserve_inode_write+0x1d/0x94
 [<c01786aa>] ext3_mark_inode_dirty+0x1a/0x34
 [<c0174f74>] ext3_new_inode+0x714/0x7f0
 [<c017ac95>] ext3_create+0x8d/0x180
 [<c014f46a>] vfs_create+0xae/0xd4
 [<c014f82d>] open_namei+0x1ad/0x4a8
 [<c0142073>] filp_open+0x3b/0x5c
 [<c014249b>] sys_open+0x37/0x70
 [<c0108cbf>] syscall_call+0x7/0xb

semaphore downed at fs/namei.c:1249.

1142:22 starved 1 secs by 1133:23 who last ran 5613 ticks ago.
semaphore downed at fs/namei.c:1249.

396:15 starved 2 secs by 1119:23 who last ran 5767 ticks ago.
semaphore downed at fs/namei.c:1249.

1139:21 starved 2 secs by 1119:23 who last ran 5767 ticks ago.
semaphore downed at fs/namei.c:1249.

301:-2 starved 3 secs by 1137:21 who last ran 3667 ticks ago.
semaphore downed at arch/i386/kernel/sys_i386.c:58.

1079:16 starved 6 secs by 1119:23 who last ran 6155 ticks ago.
sh            R C5D95E10 92758188  1119   1118                     (NOTLB)
Call Trace:
 [<c011677a>] io_schedule+0xe/0x18
 [<c0143bee>] __wait_on_buffer+0xa2/0xbc
 [<c0116e34>] autoremove_wake_function+0x0/0x38
 [<c0116e34>] autoremove_wake_function+0x0/0x38
 [<c0144c83>] __bread_slow+0x73/0x94
 [<c0144ebe>] __bread+0x2a/0x34
 [<c0177c31>] ext3_get_inode_loc+0xe1/0x140
 [<c0178619>] ext3_reserve_inode_write+0x1d/0x94
 [<c01786aa>] ext3_mark_inode_dirty+0x1a/0x34
 [<c0174f74>] ext3_new_inode+0x714/0x7f0
 [<c017ac95>] ext3_create+0x8d/0x180
 [<c014f46a>] vfs_create+0xae/0xd4
 [<c014f82d>] open_namei+0x1ad/0x4a8
 [<c0142073>] filp_open+0x3b/0x5c
 [<c014249b>] sys_open+0x37/0x70
 [<c0108cbf>] syscall_call+0x7/0xb

semaphore downed at fs/namei.c:1249.

1131:17 starved 6 secs by 1119:23 who last ran 6181 ticks ago.
sh            R C5D95E10 92758188  1119   1118                     (NOTLB)
Call Trace:
 [<c011677a>] io_schedule+0xe/0x18
 [<c0143bee>] __wait_on_buffer+0xa2/0xbc
 [<c0116e34>] autoremove_wake_function+0x0/0x38
 [<c0116e34>] autoremove_wake_function+0x0/0x38
 [<c0144c83>] __bread_slow+0x73/0x94
 [<c0144ebe>] __bread+0x2a/0x34
 [<c0177c31>] ext3_get_inode_loc+0xe1/0x140
 [<c0178619>] ext3_reserve_inode_write+0x1d/0x94
 [<c01786aa>] ext3_mark_inode_dirty+0x1a/0x34
 [<c0174f74>] ext3_new_inode+0x714/0x7f0
 [<c017ac95>] ext3_create+0x8d/0x180
 [<c014f46a>] vfs_create+0xae/0xd4
 [<c014f82d>] open_namei+0x1ad/0x4a8
 [<c0142073>] filp_open+0x3b/0x5c
 [<c014249b>] sys_open+0x37/0x70
 [<c0108cbf>] syscall_call+0x7/0xb

semaphore downed at fs/namei.c:1249.

1132:17 starved 6 secs by 1119:23 who last ran 6205 ticks ago.
sh            R C5D95E10 92758188  1119   1118                     (NOTLB)
Call Trace:
 [<c011677a>] io_schedule+0xe/0x18
 [<c0143bee>] __wait_on_buffer+0xa2/0xbc
 [<c0116e34>] autoremove_wake_function+0x0/0x38
 [<c0116e34>] autoremove_wake_function+0x0/0x38
 [<c0144c83>] __bread_slow+0x73/0x94
 [<c0144ebe>] __bread+0x2a/0x34
 [<c0177c31>] ext3_get_inode_loc+0xe1/0x140
 [<c0178619>] ext3_reserve_inode_write+0x1d/0x94
 [<c01786aa>] ext3_mark_inode_dirty+0x1a/0x34
 [<c0174f74>] ext3_new_inode+0x714/0x7f0
 [<c017ac95>] ext3_create+0x8d/0x180
 [<c014f46a>] vfs_create+0xae/0xd4
 [<c014f82d>] open_namei+0x1ad/0x4a8
 [<c0142073>] filp_open+0x3b/0x5c
 [<c014249b>] sys_open+0x37/0x70
 [<c0108cbf>] syscall_call+0x7/0xb

semaphore downed at fs/namei.c:1249.

1093:17 starved 6 secs by 1119:23 who last ran 6233 ticks ago.
sh            R C5D95E10 92758188  1119   1118                     (NOTLB)
Call Trace:
 [<c011677a>] io_schedule+0xe/0x18
 [<c0143bee>] __wait_on_buffer+0xa2/0xbc
 [<c0116e34>] autoremove_wake_function+0x0/0x38
 [<c0116e34>] autoremove_wake_function+0x0/0x38
 [<c0144c83>] __bread_slow+0x73/0x94
 [<c0144ebe>] __bread+0x2a/0x34
 [<c0177c31>] ext3_get_inode_loc+0xe1/0x140
 [<c0178619>] ext3_reserve_inode_write+0x1d/0x94
 [<c01786aa>] ext3_mark_inode_dirty+0x1a/0x34
 [<c0174f74>] ext3_new_inode+0x714/0x7f0
 [<c017ac95>] ext3_create+0x8d/0x180
 [<c014f46a>] vfs_create+0xae/0xd4
 [<c014f82d>] open_namei+0x1ad/0x4a8
 [<c0142073>] filp_open+0x3b/0x5c
 [<c014249b>] sys_open+0x37/0x70
 [<c0108cbf>] syscall_call+0x7/0xb

semaphore downed at fs/namei.c:1249.

1067:17 starved 6 secs by 1119:23 who last ran 6259 ticks ago.
sh            R C5D95E10 92758188  1119   1118                     (NOTLB)
Call Trace:
 [<c011677a>] io_schedule+0xe/0x18
 [<c0143bee>] __wait_on_buffer+0xa2/0xbc
 [<c0116e34>] autoremove_wake_function+0x0/0x38
 [<c0116e34>] autoremove_wake_function+0x0/0x38
 [<c0144c83>] __bread_slow+0x73/0x94
 [<c0144ebe>] __bread+0x2a/0x34
 [<c0177c31>] ext3_get_inode_loc+0xe1/0x140
 [<c0178619>] ext3_reserve_inode_write+0x1d/0x94
 [<c01786aa>] ext3_mark_inode_dirty+0x1a/0x34
 [<c0174f74>] ext3_new_inode+0x714/0x7f0
 [<c017ac95>] ext3_create+0x8d/0x180
 [<c014f46a>] vfs_create+0xae/0xd4
 [<c014f82d>] open_namei+0x1ad/0x4a8
 [<c0142073>] filp_open+0x3b/0x5c
 [<c014249b>] sys_open+0x37/0x70
 [<c0108cbf>] syscall_call+0x7/0xb

semaphore downed at fs/namei.c:1249.

1142:21 starved 2 secs by 1133:23 who last ran 6708 ticks ago.
semaphore downed at fs/namei.c:1249.

396:15 starved 3 secs by 1119:23 who last ran 6773 ticks ago.
semaphore downed at fs/namei.c:1249.

1139:20 starved 3 secs by 1119:23 who last ran 6773 ticks ago.
semaphore downed at fs/namei.c:1249.

301:-2 starved 4 secs by 1137:21 who last ran 4667 ticks ago.
semaphore downed at arch/i386/kernel/sys_i386.c:58.

1079:15 starved 7 secs by 1119:23 who last ran 7181 ticks ago.
sh            R C5D95E10 92758188  1119   1118                     (NOTLB)
Call Trace:
 [<c011677a>] io_schedule+0xe/0x18
 [<c0143bee>] __wait_on_buffer+0xa2/0xbc
 [<c0116e34>] autoremove_wake_function+0x0/0x38
 [<c0116e34>] autoremove_wake_function+0x0/0x38
 [<c0144c83>] __bread_slow+0x73/0x94
 [<c0144ebe>] __bread+0x2a/0x34
 [<c0177c31>] ext3_get_inode_loc+0xe1/0x140
 [<c0178619>] ext3_reserve_inode_write+0x1d/0x94
 [<c01786aa>] ext3_mark_inode_dirty+0x1a/0x34
 [<c0174f74>] ext3_new_inode+0x714/0x7f0
 [<c017ac95>] ext3_create+0x8d/0x180
 [<c014f46a>] vfs_create+0xae/0xd4
 [<c014f82d>] open_namei+0x1ad/0x4a8
 [<c0142073>] filp_open+0x3b/0x5c
 [<c014249b>] sys_open+0x37/0x70
 [<c0108cbf>] syscall_call+0x7/0xb

semaphore downed at fs/namei.c:1249.

1131:16 starved 7 secs by 1119:23 who last ran 7228 ticks ago.
sh            R C5D95E10 92758188  1119   1118                     (NOTLB)
Call Trace:
 [<c011677a>] io_schedule+0xe/0x18
 [<c0143bee>] __wait_on_buffer+0xa2/0xbc
 [<c0116e34>] autoremove_wake_function+0x0/0x38
 [<c0116e34>] autoremove_wake_function+0x0/0x38
 [<c0144c83>] __bread_slow+0x73/0x94
 [<c0144ebe>] __bread+0x2a/0x34
 [<c0177c31>] ext3_get_inode_loc+0xe1/0x140
 [<c0178619>] ext3_reserve_inode_write+0x1d/0x94
 [<c01786aa>] ext3_mark_inode_dirty+0x1a/0x34
 [<c0174f74>] ext3_new_inode+0x714/0x7f0
 [<c017ac95>] ext3_create+0x8d/0x180
 [<c014f46a>] vfs_create+0xae/0xd4
 [<c014f82d>] open_namei+0x1ad/0x4a8
 [<c0142073>] filp_open+0x3b/0x5c
 [<c014249b>] sys_open+0x37/0x70
 [<c0108cbf>] syscall_call+0x7/0xb

semaphore downed at fs/namei.c:1249.

1132:16 starved 7 secs by 1119:23 who last ran 7254 ticks ago.
sh            R C5D95E10 92758188  1119   1118                     (NOTLB)
Call Trace:
 [<c011677a>] io_schedule+0xe/0x18
 [<c0143bee>] __wait_on_buffer+0xa2/0xbc
 [<c0116e34>] autoremove_wake_function+0x0/0x38
 [<c0116e34>] autoremove_wake_function+0x0/0x38
 [<c0144c83>] __bread_slow+0x73/0x94
 [<c0144ebe>] __bread+0x2a/0x34
 [<c0177c31>] ext3_get_inode_loc+0xe1/0x140
 [<c0178619>] ext3_reserve_inode_write+0x1d/0x94
 [<c01786aa>] ext3_mark_inode_dirty+0x1a/0x34
 [<c0174f74>] ext3_new_inode+0x714/0x7f0
 [<c017ac95>] ext3_create+0x8d/0x180
 [<c014f46a>] vfs_create+0xae/0xd4
 [<c014f82d>] open_namei+0x1ad/0x4a8
 [<c0142073>] filp_open+0x3b/0x5c
 [<c014249b>] sys_open+0x37/0x70
 [<c0108cbf>] syscall_call+0x7/0xb

semaphore downed at fs/namei.c:1249.

1093:16 starved 7 secs by 1119:23 who last ran 7281 ticks ago.
sh            R C5D95E10 92758188  1119   1118                     (NOTLB)
Call Trace:
 [<c011677a>] io_schedule+0xe/0x18
 [<c0143bee>] __wait_on_buffer+0xa2/0xbc
 [<c0116e34>] autoremove_wake_function+0x0/0x38
 [<c0116e34>] autoremove_wake_function+0x0/0x38
 [<c0144c83>] __bread_slow+0x73/0x94
 [<c0144ebe>] __bread+0x2a/0x34
 [<c0177c31>] ext3_get_inode_loc+0xe1/0x140
 [<c0178619>] ext3_reserve_inode_write+0x1d/0x94
 [<c01786aa>] ext3_mark_inode_dirty+0x1a/0x34
 [<c0174f74>] ext3_new_inode+0x714/0x7f0
 [<c017ac95>] ext3_create+0x8d/0x180
 [<c014f46a>] vfs_create+0xae/0xd4
 [<c014f82d>] open_namei+0x1ad/0x4a8
 [<c0142073>] filp_open+0x3b/0x5c
 [<c014249b>] sys_open+0x37/0x70
 [<c0108cbf>] syscall_call+0x7/0xb

semaphore downed at fs/namei.c:1249.

1067:16 starved 7 secs by 1119:23 who last ran 7307 ticks ago.
sh            R C5D95E10 92758188  1119   1118                     (NOTLB)
Call Trace:
 [<c011677a>] io_schedule+0xe/0x18
 [<c0143bee>] __wait_on_buffer+0xa2/0xbc
 [<c0116e34>] autoremove_wake_function+0x0/0x38
 [<c0116e34>] autoremove_wake_function+0x0/0x38
 [<c0144c83>] __bread_slow+0x73/0x94
 [<c0144ebe>] __bread+0x2a/0x34
 [<c0177c31>] ext3_get_inode_loc+0xe1/0x140
 [<c0178619>] ext3_reserve_inode_write+0x1d/0x94
 [<c01786aa>] ext3_mark_inode_dirty+0x1a/0x34
 [<c0174f74>] ext3_new_inode+0x714/0x7f0
 [<c017ac95>] ext3_create+0x8d/0x180
 [<c014f46a>] vfs_create+0xae/0xd4
 [<c014f82d>] open_namei+0x1ad/0x4a8
 [<c0142073>] filp_open+0x3b/0x5c
 [<c014249b>] sys_open+0x37/0x70
 [<c0108cbf>] syscall_call+0x7/0xb

semaphore downed at fs/namei.c:1249.

1142:20 starved 3 secs by 1133:23 who last ran 7716 ticks ago.
semaphore downed at fs/namei.c:1249.

396:15 starved 4 secs by 1119:23 who last ran 28 ticks ago.
semaphore downed at fs/namei.c:1249.

1139:19 starved 4 secs by 1119:23 who last ran 42 ticks ago.
semaphore downed at fs/namei.c:1249.

301:-2 starved 5 secs by 1137:21 who last ran 5667 ticks ago.
sh            R 00000073 1546400  1137    396                1090 (NOTLB)
Call Trace:
 [<c0108c09>] need_resched+0x27/0x32
 [<c013007b>] __set_page_dirty_buffers+0xd3/0x114
 [<c0138314>] do_mmap_pgoff+0x14c/0x608
 [<c0139735>] mprotect_fixup+0x119/0x134
 [<c010e011>] old_mmap+0x115/0x164
 [<c0108cbf>] syscall_call+0x7/0xb

semaphore downed at arch/i386/kernel/sys_i386.c:58.

301:-2 starved 6 secs by 1137:21 who last ran 6672 ticks ago.
sh            R 00000073 1546400  1137    396                1090 (NOTLB)
Call Trace:
 [<c0108c09>] need_resched+0x27/0x32
 [<c013007b>] __set_page_dirty_buffers+0xd3/0x114
 [<c0138314>] do_mmap_pgoff+0x14c/0x608
 [<c0139735>] mprotect_fixup+0x119/0x134
 [<c010e011>] old_mmap+0x115/0x164
 [<c0108cbf>] syscall_call+0x7/0xb

semaphore downed at arch/i386/kernel/sys_i386.c:58.

301:-2 starved 7 secs by 1137:21 who last ran 7677 ticks ago.
sh            R 00000073 1546400  1137    396                1090 (NOTLB)
Call Trace:
 [<c0108c09>] need_resched+0x27/0x32
 [<c013007b>] __set_page_dirty_buffers+0xd3/0x114
 [<c0138314>] do_mmap_pgoff+0x14c/0x608
 [<c0139735>] mprotect_fixup+0x119/0x134
 [<c010e011>] old_mmap+0x115/0x164
 [<c0108cbf>] syscall_call+0x7/0xb

semaphore downed at arch/i386/kernel/sys_i386.c:58.

301:-2 starved 8 secs by 1137:21 who last ran 8682 ticks ago.
sh            R 00000073 1546400  1137    396                1090 (NOTLB)
Call Trace:
 [<c0108c09>] need_resched+0x27/0x32
 [<c013007b>] __set_page_dirty_buffers+0xd3/0x114
 [<c0138314>] do_mmap_pgoff+0x14c/0x608
 [<c0139735>] mprotect_fixup+0x119/0x134
 [<c010e011>] old_mmap+0x115/0x164
 [<c0108cbf>] syscall_call+0x7/0xb

semaphore downed at arch/i386/kernel/sys_i386.c:58.


[-- Attachment #3: diag.diff --]
[-- Type: application/octet-stream, Size: 13472 bytes --]

--- ./lib/rwsem.c.org	Mon Feb 10 19:37:57 2003
+++ ./lib/rwsem.c	Sun Apr 27 06:56:41 2003
@@ -15,6 +15,8 @@
 #define RWSEM_WAITING_FOR_WRITE	0x00000002
 };
 
+extern void show_task(struct task_struct *tsk);
+
 #if RWSEM_DEBUG
 #undef rwsemtrace
 void rwsemtrace(struct rw_semaphore *sem, const char *str)
@@ -126,6 +128,7 @@
 {
 	struct task_struct *tsk = current;
 	signed long count;
+	int ticks = 0;
 
 	set_task_state(tsk,TASK_UNINTERRUPTIBLE);
 
@@ -150,7 +153,17 @@
 	for (;;) {
 		if (!waiter->flags)
 			break;
-		schedule();
+		if (!schedule_timeout(1000) && sem->owner) {
+			struct task_struct *tsk = (struct task_struct *) sem->owner;
+			ticks += 1000;
+			printk(KERN_DEBUG "%d:%d starved %d secs by %d:%d who last ran %lu ticks ago.\n",
+				current->pid, current->prio - 100, ticks/HZ, tsk->pid,
+				tsk->prio - 100, jiffies - tsk->last_run);
+			if (ticks >= 5000)
+				show_task(sem->owner);
+			printk(KERN_DEBUG "semaphore downed at %s:%d.\n\n",
+					sem->file, sem->line);
+		}
 		set_task_state(tsk, TASK_UNINTERRUPTIBLE);
 	}
 
--- ./arch/i386/kernel/semaphore.c.org	Mon Feb 10 19:38:28 2003
+++ ./arch/i386/kernel/semaphore.c	Sun Apr 27 06:29:27 2003
@@ -58,6 +58,7 @@
 	struct task_struct *tsk = current;
 	DECLARE_WAITQUEUE(wait, tsk);
 	unsigned long flags;
+	int ticks = 0;
 
 	tsk->state = TASK_UNINTERRUPTIBLE;
 	spin_lock_irqsave(&sem->wait.lock, flags);
@@ -79,7 +80,17 @@
 		sem->sleepers = 1;	/* us - see -1 above */
 		spin_unlock_irqrestore(&sem->wait.lock, flags);
 
-		schedule();
+		if (!schedule_timeout(1000) && sem->owner) {
+			struct task_struct *tsk = (struct task_struct *) sem->owner;
+			ticks += 1000;
+			printk(KERN_DEBUG "%d:%d starved %d secs by %d:%d who last ran %lu ticks ago.\n",
+				current->pid, current->prio - 100, ticks/HZ, tsk->pid,
+				tsk->prio - 100, jiffies - tsk->last_run);
+			if (ticks >= 5000)
+				show_task(sem->owner);
+			printk(KERN_DEBUG "semaphore downed at %s:%d.\n\n",
+					sem->file, sem->line);
+		}
 
 		spin_lock_irqsave(&sem->wait.lock, flags);
 		tsk->state = TASK_UNINTERRUPTIBLE;
@@ -96,6 +107,7 @@
 	struct task_struct *tsk = current;
 	DECLARE_WAITQUEUE(wait, tsk);
 	unsigned long flags;
+	int ticks = 0;
 
 	tsk->state = TASK_INTERRUPTIBLE;
 	spin_lock_irqsave(&sem->wait.lock, flags);
@@ -132,7 +144,17 @@
 		sem->sleepers = 1;	/* us - see -1 above */
 		spin_unlock_irqrestore(&sem->wait.lock, flags);
 
-		schedule();
+		if (!schedule_timeout(1000) && sem->owner) {
+			struct task_struct *tsk = (struct task_struct *) sem->owner;
+			ticks += 1000;
+			printk(KERN_DEBUG "%d:%d starved %d secs by %d:%d who last ran %lu ticks ago.\n",
+				current->pid, current->prio - 100, ticks/HZ, tsk->pid,
+				tsk->prio - 100, jiffies - tsk->last_run);
+			if (ticks >= 5000)
+				show_task(sem->owner);
+			printk(KERN_DEBUG "semaphore downed at %s:%d.\n\n",
+					sem->file, sem->line);
+		}
 
 		spin_lock_irqsave(&sem->wait.lock, flags);
 		tsk->state = TASK_INTERRUPTIBLE;
--- ./include/linux/fs.h.org	Fri Apr 25 11:35:25 2003
+++ ./include/linux/fs.h	Fri Apr 25 12:29:03 2003
@@ -19,6 +19,7 @@
 #include <linux/cache.h>
 #include <linux/radix-tree.h>
 #include <linux/kobject.h>
+#include <linux/sched.h>
 #include <asm/atomic.h>
 
 struct iovec;
--- ./include/linux/rwsem.h.org	Mon Feb 10 19:38:17 2003
+++ ./include/linux/rwsem.h	Fri Apr 25 07:55:15 2003
@@ -38,18 +38,26 @@
 /*
  * lock for reading
  */
-static inline void down_read(struct rw_semaphore *sem)
+static inline void _down_read(struct rw_semaphore *sem)
 {
 	might_sleep();
 	rwsemtrace(sem,"Entering down_read");
 	__down_read(sem);
 	rwsemtrace(sem,"Leaving down_read");
 }
+#define down_read(sem)     \
+do {                       \
+	_down_read((sem));      \
+	current->sem_count++;   \
+	(sem)->owner = current; \
+	(sem)->file = __FILE__; \
+	(sem)->line = __LINE__; \
+} while (0)
 
 /*
  * trylock for reading -- returns 1 if successful, 0 if contention
  */
-static inline int down_read_trylock(struct rw_semaphore *sem)
+static inline int _down_read_trylock(struct rw_semaphore *sem)
 {
 	int ret;
 	rwsemtrace(sem,"Entering down_read_trylock");
@@ -57,22 +65,41 @@
 	rwsemtrace(sem,"Leaving down_read_trylock");
 	return ret;
 }
+#define down_read_trylock(sem)          \
+({                                      \
+	int _ret = _down_read_trylock((sem)); \
+	if (_ret) {                          \
+		current->sem_count++;             \
+		(sem)->owner = current;           \
+		(sem)->file = __FILE__;           \
+		(sem)->line = __LINE__;           \
+	}                                    \
+	_ret;                                \
+})
 
 /*
  * lock for writing
  */
-static inline void down_write(struct rw_semaphore *sem)
+static inline void _down_write(struct rw_semaphore *sem)
 {
 	might_sleep();
 	rwsemtrace(sem,"Entering down_write");
 	__down_write(sem);
 	rwsemtrace(sem,"Leaving down_write");
 }
+#define down_write(sem)    \
+do {                       \
+	_down_write((sem));     \
+	current->sem_count++;   \
+	(sem)->owner = current; \
+	(sem)->file = __FILE__; \
+	(sem)->line = __LINE__; \
+} while (0)
 
 /*
  * trylock for writing -- returns 1 if successful, 0 if contention
  */
-static inline int down_write_trylock(struct rw_semaphore *sem)
+static inline int _down_write_trylock(struct rw_semaphore *sem)
 {
 	int ret;
 	rwsemtrace(sem,"Entering down_write_trylock");
@@ -80,26 +107,49 @@
 	rwsemtrace(sem,"Leaving down_write_trylock");
 	return ret;
 }
+#define down_write_trylock(sem)           \
+({                                        \
+	int _ret = _down_write_trylock((sem)); \
+	if (_ret) {                            \
+		current->sem_count++;               \
+		(sem)->owner = current;             \
+		(sem)->file = __FILE__;             \
+		(sem)->line = __LINE__;             \
+	}                                      \
+	_ret;                                  \
+})
 
 /*
  * release a read lock
  */
-static inline void up_read(struct rw_semaphore *sem)
+static inline void _up_read(struct rw_semaphore *sem)
 {
 	rwsemtrace(sem,"Entering up_read");
 	__up_read(sem);
 	rwsemtrace(sem,"Leaving up_read");
 }
+#define up_read(sem)       \
+do {                       \
+	current->sem_count--;   \
+	(sem)->owner = NULL;    \
+	_up_read((sem));        \
+} while (0)
 
 /*
  * release a write lock
  */
-static inline void up_write(struct rw_semaphore *sem)
+static inline void _up_write(struct rw_semaphore *sem)
 {
 	rwsemtrace(sem,"Entering up_write");
 	__up_write(sem);
 	rwsemtrace(sem,"Leaving up_write");
 }
+#define up_write(sem)         \
+do {                          \
+	current->sem_count--;      \
+	(sem)->owner = NULL;       \
+	_up_write((sem));          \
+} while (0)
 
 /*
  * downgrade write lock to read lock
--- ./include/linux/sched.h.org	Fri Apr 25 06:24:33 2003
+++ ./include/linux/sched.h	Fri Apr 25 07:59:50 2003
@@ -167,6 +167,7 @@
 #define	MAX_SCHEDULE_TIMEOUT	LONG_MAX
 extern signed long FASTCALL(schedule_timeout(signed long timeout));
 asmlinkage void schedule(void);
+extern void show_task(task_t *p);
 
 struct namespace;
 
@@ -322,6 +323,7 @@
 	unsigned long ptrace;
 
 	int lock_depth;		/* Lock depth */
+	int sem_count;		/* NR semaphores held */
 
 	int prio, static_prio;
 	struct list_head run_list;
--- ./include/asm-i386/rwsem.h.org	Fri Apr 25 06:22:19 2003
+++ ./include/asm-i386/rwsem.h	Fri Apr 25 07:56:57 2003
@@ -64,6 +64,9 @@
 #if RWSEM_DEBUG
 	int			debug;
 #endif
+	void			*owner;
+	char			*file;
+	int			line;
 };
 
 /*
--- ./include/asm-i386/semaphore.h.org	Fri Apr 25 10:50:46 2003
+++ ./include/asm-i386/semaphore.h	Fri Apr 25 11:57:28 2003
@@ -48,6 +48,9 @@
 #ifdef WAITQUEUE_DEBUG
 	long __magic;
 #endif
+	void			*owner;
+	char			*file;
+	int			line;
 };
 
 #ifdef WAITQUEUE_DEBUG
@@ -111,7 +114,7 @@
  * "__down_failed" is a special asm handler that calls the C
  * routine that actually waits. See arch/i386/kernel/semaphore.c
  */
-static inline void down(struct semaphore * sem)
+static inline void _down(struct semaphore * sem)
 {
 #ifdef WAITQUEUE_DEBUG
 	CHECK_MAGIC(sem->__magic);
@@ -130,12 +133,20 @@
 		:"c" (sem)
 		:"memory");
 }
+#define down(sem)          \
+do {                       \
+	_down((sem));           \
+	current->sem_count++;   \
+	(sem)->owner = current; \
+	(sem)->file = __FILE__; \
+	(sem)->line = __LINE__; \
+} while (0)
 
 /*
  * Interruptible try to acquire a semaphore.  If we obtained
  * it, return zero.  If we were interrupted, returns -EINTR
  */
-static inline int down_interruptible(struct semaphore * sem)
+static inline int _down_interruptible(struct semaphore * sem)
 {
 	int result;
 
@@ -158,12 +169,23 @@
 		:"memory");
 	return result;
 }
+#define down_interruptible(sem)           \
+({                                        \
+	int _ret = _down_interruptible((sem)); \
+	if (!_ret) {                           \
+		(sem)->owner = current;             \
+		current->sem_count++;               \
+		(sem)->file = __FILE__;             \
+		(sem)->line = __LINE__;             \
+	}                                      \
+	_ret;                                  \
+})
 
 /*
  * Non-blockingly attempt to down() a semaphore.
  * Returns zero if we acquired it
  */
-static inline int down_trylock(struct semaphore * sem)
+static inline int _down_trylock(struct semaphore * sem)
 {
 	int result;
 
@@ -186,6 +208,17 @@
 		:"memory");
 	return result;
 }
+#define down_trylock(sem)           \
+({                                  \
+	int _ret = _down_trylock((sem)); \
+	if (!_ret) {                     \
+		(sem)->owner = current;       \
+		current->sem_count++;         \
+		(sem)->file = __FILE__;       \
+		(sem)->line = __LINE__;       \
+	}                                \
+	_ret;                            \
+})
 
 /*
  * Note! This is subtle. We jump to wake people up only if
@@ -193,7 +226,7 @@
  * The default case (no contention) will result in NO
  * jumps for both down() and up().
  */
-static inline void up(struct semaphore * sem)
+static inline void _up(struct semaphore * sem)
 {
 #ifdef WAITQUEUE_DEBUG
 	CHECK_MAGIC(sem->__magic);
@@ -212,6 +245,12 @@
 		:"c" (sem)
 		:"memory");
 }
+#define up(sem)          \
+do {                     \
+	current->sem_count--; \
+	(sem)->owner = NULL;  \
+	_up((sem));           \
+} while (0)
 
 #endif
 #endif
--- ./fs/fat/misc.c.org	Fri Apr 25 12:24:19 2003
+++ ./fs/fat/misc.c	Fri Apr 25 12:24:33 2003
@@ -6,6 +6,7 @@
  *		 and date_dos2unix for date==0 by Igor Zhbanov(bsg@uniyar.ac.ru)
  */
 
+#include <linux/sched.h>
 #include <linux/fs.h>
 #include <linux/msdos_fs.h>
 #include <linux/buffer_head.h>
--- ./kernel/fork.c.org	Fri Apr 25 06:24:34 2003
+++ ./kernel/fork.c	Fri Apr 25 07:41:35 2003
@@ -854,6 +854,7 @@
 	p->cutime = p->cstime = 0;
 	p->array = NULL;
 	p->lock_depth = -1;		/* -1 = no lock */
+	p->sem_count = 0;
 	p->start_time = get_jiffies_64();
 	p->security = NULL;
 
--- ./kernel/printk.c.org	Fri Apr 25 06:24:34 2003
+++ ./kernel/printk.c	Fri Apr 25 08:04:29 2003
@@ -510,8 +510,10 @@
 	console_may_schedule = 0;
 	up(&console_sem);
 	spin_unlock_irqrestore(&logbuf_lock, flags);
+#if 0
 	if (wake_klogd && !oops_in_progress && waitqueue_active(&log_wait))
 		wake_up_interruptible(&log_wait);
+#endif
 }
 
 /** console_conditional_schedule - yield the CPU if required
--- ./kernel/sched.c.org	Fri Apr 25 06:24:34 2003
+++ ./kernel/sched.c	Sun Apr 27 07:00:04 2003
@@ -75,6 +75,8 @@
 #define STARVATION_LIMIT	(10*HZ)
 #define NODE_THRESHOLD		125
 
+#define TIMESLICE_GRANULARITY	(HZ/20 ?: 1)
+
 /*
  * If a task is 'interactive' then we reinsert it in the active
  * array after it has expired its current timeslice. (it will not
@@ -1248,6 +1250,27 @@
 			enqueue_task(p, rq->expired);
 		} else
 			enqueue_task(p, rq->active);
+	} else {
+		/*
+		 * Prevent a too long timeslice allowing a task to monopolize
+		 * the CPU. We do this by splitting up the timeslice into
+		 * smaller pieces.
+		 *
+		 * Note: this does not mean the task's timeslices expire or
+		 * get lost in any way, they just might be preempted by
+		 * another task of equal priority. (one with higher
+		 * priority would have preempted this task already.) We
+		 * requeue this task to the end of the list on this priority
+		 * level, which is in essence a round-robin of tasks with
+		 * equal priority.
+		 */
+		if (0 && !(p->time_slice % TIMESLICE_GRANULARITY) &&
+			       		(p->array == rq->active)) {
+			dequeue_task(p, rq->active);
+			set_tsk_need_resched(p);
+			p->prio = effective_prio(p);
+			enqueue_task(p, rq->active);
+		}
 	}
 out:
 	spin_unlock(&rq->lock);
@@ -1993,6 +2016,10 @@
 	if (likely(!rt_task(current))) {
 		dequeue_task(current, array);
 		enqueue_task(current, rq->expired);
+		if (current->sem_count) {
+			printk(KERN_DEBUG "%d yielded holding sem.\n", current->pid);
+			show_task(current);
+		}
 	} else {
 		list_del(&current->run_list);
 		list_add_tail(&current->run_list, array->queue + current->prio);
@@ -2155,7 +2182,7 @@
 	return list_entry(p->sibling.next,struct task_struct,sibling);
 }
 
-static void show_task(task_t * p)
+void show_task(task_t * p)
 {
 	unsigned long free = 0;
 	task_t *relative;
--- ./kernel/ksyms.c.org	Fri Apr 25 06:24:34 2003
+++ ./kernel/ksyms.c	Fri Apr 25 07:49:02 2003
@@ -466,6 +466,7 @@
 EXPORT_SYMBOL(interruptible_sleep_on);
 EXPORT_SYMBOL(interruptible_sleep_on_timeout);
 EXPORT_SYMBOL(schedule);
+EXPORT_SYMBOL(show_task);
 #ifdef CONFIG_PREEMPT
 EXPORT_SYMBOL(preempt_schedule);
 #endif

[-- Attachment #4: log2.txt --]
[-- Type: text/plain, Size: 1845 bytes --]

29 yielded holding sem.
kjournald     R current  4182741476    29      1           123    28 (L-TLB)
Call Trace:
 [<c01348e9>] shrink_cache+0x229/0x284
 [<c011985a>] release_console_sem+0x4e/0xa0
 [<c011977f>] printk+0x147/0x178
 [<c011977f>] printk+0x147/0x178
 [<c012bb9e>] __print_symbol+0x106/0x113
 [<c012bb9e>] __print_symbol+0x106/0x113
 [<c012bb9e>] __print_symbol+0x106/0x113
 [<c012728b>] kernel_text_address+0x2f/0x3b
 [<c01092ac>] show_trace+0x3c/0x8c
 [<c010931a>] show_trace_task+0x1e/0x24
 [<c0116f23>] show_task+0x1cb/0x1d4
 [<c0116b45>] sys_sched_yield+0x95/0xe4
 [<c0116bc7>] yield+0x17/0x1c
 [<c012fb08>] __alloc_pages+0x218/0x254
 [<c012c810>] find_or_create_page+0x3c/0xac
 [<c0145699>] grow_dev_page+0x21/0x128
 [<c0145831>] __getblk_slow+0x91/0xfc
 [<c0145be2>] __getblk+0x2e/0x38
 [<c0187bb9>] journal_get_descriptor_buffer+0x35/0x54
 [<c0184cb9>] journal_commit_transaction+0x675/0x1253
 [<c0115ea2>] schedule+0x28e/0x338
 [<c0187547>] kjournald+0xff/0x1e8
 [<c0187448>] kjournald+0x0/0x1e8
 [<c0187430>] commit_timeout+0x0/0x10
 [<c0107111>] kernel_thread_helper+0x5/0xc

29 yielded holding sem.
kjournald     R current  4182741476    29      1           123    28 (L-TLB)
Call Trace:
 [<c010931a>] show_trace_task+0x1e/0x24
 [<c0116f23>] show_task+0x1cb/0x1d4
 [<c0116b45>] sys_sched_yield+0x95/0xe4
 [<c0116bc7>] yield+0x17/0x1c
 [<c012fb08>] __alloc_pages+0x218/0x254
 [<c012c810>] find_or_create_page+0x3c/0xac
 [<c0145699>] grow_dev_page+0x21/0x128
 [<c0145831>] __getblk_slow+0x91/0xfc
 [<c0145be2>] __getblk+0x2e/0x38
 [<c0187bb9>] journal_get_descriptor_buffer+0x35/0x54
 [<c0184cb9>] journal_commit_transaction+0x675/0x1253
 [<c0115ea2>] schedule+0x28e/0x338
 [<c0187547>] kjournald+0xff/0x1e8
 [<c0187448>] kjournald+0x0/0x1e8
 [<c0187430>] commit_timeout+0x0/0x10
 [<c0107111>] kernel_thread_helper+0x5/0xc

