* RE: PROBLEM: possible processes leak
From: Andrew Volkov @ 2003-12-08 19:34 UTC (permalink / raw)
To: William Lee Irwin III, Linus Torvalds; +Cc: linux-kernel
Yeah, really magic; sorry for the trouble.
Andrey
>
> On Mon, 8 Dec 2003, William Lee Irwin III wrote:
> >> Heh, no wonder everyone wants to get rid of the things.
>
> On Mon, Dec 08, 2003 at 11:09:41AM -0800, Linus Torvalds wrote:
> > No, they are all correct. No bug here, move along folks, nothing to see..
> > Linus
>
> Looks like I missed that bit of preempt magic the first time around
> the need_resched: path in entry.S. Easy enough to drop this one.
>
>
> -- wli
>
* Re: PROBLEM: possible processes leak
From: William Lee Irwin III @ 2003-12-08 19:23 UTC (permalink / raw)
To: Linus Torvalds; +Cc: Andrew Volkov, linux-kernel
On Mon, 8 Dec 2003, William Lee Irwin III wrote:
>> Heh, no wonder everyone wants to get rid of the things.
On Mon, Dec 08, 2003 at 11:09:41AM -0800, Linus Torvalds wrote:
> No, they are all correct. No bug here, move along folks, nothing to see..
> Linus
Looks like I missed that bit of preempt magic the first time around
the need_resched: path in entry.S. Easy enough to drop this one.
-- wli
* Re: PROBLEM: possible processes leak
From: Linus Torvalds @ 2003-12-08 19:09 UTC (permalink / raw)
To: William Lee Irwin III; +Cc: Andrew Volkov, linux-kernel
On Mon, 8 Dec 2003, William Lee Irwin III wrote:
> On Mon, Dec 08, 2003 at 09:45:17PM +0300, Andrew Volkov wrote:
> > Yes.
> > And the same bug is in kernel/sched.c, in ALL *_sleep_on functions.
> > Andrey
>
> Heh, no wonder everyone wants to get rid of the things.
No, they are all correct. No bug here, move along folks, nothing to see..
Linus
* Re: PROBLEM: possible processes leak
From: Linus Torvalds @ 2003-12-08 19:08 UTC (permalink / raw)
To: Andrew Volkov; +Cc: linux-kernel
On Mon, 8 Dec 2003, Andrew Volkov wrote:
>
> In all kernels (up to 2.6-test11), the following sequence of code
> in the __down/__down_interruptible functions
> (arch/i386/kernel/semaphore.c) may cause processes or threads to leak.
Nope. Quite the reverse, in fact. Having the "state" setting above the
wait-queue is usually a good idea, since it means that there are fewer
races to worry about.
> |-----tsk->state = TASK_UNINTERRUPTIBLE; <----- BUG:
> | -- If schedule() from kernel/sched.c is called
> | at this point, due to time-slice expiry,
> | then the current task will be removed from the run queue
> | but not added to any wait queue, and hence
> | will never run again. --
No.
Kernel preemption is special: it preempts the process WITHOUT
CHANGING its state. See the PREEMPT_ACTIVE test in schedule(). The
process will still be left on the run queue, and it will be run again,
even if it is marked sleeping.
Linus
* Re: PROBLEM: possible processes leak
From: William Lee Irwin III @ 2003-12-08 18:51 UTC (permalink / raw)
To: Andrew Volkov; +Cc: linux-kernel
On Mon, Dec 08, 2003 at 09:45:17PM +0300, Andrew Volkov wrote:
> Yes.
> And the same bug is in kernel/sched.c, in ALL *_sleep_on functions.
> Andrey
Heh, no wonder everyone wants to get rid of the things.
-- wli
* RE: PROBLEM: possible processes leak
From: Andrew Volkov @ 2003-12-08 18:45 UTC (permalink / raw)
To: William Lee Irwin III; +Cc: linux-kernel
Yes.
And the same bug is in kernel/sched.c, in ALL *_sleep_on functions.
Andrey
==========================================================
--- kernel/sched.c.old 2003-12-08 21:39:08.000000000 +0300
+++ kernel/sched.c 2003-12-08 21:40:19.000000000 +0300
@@ -819,10 +819,8 @@
void interruptible_sleep_on(wait_queue_head_t *q)
{
SLEEP_ON_VAR
-
- current->state = TASK_INTERRUPTIBLE;
-
SLEEP_ON_HEAD
+ current->state = TASK_INTERRUPTIBLE;
schedule();
SLEEP_ON_TAIL
}
@@ -831,9 +829,8 @@
{
SLEEP_ON_VAR
- current->state = TASK_INTERRUPTIBLE;
-
SLEEP_ON_HEAD
+ current->state = TASK_INTERRUPTIBLE;
timeout = schedule_timeout(timeout);
SLEEP_ON_TAIL
@@ -844,9 +841,8 @@
{
SLEEP_ON_VAR
- current->state = TASK_UNINTERRUPTIBLE;
-
SLEEP_ON_HEAD
+ current->state = TASK_UNINTERRUPTIBLE;
schedule();
SLEEP_ON_TAIL
}
@@ -855,9 +851,8 @@
{
SLEEP_ON_VAR
- current->state = TASK_UNINTERRUPTIBLE;
-
SLEEP_ON_HEAD
+ current->state = TASK_UNINTERRUPTIBLE;
timeout = schedule_timeout(timeout);
SLEEP_ON_TAIL
* Re: PROBLEM: possible processes leak
From: William Lee Irwin III @ 2003-12-08 18:25 UTC (permalink / raw)
To: Andrew Volkov; +Cc: linux-kernel
On Mon, Dec 08, 2003 at 09:01:40PM +0300, Andrew Volkov wrote:
> In all kernels (up to 2.6-test11), the following sequence of code
> in the __down/__down_interruptible functions
> (arch/i386/kernel/semaphore.c) may cause processes or threads to leak.
Something like this?
-- wli
===== arch/alpha/kernel/semaphore.c 1.5 vs edited =====
--- 1.5/arch/alpha/kernel/semaphore.c Thu Apr 3 14:49:57 2003
+++ edited/arch/alpha/kernel/semaphore.c Mon Dec 8 10:19:08 2003
@@ -62,9 +62,9 @@
current->comm, current->pid, sem);
#endif
+ add_wait_queue_exclusive(&sem->wait, &wait);
current->state = TASK_UNINTERRUPTIBLE;
wmb();
- add_wait_queue_exclusive(&sem->wait, &wait);
/* At this point we know that sem->count is negative. In order
to avoid racing with __up, we must check for wakeup before
@@ -94,8 +94,8 @@
set_task_state(current, TASK_UNINTERRUPTIBLE);
}
- remove_wait_queue(&sem->wait, &wait);
current->state = TASK_RUNNING;
+ remove_wait_queue(&sem->wait, &wait);
#ifdef CONFIG_DEBUG_SEMAPHORE
printk("%s(%d): down acquired(%p)\n",
@@ -114,9 +114,9 @@
current->comm, current->pid, sem);
#endif
+ add_wait_queue_exclusive(&sem->wait, &wait);
current->state = TASK_INTERRUPTIBLE;
wmb();
- add_wait_queue_exclusive(&sem->wait, &wait);
while (1) {
long tmp, tmp2, tmp3;
@@ -181,8 +181,8 @@
set_task_state(current, TASK_INTERRUPTIBLE);
}
- remove_wait_queue(&sem->wait, &wait);
current->state = TASK_RUNNING;
+ remove_wait_queue(&sem->wait, &wait);
wake_up(&sem->wait);
#ifdef CONFIG_DEBUG_SEMAPHORE
===== arch/arm/kernel/semaphore.c 1.8 vs edited =====
--- 1.8/arch/arm/kernel/semaphore.c Sat Sep 20 02:38:17 2003
+++ edited/arch/arm/kernel/semaphore.c Mon Dec 8 10:19:27 2003
@@ -58,8 +58,8 @@
{
struct task_struct *tsk = current;
DECLARE_WAITQUEUE(wait, tsk);
- tsk->state = TASK_UNINTERRUPTIBLE;
add_wait_queue_exclusive(&sem->wait, &wait);
+ tsk->state = TASK_UNINTERRUPTIBLE;
spin_lock_irq(&semaphore_lock);
sem->sleepers++;
@@ -82,8 +82,8 @@
spin_lock_irq(&semaphore_lock);
}
spin_unlock_irq(&semaphore_lock);
- remove_wait_queue(&sem->wait, &wait);
tsk->state = TASK_RUNNING;
+ remove_wait_queue(&sem->wait, &wait);
wake_up(&sem->wait);
}
@@ -92,8 +92,8 @@
int retval = 0;
struct task_struct *tsk = current;
DECLARE_WAITQUEUE(wait, tsk);
- tsk->state = TASK_INTERRUPTIBLE;
add_wait_queue_exclusive(&sem->wait, &wait);
+ tsk->state = TASK_INTERRUPTIBLE;
spin_lock_irq(&semaphore_lock);
sem->sleepers ++;
===== arch/arm26/kernel/semaphore.c 1.1 vs edited =====
--- 1.1/arch/arm26/kernel/semaphore.c Wed Jun 4 04:15:45 2003
+++ edited/arch/arm26/kernel/semaphore.c Mon Dec 8 10:19:41 2003
@@ -60,8 +60,8 @@
{
struct task_struct *tsk = current;
DECLARE_WAITQUEUE(wait, tsk);
- tsk->state = TASK_UNINTERRUPTIBLE;
add_wait_queue_exclusive(&sem->wait, &wait);
+ tsk->state = TASK_UNINTERRUPTIBLE;
spin_lock_irq(&semaphore_lock);
sem->sleepers++;
@@ -84,8 +84,8 @@
spin_lock_irq(&semaphore_lock);
}
spin_unlock_irq(&semaphore_lock);
- remove_wait_queue(&sem->wait, &wait);
tsk->state = TASK_RUNNING;
+ remove_wait_queue(&sem->wait, &wait);
wake_up(&sem->wait);
}
@@ -94,8 +94,8 @@
int retval = 0;
struct task_struct *tsk = current;
DECLARE_WAITQUEUE(wait, tsk);
- tsk->state = TASK_INTERRUPTIBLE;
add_wait_queue_exclusive(&sem->wait, &wait);
+ tsk->state = TASK_INTERRUPTIBLE;
spin_lock_irq(&semaphore_lock);
sem->sleepers ++;
===== arch/cris/kernel/semaphore.c 1.2 vs edited =====
--- 1.2/arch/cris/kernel/semaphore.c Mon Feb 4 23:39:23 2002
+++ edited/arch/cris/kernel/semaphore.c Mon Dec 8 10:17:51 2003
@@ -68,8 +68,8 @@
#define DOWN_HEAD(task_state) \
\
\
- tsk->state = (task_state); \
add_wait_queue(&sem->wait, &wait); \
+ tsk->state = (task_state); \
\
/* \
* Ok, we're set up. sem->count is known to be less than zero \
@@ -91,8 +91,8 @@
#define DOWN_TAIL(task_state) \
tsk->state = (task_state); \
} \
- tsk->state = TASK_RUNNING; \
remove_wait_queue(&sem->wait, &wait);
+ tsk->state = TASK_RUNNING; \
void __down(struct semaphore * sem)
{
===== arch/h8300/kernel/semaphore.c 1.1 vs edited =====
--- 1.1/arch/h8300/kernel/semaphore.c Sun Feb 16 16:01:58 2003
+++ edited/arch/h8300/kernel/semaphore.c Mon Dec 8 10:18:07 2003
@@ -69,8 +69,8 @@
#define DOWN_HEAD(task_state) \
\
\
- current->state = (task_state); \
add_wait_queue(&sem->wait, &wait); \
+ current->state = (task_state); \
\
/* \
* Ok, we're set up. sem->count is known to be less than zero \
===== arch/i386/kernel/semaphore.c 1.8 vs edited =====
--- 1.8/arch/i386/kernel/semaphore.c Thu Nov 21 09:55:02 2002
+++ edited/arch/i386/kernel/semaphore.c Mon Dec 8 10:18:43 2003
@@ -59,9 +59,9 @@
DECLARE_WAITQUEUE(wait, tsk);
unsigned long flags;
- tsk->state = TASK_UNINTERRUPTIBLE;
spin_lock_irqsave(&sem->wait.lock, flags);
add_wait_queue_exclusive_locked(&sem->wait, &wait);
+ tsk->state = TASK_UNINTERRUPTIBLE;
sem->sleepers++;
for (;;) {
@@ -84,10 +84,10 @@
spin_lock_irqsave(&sem->wait.lock, flags);
tsk->state = TASK_UNINTERRUPTIBLE;
}
+ tsk->state = TASK_RUNNING;
remove_wait_queue_locked(&sem->wait, &wait);
wake_up_locked(&sem->wait);
spin_unlock_irqrestore(&sem->wait.lock, flags);
- tsk->state = TASK_RUNNING;
}
int __down_interruptible(struct semaphore * sem)
@@ -97,9 +97,9 @@
DECLARE_WAITQUEUE(wait, tsk);
unsigned long flags;
- tsk->state = TASK_INTERRUPTIBLE;
spin_lock_irqsave(&sem->wait.lock, flags);
add_wait_queue_exclusive_locked(&sem->wait, &wait);
+ tsk->state = TASK_INTERRUPTIBLE;
sem->sleepers++;
for (;;) {
@@ -137,11 +137,11 @@
spin_lock_irqsave(&sem->wait.lock, flags);
tsk->state = TASK_INTERRUPTIBLE;
}
+ tsk->state = TASK_RUNNING;
remove_wait_queue_locked(&sem->wait, &wait);
wake_up_locked(&sem->wait);
spin_unlock_irqrestore(&sem->wait.lock, flags);
- tsk->state = TASK_RUNNING;
return retval;
}
===== arch/ia64/kernel/semaphore.c 1.4 vs edited =====
--- 1.4/arch/ia64/kernel/semaphore.c Sat Aug 31 06:16:01 2002
+++ edited/arch/ia64/kernel/semaphore.c Mon Dec 8 10:20:25 2003
@@ -51,9 +51,9 @@
DECLARE_WAITQUEUE(wait, tsk);
unsigned long flags;
- tsk->state = TASK_UNINTERRUPTIBLE;
spin_lock_irqsave(&sem->wait.lock, flags);
add_wait_queue_exclusive_locked(&sem->wait, &wait);
+ tsk->state = TASK_UNINTERRUPTIBLE;
sem->sleepers++;
for (;;) {
@@ -76,10 +76,10 @@
spin_lock_irqsave(&sem->wait.lock, flags);
tsk->state = TASK_UNINTERRUPTIBLE;
}
+ tsk->state = TASK_RUNNING;
remove_wait_queue_locked(&sem->wait, &wait);
wake_up_locked(&sem->wait);
spin_unlock_irqrestore(&sem->wait.lock, flags);
- tsk->state = TASK_RUNNING;
}
int
@@ -90,9 +90,9 @@
DECLARE_WAITQUEUE(wait, tsk);
unsigned long flags;
- tsk->state = TASK_INTERRUPTIBLE;
spin_lock_irqsave(&sem->wait.lock, flags);
add_wait_queue_exclusive_locked(&sem->wait, &wait);
+ tsk->state = TASK_INTERRUPTIBLE;
sem->sleepers ++;
for (;;) {
@@ -130,11 +130,11 @@
spin_lock_irqsave(&sem->wait.lock, flags);
tsk->state = TASK_INTERRUPTIBLE;
}
+ tsk->state = TASK_RUNNING;
remove_wait_queue_locked(&sem->wait, &wait);
wake_up_locked(&sem->wait);
spin_unlock_irqrestore(&sem->wait.lock, flags);
- tsk->state = TASK_RUNNING;
return retval;
}
===== arch/m68k/kernel/semaphore.c 1.2 vs edited =====
--- 1.2/arch/m68k/kernel/semaphore.c Mon Feb 4 23:39:24 2002
+++ edited/arch/m68k/kernel/semaphore.c Mon Dec 8 10:21:00 2003
@@ -69,8 +69,8 @@
#define DOWN_HEAD(task_state) \
\
\
- current->state = (task_state); \
add_wait_queue(&sem->wait, &wait); \
+ current->state = (task_state); \
\
/* \
* Ok, we're set up. sem->count is known to be less than zero \
===== arch/m68knommu/kernel/semaphore.c 1.1 vs edited =====
--- 1.1/arch/m68knommu/kernel/semaphore.c Fri Nov 1 07:19:55 2002
+++ edited/arch/m68knommu/kernel/semaphore.c Mon Dec 8 10:21:10 2003
@@ -70,8 +70,8 @@
#define DOWN_HEAD(task_state) \
\
\
- current->state = (task_state); \
add_wait_queue(&sem->wait, &wait); \
+ current->state = (task_state); \
\
/* \
* Ok, we're set up. sem->count is known to be less than zero \
===== arch/mips/kernel/semaphore.c 1.2 vs edited =====
--- 1.2/arch/mips/kernel/semaphore.c Mon Feb 4 23:39:24 2002
+++ edited/arch/mips/kernel/semaphore.c Mon Dec 8 10:21:19 2003
@@ -68,8 +68,8 @@
#define DOWN_HEAD(task_state) \
\
\
- tsk->state = (task_state); \
add_wait_queue(&sem->wait, &wait); \
+ tsk->state = (task_state); \
\
/* \
* Ok, we're set up. sem->count is known to be less than zero \
===== arch/parisc/kernel/semaphore.c 1.3 vs edited =====
--- 1.3/arch/parisc/kernel/semaphore.c Sun Jul 21 14:20:31 2002
+++ edited/arch/parisc/kernel/semaphore.c Mon Dec 8 10:21:52 2003
@@ -49,8 +49,8 @@
spin_lock_irq(&sem->sentry); \
if (wakers(sem->count) == 0 && ret == 0) \
goto lost_race; /* Someone stole our wakeup */ \
- __remove_wait_queue(&sem->wait, &wait); \
current->state = TASK_RUNNING; \
+ __remove_wait_queue(&sem->wait, &wait); \
if (!waitqueue_active(&sem->wait) && (sem->count < 0)) \
sem->count = wakers(sem->count);
===== arch/ppc/kernel/semaphore.c 1.7 vs edited =====
--- 1.7/arch/ppc/kernel/semaphore.c Sun Sep 15 21:51:59 2002
+++ edited/arch/ppc/kernel/semaphore.c Mon Dec 8 10:22:15 2003
@@ -74,8 +74,8 @@
struct task_struct *tsk = current;
DECLARE_WAITQUEUE(wait, tsk);
- tsk->state = TASK_UNINTERRUPTIBLE;
add_wait_queue_exclusive(&sem->wait, &wait);
+ tsk->state = TASK_UNINTERRUPTIBLE;
smp_wmb();
/*
@@ -88,8 +88,8 @@
schedule();
tsk->state = TASK_UNINTERRUPTIBLE;
}
- remove_wait_queue(&sem->wait, &wait);
tsk->state = TASK_RUNNING;
+ remove_wait_queue(&sem->wait, &wait);
/*
* If there are any more sleepers, wake one of them up so
@@ -105,8 +105,8 @@
struct task_struct *tsk = current;
DECLARE_WAITQUEUE(wait, tsk);
- tsk->state = TASK_INTERRUPTIBLE;
add_wait_queue_exclusive(&sem->wait, &wait);
+ tsk->state = TASK_INTERRUPTIBLE;
smp_wmb();
while (__sem_update_count(sem, -1) <= 0) {
===== arch/ppc64/kernel/semaphore.c 1.3 vs edited =====
--- 1.3/arch/ppc64/kernel/semaphore.c Wed Aug 27 21:23:26 2003
+++ edited/arch/ppc64/kernel/semaphore.c Mon Dec 8 10:22:33 2003
@@ -75,8 +75,8 @@
struct task_struct *tsk = current;
DECLARE_WAITQUEUE(wait, tsk);
- __set_task_state(tsk, TASK_UNINTERRUPTIBLE);
add_wait_queue_exclusive(&sem->wait, &wait);
+ __set_task_state(tsk, TASK_UNINTERRUPTIBLE);
/*
* Try to get the semaphore. If the count is > 0, then we've
@@ -88,8 +88,8 @@
schedule();
set_task_state(tsk, TASK_UNINTERRUPTIBLE);
}
- remove_wait_queue(&sem->wait, &wait);
__set_task_state(tsk, TASK_RUNNING);
+ remove_wait_queue(&sem->wait, &wait);
/*
* If there are any more sleepers, wake one of them up so
@@ -105,8 +105,8 @@
struct task_struct *tsk = current;
DECLARE_WAITQUEUE(wait, tsk);
- __set_task_state(tsk, TASK_INTERRUPTIBLE);
add_wait_queue_exclusive(&sem->wait, &wait);
+ __set_task_state(tsk, TASK_INTERRUPTIBLE);
while (__sem_update_count(sem, -1) <= 0) {
if (signal_pending(current)) {
@@ -122,8 +122,8 @@
schedule();
set_task_state(tsk, TASK_INTERRUPTIBLE);
}
- remove_wait_queue(&sem->wait, &wait);
__set_task_state(tsk, TASK_RUNNING);
+ remove_wait_queue(&sem->wait, &wait);
wake_up(&sem->wait);
return retval;
===== arch/s390/kernel/semaphore.c 1.4 vs edited =====
--- 1.4/arch/s390/kernel/semaphore.c Wed Jun 5 10:43:26 2002
+++ edited/arch/s390/kernel/semaphore.c Mon Dec 8 10:22:46 2003
@@ -64,14 +64,14 @@
struct task_struct *tsk = current;
DECLARE_WAITQUEUE(wait, tsk);
- tsk->state = TASK_UNINTERRUPTIBLE;
add_wait_queue_exclusive(&sem->wait, &wait);
+ tsk->state = TASK_UNINTERRUPTIBLE;
while (__sem_update_count(sem, -1) <= 0) {
schedule();
tsk->state = TASK_UNINTERRUPTIBLE;
}
- remove_wait_queue(&sem->wait, &wait);
tsk->state = TASK_RUNNING;
+ remove_wait_queue(&sem->wait, &wait);
wake_up(&sem->wait);
}
@@ -87,8 +87,8 @@
struct task_struct *tsk = current;
DECLARE_WAITQUEUE(wait, tsk);
- tsk->state = TASK_INTERRUPTIBLE;
add_wait_queue_exclusive(&sem->wait, &wait);
+ tsk->state = TASK_INTERRUPTIBLE;
while (__sem_update_count(sem, -1) <= 0) {
if (signal_pending(current)) {
__sem_update_count(sem, 0);
@@ -98,8 +98,8 @@
schedule();
tsk->state = TASK_INTERRUPTIBLE;
}
- remove_wait_queue(&sem->wait, &wait);
tsk->state = TASK_RUNNING;
+ remove_wait_queue(&sem->wait, &wait);
wake_up(&sem->wait);
return retval;
}
===== arch/sh/kernel/semaphore.c 1.3 vs edited =====
--- 1.3/arch/sh/kernel/semaphore.c Wed Apr 10 00:09:38 2002
+++ edited/arch/sh/kernel/semaphore.c Mon Dec 8 10:22:54 2003
@@ -77,8 +77,8 @@
#define DOWN_HEAD(task_state) \
\
\
- tsk->state = (task_state); \
add_wait_queue(&sem->wait, &wait); \
+ tsk->state = (task_state); \
\
/* \
* Ok, we're set up. sem->count is known to be less than zero \
===== arch/sparc/kernel/semaphore.c 1.5 vs edited =====
--- 1.5/arch/sparc/kernel/semaphore.c Tue Jul 16 17:12:12 2002
+++ edited/arch/sparc/kernel/semaphore.c Mon Dec 8 10:23:11 2003
@@ -49,8 +49,8 @@
{
struct task_struct *tsk = current;
DECLARE_WAITQUEUE(wait, tsk);
- tsk->state = TASK_UNINTERRUPTIBLE;
add_wait_queue_exclusive(&sem->wait, &wait);
+ tsk->state = TASK_UNINTERRUPTIBLE;
spin_lock_irq(&semaphore_lock);
sem->sleepers++;
@@ -73,8 +73,8 @@
spin_lock_irq(&semaphore_lock);
}
spin_unlock_irq(&semaphore_lock);
- remove_wait_queue(&sem->wait, &wait);
tsk->state = TASK_RUNNING;
+ remove_wait_queue(&sem->wait, &wait);
wake_up(&sem->wait);
}
===== arch/sparc64/kernel/semaphore.c 1.7 vs edited =====
--- 1.7/arch/sparc64/kernel/semaphore.c Wed Sep 3 23:40:12 2003
+++ edited/arch/sparc64/kernel/semaphore.c Mon Dec 8 10:23:37 2003
@@ -95,15 +95,15 @@
struct task_struct *tsk = current;
DECLARE_WAITQUEUE(wait, tsk);
- tsk->state = TASK_UNINTERRUPTIBLE;
add_wait_queue_exclusive(&sem->wait, &wait);
+ tsk->state = TASK_UNINTERRUPTIBLE;
while (__sem_update_count(sem, -1) <= 0) {
schedule();
tsk->state = TASK_UNINTERRUPTIBLE;
}
- remove_wait_queue(&sem->wait, &wait);
tsk->state = TASK_RUNNING;
+ remove_wait_queue(&sem->wait, &wait);
wake_up(&sem->wait);
}
@@ -198,8 +198,8 @@
struct task_struct *tsk = current;
DECLARE_WAITQUEUE(wait, tsk);
- tsk->state = TASK_INTERRUPTIBLE;
add_wait_queue_exclusive(&sem->wait, &wait);
+ tsk->state = TASK_INTERRUPTIBLE;
while (__sem_update_count(sem, -1) <= 0) {
if (signal_pending(current)) {
===== arch/v850/kernel/semaphore.c 1.2 vs edited =====
--- 1.2/arch/v850/kernel/semaphore.c Wed Dec 18 18:20:47 2002
+++ edited/arch/v850/kernel/semaphore.c Mon Dec 8 10:23:58 2003
@@ -60,8 +60,8 @@
{
struct task_struct *tsk = current;
DECLARE_WAITQUEUE(wait, tsk);
- tsk->state = TASK_UNINTERRUPTIBLE;
add_wait_queue_exclusive(&sem->wait, &wait);
+ tsk->state = TASK_UNINTERRUPTIBLE;
spin_lock_irq(&semaphore_lock);
sem->sleepers++;
@@ -84,8 +84,8 @@
spin_lock_irq(&semaphore_lock);
}
spin_unlock_irq(&semaphore_lock);
- remove_wait_queue(&sem->wait, &wait);
tsk->state = TASK_RUNNING;
+ remove_wait_queue(&sem->wait, &wait);
wake_up(&sem->wait);
}
@@ -94,8 +94,8 @@
int retval = 0;
struct task_struct *tsk = current;
DECLARE_WAITQUEUE(wait, tsk);
- tsk->state = TASK_INTERRUPTIBLE;
add_wait_queue_exclusive(&sem->wait, &wait);
+ tsk->state = TASK_INTERRUPTIBLE;
spin_lock_irq(&semaphore_lock);
sem->sleepers ++;
===== arch/x86_64/kernel/semaphore.c 1.3 vs edited =====
--- 1.3/arch/x86_64/kernel/semaphore.c Fri Oct 11 16:52:38 2002
+++ edited/arch/x86_64/kernel/semaphore.c Mon Dec 8 10:24:24 2003
@@ -60,9 +60,9 @@
DECLARE_WAITQUEUE(wait, tsk);
unsigned long flags;
- tsk->state = TASK_UNINTERRUPTIBLE;
spin_lock_irqsave(&sem->wait.lock, flags);
add_wait_queue_exclusive_locked(&sem->wait, &wait);
+ tsk->state = TASK_UNINTERRUPTIBLE;
sem->sleepers++;
for (;;) {
@@ -85,10 +85,10 @@
spin_lock_irqsave(&sem->wait.lock, flags);
tsk->state = TASK_UNINTERRUPTIBLE;
}
+ tsk->state = TASK_RUNNING;
remove_wait_queue_locked(&sem->wait, &wait);
wake_up_locked(&sem->wait);
spin_unlock_irqrestore(&sem->wait.lock, flags);
- tsk->state = TASK_RUNNING;
}
int __down_interruptible(struct semaphore * sem)
@@ -98,9 +98,9 @@
DECLARE_WAITQUEUE(wait, tsk);
unsigned long flags;
- tsk->state = TASK_INTERRUPTIBLE;
spin_lock_irqsave(&sem->wait.lock, flags);
add_wait_queue_exclusive_locked(&sem->wait, &wait);
+ tsk->state = TASK_INTERRUPTIBLE;
sem->sleepers++;
for (;;) {
@@ -138,11 +138,11 @@
spin_lock_irqsave(&sem->wait.lock, flags);
tsk->state = TASK_INTERRUPTIBLE;
}
+ tsk->state = TASK_RUNNING;
remove_wait_queue_locked(&sem->wait, &wait);
wake_up_locked(&sem->wait);
spin_unlock_irqrestore(&sem->wait.lock, flags);
- tsk->state = TASK_RUNNING;
return retval;
}
* PROBLEM: possible processes leak
From: Andrew Volkov @ 2003-12-08 18:01 UTC (permalink / raw)
To: linux-kernel
Hi all,
In all kernels (up to 2.6-test11), the following sequence of code
in the __down/__down_interruptible functions
(arch/i386/kernel/semaphore.c) may cause processes or threads to leak.
void __down(struct semaphore * sem)
{
struct task_struct *tsk = current;
DECLARE_WAITQUEUE(wait, tsk);
|-----tsk->state = TASK_UNINTERRUPTIBLE; <----- BUG:
| -- If schedule() from kernel/sched.c is called
| at this point, due to time-slice expiry,
| then the current task will be removed from the run queue
| but not added to any wait queue, and hence
| will never run again. --
| add_wait_queue_exclusive(&sem->wait, &wait);
|
|-> --- This code must be here. ---
spin_lock_irq(&semaphore_lock);
sem->sleepers++;
for (;;) {
int sleepers = sem->sleepers;
/*
* Add "everybody else" into it. They aren't
* playing, because we own the spinlock.
*/
if (!atomic_add_negative(sleepers - 1, &sem->count)) {
sem->sleepers = 0;
break;
}
sem->sleepers = 1; /* us - see -1 above */
spin_unlock_irq(&semaphore_lock);
schedule();
tsk->state = TASK_UNINTERRUPTIBLE;
spin_lock_irq(&semaphore_lock);
}
spin_unlock_irq(&semaphore_lock);
---> Must be here.
|
| remove_wait_queue(&sem->wait, &wait); <----- SAME BUG
------tsk->state = TASK_RUNNING;
wake_up(&sem->wait);
This bug is in all arch/*/kernel/semaphore.c files.
Regards
Andrey Volkov