* [patch 0/7] 2.6.24.7-rt24 bugfixes
@ 2008-12-19 16:02 Thomas Gleixner
  2008-12-19 16:02 ` [patch 1/7] ftrace: fix task state printout Thomas Gleixner
                   ` (6 more replies)
  0 siblings, 7 replies; 14+ messages in thread
From: Thomas Gleixner @ 2008-12-19 16:02 UTC
  To: LKML
  Cc: Ingo Molnar, Steven Rostedt, Peter Zijlstra, Clark Williams,
	Gregory Haskins, Linux-rt

The following patchset fixes a long-standing wakeup bug and a state
printout problem in the tracer.

Thanks,

	tglx
-- 



* [patch 1/7] ftrace: fix task state printout
  2008-12-19 16:02 [patch 0/7] 2.6.24.7-rt24 bugfixes Thomas Gleixner
@ 2008-12-19 16:02 ` Thomas Gleixner
  2008-12-19 16:25   ` Frédéric Weisbecker
  2008-12-19 16:02 ` [patch 2/7] sched: remove useless nointeractive state Thomas Gleixner
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 14+ messages in thread
From: Thomas Gleixner @ 2008-12-19 16:02 UTC
  To: LKML
  Cc: Ingo Molnar, Steven Rostedt, Peter Zijlstra, Clark Williams,
	Gregory Haskins, Linux-rt

[-- Attachment #1: ftrace-fix-task-state-printout.patch --]
[-- Type: text/plain, Size: 3801 bytes --]

Impact: tracer task state decoding is wrong, size check is buggy

The tracing code has interesting varieties of printing out task state.
Unfortunately only one of the instances is correct as it copies the
code from sched.c:sched_show_task(). The others are plain wrong as
they treat the bitfield as an integer offset into the character
array. Also the size check of the character array is wrong as it
includes the trailing \0.

Use a common state decoder inline which does the Right Thing.
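
To make the difference concrete, here is a minimal user-space sketch
(not from the patch; the string and the mask value 4 for TASK_STOPPED
are illustrative stand-ins for TASK_STATE_TO_CHAR_STR and the sched.h
masks):

	#include <stdio.h>
	#include <strings.h>		/* ffs() */

	static const char state_to_char[] = "RSDTtZX";

	/* Broken: treats the state bit mask itself as an array index. */
	static char broken(unsigned long state)
	{
		return state < sizeof(state_to_char) ?
			state_to_char[state] : 'X';
	}

	/* Fixed: convert the mask to a 1-based bit number first and
	 * exclude the trailing '\0' from the bounds check. ffs() is
	 * 1-based, i.e. the user-space twin of __ffs() + 1.
	 */
	static char fixed(unsigned long state)
	{
		int bit = state ? ffs(state) : 0;

		return bit < (int)(sizeof(state_to_char) - 1) ?
			state_to_char[bit] : '?';
	}

	int main(void)
	{
		/* Mask 4 (TASK_STOPPED here): broken() prints 't'
		 * (traced), fixed() prints the correct 'T' (stopped).
		 */
		printf("broken: %c fixed: %c\n", broken(4), fixed(4));
		return 0;
	}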

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/trace/trace.c |   40 ++++++++++++++++++----------------------
 1 file changed, 18 insertions(+), 22 deletions(-)

Index: linux-2.6.24/kernel/trace/trace.c
===================================================================
--- linux-2.6.24.orig/kernel/trace/trace.c
+++ linux-2.6.24/kernel/trace/trace.c
@@ -1626,6 +1626,13 @@ lat_print_timestamp(struct trace_seq *s,
 
 static const char state_to_char[] = TASK_STATE_TO_CHAR_STR;
 
+static int task_state_char(unsigned long state)
+{
+	int bit = state ? __ffs(state) + 1 : 0;
+
+	return bit < sizeof(state_to_char) - 1 ? state_to_char[bit] : '?';
+}
+
 extern unsigned long sys_call_table[NR_syscalls];
 
 #if defined(CONFIG_COMPAT) && defined(CONFIG_X86)
@@ -1654,7 +1661,6 @@ print_lat_fmt(struct trace_iterator *ite
 	char *comm;
 	int S, T;
 	int i;
-	unsigned state;
 
 	if (!next_entry)
 		next_entry = entry;
@@ -1685,11 +1691,8 @@ print_lat_fmt(struct trace_iterator *ite
 		break;
 	case TRACE_CTX:
 	case TRACE_WAKE:
-		T = entry->ctx.next_state < sizeof(state_to_char) ?
-			state_to_char[entry->ctx.next_state] : 'X';
-
-		state = entry->ctx.prev_state ? __ffs(entry->ctx.prev_state) + 1 : 0;
-		S = state < sizeof(state_to_char) - 1 ? state_to_char[state] : 'X';
+		T = task_state_char(entry->ctx.next_state);
+		S = task_state_char(entry->ctx.prev_state);
 		comm = trace_find_cmdline(entry->ctx.next_pid);
 		trace_seq_printf(s, " %5d:%3d:%c %s %5d:%3d:%c %s\n",
 				 entry->ctx.prev_pid,
@@ -1862,10 +1865,8 @@ static int print_trace_fmt(struct trace_
 		break;
 	case TRACE_CTX:
 	case TRACE_WAKE:
-		S = entry->ctx.prev_state < sizeof(state_to_char) ?
-			state_to_char[entry->ctx.prev_state] : 'X';
-		T = entry->ctx.next_state < sizeof(state_to_char) ?
-			state_to_char[entry->ctx.next_state] : 'X';
+		T = task_state_char(entry->ctx.next_state);
+		S = task_state_char(entry->ctx.prev_state);
 		ret = trace_seq_printf(s, " %5d:%3d:%c %s %5d:%3d:%c\n",
 				       entry->ctx.prev_pid,
 				       entry->ctx.prev_prio,
@@ -2012,12 +2013,10 @@ static int print_raw_fmt(struct trace_it
 		break;
 	case TRACE_CTX:
 	case TRACE_WAKE:
-		S = entry->ctx.prev_state < sizeof(state_to_char) ?
-			state_to_char[entry->ctx.prev_state] : 'X';
-		T = entry->ctx.next_state < sizeof(state_to_char) ?
-			state_to_char[entry->ctx.next_state] : 'X';
-		if (entry->type == TRACE_WAKE)
-			S = '+';
+		T = task_state_char(entry->ctx.next_state);
+		S = entry->type == TRACE_WAKE ? '+' :
+			task_state_char(entry->ctx.prev_state);
+
 		ret = trace_seq_printf(s, "%d %d %c %d %d %c\n",
 				       entry->ctx.prev_pid,
 				       entry->ctx.prev_prio,
@@ -2073,12 +2072,9 @@ static int print_hex_fmt(struct trace_it
 		break;
 	case TRACE_CTX:
 	case TRACE_WAKE:
-		S = entry->ctx.prev_state < sizeof(state_to_char) ?
-			state_to_char[entry->ctx.prev_state] : 'X';
-		T = entry->ctx.next_state < sizeof(state_to_char) ?
-			state_to_char[entry->ctx.next_state] : 'X';
-		if (entry->type == TRACE_WAKE)
-			S = '+';
+		T = task_state_char(entry->ctx.next_state);
+		S = entry->type == TRACE_WAKE ? '+' :
+			task_state_char(entry->ctx.prev_state);
 		SEQ_PUT_HEX_FIELD_RET(s, entry->ctx.prev_pid);
 		SEQ_PUT_HEX_FIELD_RET(s, entry->ctx.prev_prio);
 		SEQ_PUT_HEX_FIELD_RET(s, S);

-- 



* [patch 2/7] sched: remove useless nointeractive state
  2008-12-19 16:02 [patch 0/7] 2.6.24.7-rt24 bugfixes Thomas Gleixner
  2008-12-19 16:02 ` [patch 1/7] ftrace: fix task state printout Thomas Gleixner
@ 2008-12-19 16:02 ` Thomas Gleixner
  2008-12-19 16:02 ` [patch 3/7] rtmutex: remove unused variable Thomas Gleixner
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 14+ messages in thread
From: Thomas Gleixner @ 2008-12-19 16:02 UTC
  To: LKML
  Cc: Ingo Molnar, Steven Rostedt, Peter Zijlstra, Clark Williams,
	Gregory Haskins, Linux-rt

[-- Attachment #1: sched-remove-useless-nointeractive-state.patch --]
[-- Type: text/plain, Size: 698 bytes --]

Remove the defunct TASK_NONINTERACTIVE task state, which was
reintroduced by the rtmutex patches.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/sched.h |    3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

Index: linux-2.6.24/include/linux/sched.h
===================================================================
--- linux-2.6.24.orig/include/linux/sched.h
+++ linux-2.6.24/include/linux/sched.h
@@ -202,8 +202,7 @@ extern struct semaphore kernel_sem;
 #define EXIT_ZOMBIE		32
 #define EXIT_DEAD		64
 /* in tsk->state again */
-#define TASK_NONINTERACTIVE	128
-#define TASK_DEAD		256
+#define TASK_DEAD		128
 
 #define __set_task_state(tsk, state_value)		\
 	do { (tsk)->state = (state_value); } while (0)

-- 



* [patch 3/7] rtmutex: remove unused variable
  2008-12-19 16:02 [patch 0/7] 2.6.24.7-rt24 bugfixes Thomas Gleixner
  2008-12-19 16:02 ` [patch 1/7] ftrace: fix task state printout Thomas Gleixner
  2008-12-19 16:02 ` [patch 2/7] sched: remove useless nointeractive state Thomas Gleixner
@ 2008-12-19 16:02 ` Thomas Gleixner
  2008-12-19 16:03 ` [patch 4/7] rtmutex: unify state manipulation Thomas Gleixner
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 14+ messages in thread
From: Thomas Gleixner @ 2008-12-19 16:02 UTC
  To: LKML
  Cc: Ingo Molnar, Steven Rostedt, Peter Zijlstra, Clark Williams,
	Gregory Haskins, Linux-rt

[-- Attachment #1: rtmutex-remove-unused-variable.patch --]
[-- Type: text/plain, Size: 605 bytes --]

Remove a leftover from the rwlock fixes.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/rtmutex.c |    3 ---
 1 file changed, 3 deletions(-)

Index: linux-2.6.24/kernel/rtmutex.c
===================================================================
--- linux-2.6.24.orig/kernel/rtmutex.c
+++ linux-2.6.24/kernel/rtmutex.c
@@ -1950,9 +1950,6 @@ rt_write_fastunlock(struct rw_mutex *rwm
 					    int mtx),
 		    int mtx)
 {
-	struct task_struct *val = (void *)((unsigned long)current |
-					   RT_RWLOCK_WRITER);
-
 	WARN_ON(rt_rwlock_owner(rwm) != current);
 	slowfn(rwm, mtx);
 }

-- 



* [patch 4/7] rtmutex: unify state manipulation
  2008-12-19 16:02 [patch 0/7] 2.6.24.7-rt24 bugfixes Thomas Gleixner
                   ` (2 preceding siblings ...)
  2008-12-19 16:02 ` [patch 3/7] rtmutex: remove unused variable Thomas Gleixner
@ 2008-12-19 16:03 ` Thomas Gleixner
  2008-12-19 18:36   ` Steven Rostedt
  2008-12-19 16:03 ` [patch 5/7] rtmutex: remove uber optimization Thomas Gleixner
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 14+ messages in thread
From: Thomas Gleixner @ 2008-12-19 16:03 UTC
  To: LKML
  Cc: Ingo Molnar, Steven Rostedt, Peter Zijlstra, Clark Williams,
	Gregory Haskins, Linux-rt

[-- Attachment #1: rtmutex-unify-state-manipulation.patch --]
[-- Type: text/plain, Size: 6756 bytes --]

The manipulation of the waiter task state is copied all over the place
with slightly different details. Use one set of functions to reduce
duplicated code and make the handling consistent for all instances.
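
The protocol the two new helpers implement can be modeled in user
space. A minimal sketch with C11 atomics standing in for the kernel's
cmpxchg()/xchg() on current->state (the TASK_* values and the
cur_state variable are illustrative, not kernel code):

	#include <stdatomic.h>

	#define TASK_RUNNING		0UL
	#define TASK_UNINTERRUPTIBLE	2UL

	static _Atomic unsigned long cur_state;	/* models current->state */

	/* Enter the blocked state, but if a real wakeup already set
	 * the state to running, remember that for the later restore.
	 */
	static unsigned long set_blocked(unsigned long saved_state)
	{
		unsigned long state;

		do {
			state = atomic_load(&cur_state);
			if (state == TASK_RUNNING)
				saved_state = TASK_RUNNING;
		} while (!atomic_compare_exchange_weak(&cur_state, &state,
						       TASK_UNINTERRUPTIBLE));
		return saved_state;
	}

	/* Put the saved state back, unless a wakeup hit while we were
	 * blocked, in which case we have to stay running.
	 */
	static void restore_state(unsigned long saved_state)
	{
		if (atomic_exchange(&cur_state, saved_state) == TASK_RUNNING)
			atomic_store(&cur_state, TASK_RUNNING);
	}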

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/rtmutex.c |  104 ++++++++++++++++++++++++++++++-------------------------
 1 file changed, 57 insertions(+), 47 deletions(-)

Index: linux-2.6.24/kernel/rtmutex.c
===================================================================
--- linux-2.6.24.orig/kernel/rtmutex.c
+++ linux-2.6.24/kernel/rtmutex.c
@@ -765,13 +765,6 @@ rt_spin_lock_fastunlock(struct rt_mutex 
 		slowfn(lock);
 }
 
-static inline void
-update_current(unsigned long new_state, unsigned long *saved_state)
-{
-	unsigned long state = xchg(&current->state, new_state);
-	if (unlikely(state == TASK_RUNNING))
-		*saved_state = TASK_RUNNING;
-}
 
 #ifdef CONFIG_SMP
 static int adaptive_wait(struct rt_mutex_waiter *waiter,
@@ -803,6 +796,43 @@ static int adaptive_wait(struct rt_mutex
 #endif
 
 /*
+ * The state setting needs to preserve the original state and needs to
+ * take care of non rtmutex wakeups.
+ *
+ * The special case here is TASK_INTERRUPTIBLE: We cannot set the
+ * blocked state to TASK_UNINTERRUPTIBLE as we would miss wakeups from
+ * wake_up_interruptible().
+ */
+static inline unsigned long
+rt_set_current_blocked_state(unsigned long saved_state)
+{
+	unsigned long state, block_state;
+
+	do {
+		state = current->state;
+		/*
+		 * Take care of non rtmutex wakeups. rtmutex wakeups
+		 * set the state to TASK_RUNNING_MUTEX.
+		 */
+		if (state == TASK_RUNNING)
+			saved_state = TASK_RUNNING;
+
+		block_state = TASK_UNINTERRUPTIBLE;
+
+	} while (cmpxchg(&current->state, state, block_state) != state);
+
+	return saved_state;
+}
+
+static inline void rt_restore_current_state(unsigned long saved_state)
+{
+	unsigned long state = xchg(&current->state, saved_state);
+
+	if (state == TASK_RUNNING)
+		current->state = TASK_RUNNING;
+}
+
+/*
  * Slow path lock function spin_lock style: this variant is very
  * careful not to miss any non-lock wakeups.
  *
@@ -816,7 +846,7 @@ static void fastcall noinline __sched
 rt_spin_lock_slowlock(struct rt_mutex *lock)
 {
 	struct rt_mutex_waiter waiter;
-	unsigned long saved_state, state, flags;
+	unsigned long saved_state, flags;
 	struct task_struct *orig_owner;
 	int missed = 0;
 
@@ -836,7 +866,9 @@ rt_spin_lock_slowlock(struct rt_mutex *l
 	 * of the lock sleep/wakeup mechanism. When we get a real
 	 * wakeup the task->state is TASK_RUNNING and we change
 	 * saved_state accordingly. If we did not get a real wakeup
-	 * then we return with the saved state.
+	 * then we return with the saved state. We need to be careful
+	 * about original state TASK_INTERRUPTIBLE as well, as we
+	 * could miss a wake_up_interruptible().
 	 */
 	saved_state = current->state;
 
@@ -880,7 +912,8 @@ rt_spin_lock_slowlock(struct rt_mutex *l
 
 		if (adaptive_wait(&waiter, orig_owner)) {
 			put_task_struct(orig_owner);
-			update_current(TASK_UNINTERRUPTIBLE, &saved_state);
+
+			saved_state = rt_set_current_blocked_state(saved_state);
 			/*
 			 * The xchg() in update_current() is an implicit
 			 * barrier which we rely upon to ensure current->state
@@ -896,9 +929,7 @@ rt_spin_lock_slowlock(struct rt_mutex *l
 		current->lock_depth = saved_lock_depth;
 	}
 
-	state = xchg(&current->state, saved_state);
-	if (unlikely(state == TASK_RUNNING))
-		current->state = TASK_RUNNING;
+	rt_restore_current_state(saved_state);
 
 	/*
 	 * Extremely rare case, if we got woken up by a non-mutex wakeup,
@@ -1333,7 +1364,7 @@ rt_read_slowlock(struct rw_mutex *rwm, i
 	struct rt_mutex_waiter waiter;
 	struct rt_mutex *mutex = &rwm->mutex;
 	int saved_lock_depth = -1;
-	unsigned long saved_state = -1, state, flags;
+	unsigned long saved_state, flags;
 
 	spin_lock_irqsave(&mutex->wait_lock, flags);
 	init_rw_lists(rwm);
@@ -1357,13 +1388,13 @@ rt_read_slowlock(struct rw_mutex *rwm, i
 		 */
 		if (unlikely(current->lock_depth >= 0))
 			saved_lock_depth = rt_release_bkl(mutex, flags);
-		set_current_state(TASK_UNINTERRUPTIBLE);
 	} else {
 		/* Spin lock must preserve BKL */
-		saved_state = xchg(&current->state, TASK_UNINTERRUPTIBLE);
 		saved_lock_depth = current->lock_depth;
 	}
 
+	saved_state = rt_set_current_blocked_state(current->state);
+
 	for (;;) {
 		unsigned long saved_flags;
 
@@ -1398,23 +1429,12 @@ rt_read_slowlock(struct rw_mutex *rwm, i
 		spin_lock_irqsave(&mutex->wait_lock, flags);
 
 		current->flags |= saved_flags;
-		if (mtx)
-			set_current_state(TASK_UNINTERRUPTIBLE);
-		else {
+		if (!mtx)
 			current->lock_depth = saved_lock_depth;
-			state = xchg(&current->state, TASK_UNINTERRUPTIBLE);
-			if (unlikely(state == TASK_RUNNING))
-				saved_state = TASK_RUNNING;
-		}
+		saved_state = rt_set_current_blocked_state(saved_state);
 	}
 
-	if (mtx)
-		set_current_state(TASK_RUNNING);
-	else {
-		state = xchg(&current->state, saved_state);
-		if (unlikely(state == TASK_RUNNING))
-			current->state = TASK_RUNNING;
-	}
+	rt_restore_current_state(saved_state);
 
 	if (unlikely(waiter.task))
 		remove_waiter(mutex, &waiter, flags);
@@ -1490,7 +1510,7 @@ rt_write_slowlock(struct rw_mutex *rwm, 
 	struct rt_mutex *mutex = &rwm->mutex;
 	struct rt_mutex_waiter waiter;
 	int saved_lock_depth = -1;
-	unsigned long flags, saved_state = -1, state;
+	unsigned long flags, saved_state;
 
 	debug_rt_mutex_init_waiter(&waiter);
 	waiter.task = NULL;
@@ -1514,13 +1534,13 @@ rt_write_slowlock(struct rw_mutex *rwm, 
 		 */
 		if (unlikely(current->lock_depth >= 0))
 			saved_lock_depth = rt_release_bkl(mutex, flags);
-		set_current_state(TASK_UNINTERRUPTIBLE);
 	} else {
 		/* Spin locks must preserve the BKL */
 		saved_lock_depth = current->lock_depth;
-		saved_state = xchg(&current->state, TASK_UNINTERRUPTIBLE);
 	}
 
+	saved_state = rt_set_current_blocked_state(current->state);
+
 	for (;;) {
 		unsigned long saved_flags;
 
@@ -1555,24 +1575,14 @@ rt_write_slowlock(struct rw_mutex *rwm, 
 		spin_lock_irqsave(&mutex->wait_lock, flags);
 
 		current->flags |= saved_flags;
-		if (mtx)
-			set_current_state(TASK_UNINTERRUPTIBLE);
-		else {
+		if (!mtx)
 			current->lock_depth = saved_lock_depth;
-			state = xchg(&current->state, TASK_UNINTERRUPTIBLE);
-			if (unlikely(state == TASK_RUNNING))
-				saved_state = TASK_RUNNING;
-		}
-	}
 
-	if (mtx)
-		set_current_state(TASK_RUNNING);
-	else {
-		state = xchg(&current->state, saved_state);
-		if (unlikely(state == TASK_RUNNING))
-			current->state = TASK_RUNNING;
+		saved_state = rt_set_current_blocked_state(saved_state);
 	}
 
+	rt_restore_current_state(saved_state);
+
 	if (unlikely(waiter.task))
 		remove_waiter(mutex, &waiter, flags);
 

-- 



* [patch 5/7] rtmutex: remove uber optimization
  2008-12-19 16:02 [patch 0/7] 2.6.24.7-rt24 bugfixes Thomas Gleixner
                   ` (3 preceding siblings ...)
  2008-12-19 16:03 ` [patch 4/7] rtmutex: unify state manipulation Thomas Gleixner
@ 2008-12-19 16:03 ` Thomas Gleixner
  2008-12-19 16:03 ` [patch 6/7] rtmutex: remove useless schedule enforcement Thomas Gleixner
  2008-12-19 16:03 ` [patch 7/7] rtmutex: prevent missed wakeups Thomas Gleixner
  6 siblings, 0 replies; 14+ messages in thread
From: Thomas Gleixner @ 2008-12-19 16:03 UTC
  To: LKML
  Cc: Ingo Molnar, Steven Rostedt, Peter Zijlstra, Clark Williams,
	Gregory Haskins, Linux-rt

[-- Attachment #1: rtmutex-remove-uber-optimization.patch --]
[-- Type: text/plain, Size: 2537 bytes --]

The adaptive spin patches introduced an overdesigned optimization for
the adaptive path. Avoiding those code paths is not worth the extra
conditionals, and furthermore the optimization is inconsistent in
itself.

Remove it and use the same mechanism as the other lock paths. That
way we have a consistent state manipulation scheme and fewer special
cases.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/rtmutex.c |   18 +++---------------
 1 file changed, 3 insertions(+), 15 deletions(-)

Index: linux-2.6.24/kernel/rtmutex.c
===================================================================
--- linux-2.6.24.orig/kernel/rtmutex.c
+++ linux-2.6.24/kernel/rtmutex.c
@@ -848,7 +848,6 @@ rt_spin_lock_slowlock(struct rt_mutex *l
 	struct rt_mutex_waiter waiter;
 	unsigned long saved_state, flags;
 	struct task_struct *orig_owner;
-	int missed = 0;
 
 	debug_rt_mutex_init_waiter(&waiter);
 	waiter.task = NULL;
@@ -870,20 +869,15 @@ rt_spin_lock_slowlock(struct rt_mutex *l
 	 * about original state TASK_INTERRUPTIBLE as well, as we
 	 * could miss a wake_up_interruptible().
 	 */
-	saved_state = current->state;
+	saved_state = rt_set_current_blocked_state(current->state);
 
 	for (;;) {
 		unsigned long saved_flags;
 		int saved_lock_depth = current->lock_depth;
 
 		/* Try to acquire the lock */
-		if (do_try_to_take_rt_mutex(lock, STEAL_LATERAL)) {
-			/* If we never blocked break out now */
-			if (!missed)
-				goto unlock;
+		if (do_try_to_take_rt_mutex(lock, STEAL_LATERAL))
 			break;
-		}
-		missed = 1;
 
 		/*
 		 * waiter.task is NULL the first time we come here and
@@ -913,12 +907,6 @@ rt_spin_lock_slowlock(struct rt_mutex *l
 		if (adaptive_wait(&waiter, orig_owner)) {
 			put_task_struct(orig_owner);
 
-			saved_state = rt_set_current_blocked_state(saved_state);
-			/*
-			 * The xchg() in update_current() is an implicit
-			 * barrier which we rely upon to ensure current->state
-			 * is visible before we test waiter.task.
-			 */
 			if (waiter.task)
 				schedule_rt_mutex(lock);
 		} else
@@ -927,6 +915,7 @@ rt_spin_lock_slowlock(struct rt_mutex *l
 		spin_lock_irqsave(&lock->wait_lock, flags);
 		current->flags |= saved_flags;
 		current->lock_depth = saved_lock_depth;
+		saved_state = rt_set_current_blocked_state(saved_state);
 	}
 
 	rt_restore_current_state(saved_state);
@@ -945,7 +934,6 @@ rt_spin_lock_slowlock(struct rt_mutex *l
 	 */
 	fixup_rt_mutex_waiters(lock);
 
- unlock:
 	spin_unlock_irqrestore(&lock->wait_lock, flags);
 
 	debug_rt_mutex_free_waiter(&waiter);

-- 



* [patch 6/7] rtmutex: remove useless schedule enforcement
  2008-12-19 16:02 [patch 0/7] 2.6.24.7-rt24 bugfixes Thomas Gleixner
                   ` (4 preceding siblings ...)
  2008-12-19 16:03 ` [patch 5/7] rtmutex: remove uber optimization Thomas Gleixner
@ 2008-12-19 16:03 ` Thomas Gleixner
  2008-12-19 16:03 ` [patch 7/7] rtmutex: prevent missed wakeups Thomas Gleixner
  6 siblings, 0 replies; 14+ messages in thread
From: Thomas Gleixner @ 2008-12-19 16:03 UTC
  To: LKML
  Cc: Ingo Molnar, Steven Rostedt, Peter Zijlstra, Clark Williams,
	Gregory Haskins, Linux-rt

[-- Attachment #1: rtmutex-remove-useless-schedule-enforcement.patch --]
[-- Type: text/plain, Size: 945 bytes --]

For !mtx the call to schedule() is forced even if the waiter is
already woken (waiter.task == NULL).

This makes no sense at all and is just wasted work.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/rtmutex.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Index: linux-2.6.24/kernel/rtmutex.c
===================================================================
--- linux-2.6.24.orig/kernel/rtmutex.c
+++ linux-2.6.24/kernel/rtmutex.c
@@ -1411,7 +1411,7 @@ rt_read_slowlock(struct rw_mutex *rwm, i
 
 		debug_rt_mutex_print_deadlock(&waiter);
 
-		if (!mtx || waiter.task)
+		if (waiter.task)
 			schedule_rt_mutex(mutex);
 
 		spin_lock_irqsave(&mutex->wait_lock, flags);
@@ -1557,7 +1557,7 @@ rt_write_slowlock(struct rw_mutex *rwm, 
 
 		debug_rt_mutex_print_deadlock(&waiter);
 
-		if (!mtx || waiter.task)
+		if (waiter.task)
 			schedule_rt_mutex(mutex);
 
 		spin_lock_irqsave(&mutex->wait_lock, flags);

-- 



* [patch 7/7] rtmutex: prevent missed wakeups
  2008-12-19 16:02 [patch 0/7] 2.6.24.7-rt24 bugfixes Thomas Gleixner
                   ` (5 preceding siblings ...)
  2008-12-19 16:03 ` [patch 6/7] rtmutex: remove useless schedule enforcement Thomas Gleixner
@ 2008-12-19 16:03 ` Thomas Gleixner
  6 siblings, 0 replies; 14+ messages in thread
From: Thomas Gleixner @ 2008-12-19 16:03 UTC
  To: LKML
  Cc: Ingo Molnar, Steven Rostedt, Peter Zijlstra, Clark Williams,
	Gregory Haskins, Linux-rt

[-- Attachment #1: sched-fix-rtmutex-wakeup-handling.patch --]
[-- Type: text/plain, Size: 7076 bytes --]

The sleeping locks implementation based on rtmutexes can miss wakeups
for two reasons:

1) The unconditional use of TASK_UNINTERRUPTIBLE for the blocking state
   
   Results in missed wakeups from wake_up_interruptible*()

   state = TASK_INTERRUPTIBLE;
   blocks_on_lock()
     state = TASK_UNINTERRUPTIBLE;
     schedule();
     ....
     acquires_lock();
     restore_state();

   Until the waiter has restored its state wake_up_interruptible*() will
   fail.

2) The rtmutex wakeup intermediate state TASK_RUNNING_MUTEX

   Results in missed wakeups from wake_up*()

   waiter is woken by mutex wakeup
   	  waiter->state = TASK_RUNNING_MUTEX;
   ....
   acquires_lock();
   restore_state();

   Until the waiter has restored its state wake_up*() will fail.

Solution:

Instead of setting the state to TASK_RUNNING_MUTEX in the mutex wakeup
case we logically OR TASK_RUNNING_MUTEX into the current waiter
state. This keeps the original bits (TASK_INTERRUPTIBLE /
TASK_UNINTERRUPTIBLE) intact and lets wakeups succeed. When a task
blocks on a lock in state TASK_INTERRUPTIBLE and is woken up by a real
wakeup, then we store TASK_RUNNING as the state to restore and can
safely use TASK_UNINTERRUPTIBLE from that point on to avoid further
wakeups which would just make us loop in the lock code.
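
The effect of OR-ing instead of overwriting can be shown with plain
bit arithmetic (a standalone sketch; the mask values, in particular
the one for TASK_RUNNING_MUTEX, are assumptions mirroring the sched.h
definitions):

	#include <stdio.h>

	#define TASK_INTERRUPTIBLE	0x01UL
	#define TASK_RUNNING_MUTEX	0x80UL	/* assumed value */

	/* try_to_wake_up() only acts when the task state intersects
	 * the wakeup mask, i.e. p->state & state.
	 */
	static int wakeable(unsigned long task_state, unsigned long mask)
	{
		return (task_state & mask) != 0;
	}

	int main(void)
	{
		unsigned long overwritten = TASK_RUNNING_MUTEX;
		unsigned long ored_in = TASK_INTERRUPTIBLE |
					TASK_RUNNING_MUTEX;

		/* Old scheme: the INTERRUPTIBLE bit is gone, wakeup lost. */
		printf("overwrite: %d\n",
		       wakeable(overwritten, TASK_INTERRUPTIBLE));	/* 0 */
		/* New scheme: the original bit survives, wakeup seen. */
		printf("or-in:     %d\n",
		       wakeable(ored_in, TASK_INTERRUPTIBLE));		/* 1 */
		return 0;
	}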

This also removes the extra TASK_RUNNING_MUTEX flags from the
wake_up_process*() functions as they are no longer necessary.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/rtmutex.c |   27 +++++++++++++++++++++------
 kernel/sched.c   |   40 +++++++++++++++++++++++-----------------
 2 files changed, 44 insertions(+), 23 deletions(-)

Index: linux-2.6.24/kernel/rtmutex.c
===================================================================
--- linux-2.6.24.orig/kernel/rtmutex.c
+++ linux-2.6.24/kernel/rtmutex.c
@@ -799,9 +799,11 @@ static int adaptive_wait(struct rt_mutex
  * The state setting needs to preserve the original state and needs to
  * take care of non rtmutex wakeups.
  *
- * The special case here is TASK_INTERRUPTIBLE: We cannot set the
- * blocked state to TASK_UNINTERRUPTIBLE as we would miss wakeups from
- * wake_up_interruptible().
+ * Called with rtmutex->wait_lock held to serialize against rtmutex
+ * wakeups.
+ *
+ * The cmpxchg() loop makes sure that we do not miss an update from a
+ * real wakeup.
  */
 static inline unsigned long
 rt_set_current_blocked_state(unsigned long saved_state)
@@ -810,14 +812,27 @@ rt_set_current_blocked_state(unsigned lo
 
 	do {
 		state = current->state;
+
 		/*
-		 * Take care of non rtmutex wakeups. rtmutex wakeups
-		 * set the state to TASK_RUNNING_MUTEX.
+		 * Take care of non rtmutex wakeups:
 		 */
 		if (state == TASK_RUNNING)
 			saved_state = TASK_RUNNING;
 
-		block_state = TASK_UNINTERRUPTIBLE;
+		/*
+		 * If state is TASK_INTERRUPTIBLE, then we set the
+		 * state for blocking to TASK_INTERRUPTIBLE as well,
+		 * otherwise we would miss real wakeups via
+		 * wake_up_interruptible(). If such a wakeup happens
+		 * we see the running state and preserve it in
+		 * saved_state. Now we can ignore further wakeups as
+		 * we will return in state running from our "spin"
+		 * sleep.
+		 */
+		if (saved_state == TASK_INTERRUPTIBLE)
+			block_state = TASK_INTERRUPTIBLE;
+		else
+			block_state = TASK_UNINTERRUPTIBLE;
 
 	} while (cmpxchg(&current->state, state, block_state) != state);
 
Index: linux-2.6.24/kernel/sched.c
===================================================================
--- linux-2.6.24.orig/kernel/sched.c
+++ linux-2.6.24/kernel/sched.c
@@ -1765,10 +1765,20 @@ out_activate:
 
 out_running:
 	trace_kernel_sched_wakeup(rq, p);
+
+	/*
+	 * For a mutex wakeup we OR TASK_RUNNING_MUTEX into the task
+	 * state to preserve the original state, so a real wakeup
+	 * still can see the (UN)INTERRUPTIBLE bits in the state check
+	 * above. We don't have to worry about the | TASK_RUNNING_MUTEX
+	 * here. The waiter is serialized by the mutex lock and nobody
+	 * else can fiddle with p->state as we hold the rq lock.
+	 */
 	if (mutex)
-		p->state = TASK_RUNNING_MUTEX;
+		p->state |= TASK_RUNNING_MUTEX;
 	else
 		p->state = TASK_RUNNING;
+
 #ifdef CONFIG_SMP
 	if (p->sched_class->task_wake_up)
 		p->sched_class->task_wake_up(rq, p);
@@ -1782,38 +1792,34 @@ out:
 int fastcall wake_up_process(struct task_struct *p)
 {
 	return try_to_wake_up(p, TASK_STOPPED | TASK_TRACED |
-			      TASK_RUNNING_MUTEX | TASK_INTERRUPTIBLE |
-			      TASK_UNINTERRUPTIBLE, 0, 0);
+			      TASK_INTERRUPTIBLE | TASK_UNINTERRUPTIBLE, 0, 0);
 }
 EXPORT_SYMBOL(wake_up_process);
 
 int fastcall wake_up_process_sync(struct task_struct * p)
 {
 	return try_to_wake_up(p, TASK_STOPPED | TASK_TRACED |
-			      TASK_RUNNING_MUTEX | TASK_INTERRUPTIBLE |
-			      TASK_UNINTERRUPTIBLE, 1, 0);
+			      TASK_INTERRUPTIBLE | TASK_UNINTERRUPTIBLE, 1, 0);
 }
 EXPORT_SYMBOL(wake_up_process_sync);
 
 int fastcall wake_up_process_mutex(struct task_struct * p)
 {
 	return try_to_wake_up(p, TASK_STOPPED | TASK_TRACED |
-			      TASK_RUNNING_MUTEX | TASK_INTERRUPTIBLE |
-			      TASK_UNINTERRUPTIBLE, 0, 1);
+			      TASK_INTERRUPTIBLE | TASK_UNINTERRUPTIBLE, 0, 1);
 }
 EXPORT_SYMBOL(wake_up_process_mutex);
 
 int fastcall wake_up_process_mutex_sync(struct task_struct * p)
 {
 	return try_to_wake_up(p, TASK_STOPPED | TASK_TRACED |
-			      TASK_RUNNING_MUTEX | TASK_INTERRUPTIBLE |
-			      TASK_UNINTERRUPTIBLE, 1, 1);
+			      TASK_INTERRUPTIBLE | TASK_UNINTERRUPTIBLE, 1, 1);
 }
 EXPORT_SYMBOL(wake_up_process_mutex_sync);
 
 int fastcall wake_up_state(struct task_struct *p, unsigned int state)
 {
-	return try_to_wake_up(p, state | TASK_RUNNING_MUTEX, 0, 0);
+	return try_to_wake_up(p, state, 0, 0);
 }
 
 /*
@@ -3961,10 +3967,10 @@ asmlinkage void __sched __schedule(void)
 	clear_tsk_need_resched(prev);
 	clear_tsk_need_resched_delayed(prev);
 
-	if ((prev->state & ~TASK_RUNNING_MUTEX) &&
-			!(preempt_count() & PREEMPT_ACTIVE)) {
+	if (!(prev->state & TASK_RUNNING_MUTEX) && prev->state &&
+	    !(preempt_count() & PREEMPT_ACTIVE)) {
 		if (unlikely((prev->state & TASK_INTERRUPTIBLE) &&
-				unlikely(signal_pending(prev)))) {
+			     unlikely(signal_pending(prev)))) {
 			prev->state = TASK_RUNNING;
 		} else {
 			touch_softlockup_watchdog();
@@ -4184,8 +4190,7 @@ asmlinkage void __sched preempt_schedule
 int default_wake_function(wait_queue_t *curr, unsigned mode, int sync,
 			  void *key)
 {
-	return try_to_wake_up(curr->private, mode | TASK_RUNNING_MUTEX,
-			      sync, 0);
+	return try_to_wake_up(curr->private, mode, sync, 0);
 }
 EXPORT_SYMBOL(default_wake_function);
 
@@ -5421,8 +5426,9 @@ static void show_task(struct task_struct
 	unsigned state;
 
 	state = p->state ? __ffs(p->state) + 1 : 0;
-	printk("%-13.13s %c [%p]", p->comm,
-		state < sizeof(stat_nam) - 1 ? stat_nam[state] : '?', p);
+	printk("%-13.13s %c (%03lx) [%p]", p->comm,
+	       state < sizeof(stat_nam) - 1 ? stat_nam[state] : '?',
+	       (unsigned long) p->state, p);
 #if BITS_PER_LONG == 32
 	if (0 && (state == TASK_RUNNING))
 		printk(KERN_CONT " running  ");

-- 



* Re: [patch 1/7] ftrace: fix task state printout
  2008-12-19 16:02 ` [patch 1/7] ftrace: fix task state printout Thomas Gleixner
@ 2008-12-19 16:25   ` Frédéric Weisbecker
  2008-12-19 21:36     ` Ingo Molnar
  2008-12-19 21:40     ` Thomas Gleixner
  0 siblings, 2 replies; 14+ messages in thread
From: Frédéric Weisbecker @ 2008-12-19 16:25 UTC
  To: Thomas Gleixner
  Cc: LKML, Ingo Molnar, Steven Rostedt, Peter Zijlstra,
	Clark Williams, Gregory Haskins, Linux-rt

Hi Thomas,

2008/12/19 Thomas Gleixner <tglx@linutronix.de>:
> Impact: tracer task state decoding is wrong, size check is buggy
>
> The tracing code has interesting varieties of printing out task state.
> Unfortunately only one of the instances is correct as it copies the
> code from sched.c:sched_show_task(). The others are plain wrong as
> they treat the bitfield as an integer offset into the character
> array. Also the size check of the character array is wrong as it
> includes the trailing \0.
>
> Use a common state decoder inline which does the Right Thing.
>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>


You've already sent it out. It has been applied on -tip :-)


* Re: [patch 4/7] rtmutex: unify state manipulation
  2008-12-19 16:03 ` [patch 4/7] rtmutex: unify state manipulation Thomas Gleixner
@ 2008-12-19 18:36   ` Steven Rostedt
  2008-12-19 19:40     ` Thomas Gleixner
  0 siblings, 1 reply; 14+ messages in thread
From: Steven Rostedt @ 2008-12-19 18:36 UTC
  To: Thomas Gleixner
  Cc: LKML, Ingo Molnar, Peter Zijlstra, Clark Williams,
	Gregory Haskins, Linux-rt


On Fri, 19 Dec 2008, Thomas Gleixner wrote:

> The manipulation of the waiter task state is copied all over the place
> with slightly different details. Use one set of functions to reduce
> duplicated code and make the handling consistent for all instances.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  kernel/rtmutex.c |  104 ++++++++++++++++++++++++++++++-------------------------
>  1 file changed, 57 insertions(+), 47 deletions(-)
> 
> Index: linux-2.6.24/kernel/rtmutex.c
> ===================================================================
> --- linux-2.6.24.orig/kernel/rtmutex.c
> +++ linux-2.6.24/kernel/rtmutex.c
> @@ -765,13 +765,6 @@ rt_spin_lock_fastunlock(struct rt_mutex 
>  		slowfn(lock);
>  }
>  
> -static inline void
> -update_current(unsigned long new_state, unsigned long *saved_state)
> -{
> -	unsigned long state = xchg(&current->state, new_state);
> -	if (unlikely(state == TASK_RUNNING))
> -		*saved_state = TASK_RUNNING;
> -}
>  
>  #ifdef CONFIG_SMP
>  static int adaptive_wait(struct rt_mutex_waiter *waiter,
> @@ -803,6 +796,43 @@ static int adaptive_wait(struct rt_mutex
>  #endif
>  
>  /*
> + * The state setting needs to preserve the original state and needs to
> + * take care of non rtmutex wakeups.
> + *
> + * The special case here is TASK_INTERRUPTIBLE: We cannot set the
> + * blocked state to TASK_UNINTERRUPTIBLE as we would miss wakeups from
> + * wake_up_interruptible().
> + */
> +static inline unsigned long
> +rt_set_current_blocked_state(unsigned long saved_state)
> +{
> +	unsigned long state, block_state;
> +
> +	do {
> +		state = current->state;
> +		/*
> +		 * Take care of non rtmutex wakeups. rtmutex wakeups
> +		 * set the state to TASK_RUNNING_MUTEX.
> +		 */
> +		if (state == TASK_RUNNING)
> +			saved_state = TASK_RUNNING;
> +
> +		block_state = TASK_UNINTERRUPTIBLE;
> +
> +	} while (cmpxchg(&current->state, state, block_state) != state);

Doesn't this break archs that do not have cmpxchg?
There might be another way. We could just use your TASK_RUNNING_MUTEX
trick for both mutexes and spinlocks.


> +
> +	return saved_state;
> +}
> +
> +static inline void rt_restore_current_state(unsigned long saved_state)
> +{
> +	unsigned long state = xchg(&current->state, saved_state);
> +
> +	if (state == TASK_RUNNING)
> +		current->state = TASK_RUNNING;
> +}
> +
> +/*
>   * Slow path lock function spin_lock style: this variant is very
>   * careful not to miss any non-lock wakeups.
>   *
> @@ -816,7 +846,7 @@ static void fastcall noinline __sched
>  rt_spin_lock_slowlock(struct rt_mutex *lock)
>  {
>  	struct rt_mutex_waiter waiter;
> -	unsigned long saved_state, state, flags;
> +	unsigned long saved_state, flags;
>  	struct task_struct *orig_owner;
>  	int missed = 0;
>  
> @@ -836,7 +866,9 @@ rt_spin_lock_slowlock(struct rt_mutex *l
>  	 * of the lock sleep/wakeup mechanism. When we get a real
>  	 * wakeup the task->state is TASK_RUNNING and we change
>  	 * saved_state accordingly. If we did not get a real wakeup
> -	 * then we return with the saved state.
> +	 * then we return with the saved state. We need to be careful
> +	 * about original state TASK_INTERRUPTIBLE as well, as we
> +	 * could miss a wake_up_interruptible().
>  	 */
>  	saved_state = current->state;
>  
> @@ -880,7 +912,8 @@ rt_spin_lock_slowlock(struct rt_mutex *l
>  
>  		if (adaptive_wait(&waiter, orig_owner)) {
>  			put_task_struct(orig_owner);
> -			update_current(TASK_UNINTERRUPTIBLE, &saved_state);
> +
> +			saved_state = rt_set_current_blocked_state(saved_state);
>  			/*
>  			 * The xchg() in update_current() is an implicit
>  			 * barrier which we rely upon to ensure current->state
> @@ -896,9 +929,7 @@ rt_spin_lock_slowlock(struct rt_mutex *l
>  		current->lock_depth = saved_lock_depth;
>  	}
>  
> -	state = xchg(&current->state, saved_state);
> -	if (unlikely(state == TASK_RUNNING))
> -		current->state = TASK_RUNNING;
> +	rt_restore_current_state(saved_state);
>  
>  	/*
>  	 * Extremely rare case, if we got woken up by a non-mutex wakeup,
> @@ -1333,7 +1364,7 @@ rt_read_slowlock(struct rw_mutex *rwm, i
>  	struct rt_mutex_waiter waiter;
>  	struct rt_mutex *mutex = &rwm->mutex;
>  	int saved_lock_depth = -1;
> -	unsigned long saved_state = -1, state, flags;
> +	unsigned long saved_state, flags;
>  
>  	spin_lock_irqsave(&mutex->wait_lock, flags);
>  	init_rw_lists(rwm);
> @@ -1357,13 +1388,13 @@ rt_read_slowlock(struct rw_mutex *rwm, i
>  		 */
>  		if (unlikely(current->lock_depth >= 0))
>  			saved_lock_depth = rt_release_bkl(mutex, flags);
> -		set_current_state(TASK_UNINTERRUPTIBLE);
>  	} else {
>  		/* Spin lock must preserve BKL */
> -		saved_state = xchg(&current->state, TASK_UNINTERRUPTIBLE);
>  		saved_lock_depth = current->lock_depth;
>  	}
>  
> +	saved_state = rt_set_current_blocked_state(current->state);
> +
>  	for (;;) {
>  		unsigned long saved_flags;
>  
> @@ -1398,23 +1429,12 @@ rt_read_slowlock(struct rw_mutex *rwm, i
>  		spin_lock_irqsave(&mutex->wait_lock, flags);
>  
>  		current->flags |= saved_flags;
> -		if (mtx)
> -			set_current_state(TASK_UNINTERRUPTIBLE);
> -		else {
> +		if (!mtx)
>  			current->lock_depth = saved_lock_depth;
> -			state = xchg(&current->state, TASK_UNINTERRUPTIBLE);
> -			if (unlikely(state == TASK_RUNNING))
> -				saved_state = TASK_RUNNING;
> -		}
> +		saved_state = rt_set_current_blocked_state(saved_state);
>  	}
>  
> -	if (mtx)
> -		set_current_state(TASK_RUNNING);
> -	else {
> -		state = xchg(&current->state, saved_state);
> -		if (unlikely(state == TASK_RUNNING))
> -			current->state = TASK_RUNNING;
> -	}
> +	rt_restore_current_state(saved_state);

This is a bug. A mutex always leaves in the TASK_RUNNING state.

>  
>  	if (unlikely(waiter.task))
>  		remove_waiter(mutex, &waiter, flags);
> @@ -1490,7 +1510,7 @@ rt_write_slowlock(struct rw_mutex *rwm, 
>  	struct rt_mutex *mutex = &rwm->mutex;
>  	struct rt_mutex_waiter waiter;
>  	int saved_lock_depth = -1;
> -	unsigned long flags, saved_state = -1, state;
> +	unsigned long flags, saved_state;
>  
>  	debug_rt_mutex_init_waiter(&waiter);
>  	waiter.task = NULL;
> @@ -1514,13 +1534,13 @@ rt_write_slowlock(struct rw_mutex *rwm, 
>  		 */
>  		if (unlikely(current->lock_depth >= 0))
>  			saved_lock_depth = rt_release_bkl(mutex, flags);
> -		set_current_state(TASK_UNINTERRUPTIBLE);
>  	} else {
>  		/* Spin locks must preserve the BKL */
>  		saved_lock_depth = current->lock_depth;
> -		saved_state = xchg(&current->state, TASK_UNINTERRUPTIBLE);
>  	}
>  
> +	saved_state = rt_set_current_blocked_state(current->state);
> +
>  	for (;;) {
>  		unsigned long saved_flags;
>  
> @@ -1555,24 +1575,14 @@ rt_write_slowlock(struct rw_mutex *rwm, 
>  		spin_lock_irqsave(&mutex->wait_lock, flags);
>  
>  		current->flags |= saved_flags;
> -		if (mtx)
> -			set_current_state(TASK_UNINTERRUPTIBLE);
> -		else {
> +		if (!mtx)
>  			current->lock_depth = saved_lock_depth;
> -			state = xchg(&current->state, TASK_UNINTERRUPTIBLE);
> -			if (unlikely(state == TASK_RUNNING))
> -				saved_state = TASK_RUNNING;
> -		}
> -	}
>  
> -	if (mtx)
> -		set_current_state(TASK_RUNNING);
> -	else {
> -		state = xchg(&current->state, saved_state);
> -		if (unlikely(state == TASK_RUNNING))
> -			current->state = TASK_RUNNING;
> +		saved_state = rt_set_current_blocked_state(saved_state);
>  	}
>  
> +	rt_restore_current_state(saved_state);
> +

Here too.

>  	if (unlikely(waiter.task))
>  		remove_waiter(mutex, &waiter, flags);
>  

What about having the locking of spinlocks and mutexes be almost
identical, like the rwlocks are (rwlocks and rwsems share the same
code)? We can use the TASK_RUNNING_MUTEX trick for both. The only
difference is that a mutex will always leave in the TASK_RUNNING state.

-- Steve



* Re: [patch 4/7] rtmutex: unify state manipulation
  2008-12-19 18:36   ` Steven Rostedt
@ 2008-12-19 19:40     ` Thomas Gleixner
  0 siblings, 0 replies; 14+ messages in thread
From: Thomas Gleixner @ 2008-12-19 19:40 UTC
  To: Steven Rostedt
  Cc: LKML, Ingo Molnar, Peter Zijlstra, Clark Williams,
	Gregory Haskins, Linux-rt

On Fri, 19 Dec 2008, Steven Rostedt wrote:
> > +	} while (cmpxchg(&current->state, state, block_state) != state);
> 
> Doesn't this break archs that do not have cmpxchg?

We can use xchg. The waiter is protected against the RUNNING_MUTEX
state change via the mutex->lock. It was just overcaution on my part
when I started to fix this.
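
A sketch of what such an xchg() based variant could look like,
modeled on the update_current() helper the patch removes (this is an
assumption, not code from the thread; it relies on the wait_lock
serialization described above):

	static inline unsigned long
	rt_set_current_blocked_state(unsigned long saved_state)
	{
		unsigned long state = xchg(&current->state,
					   TASK_UNINTERRUPTIBLE);

		/* A real wakeup raced in before we blocked: preserve it. */
		if (state == TASK_RUNNING)
			saved_state = TASK_RUNNING;

		return saved_state;
	}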

> There might be another way. We could just use your TASK_RUNNING_MUTEX or 
> trick for both mutexes and spinlocks.

The mechanisms should be the same for everything now.
 
> > -	if (mtx)
> > -		set_current_state(TASK_RUNNING);
> > -	else {
> > -		state = xchg(&current->state, saved_state);
> > -		if (unlikely(state == TASK_RUNNING))
> > -			current->state = TASK_RUNNING;
> > -	}
> > +	rt_restore_current_state(saved_state);
> 
> This is a bug. A mutex always leaves in the TASK_RUNNING state.
 
Duh, yes. So this should be:

     rt_restore_current_state(!mtx ? saved_state : TASK_RUNNING);

> What about having the locking spinlocks and mutexes be almost identical. 
> Like the rwlocks are (rwlocks and rwsems share the same code). We can use 
> the RT_MUTEX_RUNNING trick for both. The only difference is that a mutex 
> will always leave in the TASK_RUNNING state.

Good point.

Thanks,

	tglx


* Re: [patch 1/7] ftrace: fix task state printout
  2008-12-19 16:25   ` Frédéric Weisbecker
@ 2008-12-19 21:36     ` Ingo Molnar
  2008-12-19 21:40     ` Thomas Gleixner
  1 sibling, 0 replies; 14+ messages in thread
From: Ingo Molnar @ 2008-12-19 21:36 UTC
  To: Frédéric Weisbecker
  Cc: Thomas Gleixner, LKML, Steven Rostedt, Peter Zijlstra,
	Clark Williams, Gregory Haskins, Linux-rt


* Frédéric Weisbecker <fweisbec@gmail.com> wrote:

> Hi Thomas,
> 
> 2008/12/19 Thomas Gleixner <tglx@linutronix.de>:
> > Impact: tracer task state decoding is wrong, size check is buggy
> >
> > The tracing code has interesting varieties of printing out task state.
> > Unfortunately only one of the instances is correct as it copies the
> > code from sched.c:sched_show_task(). The others are plain wrong as
> > they treat the bitfield as an integer offset into the character
> > array. Also the size check of the character array is wrong as it
> > includes the trailing \0.
> >
> > Use a common state decoder inline which does the Right Thing.
> >
> > Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> 
> You've already sent it out. It has been applied on -tip :-)

That's why the subject line of the series says "2.6.24.7-rt24 bugfixes" I
guess :-)

	Ingo


* Re: [patch 1/7] ftrace: fix task state printout
  2008-12-19 16:25   ` Frédéric Weisbecker
  2008-12-19 21:36     ` Ingo Molnar
@ 2008-12-19 21:40     ` Thomas Gleixner
  2008-12-20 13:08       ` Frédéric Weisbecker
  1 sibling, 1 reply; 14+ messages in thread
From: Thomas Gleixner @ 2008-12-19 21:40 UTC
  To: Frédéric Weisbecker
  Cc: LKML, Ingo Molnar, Steven Rostedt, Peter Zijlstra,
	Clark Williams, Gregory Haskins, Linux-rt


On Fri, 19 Dec 2008, Frédéric Weisbecker wrote:
> Hi Thomas,
> 
> 2008/12/19 Thomas Gleixner <tglx@linutronix.de>:
> > Impact: tracer task state decoding is wrong, size check is buggy
> >
> > The tracing code has interesting varieties of printing out task state.
> > Unfortunately only one of the instances is correct as it copies the
> > code from sched.c:sched_show_task(). The others are plain wrong as
> > they treat the bitfield as an integer offset into the character
> > array. Also the size check of the character array is wrong as it
> > includes the trailing \0.
> >
> > Use a common state decoder inline which does the Right Thing.
> >
> > Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> 
> 
> You've already sent it out. It has been applied on -tip :-)

Yep, but these patches are against 2.6.24.7-rt24 as noted in the
subject line of patch 0/7 :)

Thanks,

	tglx


* Re: [patch 1/7] ftrace: fix task state printout
  2008-12-19 21:40     ` Thomas Gleixner
@ 2008-12-20 13:08       ` Frédéric Weisbecker
  0 siblings, 0 replies; 14+ messages in thread
From: Frédéric Weisbecker @ 2008-12-20 13:08 UTC
  To: Thomas Gleixner
  Cc: LKML, Ingo Molnar, Steven Rostedt, Peter Zijlstra,
	Clark Williams, Gregory Haskins, Linux-rt

2008/12/19 Thomas Gleixner <tglx@linutronix.de>:
> On Fri, 19 Dec 2008, Frédéric Weisbecker wrote:
>> Hi Thomas,
>>
>> 2008/12/19 Thomas Gleixner <tglx@linutronix.de>:
>> > Impact: tracer task state decoding is wrong, size check is buggy
>> >
>> > The tracing code has interesting varieties of printing out task state.
>> > Unfortunately only one of the instances is correct as it copies the
>> > code from sched.c:sched_show_task(). The others are plain wrong as
>> > they treat the bitfield as an integer offset into the character
>> > array. Also the size check of the character array is wrong as it
>> > includes the trailing \0.
>> >
>> > Use a common state decoder inline which does the Right Thing.
>> >
>> > Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
>>
>>
>> You've already sent it out. It has been applied on -tip :-)
>
> Yep, but these patches are against 2.6.24.7-rt24 as noted in the
> subject line of patch 0/7 :)
>
> Thanks,
>
>        tglx
>

Oh ok sorry....

