* [patch 0/3] rtmutex: Propagate priority setting into lock chains @ 2006-06-22 9:08 Thomas Gleixner 2006-06-22 9:08 ` [patch 1/3] Drop tasklist lock in do_sched_setscheduler Thomas Gleixner ` (2 more replies) 0 siblings, 3 replies; 15+ messages in thread From: Thomas Gleixner @ 2006-06-22 9:08 UTC (permalink / raw) To: LKML; +Cc: Andrew Morton, Ingo Molnar Andrew, Please add the following patches to the rtmutex / pi-futex patchset. This ensures that asynchronous setscheduler() calls are properly propagated into an already blocked task's lock dependency chain. The test suite has also been improved to verify this behaviour. tglx -- ^ permalink raw reply [flat|nested] 15+ messages in thread
* [patch 1/3] Drop tasklist lock in do_sched_setscheduler 2006-06-22 9:08 [patch 0/3] rtmutex: Propagate priority setting into lock chains Thomas Gleixner @ 2006-06-22 9:08 ` Thomas Gleixner 2006-06-23 1:48 ` Andrew Morton 2006-06-24 8:07 ` Andrew Morton 2006-06-22 9:08 ` [patch 2/3] rtmutex: Propagate priority settings into PI lock chains Thomas Gleixner 2006-06-22 9:08 ` [patch 3/3] rtmutex: Modify rtmutex-tester to test the setscheduler propagation Thomas Gleixner 2 siblings, 2 replies; 15+ messages in thread From: Thomas Gleixner @ 2006-06-22 9:08 UTC (permalink / raw) To: LKML; +Cc: Andrew Morton, Ingo Molnar [-- Attachment #1: drop-tasklist-lock-in-do-sched-setscheduler.patch --] [-- Type: text/plain, Size: 948 bytes --] There is no need to hold tasklist_lock across the setscheduler call, when we pin the task structure with get_task_struct(). Interrupts are disabled in setscheduler anyway and the permission checks do not need interrupts disabled. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> kernel/sched.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) Index: linux-2.6.17-mm/kernel/sched.c =================================================================== --- linux-2.6.17-mm.orig/kernel/sched.c 2006-06-22 10:26:11.000000000 +0200 +++ linux-2.6.17-mm/kernel/sched.c 2006-06-22 10:26:11.000000000 +0200 @@ -4140,8 +4140,10 @@ read_unlock_irq(&tasklist_lock); return -ESRCH; } - retval = sched_setscheduler(p, policy, &lparam); + get_task_struct(p); read_unlock_irq(&tasklist_lock); + retval = sched_setscheduler(p, policy, &lparam); + put_task_struct(p); return retval; } -- ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [patch 1/3] Drop tasklist lock in do_sched_setscheduler 2006-06-22 9:08 ` [patch 1/3] Drop tasklist lock in do_sched_setscheduler Thomas Gleixner @ 2006-06-23 1:48 ` Andrew Morton 2006-06-23 6:01 ` Thomas Gleixner 2006-06-24 8:07 ` Andrew Morton 1 sibling, 1 reply; 15+ messages in thread From: Andrew Morton @ 2006-06-23 1:48 UTC (permalink / raw) To: Thomas Gleixner; +Cc: linux-kernel, mingo On Thu, 22 Jun 2006 09:08:38 -0000 Thomas Gleixner <tglx@linutronix.de> wrote: > > There is no need to hold tasklist_lock across the setscheduler call, when we > pin the task structure with get_task_struct(). Interrupts are disabled in > setscheduler anyway and the permission checks do not need interrupts disabled. > > Signed-off-by: Thomas Gleixner <tglx@linutronix.de> > Signed-off-by: Ingo Molnar <mingo@elte.hu> > > kernel/sched.c | 4 +++- > 1 file changed, 3 insertions(+), 1 deletion(-) > > Index: linux-2.6.17-mm/kernel/sched.c > =================================================================== > --- linux-2.6.17-mm.orig/kernel/sched.c 2006-06-22 10:26:11.000000000 +0200 > +++ linux-2.6.17-mm/kernel/sched.c 2006-06-22 10:26:11.000000000 +0200 > @@ -4140,8 +4140,10 @@ > read_unlock_irq(&tasklist_lock); > return -ESRCH; > } > - retval = sched_setscheduler(p, policy, &lparam); > + get_task_struct(p); > read_unlock_irq(&tasklist_lock); > + retval = sched_setscheduler(p, policy, &lparam); > + put_task_struct(p); > return retval; > } > Is this optimisation actually related to the rt-mutex patches, or to the other two patches? ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [patch 1/3] Drop tasklist lock in do_sched_setscheduler 2006-06-23 1:48 ` Andrew Morton @ 2006-06-23 6:01 ` Thomas Gleixner 0 siblings, 0 replies; 15+ messages in thread From: Thomas Gleixner @ 2006-06-23 6:01 UTC (permalink / raw) To: Andrew Morton; +Cc: linux-kernel, mingo On Thu, 2006-06-22 at 18:48 -0700, Andrew Morton wrote: > On Thu, 22 Jun 2006 09:08:38 -0000 > Thomas Gleixner <tglx@linutronix.de> wrote: > > > > > There is no need to hold tasklist_lock across the setscheduler call, when we > > pin the task structure with get_task_struct(). Interrupts are disabled in > > setscheduler anyway and the permission checks do not need interrupts disabled. > > > > Signed-off-by: Thomas Gleixner <tglx@linutronix.de> > > Signed-off-by: Ingo Molnar <mingo@elte.hu> > > > > kernel/sched.c | 4 +++- > > 1 file changed, 3 insertions(+), 1 deletion(-) > > > > Index: linux-2.6.17-mm/kernel/sched.c > > =================================================================== > > --- linux-2.6.17-mm.orig/kernel/sched.c 2006-06-22 10:26:11.000000000 +0200 > > +++ linux-2.6.17-mm/kernel/sched.c 2006-06-22 10:26:11.000000000 +0200 > > @@ -4140,8 +4140,10 @@ > > read_unlock_irq(&tasklist_lock); > > return -ESRCH; > > } > > - retval = sched_setscheduler(p, policy, &lparam); > > + get_task_struct(p); > > read_unlock_irq(&tasklist_lock); > > + retval = sched_setscheduler(p, policy, &lparam); > > + put_task_struct(p); > > return retval; > > } > > > > Is this optimisation actually related to the rt-mutex patches, or to the > other two patches? Yes. We want neither interrupts disabled nor tasklist_lock held when it comes to the lock chain walk. So it's a preparatory patch and a general optimization. tglx ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [patch 1/3] Drop tasklist lock in do_sched_setscheduler 2006-06-22 9:08 ` [patch 1/3] Drop tasklist lock in do_sched_setscheduler Thomas Gleixner 2006-06-23 1:48 ` Andrew Morton @ 2006-06-24 8:07 ` Andrew Morton 2006-06-24 8:25 ` Thomas Gleixner 1 sibling, 1 reply; 15+ messages in thread From: Andrew Morton @ 2006-06-24 8:07 UTC (permalink / raw) To: Thomas Gleixner; +Cc: linux-kernel, mingo On Thu, 22 Jun 2006 09:08:38 -0000 Thomas Gleixner <tglx@linutronix.de> wrote: > > There is no need to hold tasklist_lock across the setscheduler call, when we > pin the task structure with get_task_struct(). Interrupts are disabled in > setscheduler anyway and the permission checks do not need interrupts disabled. > These three patches had intricate dependencies upon the __IP__ and __IP_DECL__ gunk which later patches removed, so these patches do not compile against the pi-futex patches. So I dropped these. And I'll drop the lockdep patches, so you'll be able to redo these. ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [patch 1/3] Drop tasklist lock in do_sched_setscheduler 2006-06-24 8:07 ` Andrew Morton @ 2006-06-24 8:25 ` Thomas Gleixner 0 siblings, 0 replies; 15+ messages in thread From: Thomas Gleixner @ 2006-06-24 8:25 UTC (permalink / raw) To: Andrew Morton; +Cc: linux-kernel, mingo On Sat, 2006-06-24 at 01:07 -0700, Andrew Morton wrote: > These three patches had intricate dependencies upon the __IP__ and > __IP_DECL__ gunk which later patches removed, so these patches do not > compile against the pi-futex patches. > > So I dropped these. > > And I'll drop the lockdep patches, so you'll be able to redo these. Will do. tglx ^ permalink raw reply [flat|nested] 15+ messages in thread
* [patch 2/3] rtmutex: Propagate priority settings into PI lock chains 2006-06-22 9:08 [patch 0/3] rtmutex: Propagate priority setting into lock chains Thomas Gleixner 2006-06-22 9:08 ` [patch 1/3] Drop tasklist lock in do_sched_setscheduler Thomas Gleixner @ 2006-06-22 9:08 ` Thomas Gleixner 2006-06-22 14:20 ` Steven Rostedt 2006-06-23 2:06 ` [patch 2/3] rtmutex: Propagate priority settings into PI lock chains Andrew Morton 2006-06-22 9:08 ` [patch 3/3] rtmutex: Modify rtmutex-tester to test the setscheduler propagation Thomas Gleixner 2 siblings, 2 replies; 15+ messages in thread From: Thomas Gleixner @ 2006-06-22 9:08 UTC (permalink / raw) To: LKML; +Cc: Andrew Morton, Ingo Molnar [-- Attachment #1: rt-mutex-propagate-pi-on-set-scheduler.patch --] [-- Type: text/plain, Size: 5025 bytes --] When the priority of a task which is blocked on a lock changes, we must propagate this change into the PI lock chain. Therefore the chain walk code is changed to get rid of the references to current, to avoid false positives in the deadlock detector, as setscheduler might be called by a task which holds the lock on which the task whose priority is changed is blocked. Also add some comments about the get/put_task_struct usage to avoid confusion. 
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> include/linux/sched.h | 2 ++ kernel/rtmutex.c | 41 ++++++++++++++++++++++++++++++++++++----- kernel/sched.c | 2 ++ 3 files changed, 40 insertions(+), 5 deletions(-) Index: linux-2.6.17-mm/kernel/rtmutex.c =================================================================== --- linux-2.6.17-mm.orig/kernel/rtmutex.c 2006-06-22 10:26:11.000000000 +0200 +++ linux-2.6.17-mm/kernel/rtmutex.c 2006-06-22 10:26:11.000000000 +0200 @@ -160,7 +160,8 @@ static int rt_mutex_adjust_prio_chain(task_t *task, int deadlock_detect, struct rt_mutex *orig_lock, - struct rt_mutex_waiter *orig_waiter) + struct rt_mutex_waiter *orig_waiter, + struct task_struct *top_task) { struct rt_mutex *lock; struct rt_mutex_waiter *waiter, *top_waiter = orig_waiter; @@ -188,7 +189,7 @@ prev_max = max_lock_depth; printk(KERN_WARNING "Maximum lock depth %d reached " "task: %s (%d)\n", max_lock_depth, - current->comm, current->pid); + top_task->comm, top_task->pid); } put_task_struct(task); @@ -228,7 +229,7 @@ } /* Deadlock detection */ - if (lock == orig_lock || rt_mutex_owner(lock) == current) { + if (lock == orig_lock || rt_mutex_owner(lock) == top_task) { debug_rt_mutex_deadlock(deadlock_detect, orig_waiter, lock); spin_unlock(&lock->wait_lock); ret = deadlock_detect ? -EDEADLK : 0; @@ -431,6 +432,7 @@ __rt_mutex_adjust_prio(owner); if (owner->pi_blocked_on) { boost = 1; + /* gets dropped in rt_mutex_adjust_prio_chain()! */ get_task_struct(owner); } spin_unlock_irqrestore(&owner->pi_lock, flags); @@ -439,6 +441,7 @@ spin_lock_irqsave(&owner->pi_lock, flags); if (owner->pi_blocked_on) { boost = 1; + /* gets dropped in rt_mutex_adjust_prio_chain()! 
*/ get_task_struct(owner); } spin_unlock_irqrestore(&owner->pi_lock, flags); @@ -448,7 +451,8 @@ spin_unlock(&lock->wait_lock); - res = rt_mutex_adjust_prio_chain(owner, detect_deadlock, lock, waiter); + res = rt_mutex_adjust_prio_chain(owner, detect_deadlock, lock, waiter, + current); spin_lock(&lock->wait_lock); @@ -549,6 +553,7 @@ if (owner->pi_blocked_on) { boost = 1; + /* gets dropped in rt_mutex_adjust_prio_chain()! */ get_task_struct(owner); } spin_unlock_irqrestore(&owner->pi_lock, flags); @@ -561,12 +566,37 @@ spin_unlock(&lock->wait_lock); - rt_mutex_adjust_prio_chain(owner, 0, lock, NULL); + rt_mutex_adjust_prio_chain(owner, 0, lock, NULL, current); spin_lock(&lock->wait_lock); } /* + * Recheck the pi chain, in case we got a priority setting + * + * Called from sched_setscheduler + */ +void rt_mutex_adjust_pi(struct task_struct *task) +{ + struct rt_mutex_waiter *waiter; + unsigned long flags; + + spin_lock_irqsave(&task->pi_lock, flags); + + waiter = task->pi_blocked_on; + if (!waiter || waiter->list_entry.prio == task->prio) { + spin_unlock_irqrestore(&task->pi_lock, flags); + return; + } + + /* gets dropped in rt_mutex_adjust_prio_chain()! 
*/ + get_task_struct(task); + spin_unlock_irqrestore(&task->pi_lock, flags); + + rt_mutex_adjust_prio_chain(task, 0, NULL, NULL, task); +} + +/* * Slow path lock function: */ static int __sched @@ -633,6 +663,7 @@ if (unlikely(ret)) break; } + spin_unlock(&lock->wait_lock); debug_rt_mutex_print_deadlock(&waiter); Index: linux-2.6.17-mm/include/linux/sched.h =================================================================== --- linux-2.6.17-mm.orig/include/linux/sched.h 2006-06-22 10:26:11.000000000 +0200 +++ linux-2.6.17-mm/include/linux/sched.h 2006-06-22 10:26:11.000000000 +0200 @@ -1125,11 +1125,13 @@ #ifdef CONFIG_RT_MUTEXES extern int rt_mutex_getprio(task_t *p); extern void rt_mutex_setprio(task_t *p, int prio); +extern void rt_mutex_adjust_pi(task_t *p); #else static inline int rt_mutex_getprio(task_t *p) { return p->normal_prio; } +# define rt_mutex_adjust_pi(p) do { } while (0) #endif extern void set_user_nice(task_t *p, long nice); Index: linux-2.6.17-mm/kernel/sched.c =================================================================== --- linux-2.6.17-mm.orig/kernel/sched.c 2006-06-22 10:26:11.000000000 +0200 +++ linux-2.6.17-mm/kernel/sched.c 2006-06-22 10:26:11.000000000 +0200 @@ -4119,6 +4119,8 @@ __task_rq_unlock(rq); spin_unlock_irqrestore(&p->pi_lock, flags); + rt_mutex_adjust_pi(p); + return 0; } EXPORT_SYMBOL_GPL(sched_setscheduler); -- ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [patch 2/3] rtmutex: Propagate priority settings into PI lock chains 2006-06-22 9:08 ` [patch 2/3] rtmutex: Propagate priority settings into PI lock chains Thomas Gleixner @ 2006-06-22 14:20 ` Steven Rostedt 2006-06-22 18:02 ` Esben Nielsen 2006-06-23 2:08 ` Andrew Morton 1 sibling, 2 replies; 15+ messages in thread From: Steven Rostedt @ 2006-06-22 14:20 UTC (permalink / raw) To: Thomas Gleixner; +Cc: LKML, Andrew Morton, Ingo Molnar I've stated these comments on the -rt thread, but it's more important to repeat them here. On Thu, 22 Jun 2006, Thomas Gleixner wrote: > /* > + * Recheck the pi chain, in case we got a priority setting > + * > + * Called from sched_setscheduler > + */ > +void rt_mutex_adjust_pi(struct task_struct *task) > +{ > + struct rt_mutex_waiter *waiter; > + unsigned long flags; > + > + spin_lock_irqsave(&task->pi_lock, flags); > + > + waiter = task->pi_blocked_on; Good to see you fixed the waiter race that I mentioned in the other thread. You did it before I mentioned it, but I didn't read this yet ;) > + if (!waiter || waiter->list_entry.prio == task->prio) { > + spin_unlock_irqrestore(&task->pi_lock, flags); > + return; > + } > + > + /* gets dropped in rt_mutex_adjust_prio_chain()! */ > + get_task_struct(task); > + spin_unlock_irqrestore(&task->pi_lock, flags); > + > + rt_mutex_adjust_prio_chain(task, 0, NULL, NULL, task); The above means that you can't ever call sched_setscheduler from an interrupt handler (or softirq), since rt_mutex_adjust_prio_chain grabs wait_lock, which is not for interrupt use. > +} > + > +/* > * Slow path lock function: > */ > static int __sched > @@ -633,6 +663,7 @@ > if (unlikely(ret)) > break; > } > + > spin_unlock(&lock->wait_lock); > > debug_rt_mutex_print_deadlock(&waiter); [...] 
> > extern void set_user_nice(task_t *p, long nice); > Index: linux-2.6.17-mm/kernel/sched.c > =================================================================== > --- linux-2.6.17-mm.orig/kernel/sched.c 2006-06-22 10:26:11.000000000 +0200 > +++ linux-2.6.17-mm/kernel/sched.c 2006-06-22 10:26:11.000000000 +0200 Oh and Thomas... export QUILT_DIFF_OPTS='-p' > @@ -4119,6 +4119,8 @@ Can sched_setscheduler be called from interrupt context? > __task_rq_unlock(rq); > spin_unlock_irqrestore(&p->pi_lock, flags); > > + rt_mutex_adjust_pi(p); > + > return 0; > } > EXPORT_SYMBOL_GPL(sched_setscheduler); I haven't found any place where this is called from interrupt context, but with this added, it cannot be. So it should be documented that sched_setscheduler grabs locks and must not be called from interrupt context. -- Steve Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Index: linux-2.6.17-rc4-mm1/kernel/sched.c =================================================================== --- linux-2.6.17-rc4-mm1.orig/kernel/sched.c 2006-06-22 10:13:50.000000000 -0400 +++ linux-2.6.17-rc4-mm1/kernel/sched.c 2006-06-22 10:15:09.000000000 -0400 @@ -4006,6 +4006,10 @@ static void __setscheduler(struct task_s * @p: the task in question. * @policy: new policy. * @param: structure containing the new RT priority. + * + * Do not call this from interrupt context. If RT_MUTEXES is configured + * then it can grab spin locks that are not protected with interrupts + * disabled. */ int sched_setscheduler(struct task_struct *p, int policy, struct sched_param *param) ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [patch 2/3] rtmutex: Propagate priority settings into PI lock chains 2006-06-22 14:20 ` Steven Rostedt @ 2006-06-22 18:02 ` Esben Nielsen 2006-06-23 6:27 ` Steven Rostedt 2006-06-23 2:08 ` Andrew Morton 1 sibling, 1 reply; 15+ messages in thread From: Esben Nielsen @ 2006-06-22 18:02 UTC (permalink / raw) To: Steven Rostedt; +Cc: Thomas Gleixner, LKML, Andrew Morton, Ingo Molnar On Thu, 22 Jun 2006, Steven Rostedt wrote: > > I've stated these comments on the -rt thread, but it's more important to > repeat them here. > > On Thu, 22 Jun 2006, Thomas Gleixner wrote: > >> /* >> + * Recheck the pi chain, in case we got a priority setting >> + * >> + * Called from sched_setscheduler >> + */ >> +void rt_mutex_adjust_pi(struct task_struct *task) >> +{ >> + struct rt_mutex_waiter *waiter; >> + unsigned long flags; >> + >> + spin_lock_irqsave(&task->pi_lock, flags); >> + >> + waiter = task->pi_blocked_on; > > Good to see you fixed the waiter race that I mentioned in the other > thread. You did it before I mentioned it, but I didn't read this yet ;) > >> + if (!waiter || waiter->list_entry.prio == task->prio) { >> + spin_unlock_irqrestore(&task->pi_lock, flags); >> + return; >> + } >> + >> + /* gets dropped in rt_mutex_adjust_prio_chain()! */ >> + get_task_struct(task); >> + spin_unlock_irqrestore(&task->pi_lock, flags); >> + >> + rt_mutex_adjust_prio_chain(task, 0, NULL, NULL, task); > > The above means that you cant ever call sched_setscheduler from a > interrupt handler (or softirq). The rt_mutex_adjust_prio_chain since that > grabs wait_lock which is not for interrupt use. Worse in RT context: It makes it unhealthy to call from a RT task as it doesn't have predictable runtime unless you know that the target task is not blocked on a deep locking tree. 
I know this is very unlikely to happen very often in real life and this thread isn't about preempt-realtime, but I'll say it anyway: Hard realtime is about avoiding surprisingly long execution times - especially those which are extremely unlikely to happen, but nevertheless are possible, because you are not very likely to see those situations in any tests, and therefore you can suddenly miss deadlines in the field without a clue what is happening. Esben > >> +} >> + >> +/* >> * Slow path lock function: >> */ >> static int __sched >> @@ -633,6 +663,7 @@ >> if (unlikely(ret)) >> break; >> } >> + >> spin_unlock(&lock->wait_lock); >> >> debug_rt_mutex_print_deadlock(&waiter); > > [...] > >> >> extern void set_user_nice(task_t *p, long nice); >> Index: linux-2.6.17-mm/kernel/sched.c >> =================================================================== >> --- linux-2.6.17-mm.orig/kernel/sched.c 2006-06-22 10:26:11.000000000 +0200 >> +++ linux-2.6.17-mm/kernel/sched.c 2006-06-22 10:26:11.000000000 +0200 > > Oh and Thomas... > > export QUILT_DIFF_OPTS='-p' > >> @@ -4119,6 +4119,8 @@ > > Can sched_setscheduler be called from interrupt context? > >> __task_rq_unlock(rq); >> spin_unlock_irqrestore(&p->pi_lock, flags); >> >> + rt_mutex_adjust_pi(p); >> + >> return 0; >> } >> EXPORT_SYMBOL_GPL(sched_setscheduler); > > I haven't found any place that this was called from interrupt context, but > with this added, it can not be. So it should be documented that > sched_setscheduler grabs locks that are not to be called from interrupt > context. > > -- Steve > > Signed-off-by: Steven Rostedt <rostedt@goodmis.org> > > Index: linux-2.6.17-rc4-mm1/kernel/sched.c > =================================================================== > --- linux-2.6.17-rc4-mm1.orig/kernel/sched.c 2006-06-22 10:13:50.000000000 -0400 > +++ linux-2.6.17-rc4-mm1/kernel/sched.c 2006-06-22 10:15:09.000000000 -0400 > @@ -4006,6 +4006,10 @@ static void __setscheduler(struct task_s > * @p: the task in question. 
> * @policy: new policy. > * @param: structure containing the new RT priority. > + * > + * Do not call this from interrupt context. If RT_MUTEXES is configured > + * then it can grab spin locks that are not protected with interrupts > + * disabled. > */ > int sched_setscheduler(struct task_struct *p, int policy, > struct sched_param *param) > - > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in > the body of a message to majordomo@vger.kernel.org > More majordomo info at http://vger.kernel.org/majordomo-info.html > Please read the FAQ at http://www.tux.org/lkml/ > ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [patch 2/3] rtmutex: Propagate priority settings into PI lock chains 2006-06-22 18:02 ` Esben Nielsen @ 2006-06-23 6:27 ` Steven Rostedt 0 siblings, 0 replies; 15+ messages in thread From: Steven Rostedt @ 2006-06-23 6:27 UTC (permalink / raw) To: Esben Nielsen; +Cc: Thomas Gleixner, LKML, Andrew Morton, Ingo Molnar On Thu, 22 Jun 2006, Esben Nielsen wrote: > > The above means that you cant ever call sched_setscheduler from a > > interrupt handler (or softirq). The rt_mutex_adjust_prio_chain since that > > grabs wait_lock which is not for interrupt use. > > Worse in RT context: It makes it unhealthy to call from a RT task as it > doesn't have predictable runtime unless you know that the target task is > not blocked on a deep locking tree. > > I know this is very unlikely to happen very often in real life and this > thread isn't about preempt-realtime, but I'll say it anyway: Hard realtime Esben, you are right. This is not about RT, so it does _not_ belong in this thread. Please keep the topic in this thread about -mm. We already have an RT thread to discuss this in. My comments here were about the fact that setscheduler went from interrupt-context friendly to interrupt-context unfriendly, and I thought it would be good to document that fact. I like Andrew's answer better: document it with a BUG_ON(in_interrupt()). -- Steve > is about avoiding surprisingly long execution times - especially those > which are extremely unlikely to happen, but nevertheless are possible, > because you are not very likely to see those situations in any tests, and > therefore you can suddenly miss deadlines in the field without a clue what > is happening. > ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [patch 2/3] rtmutex: Propagate priority settings into PI lock chains 2006-06-22 14:20 ` Steven Rostedt 2006-06-22 18:02 ` Esben Nielsen @ 2006-06-23 2:08 ` Andrew Morton 2006-06-23 9:28 ` [PATCH -mm] bug if setscheduler is called from interrupt context Steven Rostedt 1 sibling, 1 reply; 15+ messages in thread From: Andrew Morton @ 2006-06-23 2:08 UTC (permalink / raw) To: Steven Rostedt; +Cc: tglx, linux-kernel, mingo On Thu, 22 Jun 2006 10:20:59 -0400 (EDT) Steven Rostedt <rostedt@goodmis.org> wrote: > > + if (!waiter || waiter->list_entry.prio == task->prio) { > > + spin_unlock_irqrestore(&task->pi_lock, flags); > > + return; > > + } > > + > > + /* gets dropped in rt_mutex_adjust_prio_chain()! */ > > + get_task_struct(task); > > + spin_unlock_irqrestore(&task->pi_lock, flags); > > + > > + rt_mutex_adjust_prio_chain(task, 0, NULL, NULL, task); > > The above means that you cant ever call sched_setscheduler from a > interrupt handler (or softirq). The rt_mutex_adjust_prio_chain since that > grabs wait_lock which is not for interrupt use. Running setscheduler() from IRQ context sounds rather perverse. BUG_ON(in_interrupt()) would reduce the temptation. ^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH -mm] bug if setscheduler is called from interrupt context. 2006-06-23 2:08 ` Andrew Morton @ 2006-06-23 9:28 ` Steven Rostedt 0 siblings, 0 replies; 15+ messages in thread From: Steven Rostedt @ 2006-06-23 9:28 UTC (permalink / raw) To: Andrew Morton; +Cc: tglx, linux-kernel, mingo Thomas Gleixner is adding a call to an rtmutex function in setscheduler. This call grabs a spinlock that is not always taken with interrupts disabled, which means that setscheduler can't be called from interrupt context. To prevent this from happening in the future, this patch adds a BUG_ON(in_interrupt()) in that function. (Thanks to akpm <aka. Andrew Morton> for this suggestion). -- Steve Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Index: linux-2.6.17-mm1/kernel/sched.c =================================================================== --- linux-2.6.17-mm1.orig/kernel/sched.c 2006-06-23 05:19:41.000000000 -0400 +++ linux-2.6.17-mm1/kernel/sched.c 2006-06-23 05:20:44.000000000 -0400 @@ -4034,6 +4034,8 @@ int sched_setscheduler(struct task_struc unsigned long flags; runqueue_t *rq; + /* may grab non-irq protected spin_locks */ + BUG_ON(in_interrupt()); recheck: /* double check policy once rq lock held */ if (policy < 0) ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [patch 2/3] rtmutex: Propagate priority settings into PI lock chains 2006-06-22 9:08 ` [patch 2/3] rtmutex: Propagate priority settings into PI lock chains Thomas Gleixner 2006-06-22 14:20 ` Steven Rostedt @ 2006-06-23 2:06 ` Andrew Morton 2006-06-23 16:26 ` Thomas Gleixner 1 sibling, 1 reply; 15+ messages in thread From: Andrew Morton @ 2006-06-23 2:06 UTC (permalink / raw) To: Thomas Gleixner; +Cc: linux-kernel, mingo On Thu, 22 Jun 2006 09:08:39 -0000 Thomas Gleixner <tglx@linutronix.de> wrote: > When the priority of a task, which is blocked on a lock, changes we must > propagate this change into the PI lock chain. Therefor the chain walk > code is changed to get rid of the references to current to avoid false > positives in the deadlock detector, as setscheduler might be called by a > task which holds the lock on which the task whose priority is changed is > blocked. > Also add some comments about the get/put_task_struct usage to avoid > confusion. > > Signed-off-by: Thomas Gleixner <tglx@linutronix.de> > Signed-off-by: Ingo Molnar <mingo@elte.hu> > > include/linux/sched.h | 2 ++ > kernel/rtmutex.c | 41 ++++++++++++++++++++++++++++++++++++----- That file's full of lockdep droppings from a different patchset. Please check the end result. ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [patch 2/3] rtmutex: Propagate priority settings into PI lock chains 2006-06-23 2:06 ` [patch 2/3] rtmutex: Propagate priority settings into PI lock chains Andrew Morton @ 2006-06-23 16:26 ` Thomas Gleixner 0 siblings, 0 replies; 15+ messages in thread From: Thomas Gleixner @ 2006-06-23 16:26 UTC (permalink / raw) To: Andrew Morton; +Cc: linux-kernel, mingo On Thu, 2006-06-22 at 19:06 -0700, Andrew Morton wrote: > On Thu, 22 Jun 2006 09:08:39 -0000 > Thomas Gleixner <tglx@linutronix.de> wrote: > > > When the priority of a task, which is blocked on a lock, changes we must > > propagate this change into the PI lock chain. Therefor the chain walk > > code is changed to get rid of the references to current to avoid false > > positives in the deadlock detector, as setscheduler might be called by a > > task which holds the lock on which the task whose priority is changed is > > blocked. > > Also add some comments about the get/put_task_struct usage to avoid > > confusion. > > > > Signed-off-by: Thomas Gleixner <tglx@linutronix.de> > > Signed-off-by: Ingo Molnar <mingo@elte.hu> > > > > include/linux/sched.h | 2 ++ > > kernel/rtmutex.c | 41 ++++++++++++++++++++++++++++++++++++----- > > That file's full of lockdep droppings from a different patchset. Please > check the end result. Will do in a minute tglx ^ permalink raw reply [flat|nested] 15+ messages in thread
* [patch 3/3] rtmutex: Modify rtmutex-tester to test the setscheduler propagation 2006-06-22 9:08 [patch 0/3] rtmutex: Propagate priority setting into lock chains Thomas Gleixner 2006-06-22 9:08 ` [patch 1/3] Drop tasklist lock in do_sched_setscheduler Thomas Gleixner 2006-06-22 9:08 ` [patch 2/3] rtmutex: Propagate priority settings into PI lock chains Thomas Gleixner @ 2006-06-22 9:08 ` Thomas Gleixner 2 siblings, 0 replies; 15+ messages in thread From: Thomas Gleixner @ 2006-06-22 9:08 UTC (permalink / raw) To: LKML; +Cc: Andrew Morton, Ingo Molnar [-- Attachment #1: rt-mutex-fix-tester-for-setsched-tests.patch --] [-- Type: text/plain, Size: 13789 bytes --] Make the test suite issue its setscheduler calls asynchronously. Remove the waits in the test cases and add a new testcase to verify the correctness of the setscheduler priority propagation. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> kernel/rtmutex-tester.c | 32 +-- scripts/rt-tester/check-all.sh | 1 scripts/rt-tester/t2-l1-2rt-sameprio.tst | 2 scripts/rt-tester/t2-l1-pi.tst | 2 scripts/rt-tester/t2-l1-signal.tst | 2 scripts/rt-tester/t2-l2-2rt-deadlock.tst | 2 scripts/rt-tester/t3-l1-pi-1rt.tst | 3 scripts/rt-tester/t3-l1-pi-2rt.tst | 3 scripts/rt-tester/t3-l1-pi-3rt.tst | 3 scripts/rt-tester/t3-l1-pi-signal.tst | 3 scripts/rt-tester/t3-l1-pi-steal.tst | 3 scripts/rt-tester/t3-l2-pi.tst | 3 scripts/rt-tester/t4-l2-pi-deboost.tst | 4 scripts/rt-tester/t5-l4-pi-boost-deboost-setsched.tst | 183 ++++++++++++++++++ scripts/rt-tester/t5-l4-pi-boost-deboost.tst | 5 15 files changed, 202 insertions(+), 49 deletions(-) Index: linux-2.6.17-mm/scripts/rt-tester/check-all.sh =================================================================== --- linux-2.6.17-mm.orig/scripts/rt-tester/check-all.sh 2006-06-22 10:26:10.000000000 +0200 +++ linux-2.6.17-mm/scripts/rt-tester/check-all.sh 2006-06-22 10:26:11.000000000 +0200 @@ -18,4 +18,5 @@ testit t3-l2-pi.tst testit t4-l2-pi-deboost.tst 
 testit t5-l4-pi-boost-deboost.tst
+testit t5-l4-pi-boost-deboost-setsched.tst

Index: linux-2.6.17-mm/kernel/rtmutex-tester.c
===================================================================
--- linux-2.6.17-mm.orig/kernel/rtmutex-tester.c	2006-06-22 10:26:10.000000000 +0200
+++ linux-2.6.17-mm/kernel/rtmutex-tester.c	2006-06-22 10:26:11.000000000 +0200
@@ -46,7 +46,7 @@
 	RTTEST_LOCKINTNOWAIT,	/* 6 Lock interruptible no wait in wakeup, data = lockindex */
 	RTTEST_LOCKCONT,	/* 7 Continue locking after the wakeup delay */
 	RTTEST_UNLOCK,		/* 8 Unlock, data = lockindex */
-	RTTEST_LOCKBKL,	/* 9 Lock BKL */
+	RTTEST_LOCKBKL,		/* 9 Lock BKL */
 	RTTEST_UNLOCKBKL,	/* 10 Unlock BKL */
 	RTTEST_SIGNAL,		/* 11 Signal other test thread, data = thread id */
 	RTTEST_RESETEVENT = 98,	/* 98 Reset event counter */
@@ -55,7 +55,6 @@

 static int handle_op(struct test_thread_data *td, int lockwakeup)
 {
-	struct sched_param schedpar;
 	int i, id, ret = -EINVAL;

 	switch(td->opcode) {
@@ -63,17 +62,6 @@
 	case RTTEST_NOP:
 		return 0;

-	case RTTEST_SCHEDOT:
-		schedpar.sched_priority = 0;
-		ret = sched_setscheduler(current, SCHED_NORMAL, &schedpar);
-		if (!ret)
-			set_user_nice(current, 0);
-		return ret;
-
-	case RTTEST_SCHEDRT:
-		schedpar.sched_priority = td->opdata;
-		return sched_setscheduler(current, SCHED_FIFO, &schedpar);
-
 	case RTTEST_LOCKCONT:
 		td->mutexes[td->opdata] = 1;
 		td->event = atomic_add_return(1, &rttest_event);
@@ -310,9 +298,10 @@
 static ssize_t sysfs_test_command(struct sys_device *dev, const char *buf,
 				  size_t count)
 {
+	struct sched_param schedpar;
 	struct test_thread_data *td;
 	char cmdbuf[32];
-	int op, dat, tid;
+	int op, dat, tid, ret;

 	td = container_of(dev, struct test_thread_data, sysdev);
 	tid = td->sysdev.id;
@@ -334,6 +323,21 @@
 		return -EINVAL;

 	switch (op) {
+	case RTTEST_SCHEDOT:
+		schedpar.sched_priority = 0;
+		ret = sched_setscheduler(threads[tid], SCHED_NORMAL, &schedpar);
+		if (ret)
+			return ret;
+		set_user_nice(current, 0);
+		break;
+
+	case RTTEST_SCHEDRT:
+		schedpar.sched_priority = dat;
+		ret = sched_setscheduler(threads[tid], SCHED_FIFO, &schedpar);
+		if (ret)
+			return ret;
+		break;
+
 	case RTTEST_SIGNAL:
 		send_sig(SIGHUP, threads[tid], 0);
 		break;
Index: linux-2.6.17-mm/scripts/rt-tester/t2-l1-2rt-sameprio.tst
===================================================================
--- linux-2.6.17-mm.orig/scripts/rt-tester/t2-l1-2rt-sameprio.tst	2006-06-22 10:26:10.000000000 +0200
+++ linux-2.6.17-mm/scripts/rt-tester/t2-l1-2rt-sameprio.tst	2006-06-22 10:26:11.000000000 +0200
@@ -57,9 +57,7 @@

 # Set schedulers
 C: schedfifo: 0: 80
-W: opcodeeq: 0: 0
 C: schedfifo: 1: 80
-W: opcodeeq: 1: 0

 # T0 lock L0
 C: locknowait: 0: 0
Index: linux-2.6.17-mm/scripts/rt-tester/t2-l1-pi.tst
===================================================================
--- linux-2.6.17-mm.orig/scripts/rt-tester/t2-l1-pi.tst	2006-06-22 10:26:10.000000000 +0200
+++ linux-2.6.17-mm/scripts/rt-tester/t2-l1-pi.tst	2006-06-22 10:26:11.000000000 +0200
@@ -57,9 +57,7 @@

 # Set schedulers
 C: schedother: 0: 0
-W: opcodeeq: 0: 0
 C: schedfifo: 1: 80
-W: opcodeeq: 1: 0

 # T0 lock L0
 C: locknowait: 0: 0
Index: linux-2.6.17-mm/scripts/rt-tester/t2-l1-signal.tst
===================================================================
--- linux-2.6.17-mm.orig/scripts/rt-tester/t2-l1-signal.tst	2006-06-22 10:26:10.000000000 +0200
+++ linux-2.6.17-mm/scripts/rt-tester/t2-l1-signal.tst	2006-06-22 10:26:11.000000000 +0200
@@ -57,9 +57,7 @@

 # Set schedulers
 C: schedother: 0: 0
-W: opcodeeq: 0: 0
 C: schedother: 1: 0
-W: opcodeeq: 1: 0

 # T0 lock L0
 C: locknowait: 0: 0
Index: linux-2.6.17-mm/scripts/rt-tester/t2-l2-2rt-deadlock.tst
===================================================================
--- linux-2.6.17-mm.orig/scripts/rt-tester/t2-l2-2rt-deadlock.tst	2006-06-22 10:26:10.000000000 +0200
+++ linux-2.6.17-mm/scripts/rt-tester/t2-l2-2rt-deadlock.tst	2006-06-22 10:26:11.000000000 +0200
@@ -57,9 +57,7 @@

 # Set schedulers
 C: schedfifo: 0: 80
-W: opcodeeq: 0: 0
 C: schedfifo: 1: 80
-W: opcodeeq: 1: 0

 # T0 lock L0
 C: locknowait: 0: 0
Index: linux-2.6.17-mm/scripts/rt-tester/t3-l1-pi-1rt.tst
===================================================================
--- linux-2.6.17-mm.orig/scripts/rt-tester/t3-l1-pi-1rt.tst	2006-06-22 10:26:10.000000000 +0200
+++ linux-2.6.17-mm/scripts/rt-tester/t3-l1-pi-1rt.tst	2006-06-22 10:26:11.000000000 +0200
@@ -57,11 +57,8 @@

 # Set schedulers
 C: schedother: 0: 0
-W: opcodeeq: 0: 0
 C: schedother: 1: 0
-W: opcodeeq: 1: 0
 C: schedfifo: 2: 82
-W: opcodeeq: 2: 0

 # T0 lock L0
 C: locknowait: 0: 0
Index: linux-2.6.17-mm/scripts/rt-tester/t3-l1-pi-2rt.tst
===================================================================
--- linux-2.6.17-mm.orig/scripts/rt-tester/t3-l1-pi-2rt.tst	2006-06-22 10:26:10.000000000 +0200
+++ linux-2.6.17-mm/scripts/rt-tester/t3-l1-pi-2rt.tst	2006-06-22 10:26:11.000000000 +0200
@@ -57,11 +57,8 @@

 # Set schedulers
 C: schedother: 0: 0
-W: opcodeeq: 0: 0
 C: schedfifo: 1: 81
-W: opcodeeq: 1: 0
 C: schedfifo: 2: 82
-W: opcodeeq: 2: 0

 # T0 lock L0
 C: locknowait: 0: 0
Index: linux-2.6.17-mm/scripts/rt-tester/t3-l1-pi-3rt.tst
===================================================================
--- linux-2.6.17-mm.orig/scripts/rt-tester/t3-l1-pi-3rt.tst	2006-06-22 10:26:10.000000000 +0200
+++ linux-2.6.17-mm/scripts/rt-tester/t3-l1-pi-3rt.tst	2006-06-22 10:26:11.000000000 +0200
@@ -57,11 +57,8 @@

 # Set schedulers
 C: schedfifo: 0: 80
-W: opcodeeq: 0: 0
 C: schedfifo: 1: 81
-W: opcodeeq: 1: 0
 C: schedfifo: 2: 82
-W: opcodeeq: 2: 0

 # T0 lock L0
 C: locknowait: 0: 0
Index: linux-2.6.17-mm/scripts/rt-tester/t3-l1-pi-signal.tst
===================================================================
--- linux-2.6.17-mm.orig/scripts/rt-tester/t3-l1-pi-signal.tst	2006-06-22 10:26:10.000000000 +0200
+++ linux-2.6.17-mm/scripts/rt-tester/t3-l1-pi-signal.tst	2006-06-22 10:26:11.000000000 +0200
@@ -55,11 +55,8 @@

 # Set priorities
 C: schedother: 0: 0
-W: opcodeeq: 0: 0
 C: schedfifo: 1: 80
-W: opcodeeq: 1: 0
 C: schedfifo: 2: 81
-W: opcodeeq: 2: 0

 # T0 lock L0
 C: lock: 0: 0
Index: linux-2.6.17-mm/scripts/rt-tester/t3-l1-pi-steal.tst
===================================================================
--- linux-2.6.17-mm.orig/scripts/rt-tester/t3-l1-pi-steal.tst	2006-06-22 10:26:10.000000000 +0200
+++ linux-2.6.17-mm/scripts/rt-tester/t3-l1-pi-steal.tst	2006-06-22 10:26:11.000000000 +0200
@@ -57,11 +57,8 @@

 # Set schedulers
 C: schedother: 0: 0
-W: opcodeeq: 0: 0
 C: schedfifo: 1: 80
-W: opcodeeq: 1: 0
 C: schedfifo: 2: 81
-W: opcodeeq: 2: 0

 # T0 lock L0
 C: lock: 0: 0
Index: linux-2.6.17-mm/scripts/rt-tester/t3-l2-pi.tst
===================================================================
--- linux-2.6.17-mm.orig/scripts/rt-tester/t3-l2-pi.tst	2006-06-22 10:26:10.000000000 +0200
+++ linux-2.6.17-mm/scripts/rt-tester/t3-l2-pi.tst	2006-06-22 10:26:11.000000000 +0200
@@ -57,11 +57,8 @@

 # Set schedulers
 C: schedother: 0: 0
-W: opcodeeq: 0: 0
 C: schedother: 1: 0
-W: opcodeeq: 1: 0
 C: schedfifo: 2: 82
-W: opcodeeq: 2: 0

 # T0 lock L0
 C: locknowait: 0: 0
Index: linux-2.6.17-mm/scripts/rt-tester/t4-l2-pi-deboost.tst
===================================================================
--- linux-2.6.17-mm.orig/scripts/rt-tester/t4-l2-pi-deboost.tst	2006-06-22 10:26:10.000000000 +0200
+++ linux-2.6.17-mm/scripts/rt-tester/t4-l2-pi-deboost.tst	2006-06-22 10:26:11.000000000 +0200
@@ -57,13 +57,9 @@

 # Set schedulers
 C: schedother: 0: 0
-W: opcodeeq: 0: 0
 C: schedother: 1: 0
-W: opcodeeq: 1: 0
 C: schedfifo: 2: 82
-W: opcodeeq: 2: 0
 C: schedfifo: 3: 83
-W: opcodeeq: 3: 0

 # T0 lock L0
 C: locknowait: 0: 0
Index: linux-2.6.17-mm/scripts/rt-tester/t5-l4-pi-boost-deboost-setsched.tst
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6.17-mm/scripts/rt-tester/t5-l4-pi-boost-deboost-setsched.tst	2006-06-22 10:26:11.000000000 +0200
@@ -0,0 +1,183 @@
+#
+# rt-mutex test
+#
+# Op: C(ommand)/T(est)/W(ait)
+# |    opcode
+# |    |       threadid: 0-7
+# |    |       |  opcode argument
+# |    |       |  |
+# C: lock:     0: 0
+#
+# Commands
+#
+# opcode	opcode argument
+# schedother	nice value
+# schedfifo	priority
+# lock		lock nr (0-7)
+# locknowait	lock nr (0-7)
+# lockint	lock nr (0-7)
+# lockintnowait	lock nr (0-7)
+# lockcont	lock nr (0-7)
+# unlock	lock nr (0-7)
+# lockbkl	lock nr (0-7)
+# unlockbkl	lock nr (0-7)
+# signal	thread to signal (0-7)
+# reset		0
+# resetevent	0
+#
+# Tests / Wait
+#
+# opcode	opcode argument
+#
+# prioeq	priority
+# priolt	priority
+# priogt	priority
+# nprioeq	normal priority
+# npriolt	normal priority
+# npriogt	normal priority
+# locked	lock nr (0-7)
+# blocked	lock nr (0-7)
+# blockedwake	lock nr (0-7)
+# unlocked	lock nr (0-7)
+# lockedbkl	dont care
+# blockedbkl	dont care
+# unlockedbkl	dont care
+# opcodeeq	command opcode or number
+# opcodelt	number
+# opcodegt	number
+# eventeq	number
+# eventgt	number
+# eventlt	number

+#
+# 5 threads 4 lock PI - modify priority of blocked threads
+#
+C: resetevent: 0: 0
+W: opcodeeq: 0: 0
+
+# Set schedulers
+C: schedother: 0: 0
+C: schedfifo: 1: 81
+C: schedfifo: 2: 82
+C: schedfifo: 3: 83
+C: schedfifo: 4: 84
+
+# T0 lock L0
+C: locknowait: 0: 0
+W: locked: 0: 0
+
+# T1 lock L1
+C: locknowait: 1: 1
+W: locked: 1: 1
+
+# T1 lock L0
+C: lockintnowait: 1: 0
+W: blocked: 1: 0
+T: prioeq: 0: 81
+
+# T2 lock L2
+C: locknowait: 2: 2
+W: locked: 2: 2
+
+# T2 lock L1
+C: lockintnowait: 2: 1
+W: blocked: 2: 1
+T: prioeq: 0: 82
+T: prioeq: 1: 82
+
+# T3 lock L3
+C: locknowait: 3: 3
+W: locked: 3: 3
+
+# T3 lock L2
+C: lockintnowait: 3: 2
+W: blocked: 3: 2
+T: prioeq: 0: 83
+T: prioeq: 1: 83
+T: prioeq: 2: 83
+
+# T4 lock L3
+C: lockintnowait: 4: 3
+W: blocked: 4: 3
+T: prioeq: 0: 84
+T: prioeq: 1: 84
+T: prioeq: 2: 84
+T: prioeq: 3: 84
+
+# Reduce prio of T4
+C: schedfifo: 4: 80
+T: prioeq: 0: 83
+T: prioeq: 1: 83
+T: prioeq: 2: 83
+T: prioeq: 3: 83
+T: prioeq: 4: 80
+
+# Increase prio of T4
+C: schedfifo: 4: 84
+T: prioeq: 0: 84
+T: prioeq: 1: 84
+T: prioeq: 2: 84
+T: prioeq: 3: 84
+T: prioeq: 4: 84
+
+# Reduce prio of T3
+C: schedfifo: 3: 80
+T: prioeq: 0: 84
+T: prioeq: 1: 84
+T: prioeq: 2: 84
+T: prioeq: 3: 84
+T: prioeq: 4: 84
+
+# Increase prio of T3
+C: schedfifo: 3: 85
+T: prioeq: 0: 85
+T: prioeq: 1: 85
+T: prioeq: 2: 85
+T: prioeq: 3: 85
+T: prioeq: 4: 84
+
+# Reduce prio of T3
+C: schedfifo: 3: 83
+T: prioeq: 0: 84
+T: prioeq: 1: 84
+T: prioeq: 2: 84
+T: prioeq: 3: 84
+T: prioeq: 4: 84
+
+# Signal T4
+C: signal: 4: 0
+W: unlocked: 4: 3
+T: prioeq: 0: 83
+T: prioeq: 1: 83
+T: prioeq: 2: 83
+T: prioeq: 3: 83
+
+# Signal T3
+C: signal: 3: 0
+W: unlocked: 3: 2
+T: prioeq: 0: 82
+T: prioeq: 1: 82
+T: prioeq: 2: 82
+
+# Signal T2
+C: signal: 2: 0
+W: unlocked: 2: 1
+T: prioeq: 0: 81
+T: prioeq: 1: 81
+
+# Signal T1
+C: signal: 1: 0
+W: unlocked: 1: 0
+T: priolt: 0: 1
+
+# Unlock and exit
+C: unlock: 3: 3
+C: unlock: 2: 2
+C: unlock: 1: 1
+C: unlock: 0: 0
+
+W: unlocked: 3: 3
+W: unlocked: 2: 2
+W: unlocked: 1: 1
+W: unlocked: 0: 0
+
Index: linux-2.6.17-mm/scripts/rt-tester/t5-l4-pi-boost-deboost.tst
===================================================================
--- linux-2.6.17-mm.orig/scripts/rt-tester/t5-l4-pi-boost-deboost.tst	2006-06-22 10:26:10.000000000 +0200
+++ linux-2.6.17-mm/scripts/rt-tester/t5-l4-pi-boost-deboost.tst	2006-06-22 10:26:11.000000000 +0200
@@ -57,15 +57,10 @@

 # Set schedulers
 C: schedother: 0: 0
-W: opcodeeq: 0: 0
 C: schedfifo: 1: 81
-W: opcodeeq: 1: 0
 C: schedfifo: 2: 82
-W: opcodeeq: 2: 0
 C: schedfifo: 3: 83
-W: opcodeeq: 3: 0
 C: schedfifo: 4: 84
-W: opcodeeq: 4: 0

 # T0 lock L0
 C: locknowait: 0: 0

-- 

^ permalink raw reply	[flat|nested] 15+ messages in thread
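[Editor's note] The priority expectations in the new t5-l4-pi-boost-deboost-setsched.tst can be reproduced with a small model of priority inheritance. This is an illustrative sketch, not kernel code: the `effective_prio` helper and the dict-based task representation are inventions of this note, and effective priority is recomputed from scratch here rather than propagated along rt_mutex waiter lists as the kernel does. As in the .tst files, a larger number means a higher priority.

```python
# Toy model of priority-inheritance propagation through a lock chain,
# mirroring the scenario in t5-l4-pi-boost-deboost-setsched.tst.

def effective_prio(own_prio, blocked_on, tid):
    """A lock owner runs at the max of its own priority and the
    effective priorities of all tasks blocked on it, transitively."""
    prio = own_prio[tid]
    for waiter, owner in blocked_on.items():
        if owner == tid:
            prio = max(prio, effective_prio(own_prio, blocked_on, waiter))
    return prio

# Chain built by the test: T4 blocks on T3, T3 on T2, T2 on T1, T1 on T0.
own_prio = {0: 0, 1: 81, 2: 82, 3: 83, 4: 84}
blocked_on = {4: 3, 3: 2, 2: 1, 1: 0}

# After T4 blocks, T0..T3 are boosted to T4's priority, 84.
boosted = [effective_prio(own_prio, blocked_on, t) for t in range(5)]

# "Reduce prio of T4": an asynchronous setscheduler on the blocked T4
# must deboost the whole chain down to T3's priority, 83.
own_prio[4] = 80
deboosted = [effective_prio(own_prio, blocked_on, t) for t in range(5)]

print(boosted)    # [84, 84, 84, 84, 84] -> matches "T: prioeq: n: 84"
print(deboosted)  # [83, 83, 83, 83, 80] -> matches the deboost checks
```

The two lists correspond to the `T: prioeq:` assertions after "T4 lock L3" and after "Reduce prio of T4" in the testcase above.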
end of thread, other threads:[~2006-06-24  8:23 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2006-06-22  9:08 [patch 0/3] rtmutex: Propagate priority setting into lock chains Thomas Gleixner
2006-06-22  9:08 ` [patch 1/3] Drop tasklist lock in do_sched_setscheduler Thomas Gleixner
2006-06-23  1:48 ` Andrew Morton
2006-06-23  6:01 ` Thomas Gleixner
2006-06-24  8:07 ` Andrew Morton
2006-06-24  8:25 ` Thomas Gleixner
2006-06-22  9:08 ` [patch 2/3] rtmutex: Propagate priority settings into PI lock chains Thomas Gleixner
2006-06-22 14:20 ` Steven Rostedt
2006-06-22 18:02 ` Esben Nielsen
2006-06-23  6:27 ` Steven Rostedt
2006-06-23  2:08 ` Andrew Morton
2006-06-23  9:28 ` [PATCH -mm] bug if setscheduler is called from interrupt context Steven Rostedt
2006-06-23  2:06 ` [patch 2/3] rtmutex: Propagate priority settings into PI lock chains Andrew Morton
2006-06-23 16:26 ` Thomas Gleixner
2006-06-22  9:08 ` [patch 3/3] rtmutex: Modify rtmutex-tester to test the setscheduler propagation Thomas Gleixner