All of lore.kernel.org
* [PATCH 1/2] sched: Drop the need_resched() loop from cond_resched()
@ 2009-07-10 14:49 Frederic Weisbecker
  2009-07-10 14:49 ` [PATCH 2/2] sched: Move the sleeping while atomic checks early in cond_resched() Frederic Weisbecker
                   ` (2 more replies)
  0 siblings, 3 replies; 19+ messages in thread
From: Frederic Weisbecker @ 2009-07-10 14:49 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: LKML, Frederic Weisbecker, Peter Zijlstra

The schedule() function is a loop that reschedules the current task
while the TIF_NEED_RESCHED flag is set:

void schedule(void)
{
need_resched:
	/* schedule code */
	if (need_resched())
		goto need_resched;
}

And cond_resched() repeats this loop:

do {
	add_preempt_count(PREEMPT_ACTIVE);
	schedule();
	sub_preempt_count(PREEMPT_ACTIVE);
} while(need_resched());

This outer loop is needless: schedule() already performs the check,
and nothing can set TIF_NEED_RESCHED between schedule()'s exit and
the need_resched() check in the loop.

Remove this needless loop.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 kernel/sched.c |    8 +++-----
 1 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index 4d1e387..87ecac1 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -6613,11 +6613,9 @@ static void __cond_resched(void)
 	 * PREEMPT_ACTIVE, which could trigger a second
 	 * cond_resched() call.
 	 */
-	do {
-		add_preempt_count(PREEMPT_ACTIVE);
-		schedule();
-		sub_preempt_count(PREEMPT_ACTIVE);
-	} while (need_resched());
+	add_preempt_count(PREEMPT_ACTIVE);
+	schedule();
+	sub_preempt_count(PREEMPT_ACTIVE);
 }
 
 int __sched _cond_resched(void)
-- 
1.6.2.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 2/2] sched: Move the sleeping while atomic checks early in cond_resched()
  2009-07-10 14:49 [PATCH 1/2] sched: Drop the need_resched() loop from cond_resched() Frederic Weisbecker
@ 2009-07-10 14:49 ` Frederic Weisbecker
  2009-07-10 14:59   ` Peter Zijlstra
  2009-07-10 15:00 ` [PATCH 1/2] sched: Drop the need_resched() loop from cond_resched() Peter Zijlstra
  2009-07-10 15:17 ` Arnd Bergmann
  2 siblings, 1 reply; 19+ messages in thread
From: Frederic Weisbecker @ 2009-07-10 14:49 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: LKML, Frederic Weisbecker, Peter Zijlstra

might_sleep() is called late in cond_resched(), after the
need_resched()/preemption-enabled/system-running tests have been
checked.

It's better to check for sleeping while atomic earlier, so the check
does not depend on environment conditions that reduce the chances of
detecting a problem.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 include/linux/sched.h |    2 ++
 kernel/sched.c        |    3 ---
 2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 0cb0d8d..e357dc7 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2279,11 +2279,13 @@ extern int _cond_resched(void);
 #ifdef CONFIG_PREEMPT_BKL
 static inline int cond_resched(void)
 {
+	might_sleep();
 	return 0;
 }
 #else
 static inline int cond_resched(void)
 {
+	might_sleep();
 	return _cond_resched();
 }
 #endif
diff --git a/kernel/sched.c b/kernel/sched.c
index 87ecac1..c22804b 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -6605,9 +6605,6 @@ SYSCALL_DEFINE0(sched_yield)
 
 static void __cond_resched(void)
 {
-#ifdef CONFIG_DEBUG_SPINLOCK_SLEEP
-	__might_sleep(__FILE__, __LINE__);
-#endif
 	/*
 	 * The BKS might be reacquired before we have dropped
 	 * PREEMPT_ACTIVE, which could trigger a second
-- 
1.6.2.3



* Re: [PATCH 2/2] sched: Move the sleeping while atomic checks early in cond_resched()
  2009-07-10 14:49 ` [PATCH 2/2] sched: Move the sleeping while atomic checks early in cond_resched() Frederic Weisbecker
@ 2009-07-10 14:59   ` Peter Zijlstra
  2009-07-10 15:08     ` Frederic Weisbecker
  0 siblings, 1 reply; 19+ messages in thread
From: Peter Zijlstra @ 2009-07-10 14:59 UTC (permalink / raw)
  To: Frederic Weisbecker; +Cc: Ingo Molnar, LKML

On Fri, 2009-07-10 at 16:49 +0200, Frederic Weisbecker wrote:

> index 0cb0d8d..e357dc7 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -2279,11 +2279,13 @@ extern int _cond_resched(void);
>  #ifdef CONFIG_PREEMPT_BKL
>  static inline int cond_resched(void)
>  {
> +	might_sleep();
>  	return 0;
>  }
>  #else
>  static inline int cond_resched(void)
>  {
> +	might_sleep();
>  	return _cond_resched();
>  }
>  #endif

# define might_resched() _cond_resched()

# define might_sleep() \
	do { __might_sleep(__FILE__, __LINE__); might_resched(); } while (0)


That doesn't seem to make it any better, but yeah, moving that
__might_sleep() did occur to me earlier today when I touched that code.

> diff --git a/kernel/sched.c b/kernel/sched.c
> index 87ecac1..c22804b 100644
> --- a/kernel/sched.c
> +++ b/kernel/sched.c
> @@ -6605,9 +6605,6 @@ SYSCALL_DEFINE0(sched_yield)
>  
>  static void __cond_resched(void)
>  {
> -#ifdef CONFIG_DEBUG_SPINLOCK_SLEEP
> -	__might_sleep(__FILE__, __LINE__);
> -#endif
>  	/*
>  	 * The BKS might be reacquired before we have dropped
>  	 * PREEMPT_ACTIVE, which could trigger a second





* Re: [PATCH 1/2] sched: Drop the need_resched() loop from cond_resched()
  2009-07-10 14:49 [PATCH 1/2] sched: Drop the need_resched() loop from cond_resched() Frederic Weisbecker
  2009-07-10 14:49 ` [PATCH 2/2] sched: Move the sleeping while atomic checks early in cond_resched() Frederic Weisbecker
@ 2009-07-10 15:00 ` Peter Zijlstra
  2009-07-10 15:17 ` Arnd Bergmann
  2 siblings, 0 replies; 19+ messages in thread
From: Peter Zijlstra @ 2009-07-10 15:00 UTC (permalink / raw)
  To: Frederic Weisbecker; +Cc: Ingo Molnar, LKML

On Fri, 2009-07-10 at 16:49 +0200, Frederic Weisbecker wrote:
> The schedule() function is a loop that reschedules the current task
> while the TIF_NEED_RESCHED flag is set:
> 
> void schedule(void)
> {
> need_resched:
> 	/* schedule code */
> 	if (need_resched())
> 		goto need_resched;
> }
> 
> And cond_resched() repeats this loop:
> 
> do {
> 	add_preempt_count(PREEMPT_ACTIVE);
> 	schedule();
> 	sub_preempt_count(PREEMPT_ACTIVE);
> } while(need_resched());
> 
> This outer loop is needless: schedule() already performs the check,
> and nothing can set TIF_NEED_RESCHED between schedule()'s exit and
> the need_resched() check in the loop.
> 
> Remove this needless loop.
> 
> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>

Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>

> ---
>  kernel/sched.c |    8 +++-----
>  1 files changed, 3 insertions(+), 5 deletions(-)
> 
> diff --git a/kernel/sched.c b/kernel/sched.c
> index 4d1e387..87ecac1 100644
> --- a/kernel/sched.c
> +++ b/kernel/sched.c
> @@ -6613,11 +6613,9 @@ static void __cond_resched(void)
>  	 * PREEMPT_ACTIVE, which could trigger a second
>  	 * cond_resched() call.
>  	 */
> -	do {
> -		add_preempt_count(PREEMPT_ACTIVE);
> -		schedule();
> -		sub_preempt_count(PREEMPT_ACTIVE);
> -	} while (need_resched());
> +	add_preempt_count(PREEMPT_ACTIVE);
> +	schedule();
> +	sub_preempt_count(PREEMPT_ACTIVE);
>  }
>  
>  int __sched _cond_resched(void)



* Re: [PATCH 2/2] sched: Move the sleeping while atomic checks early in cond_resched()
  2009-07-10 14:59   ` Peter Zijlstra
@ 2009-07-10 15:08     ` Frederic Weisbecker
  2009-07-10 15:12       ` Peter Zijlstra
  0 siblings, 1 reply; 19+ messages in thread
From: Frederic Weisbecker @ 2009-07-10 15:08 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Ingo Molnar, LKML

On Fri, Jul 10, 2009 at 04:59:55PM +0200, Peter Zijlstra wrote:
> On Fri, 2009-07-10 at 16:49 +0200, Frederic Weisbecker wrote:
> 
> > index 0cb0d8d..e357dc7 100644
> > --- a/include/linux/sched.h
> > +++ b/include/linux/sched.h
> > @@ -2279,11 +2279,13 @@ extern int _cond_resched(void);
> >  #ifdef CONFIG_PREEMPT_BKL
> >  static inline int cond_resched(void)
> >  {
> > +	might_sleep();
> >  	return 0;
> >  }
> >  #else
> >  static inline int cond_resched(void)
> >  {
> > +	might_sleep();
> >  	return _cond_resched();
> >  }
> >  #endif
> 
> # define might_resched() _cond_resched()


Argh, indeed.
I thought might_sleep() only wrapped __might_sleep(__FILE__, __LINE__)

 
> # define might_sleep() \
> 	do { __might_sleep(__FILE__, __LINE__); might_resched(); } while (0)
> 
> 
> That doesn't seem to make it any better, but yeah, moving that
> __might_sleep() did occur to me earlier today when I touched that code.


Ok.

Another idea: if cond_resched() were a macro and __might_sleep() were
called inside it, the reported __FILE__:__LINE__ pair would be much
more useful.

In the current state only the backtraces are useful: __FILE__ and
__LINE__ point to sched.h, which is not exactly what is needed,
right?

Thanks,
Frederic.


> > diff --git a/kernel/sched.c b/kernel/sched.c
> > index 87ecac1..c22804b 100644
> > --- a/kernel/sched.c
> > +++ b/kernel/sched.c
> > @@ -6605,9 +6605,6 @@ SYSCALL_DEFINE0(sched_yield)
> >  
> >  static void __cond_resched(void)
> >  {
> > -#ifdef CONFIG_DEBUG_SPINLOCK_SLEEP
> > -	__might_sleep(__FILE__, __LINE__);
> > -#endif
> >  	/*
> >  	 * The BKS might be reacquired before we have dropped
> >  	 * PREEMPT_ACTIVE, which could trigger a second
> 
> 
> 



* Re: [PATCH 2/2] sched: Move the sleeping while atomic checks early in cond_resched()
  2009-07-10 15:08     ` Frederic Weisbecker
@ 2009-07-10 15:12       ` Peter Zijlstra
  2009-07-10 16:10         ` Ingo Molnar
  0 siblings, 1 reply; 19+ messages in thread
From: Peter Zijlstra @ 2009-07-10 15:12 UTC (permalink / raw)
  To: Frederic Weisbecker; +Cc: Ingo Molnar, LKML

On Fri, 2009-07-10 at 17:08 +0200, Frederic Weisbecker wrote:
> On Fri, Jul 10, 2009 at 04:59:55PM +0200, Peter Zijlstra wrote:
> > On Fri, 2009-07-10 at 16:49 +0200, Frederic Weisbecker wrote:
> > 
> > > index 0cb0d8d..e357dc7 100644
> > > --- a/include/linux/sched.h
> > > +++ b/include/linux/sched.h
> > > @@ -2279,11 +2279,13 @@ extern int _cond_resched(void);
> > >  #ifdef CONFIG_PREEMPT_BKL
> > >  static inline int cond_resched(void)
> > >  {
> > > +	might_sleep();
> > >  	return 0;
> > >  }
> > >  #else
> > >  static inline int cond_resched(void)
> > >  {
> > > +	might_sleep();
> > >  	return _cond_resched();
> > >  }
> > >  #endif
> > 
> > # define might_resched() _cond_resched()
> 
> 
> Argh, indeed.
> I thought might_sleep() only wrapped __might_sleep(__FILE__, __LINE__)
> 
>  
> > # define might_sleep() \
> > 	do { __might_sleep(__FILE__, __LINE__); might_resched(); } while (0)
> > 
> > 
> > That doesn't seem to make it any better, but yeah, moving that
> > __might_sleep() did occur to me earlier today when I touched that code.
> 
> 
> Ok.
> 
> Another idea: if cond_resched() were a macro and __might_sleep() were
> called inside it, the reported __FILE__:__LINE__ pair would be much
> more useful.
> 
> In the current state only the backtraces are useful: __FILE__ and
> __LINE__ point to sched.h, which is not exactly what is needed,
> right?

Right. There's some CONFIG_PREEMPT_BKL clutter in sched.h but I think we
could largely fold might_sleep() and cond_resched().

Ingo?



* Re: [PATCH 1/2] sched: Drop the need_resched() loop from cond_resched()
  2009-07-10 14:49 [PATCH 1/2] sched: Drop the need_resched() loop from cond_resched() Frederic Weisbecker
  2009-07-10 14:49 ` [PATCH 2/2] sched: Move the sleeping while atomic checks early in cond_resched() Frederic Weisbecker
  2009-07-10 15:00 ` [PATCH 1/2] sched: Drop the need_resched() loop from cond_resched() Peter Zijlstra
@ 2009-07-10 15:17 ` Arnd Bergmann
  2009-07-10 15:24   ` Frederic Weisbecker
  2 siblings, 1 reply; 19+ messages in thread
From: Arnd Bergmann @ 2009-07-10 15:17 UTC (permalink / raw)
  To: Frederic Weisbecker; +Cc: Ingo Molnar, LKML, Peter Zijlstra

On Friday 10 July 2009, Frederic Weisbecker wrote:
> --- a/kernel/sched.c
> +++ b/kernel/sched.c
> @@ -6613,11 +6613,9 @@ static void __cond_resched(void)
>          * PREEMPT_ACTIVE, which could trigger a second
>          * cond_resched() call.
>          */
> -       do {
> -               add_preempt_count(PREEMPT_ACTIVE);
> -               schedule();
> -               sub_preempt_count(PREEMPT_ACTIVE);
> -       } while (need_resched());
> +       add_preempt_count(PREEMPT_ACTIVE);
> +       schedule();
> +       sub_preempt_count(PREEMPT_ACTIVE);
>  }
>  

If you drop the loop, then you should also remove the comment that
explains why it was put there.

	Arnd <><


* Re: [PATCH 1/2] sched: Drop the need_resched() loop from cond_resched()
  2009-07-10 15:17 ` Arnd Bergmann
@ 2009-07-10 15:24   ` Frederic Weisbecker
  2009-07-10 15:35     ` Arnd Bergmann
  0 siblings, 1 reply; 19+ messages in thread
From: Frederic Weisbecker @ 2009-07-10 15:24 UTC (permalink / raw)
  To: Arnd Bergmann; +Cc: Ingo Molnar, LKML, Peter Zijlstra

On Fri, Jul 10, 2009 at 05:17:38PM +0200, Arnd Bergmann wrote:
> On Friday 10 July 2009, Frederic Weisbecker wrote:
> > --- a/kernel/sched.c
> > +++ b/kernel/sched.c
> > @@ -6613,11 +6613,9 @@ static void __cond_resched(void)
> >          * PREEMPT_ACTIVE, which could trigger a second
> >          * cond_resched() call.
> >          */
> > -       do {
> > -               add_preempt_count(PREEMPT_ACTIVE);
> > -               schedule();
> > -               sub_preempt_count(PREEMPT_ACTIVE);
> > -       } while (need_resched());
> > +       add_preempt_count(PREEMPT_ACTIVE);
> > +       schedule();
> > +       sub_preempt_count(PREEMPT_ACTIVE);
> >  }
> >  
> 
> If you drop the loop, then you should also remove the comment that
> explains why it was put there.
> 
> 	Arnd <><

Hmm, this comment seems to actually explain why we do the PREEMPT_ACTIVE
trick, which is to prevent cond_resched() recursion, right?



* Re: [PATCH 1/2] sched: Drop the need_resched() loop from cond_resched()
  2009-07-10 15:24   ` Frederic Weisbecker
@ 2009-07-10 15:35     ` Arnd Bergmann
  2009-07-10 15:50       ` Frederic Weisbecker
  0 siblings, 1 reply; 19+ messages in thread
From: Arnd Bergmann @ 2009-07-10 15:35 UTC (permalink / raw)
  To: Frederic Weisbecker; +Cc: Ingo Molnar, LKML, Peter Zijlstra, Andrew Morton

On Friday 10 July 2009, Frederic Weisbecker wrote:
> On Fri, Jul 10, 2009 at 05:17:38PM +0200, Arnd Bergmann wrote:
> > On Friday 10 July 2009, Frederic Weisbecker wrote:
> > > --- a/kernel/sched.c
> > > +++ b/kernel/sched.c
> > > @@ -6613,11 +6613,9 @@ static void __cond_resched(void)
> > >          * PREEMPT_ACTIVE, which could trigger a second
> > >          * cond_resched() call.
> > >          */
> > > -       do {
> > > -               add_preempt_count(PREEMPT_ACTIVE);
> > > -               schedule();
> > > -               sub_preempt_count(PREEMPT_ACTIVE);
> > > -       } while (need_resched());
> > > +       add_preempt_count(PREEMPT_ACTIVE);
> > > +       schedule();
> > > +       sub_preempt_count(PREEMPT_ACTIVE);
> > >  }
> > >  
> > 
> > If you drop the loop, then you should also remove the comment that
> > explains why it was put there.
> > 
> 
> Hmm, this comment seems to actually explain why we do the PREEMPT_ACTIVE
> trick, which is to prevent cond_resched() recursion, right?
> 

I think we both misinterpreted the comment, which seemed to refer
to older code added by Ingo in 5bbcfd900 "cond_resched(): fix bogus
might_sleep() warning" and removed by Andrew in e7b384043e2
"cond_resched() fix".

The original code in Ingo's version looked like:

 static inline void __cond_resched(void)
 {
       /*
        * The BKS might be reacquired before we have dropped
        * PREEMPT_ACTIVE, which could trigger a second
        * cond_resched() call.
        */
       if (unlikely(preempt_count()))
               return;
       do {
               add_preempt_count(PREEMPT_ACTIVE);
               schedule();
	...


So, it's got nothing to do with the loop, but should still be removed
because the 'if (unlikely(preempt_count()))' is no longer there.

	Arnd <><


* Re: [PATCH 1/2] sched: Drop the need_resched() loop from cond_resched()
  2009-07-10 15:35     ` Arnd Bergmann
@ 2009-07-10 15:50       ` Frederic Weisbecker
  2009-07-10 16:11         ` Ingo Molnar
  0 siblings, 1 reply; 19+ messages in thread
From: Frederic Weisbecker @ 2009-07-10 15:50 UTC (permalink / raw)
  To: Arnd Bergmann; +Cc: Ingo Molnar, LKML, Peter Zijlstra, Andrew Morton

On Fri, Jul 10, 2009 at 05:35:29PM +0200, Arnd Bergmann wrote:
> On Friday 10 July 2009, Frederic Weisbecker wrote:
> > On Fri, Jul 10, 2009 at 05:17:38PM +0200, Arnd Bergmann wrote:
> > > On Friday 10 July 2009, Frederic Weisbecker wrote:
> > > > --- a/kernel/sched.c
> > > > +++ b/kernel/sched.c
> > > > @@ -6613,11 +6613,9 @@ static void __cond_resched(void)
> > > >          * PREEMPT_ACTIVE, which could trigger a second
> > > >          * cond_resched() call.
> > > >          */
> > > > -       do {
> > > > -               add_preempt_count(PREEMPT_ACTIVE);
> > > > -               schedule();
> > > > -               sub_preempt_count(PREEMPT_ACTIVE);
> > > > -       } while (need_resched());
> > > > +       add_preempt_count(PREEMPT_ACTIVE);
> > > > +       schedule();
> > > > +       sub_preempt_count(PREEMPT_ACTIVE);
> > > >  }
> > > >  
> > > 
> > > If you drop the loop, then you should also remove the comment that
> > > explains why it was put there.
> > > 
> > 
> > Hmm, this comment seems to actually explain why we do the PREEMPT_ACTIVE
> > trick, which is to prevent cond_resched() recursion, right?
> > 
> 
> I think we both misinterpreted the comment, which seemed to refer
> to older code added by Ingo in 5bbcfd900 "cond_resched(): fix bogus
> might_sleep() warning" and removed by Andrew in e7b384043e2
> "cond_resched() fix".
> 
> The original code in Ingos version looked like
> 
>  static inline void __cond_resched(void)
>  {
>        /*
>         * The BKS might be reacquired before we have dropped
>         * PREEMPT_ACTIVE, which could trigger a second
>         * cond_resched() call.
>         */
>        if (unlikely(preempt_count()))
>                return;
>        do {
>                add_preempt_count(PREEMPT_ACTIVE);
>                schedule();
> 	...
> 
> 
> So, it's got nothing to do with the loop, but should still be removed
> because the 'if (unlikely(preempt_count()))' is no longer there.


Yeah, but the comment still fits the code after this patch, don't you think? :-)



* Re: [PATCH 2/2] sched: Move the sleeping while atomic checks early in cond_resched()
  2009-07-10 15:12       ` Peter Zijlstra
@ 2009-07-10 16:10         ` Ingo Molnar
  2009-07-10 17:14           ` [PATCH] " Frederic Weisbecker
  0 siblings, 1 reply; 19+ messages in thread
From: Ingo Molnar @ 2009-07-10 16:10 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Frederic Weisbecker, LKML


* Peter Zijlstra <a.p.zijlstra@chello.nl> wrote:

> On Fri, 2009-07-10 at 17:08 +0200, Frederic Weisbecker wrote:
> > On Fri, Jul 10, 2009 at 04:59:55PM +0200, Peter Zijlstra wrote:
> > > On Fri, 2009-07-10 at 16:49 +0200, Frederic Weisbecker wrote:
> > > 
> > > > index 0cb0d8d..e357dc7 100644
> > > > --- a/include/linux/sched.h
> > > > +++ b/include/linux/sched.h
> > > > @@ -2279,11 +2279,13 @@ extern int _cond_resched(void);
> > > >  #ifdef CONFIG_PREEMPT_BKL
> > > >  static inline int cond_resched(void)
> > > >  {
> > > > +	might_sleep();
> > > >  	return 0;
> > > >  }
> > > >  #else
> > > >  static inline int cond_resched(void)
> > > >  {
> > > > +	might_sleep();
> > > >  	return _cond_resched();
> > > >  }
> > > >  #endif
> > > 
> > > # define might_resched() _cond_resched()
> > 
> > 
> > Argh, indeed.
> > I thought might_sleep() only wrapped __might_sleep(__FILE__, __LINE__)
> > 
> >  
> > > # define might_sleep() \
> > > 	do { __might_sleep(__FILE__, __LINE__); might_resched(); } while (0)
> > > 
> > > 
> > > That doesn't seem to make it any better, but yeah, moving that
> > > __might_sleep() did occur to me earlier today when I touched that code.
> > 
> > 
> > Ok.
> > 
> > Another idea: if cond_resched() were a macro and __might_sleep() were
> > called inside it, the reported __FILE__:__LINE__ pair would be much
> > more useful.
> > 
> > In the current state only the backtraces are useful: __FILE__ and
> > __LINE__ point to sched.h, which is not exactly what is needed,
> > right?
> 
> Right. There's some CONFIG_PREEMPT_BKL clutter in sched.h but I 
> think we could largely fold might_sleep() and cond_resched().
> 
> Ingo?

Yep - CONFIG_PREEMPT_BKL won't ever come back, so any legacy related
to it should be removed. (That doesn't mean it won't crash or
misbehave; this code is quite fragile.)

	Ingo


* Re: [PATCH 1/2] sched: Drop the need_resched() loop from cond_resched()
  2009-07-10 15:50       ` Frederic Weisbecker
@ 2009-07-10 16:11         ` Ingo Molnar
  2009-07-10 16:26           ` Frederic Weisbecker
  2009-07-10 17:23           ` [PATCH] sched: Remove obsolete comment in __cond_resched() Frederic Weisbecker
  0 siblings, 2 replies; 19+ messages in thread
From: Ingo Molnar @ 2009-07-10 16:11 UTC (permalink / raw)
  To: Frederic Weisbecker; +Cc: Arnd Bergmann, LKML, Peter Zijlstra, Andrew Morton


* Frederic Weisbecker <fweisbec@gmail.com> wrote:

> On Fri, Jul 10, 2009 at 05:35:29PM +0200, Arnd Bergmann wrote:
> > On Friday 10 July 2009, Frederic Weisbecker wrote:
> > > On Fri, Jul 10, 2009 at 05:17:38PM +0200, Arnd Bergmann wrote:
> > > > On Friday 10 July 2009, Frederic Weisbecker wrote:
> > > > > --- a/kernel/sched.c
> > > > > +++ b/kernel/sched.c
> > > > > @@ -6613,11 +6613,9 @@ static void __cond_resched(void)
> > > > >          * PREEMPT_ACTIVE, which could trigger a second
> > > > >          * cond_resched() call.
> > > > >          */
> > > > > -       do {
> > > > > -               add_preempt_count(PREEMPT_ACTIVE);
> > > > > -               schedule();
> > > > > -               sub_preempt_count(PREEMPT_ACTIVE);
> > > > > -       } while (need_resched());
> > > > > +       add_preempt_count(PREEMPT_ACTIVE);
> > > > > +       schedule();
> > > > > +       sub_preempt_count(PREEMPT_ACTIVE);
> > > > >  }
> > > > >  
> > > > 
> > > > If you drop the loop, then you should also remove the comment that
> > > > explains why it was put there.
> > > > 
> > > 
> > > Hmm, this comment seems to actually explain why we do the PREEMPT_ACTIVE
> > > trick, which is to prevent cond_resched() recursion, right?
> > > 
> > 
> > I think we both misinterpreted the comment, which seemed to refer
> > to older code added by Ingo in 5bbcfd900 "cond_resched(): fix bogus
> > might_sleep() warning" and removed by Andrew in e7b384043e2
> > "cond_resched() fix".
> > 
> > The original code in Ingos version looked like
> > 
> >  static inline void __cond_resched(void)
> >  {
> >        /*
> >         * The BKS might be reacquired before we have dropped
> >         * PREEMPT_ACTIVE, which could trigger a second
> >         * cond_resched() call.
> >         */
> >        if (unlikely(preempt_count()))
> >                return;
> >        do {
> >                add_preempt_count(PREEMPT_ACTIVE);
> >                schedule();
> > 	...
> > 
> > 
> > So, it's got nothing to do with the loop, but should still be removed
> > because the 'if (unlikely(preempt_count()))' is no longer there.
> 
> 
> Yeah, but the comment still fits the code after this patch, don't 
> you think? :-)

... except that there's no Big Kernel Semaphore anymore ;-)

	Ingo


* Re: [PATCH 1/2] sched: Drop the need_resched() loop from cond_resched()
  2009-07-10 16:11         ` Ingo Molnar
@ 2009-07-10 16:26           ` Frederic Weisbecker
  2009-07-10 17:23           ` [PATCH] sched: Remove obsolete comment in __cond_resched() Frederic Weisbecker
  1 sibling, 0 replies; 19+ messages in thread
From: Frederic Weisbecker @ 2009-07-10 16:26 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Arnd Bergmann, LKML, Peter Zijlstra, Andrew Morton

On Fri, Jul 10, 2009 at 06:11:41PM +0200, Ingo Molnar wrote:
> 
> * Frederic Weisbecker <fweisbec@gmail.com> wrote:
> 
> > On Fri, Jul 10, 2009 at 05:35:29PM +0200, Arnd Bergmann wrote:
> > > On Friday 10 July 2009, Frederic Weisbecker wrote:
> > > > On Fri, Jul 10, 2009 at 05:17:38PM +0200, Arnd Bergmann wrote:
> > > > > On Friday 10 July 2009, Frederic Weisbecker wrote:
> > > > > > --- a/kernel/sched.c
> > > > > > +++ b/kernel/sched.c
> > > > > > @@ -6613,11 +6613,9 @@ static void __cond_resched(void)
> > > > > >          * PREEMPT_ACTIVE, which could trigger a second
> > > > > >          * cond_resched() call.
> > > > > >          */
> > > > > > -       do {
> > > > > > -               add_preempt_count(PREEMPT_ACTIVE);
> > > > > > -               schedule();
> > > > > > -               sub_preempt_count(PREEMPT_ACTIVE);
> > > > > > -       } while (need_resched());
> > > > > > +       add_preempt_count(PREEMPT_ACTIVE);
> > > > > > +       schedule();
> > > > > > +       sub_preempt_count(PREEMPT_ACTIVE);
> > > > > >  }
> > > > > >  
> > > > > 
> > > > > If you drop the loop, then you should also remove the comment that
> > > > > explains why it was put there.
> > > > > 
> > > > 
> > > > Hmm, this comment seems to actually explain why we do the PREEMPT_ACTIVE
> > > > trick, which is to prevent cond_resched() recursion, right?
> > > > 
> > > 
> > > I think we both misinterpreted the comment, which seemed to refer
> > > to older code added by Ingo in 5bbcfd900 "cond_resched(): fix bogus
> > > might_sleep() warning" and removed by Andrew in e7b384043e2
> > > "cond_resched() fix".
> > > 
> > > The original code in Ingos version looked like
> > > 
> > >  static inline void __cond_resched(void)
> > >  {
> > >        /*
> > >         * The BKS might be reacquired before we have dropped
> > >         * PREEMPT_ACTIVE, which could trigger a second
> > >         * cond_resched() call.
> > >         */
> > >        if (unlikely(preempt_count()))
> > >                return;
> > >        do {
> > >                add_preempt_count(PREEMPT_ACTIVE);
> > >                schedule();
> > > 	...
> > > 
> > > 
> > > So, it's got nothing to do with the loop, but should still be removed
> > > because the 'if (unlikely(preempt_count()))' is no longer there.
> > 
> > 
> > Yeah, but the comment still fits the code after this patch, don't 
> > you think? :-)
> 
> ... except that there's no Big Kernel Semaphore anymore ;-)
> 
> 	Ingo

Ah, I lack some background about Linux's heroic ages :)
I thought it was a misspelling of BKL.

I guess the comment should be removed anyway; reading it again,
it doesn't explain the code that follows it.



* [PATCH] sched: Move the sleeping while atomic checks early in cond_resched()
  2009-07-10 16:10         ` Ingo Molnar
@ 2009-07-10 17:14           ` Frederic Weisbecker
  2009-07-10 17:43             ` Peter Zijlstra
  0 siblings, 1 reply; 19+ messages in thread
From: Frederic Weisbecker @ 2009-07-10 17:14 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: LKML, Frederic Weisbecker, Peter Zijlstra

might_sleep() is called late in cond_resched(), after the
need_resched()/preemption-enabled/system-running tests have been
checked.

It's better to check for sleeping while atomic earlier, so the check
does not depend on environment conditions that reduce the chances of
detecting a problem.

Changes in v2:
- call __might_sleep() directly instead of might_sleep(), which may
  call cond_resched()
- turn cond_resched() into a macro so that the file:line pair reported
  refers to the caller of cond_resched() and not __cond_resched() itself
- drop the obsolete CONFIG_PREEMPT_BKL related code

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 include/linux/sched.h |   22 +++++++---------------
 kernel/sched.c        |    5 ++---
 2 files changed, 9 insertions(+), 18 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 0cb0d8d..737f569 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2276,23 +2276,15 @@ static inline int need_resched(void)
  * cond_resched_softirq() will enable bhs before scheduling.
  */
 extern int _cond_resched(void);
-#ifdef CONFIG_PREEMPT_BKL
-static inline int cond_resched(void)
-{
-	return 0;
-}
-#else
-static inline int cond_resched(void)
-{
-	return _cond_resched();
-}
-#endif
+#define cond_resched() ({			\
+	__might_sleep(__FILE__, __LINE__);	\
+	_cond_resched();			\
+})
+
 extern int cond_resched_lock(spinlock_t * lock);
 extern int cond_resched_softirq(void);
-static inline int cond_resched_bkl(void)
-{
-	return _cond_resched();
-}
+
+#define cond_resched_bkl()	cond_resched();
 
 /*
  * Does a critical section need to be broken due to another
diff --git a/kernel/sched.c b/kernel/sched.c
index 87ecac1..649ec92 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -6605,9 +6605,6 @@ SYSCALL_DEFINE0(sched_yield)
 
 static void __cond_resched(void)
 {
-#ifdef CONFIG_DEBUG_SPINLOCK_SLEEP
-	__might_sleep(__FILE__, __LINE__);
-#endif
 	/*
 	 * The BKS might be reacquired before we have dropped
 	 * PREEMPT_ACTIVE, which could trigger a second
@@ -6644,6 +6641,7 @@ int cond_resched_lock(spinlock_t *lock)
 
 	if (spin_needbreak(lock) || resched) {
 		spin_unlock(lock);
+		__might_sleep(__FILE__, __LINE__);
 		if (resched && need_resched())
 			__cond_resched();
 		else
@@ -6661,6 +6659,7 @@ int __sched cond_resched_softirq(void)
 
 	if (need_resched() && system_state == SYSTEM_RUNNING) {
 		local_bh_enable();
+		__might_sleep(__FILE__, __LINE__);
 		__cond_resched();
 		local_bh_disable();
 		return 1;
-- 
1.6.2.3



* [PATCH] sched: Remove obsolete comment in __cond_resched()
  2009-07-10 16:11         ` Ingo Molnar
  2009-07-10 16:26           ` Frederic Weisbecker
@ 2009-07-10 17:23           ` Frederic Weisbecker
  1 sibling, 0 replies; 19+ messages in thread
From: Frederic Weisbecker @ 2009-07-10 17:23 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: LKML, Frederic Weisbecker, Peter Zijlstra

Remove the outdated comment from __cond_resched() related
to the now removed Big Kernel Semaphore.

Reported-by: Arnd Bergmann <arnd@arndb.de>
Reported-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 kernel/sched.c |    5 -----
 1 files changed, 0 insertions(+), 5 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index 649ec92..7b2ab88 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -6605,11 +6605,6 @@ SYSCALL_DEFINE0(sched_yield)
 
 static void __cond_resched(void)
 {
-	/*
-	 * The BKS might be reacquired before we have dropped
-	 * PREEMPT_ACTIVE, which could trigger a second
-	 * cond_resched() call.
-	 */
 	add_preempt_count(PREEMPT_ACTIVE);
 	schedule();
 	sub_preempt_count(PREEMPT_ACTIVE);
-- 
1.6.2.3



* Re: [PATCH] sched: Move the sleeping while atomic checks early in cond_resched()
  2009-07-10 17:14           ` [PATCH] " Frederic Weisbecker
@ 2009-07-10 17:43             ` Peter Zijlstra
  2009-07-10 18:08               ` Frederic Weisbecker
  0 siblings, 1 reply; 19+ messages in thread
From: Peter Zijlstra @ 2009-07-10 17:43 UTC (permalink / raw)
  To: Frederic Weisbecker; +Cc: Ingo Molnar, LKML

On Fri, 2009-07-10 at 19:14 +0200, Frederic Weisbecker wrote:
> might_sleep() is called late in cond_resched(), after the
> need_resched()/preempt-enabled/system-running tests are checked.
> 
> It's better to check for sleeping while atomic earlier, so that the
> check doesn't depend on environment conditions that reduce the
> chances of detecting a problem.
> 
> Changes in v2:
> - call __might_sleep() directly instead of might_sleep(), which may
>   call cond_resched()
> - turn cond_resched() into a macro so that the file:line pair reported
>   refers to the caller of cond_resched() and not __cond_resched() itself
> - drop the obsolete CONFIG_PREEMPT_BKL-related sections
> 
> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
> 
> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
> ---
>  include/linux/sched.h |   22 +++++++---------------
>  kernel/sched.c        |    5 ++---
>  2 files changed, 9 insertions(+), 18 deletions(-)
> 
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 0cb0d8d..737f569 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -2276,23 +2276,15 @@ static inline int need_resched(void)
>   * cond_resched_softirq() will enable bhs before scheduling.
>   */
>  extern int _cond_resched(void);
> -#ifdef CONFIG_PREEMPT_BKL
> -static inline int cond_resched(void)
> -{
> -	return 0;
> -}
> -#else
> -static inline int cond_resched(void)
> -{
> -	return _cond_resched();
> -}
> -#endif
> +#define cond_resched() ({			\
> +	__might_sleep(__FILE__, __LINE__);	\
> +	_cond_resched();			\
> +})

I don't think this will compile for !CONFIG_DEBUG_SPINLOCK_SLEEP.

>  extern int cond_resched_lock(spinlock_t * lock);
>  extern int cond_resched_softirq(void);
> -static inline int cond_resched_bkl(void)
> -{
> -	return _cond_resched();
> -}
> +
> +#define cond_resched_bkl()	cond_resched();

We might as well convert the one user of this ;-)

>  /*
>   * Does a critical section need to be broken due to another
> diff --git a/kernel/sched.c b/kernel/sched.c
> index 87ecac1..649ec92 100644
> --- a/kernel/sched.c
> +++ b/kernel/sched.c
> @@ -6605,9 +6605,6 @@ SYSCALL_DEFINE0(sched_yield)
>  
>  static void __cond_resched(void)
>  {
> -#ifdef CONFIG_DEBUG_SPINLOCK_SLEEP
> -	__might_sleep(__FILE__, __LINE__);
> -#endif
>  	/*
>  	 * The BKS might be reacquired before we have dropped
>  	 * PREEMPT_ACTIVE, which could trigger a second
> @@ -6644,6 +6641,7 @@ int cond_resched_lock(spinlock_t *lock)
>  
>  	if (spin_needbreak(lock) || resched) {
>  		spin_unlock(lock);
> +		__might_sleep(__FILE__, __LINE__);
>  		if (resched && need_resched())
>  			__cond_resched();
>  		else
> @@ -6661,6 +6659,7 @@ int __sched cond_resched_softirq(void)
>  
>  	if (need_resched() && system_state == SYSTEM_RUNNING) {
>  		local_bh_enable();
> +		__might_sleep(__FILE__, __LINE__);
>  		__cond_resched();
>  		local_bh_disable();
>  		return 1;

Right, how about renaming these to _cond_resched_{lock,softirq},
adding a __might_sleep() definition for !DEBUG_SPINLOCK_SLEEP, and
adding macro wrappers to sched.c for these two as well?
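One way to address the !CONFIG_DEBUG_SPINLOCK_SLEEP build concern raised above is a no-op fallback for __might_sleep(), so the cond_resched() macro expands cleanly either way. A hedged sketch with stub definitions; this is not the patch that was eventually merged:

```c
/* Sketch: give __might_sleep() a no-op fallback when the debug
 * option is off.  _cond_resched() is stubbed out here so the
 * fragment is self-contained; the ({ ... }) statement expression
 * is the GNU C extension the kernel already relies on. */
#include <assert.h>

/* pretend we're building without CONFIG_DEBUG_SPINLOCK_SLEEP */

static int might_sleep_calls;

#ifdef CONFIG_DEBUG_SPINLOCK_SLEEP
static void __might_sleep(const char *file, int line)
{
	(void)file; (void)line;
	might_sleep_calls++;	/* would warn if called while atomic */
}
#else
/* no-op fallback: the macro below still expands to valid code */
# define __might_sleep(file, line) do { } while (0)
#endif

static int _cond_resched(void) { return 0; }	/* stub */

#define cond_resched() ({			\
	__might_sleep(__FILE__, __LINE__);	\
	_cond_resched();			\
})
```

With the option off, cond_resched() compiles and the debug hook costs nothing.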


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH] sched: Move the sleeping while atomic checks early in cond_resched()
  2009-07-10 17:43             ` Peter Zijlstra
@ 2009-07-10 18:08               ` Frederic Weisbecker
  2009-07-10 18:13                 ` Peter Zijlstra
  0 siblings, 1 reply; 19+ messages in thread
From: Frederic Weisbecker @ 2009-07-10 18:08 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Ingo Molnar, LKML

On Fri, Jul 10, 2009 at 07:43:30PM +0200, Peter Zijlstra wrote:
> On Fri, 2009-07-10 at 19:14 +0200, Frederic Weisbecker wrote:
> > might_sleep() is called late in cond_resched(), after the
> > need_resched()/preempt-enabled/system-running tests are checked.
> > 
> > It's better to check for sleeping while atomic earlier, so that the
> > check doesn't depend on environment conditions that reduce the
> > chances of detecting a problem.
> > 
> > Changes in v2:
> > - call __might_sleep() directly instead of might_sleep(), which may
> >   call cond_resched()
> > - turn cond_resched() into a macro so that the file:line pair reported
> >   refers to the caller of cond_resched() and not __cond_resched() itself
> > - drop the obsolete CONFIG_PREEMPT_BKL-related sections
> > 
> > Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
> > Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
> > 
> > Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
> > ---
> >  include/linux/sched.h |   22 +++++++---------------
> >  kernel/sched.c        |    5 ++---
> >  2 files changed, 9 insertions(+), 18 deletions(-)
> > 
> > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > index 0cb0d8d..737f569 100644
> > --- a/include/linux/sched.h
> > +++ b/include/linux/sched.h
> > @@ -2276,23 +2276,15 @@ static inline int need_resched(void)
> >   * cond_resched_softirq() will enable bhs before scheduling.
> >   */
> >  extern int _cond_resched(void);
> > -#ifdef CONFIG_PREEMPT_BKL
> > -static inline int cond_resched(void)
> > -{
> > -	return 0;
> > -}
> > -#else
> > -static inline int cond_resched(void)
> > -{
> > -	return _cond_resched();
> > -}
> > -#endif
> > +#define cond_resched() ({			\
> > +	__might_sleep(__FILE__, __LINE__);	\
> > +	_cond_resched();			\
> > +})
> 
> I don't think this will compile for !CONFIG_DEBUG_SPINLOCK_SLEEP.


Ahh, right.

 
> >  extern int cond_resched_lock(spinlock_t * lock);
> >  extern int cond_resched_softirq(void);
> > -static inline int cond_resched_bkl(void)
> > -{
> > -	return _cond_resched();
> > -}
> > +
> > +#define cond_resched_bkl()	cond_resched();
> 
> We might as well convert the one user of this ;-)


Ok :)


> >  /*
> >   * Does a critical section need to be broken due to another
> > diff --git a/kernel/sched.c b/kernel/sched.c
> > index 87ecac1..649ec92 100644
> > --- a/kernel/sched.c
> > +++ b/kernel/sched.c
> > @@ -6605,9 +6605,6 @@ SYSCALL_DEFINE0(sched_yield)
> >  
> >  static void __cond_resched(void)
> >  {
> > -#ifdef CONFIG_DEBUG_SPINLOCK_SLEEP
> > -	__might_sleep(__FILE__, __LINE__);
> > -#endif
> >  	/*
> >  	 * The BKS might be reacquired before we have dropped
> >  	 * PREEMPT_ACTIVE, which could trigger a second
> > @@ -6644,6 +6641,7 @@ int cond_resched_lock(spinlock_t *lock)
> >  
> >  	if (spin_needbreak(lock) || resched) {
> >  		spin_unlock(lock);
> > +		__might_sleep(__FILE__, __LINE__);
> >  		if (resched && need_resched())
> >  			__cond_resched();
> >  		else
> > @@ -6661,6 +6659,7 @@ int __sched cond_resched_softirq(void)
> >  
> >  	if (need_resched() && system_state == SYSTEM_RUNNING) {
> >  		local_bh_enable();
> > +		__might_sleep(__FILE__, __LINE__);
> >  		__cond_resched();
> >  		local_bh_disable();
> >  		return 1;
> 
> Right, how about renaming these to _cond_resched_{lock,softirq},
> adding a __might_sleep() definition for !DEBUG_SPINLOCK_SLEEP, and
> adding macro wrappers to sched.c for these two as well?

I did that first, but thought that might_sleep() would fail with a
spinlock held or in softirq context, right?


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH] sched: Move the sleeping while atomic checks early in cond_resched()
  2009-07-10 18:08               ` Frederic Weisbecker
@ 2009-07-10 18:13                 ` Peter Zijlstra
  2009-07-10 18:29                   ` Frederic Weisbecker
  0 siblings, 1 reply; 19+ messages in thread
From: Peter Zijlstra @ 2009-07-10 18:13 UTC (permalink / raw)
  To: Frederic Weisbecker; +Cc: Ingo Molnar, LKML

On Fri, 2009-07-10 at 20:08 +0200, Frederic Weisbecker wrote:

> > Right, how about renaming these to _cond_resched_{lock,softirq},
> > adding a __might_sleep() definition for !DEBUG_SPINLOCK_SLEEP, and
> > adding macro wrappers to sched.c for these two as well?
> 
> I did that first, but thought that might_sleep() would fail with a
> spinlock held or in softirq context, right?

Ah, right.. maybe we can add a preempt_count_offset parameter to
__might_sleep() such that it will compensate for the pending
spin_unlock()/local_bh_enable().
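Peter's preempt_count_offset idea can be sketched with a toy counter: the sleep check subtracts the amount of preempt count the caller is about to release, so cond_resched_lock()/cond_resched_softirq() can check early without false positives. All names below are hypothetical stand-ins, not the kernel API:

```c
/* Toy sketch of a __might_sleep() that compensates for a pending
 * spin_unlock() via an offset parameter.  Simulated counter only. */
#include <assert.h>

#define PREEMPT_LOCK_OFFSET 1	/* count held by one spinlock */

static int preempt_count_sim;

/* 1 if sleeping is legal once `offset` worth of count is dropped */
static int might_sleep_sim(int offset)
{
	return (preempt_count_sim - offset) == 0;
}

static int demo(void)
{
	int early_ok, naive_ok;

	preempt_count_sim = PREEMPT_LOCK_OFFSET;      /* spin_lock() taken   */
	early_ok = might_sleep_sim(PREEMPT_LOCK_OFFSET); /* compensated: ok  */
	naive_ok = might_sleep_sim(0);                /* would false-warn    */
	preempt_count_sim = 0;                        /* spin_unlock()       */

	/* 1 when the offset form passes where the naive form would warn */
	return early_ok && !naive_ok;
}
```

The compensated check succeeds with the lock still held, while the uncompensated one would have fired.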




^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH] sched: Move the sleeping while atomic checks early in cond_resched()
  2009-07-10 18:13                 ` Peter Zijlstra
@ 2009-07-10 18:29                   ` Frederic Weisbecker
  0 siblings, 0 replies; 19+ messages in thread
From: Frederic Weisbecker @ 2009-07-10 18:29 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Ingo Molnar, LKML

On Fri, Jul 10, 2009 at 08:13:42PM +0200, Peter Zijlstra wrote:
> On Fri, 2009-07-10 at 20:08 +0200, Frederic Weisbecker wrote:
> 
> > > Right, how about renaming these to _cond_resched_{lock,softirq}, and
> > > added a __might_sleep() definition for !DEBUG_SPINLOCK_SLEEP and add
> > > macro wrappers to sched.c for these two as well?
> > 
> > I did that first, but thought that might_sleep() would fail with a
> > spinlock held or in softirq context, right?
> 
> Ah, right.. maybe we can add a preempt_count_offset parameter to
> __might_sleep() such that it will compensate for the pending
> spin_unlock()/local_bh_enable().


Good idea, I'm trying that.


^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2009-07-10 18:30 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-07-10 14:49 [PATCH 1/2] sched: Drop the need_resched() loop from cond_resched() Frederic Weisbecker
2009-07-10 14:49 ` [PATCH 2/2] sched: Move the sleeping while atomic checks early in cond_resched() Frederic Weisbecker
2009-07-10 14:59   ` Peter Zijlstra
2009-07-10 15:08     ` Frederic Weisbecker
2009-07-10 15:12       ` Peter Zijlstra
2009-07-10 16:10         ` Ingo Molnar
2009-07-10 17:14           ` [PATCH] " Frederic Weisbecker
2009-07-10 17:43             ` Peter Zijlstra
2009-07-10 18:08               ` Frederic Weisbecker
2009-07-10 18:13                 ` Peter Zijlstra
2009-07-10 18:29                   ` Frederic Weisbecker
2009-07-10 15:00 ` [PATCH 1/2] sched: Drop the need_resched() loop from cond_resched() Peter Zijlstra
2009-07-10 15:17 ` Arnd Bergmann
2009-07-10 15:24   ` Frederic Weisbecker
2009-07-10 15:35     ` Arnd Bergmann
2009-07-10 15:50       ` Frederic Weisbecker
2009-07-10 16:11         ` Ingo Molnar
2009-07-10 16:26           ` Frederic Weisbecker
2009-07-10 17:23           ` [PATCH] sched: Remove obsolete comment in __cond_resched() Frederic Weisbecker
