linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/4] sched: schedule/preempt optimizations and cleanups
@ 2015-01-28  0:24 Frederic Weisbecker
  2015-01-28  0:24 ` [PATCH 1/4] sched: Pull resched loop to __schedule() callers Frederic Weisbecker
                   ` (3 more replies)
  0 siblings, 4 replies; 18+ messages in thread
From: Frederic Weisbecker @ 2015-01-28  0:24 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra
  Cc: LKML, Frederic Weisbecker, Steven Rostedt, Linus Torvalds

Hey,

This series is based on suggestions from Linus. I posted a previous
version a month ago, and Linus then suggested further improvements.
Here is the next iteration.

The last patch proposes to (ab)use PREEMPT_ACTIVE to disable preemption
in schedule(). It's optional, as it's a possibly controversial cleanup.

Thanks.

Frederic Weisbecker (4):
  sched: Pull resched loop to __schedule() callers
  sched: Use traced preempt count operations to toggle PREEMPT_ACTIVE
  sched: Pull preemption disablement to __schedule() caller
  sched: Account PREEMPT_ACTIVE context as atomic

 include/linux/preempt_mask.h |  4 ++--
 kernel/sched/core.c          | 29 +++++++++++++++--------------
 2 files changed, 17 insertions(+), 16 deletions(-)

-- 
2.1.4



* [PATCH 1/4] sched: Pull resched loop to __schedule() callers
  2015-01-28  0:24 [PATCH 0/4] sched: schedule/preempt optimizations and cleanups Frederic Weisbecker
@ 2015-01-28  0:24 ` Frederic Weisbecker
  2015-02-04 14:36   ` [tip:sched/core] " tip-bot for Frederic Weisbecker
  2015-01-28  0:24 ` [RFC PATCH 2/4] sched: Use traced preempt count operations to toggle PREEMPT_ACTIVE Frederic Weisbecker
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 18+ messages in thread
From: Frederic Weisbecker @ 2015-01-28  0:24 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra
  Cc: LKML, Frederic Weisbecker, Steven Rostedt, Linus Torvalds

__schedule() disables preemption during its job and re-enables it
afterward without doing a preemption check to avoid recursion.

But if an event that requires rescheduling happens after the context
switch, we need to check again whether a higher priority task needs the
CPU. A preempt irq can raise such a situation. To handle that,
__schedule() loops on need_resched().

But preempt_schedule_*() functions, which call __schedule(), also loop
on need_resched() to handle missed preempt irqs. Hence we end up with
the same loop happening twice.
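
To picture it, the current path looks roughly like this (a sketch only,
not the actual code):

    /* preempt_schedule_common(), simplified */
    do {
            __preempt_count_add(PREEMPT_ACTIVE);
            __schedule();              /* loops on need_resched() internally */
            __preempt_count_sub(PREEMPT_ACTIVE);
            barrier();
    } while (need_resched());          /* the outer loop in the caller */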

Let's simplify that by attributing the need_resched() loop responsibility
to all __schedule() callers.

There is a risk that the outer loop now handles reschedules that used
to be handled by the inner loop, with the added overhead of caller details
(inc/dec of PREEMPT_ACTIVE, irq save/restore), but assuming those inner
rescheduling loops weren't too frequent, this shouldn't matter. Especially
since the whole preemption path is now losing one loop in any case.

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
---
 kernel/sched/core.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c7ed25d..bbef95d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2748,6 +2748,10 @@ again:
  *          - explicit schedule() call
  *          - return from syscall or exception to user-space
  *          - return from interrupt-handler to user-space
+ *
+ * WARNING: all callers must re-check need_resched() afterward and reschedule
+ * accordingly in case an event triggered the need for rescheduling (such as
+ * an interrupt waking up a task) while preemption was disabled in __schedule().
  */
 static void __sched __schedule(void)
 {
@@ -2756,7 +2760,6 @@ static void __sched __schedule(void)
 	struct rq *rq;
 	int cpu;
 
-need_resched:
 	preempt_disable();
 	cpu = smp_processor_id();
 	rq = cpu_rq(cpu);
@@ -2821,8 +2824,6 @@ need_resched:
 	post_schedule(rq);
 
 	sched_preempt_enable_no_resched();
-	if (need_resched())
-		goto need_resched;
 }
 
 static inline void sched_submit_work(struct task_struct *tsk)
@@ -2842,7 +2843,9 @@ asmlinkage __visible void __sched schedule(void)
 	struct task_struct *tsk = current;
 
 	sched_submit_work(tsk);
-	__schedule();
+	do {
+		__schedule();
+	} while (need_resched());
 }
 EXPORT_SYMBOL(schedule);
 
-- 
2.1.4



* [RFC PATCH 2/4] sched: Use traced preempt count operations to toggle PREEMPT_ACTIVE
  2015-01-28  0:24 [PATCH 0/4] sched: schedule/preempt optimizations and cleanups Frederic Weisbecker
  2015-01-28  0:24 ` [PATCH 1/4] sched: Pull resched loop to __schedule() callers Frederic Weisbecker
@ 2015-01-28  0:24 ` Frederic Weisbecker
  2015-01-28  1:42   ` Steven Rostedt
  2015-01-28 15:42   ` Peter Zijlstra
  2015-01-28  0:24 ` [PATCH 3/4] sched: Pull preemption disablement to __schedule() caller Frederic Weisbecker
  2015-01-28  0:24 ` [RFC PATCH 4/4] sched: Account PREEMPT_ACTIVE context as atomic Frederic Weisbecker
  3 siblings, 2 replies; 18+ messages in thread
From: Frederic Weisbecker @ 2015-01-28  0:24 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra
  Cc: LKML, Frederic Weisbecker, Steven Rostedt, Linus Torvalds

Commit d1f74e20b5b064a130cd0743a256c2d3cfe84010 turned the PREEMPT_ACTIVE
modifiers into raw untraced preempt count operations. But this prevents us
from debugging and tracing preemption disablement if we pull that
responsibility to the schedule() callers (see the following patches).

Is there anything we can do about that?

Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
---
 kernel/sched/core.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index bbef95d..89b165f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2883,9 +2883,9 @@ void __sched schedule_preempt_disabled(void)
 static void preempt_schedule_common(void)
 {
 	do {
-		__preempt_count_add(PREEMPT_ACTIVE);
+		preempt_count_add(PREEMPT_ACTIVE);
 		__schedule();
-		__preempt_count_sub(PREEMPT_ACTIVE);
+		preempt_count_sub(PREEMPT_ACTIVE);
 
 		/*
 		 * Check again in case we missed a preemption opportunity
@@ -2938,7 +2938,7 @@ asmlinkage __visible void __sched notrace preempt_schedule_context(void)
 		return;
 
 	do {
-		__preempt_count_add(PREEMPT_ACTIVE);
+		preempt_count_add(PREEMPT_ACTIVE);
 		/*
 		 * Needs preempt disabled in case user_exit() is traced
 		 * and the tracer calls preempt_enable_notrace() causing
@@ -2948,7 +2948,7 @@ asmlinkage __visible void __sched notrace preempt_schedule_context(void)
 		__schedule();
 		exception_exit(prev_ctx);
 
-		__preempt_count_sub(PREEMPT_ACTIVE);
+		preempt_count_sub(PREEMPT_ACTIVE);
 		barrier();
 	} while (need_resched());
 }
@@ -2973,11 +2973,11 @@ asmlinkage __visible void __sched preempt_schedule_irq(void)
 	prev_state = exception_enter();
 
 	do {
-		__preempt_count_add(PREEMPT_ACTIVE);
+		preempt_count_add(PREEMPT_ACTIVE);
 		local_irq_enable();
 		__schedule();
 		local_irq_disable();
-		__preempt_count_sub(PREEMPT_ACTIVE);
+		preempt_count_sub(PREEMPT_ACTIVE);
 
 		/*
 		 * Check again in case we missed a preemption opportunity
-- 
2.1.4



* [PATCH 3/4] sched: Pull preemption disablement to __schedule() caller
  2015-01-28  0:24 [PATCH 0/4] sched: schedule/preempt optimizations and cleanups Frederic Weisbecker
  2015-01-28  0:24 ` [PATCH 1/4] sched: Pull resched loop to __schedule() callers Frederic Weisbecker
  2015-01-28  0:24 ` [RFC PATCH 2/4] sched: Use traced preempt count operations to toggle PREEMPT_ACTIVE Frederic Weisbecker
@ 2015-01-28  0:24 ` Frederic Weisbecker
  2015-01-28 15:50   ` Peter Zijlstra
  2015-01-28  0:24 ` [RFC PATCH 4/4] sched: Account PREEMPT_ACTIVE context as atomic Frederic Weisbecker
  3 siblings, 1 reply; 18+ messages in thread
From: Frederic Weisbecker @ 2015-01-28  0:24 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra
  Cc: LKML, Frederic Weisbecker, Steven Rostedt, Linus Torvalds

Let's pull all the preempt count operations up to the __schedule()
callers. This way we spare two preempt count changes in __schedule()
when it's called from the preemption APIs, which can fold the preempt
disable offset into the PREEMPT_ACTIVE update they already do.

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
---
 kernel/sched/core.c | 18 ++++++++----------
 1 file changed, 8 insertions(+), 10 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 89b165f..1c0e5b1 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2760,7 +2760,6 @@ static void __sched __schedule(void)
 	struct rq *rq;
 	int cpu;
 
-	preempt_disable();
 	cpu = smp_processor_id();
 	rq = cpu_rq(cpu);
 	rcu_note_context_switch();
@@ -2822,8 +2821,6 @@ static void __sched __schedule(void)
 		raw_spin_unlock_irq(&rq->lock);
 
 	post_schedule(rq);
-
-	sched_preempt_enable_no_resched();
 }
 
 static inline void sched_submit_work(struct task_struct *tsk)
@@ -2844,7 +2841,9 @@ asmlinkage __visible void __sched schedule(void)
 
 	sched_submit_work(tsk);
 	do {
+		preempt_disable();
 		__schedule();
+		sched_preempt_enable_no_resched();
 	} while (need_resched());
 }
 EXPORT_SYMBOL(schedule);
@@ -2883,9 +2882,9 @@ void __sched schedule_preempt_disabled(void)
 static void preempt_schedule_common(void)
 {
 	do {
-		preempt_count_add(PREEMPT_ACTIVE);
+		preempt_count_add(PREEMPT_ACTIVE + PREEMPT_CHECK_OFFSET);
 		__schedule();
-		preempt_count_sub(PREEMPT_ACTIVE);
+		preempt_count_sub(PREEMPT_ACTIVE + PREEMPT_CHECK_OFFSET);
 
 		/*
 		 * Check again in case we missed a preemption opportunity
@@ -2938,7 +2937,7 @@ asmlinkage __visible void __sched notrace preempt_schedule_context(void)
 		return;
 
 	do {
-		preempt_count_add(PREEMPT_ACTIVE);
+		preempt_count_add(PREEMPT_ACTIVE + PREEMPT_CHECK_OFFSET);
 		/*
 		 * Needs preempt disabled in case user_exit() is traced
 		 * and the tracer calls preempt_enable_notrace() causing
@@ -2947,8 +2946,7 @@ asmlinkage __visible void __sched notrace preempt_schedule_context(void)
 		prev_ctx = exception_enter();
 		__schedule();
 		exception_exit(prev_ctx);
-
-		preempt_count_sub(PREEMPT_ACTIVE);
+		preempt_count_sub(PREEMPT_ACTIVE  + PREEMPT_CHECK_OFFSET);
 		barrier();
 	} while (need_resched());
 }
@@ -2973,11 +2971,11 @@ asmlinkage __visible void __sched preempt_schedule_irq(void)
 	prev_state = exception_enter();
 
 	do {
-		preempt_count_add(PREEMPT_ACTIVE);
+		preempt_count_add(PREEMPT_ACTIVE  + PREEMPT_CHECK_OFFSET);
 		local_irq_enable();
 		__schedule();
 		local_irq_disable();
-		preempt_count_sub(PREEMPT_ACTIVE);
+		preempt_count_sub(PREEMPT_ACTIVE  + PREEMPT_CHECK_OFFSET);
 
 		/*
 		 * Check again in case we missed a preemption opportunity
-- 
2.1.4



* [RFC PATCH 4/4] sched: Account PREEMPT_ACTIVE context as atomic
  2015-01-28  0:24 [PATCH 0/4] sched: schedule/preempt optimizations and cleanups Frederic Weisbecker
                   ` (2 preceding siblings ...)
  2015-01-28  0:24 ` [PATCH 3/4] sched: Pull preemption disablement to __schedule() caller Frederic Weisbecker
@ 2015-01-28  0:24 ` Frederic Weisbecker
  2015-01-28 15:46   ` Peter Zijlstra
  3 siblings, 1 reply; 18+ messages in thread
From: Frederic Weisbecker @ 2015-01-28  0:24 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra
  Cc: LKML, Frederic Weisbecker, Steven Rostedt, Linus Torvalds

PREEMPT_ACTIVE implies a non-preemptible, and thus atomic, context
despite what the in_atomic*() APIs report about it. These functions
shouldn't ignore this value as they currently do.

It appears that these APIs were ignoring PREEMPT_ACTIVE in order to
ease the check in schedule_debug(). However, it is sufficient to rely
on PREEMPT_ACTIVE alone in order to disable preemption in __schedule().

So let's fix the in_atomic*() APIs and simplify the preempt count ops
on __schedule() callers.

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
---
 include/linux/preempt_mask.h |  4 ++--
 kernel/sched/core.c          | 12 ++++++------
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/include/linux/preempt_mask.h b/include/linux/preempt_mask.h
index dbeec4d..4b8c9b7 100644
--- a/include/linux/preempt_mask.h
+++ b/include/linux/preempt_mask.h
@@ -99,14 +99,14 @@
  * used in the general case to determine whether sleeping is possible.
  * Do not use in_atomic() in driver code.
  */
-#define in_atomic()	((preempt_count() & ~PREEMPT_ACTIVE) != 0)
+#define in_atomic()	(preempt_count() != 0)
 
 /*
  * Check whether we were atomic before we did preempt_disable():
  * (used by the scheduler, *after* releasing the kernel lock)
  */
 #define in_atomic_preempt_off() \
-		((preempt_count() & ~PREEMPT_ACTIVE) != PREEMPT_CHECK_OFFSET)
+	(preempt_count() & ~(PREEMPT_ACTIVE | PREEMPT_CHECK_OFFSET))
 
 #ifdef CONFIG_PREEMPT_COUNT
 # define preemptible()	(preempt_count() == 0 && !irqs_disabled())
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 1c0e5b1..c017a5f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2882,9 +2882,9 @@ void __sched schedule_preempt_disabled(void)
 static void preempt_schedule_common(void)
 {
 	do {
-		preempt_count_add(PREEMPT_ACTIVE + PREEMPT_CHECK_OFFSET);
+		preempt_count_add(PREEMPT_ACTIVE);
 		__schedule();
-		preempt_count_sub(PREEMPT_ACTIVE + PREEMPT_CHECK_OFFSET);
+		preempt_count_sub(PREEMPT_ACTIVE);
 
 		/*
 		 * Check again in case we missed a preemption opportunity
@@ -2937,7 +2937,7 @@ asmlinkage __visible void __sched notrace preempt_schedule_context(void)
 		return;
 
 	do {
-		preempt_count_add(PREEMPT_ACTIVE + PREEMPT_CHECK_OFFSET);
+		preempt_count_add(PREEMPT_ACTIVE);
 		/*
 		 * Needs preempt disabled in case user_exit() is traced
 		 * and the tracer calls preempt_enable_notrace() causing
@@ -2946,7 +2946,7 @@ asmlinkage __visible void __sched notrace preempt_schedule_context(void)
 		prev_ctx = exception_enter();
 		__schedule();
 		exception_exit(prev_ctx);
-		preempt_count_sub(PREEMPT_ACTIVE  + PREEMPT_CHECK_OFFSET);
+		preempt_count_sub(PREEMPT_ACTIVE);
 		barrier();
 	} while (need_resched());
 }
@@ -2971,11 +2971,11 @@ asmlinkage __visible void __sched preempt_schedule_irq(void)
 	prev_state = exception_enter();
 
 	do {
-		preempt_count_add(PREEMPT_ACTIVE  + PREEMPT_CHECK_OFFSET);
+		preempt_count_add(PREEMPT_ACTIVE);
 		local_irq_enable();
 		__schedule();
 		local_irq_disable();
-		preempt_count_sub(PREEMPT_ACTIVE  + PREEMPT_CHECK_OFFSET);
+		preempt_count_sub(PREEMPT_ACTIVE);
 
 		/*
 		 * Check again in case we missed a preemption opportunity
-- 
2.1.4



* Re: [RFC PATCH 2/4] sched: Use traced preempt count operations to toggle PREEMPT_ACTIVE
  2015-01-28  0:24 ` [RFC PATCH 2/4] sched: Use traced preempt count operations to toggle PREEMPT_ACTIVE Frederic Weisbecker
@ 2015-01-28  1:42   ` Steven Rostedt
  2015-01-28 13:59     ` Frederic Weisbecker
  2015-01-28 15:42   ` Peter Zijlstra
  1 sibling, 1 reply; 18+ messages in thread
From: Steven Rostedt @ 2015-01-28  1:42 UTC (permalink / raw)
  To: Frederic Weisbecker; +Cc: Ingo Molnar, Peter Zijlstra, LKML, Linus Torvalds

On Wed, 28 Jan 2015 01:24:10 +0100
Frederic Weisbecker <fweisbec@gmail.com> wrote:

> Commit d1f74e20b5b064a130cd0743a256c2d3cfe84010 turned the PREEMPT_ACTIVE
> modifiers into raw untraced preempt count operations. But this prevents us
> from debugging and tracing preemption disablement if we pull that
> responsibility to the schedule() callers (see the following patches).
> 
> Is there anything we can do about that?
> 

I'm trying to understand how you solved the recursion issue with
preempt_schedule()?

Here's what happens:

preempt_schedule()
  preempt_count_add() -> gets traced by function tracer
     function_trace_call()
         preempt_disable_notrace()
         [...]
         preempt_enable_notrace() -> sees NEED_RESCHED set
             preempt_schedule()
                preempt_count_add() -> gets traced
                   function_trace_call()
                       preempt_disable_notrace()
                       preempt_enable_notrace() -> sees NEED_RESCHED set

                          [etc etc until BOOM!]

Perhaps if you can find a way to clear NEED_RESCHED before disabling
preemption, then it would work. But I don't see that in the patch
series.

-- Steve



* Re: [RFC PATCH 2/4] sched: Use traced preempt count operations to toggle PREEMPT_ACTIVE
  2015-01-28  1:42   ` Steven Rostedt
@ 2015-01-28 13:59     ` Frederic Weisbecker
  2015-01-28 15:04       ` Steven Rostedt
  0 siblings, 1 reply; 18+ messages in thread
From: Frederic Weisbecker @ 2015-01-28 13:59 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: Ingo Molnar, Peter Zijlstra, LKML, Linus Torvalds

On Tue, Jan 27, 2015 at 08:42:39PM -0500, Steven Rostedt wrote:
> On Wed, 28 Jan 2015 01:24:10 +0100
> Frederic Weisbecker <fweisbec@gmail.com> wrote:
> 
> > Commit d1f74e20b5b064a130cd0743a256c2d3cfe84010 turned the PREEMPT_ACTIVE
> > modifiers into raw untraced preempt count operations. But this prevents us
> > from debugging and tracing preemption disablement if we pull that
> > responsibility to the schedule() callers (see the following patches).
> > 
> > Is there anything we can do about that?
> > 
> 
> I'm trying to understand how you solved the recursion issue with
> preempt_schedule()?

I don't solve it; rather, I outline the issue to make sure it's still
a problem today. I can keep the non-traced API, but we'll lose debuggability
and latency measurement in preempt_schedule(). But I think this was already
the case before my patchset.

> 
> Here's what happens:
> 
> preempt_schedule()
>   preempt_count_add() -> gets traced by function tracer
>      function_trace_call()
>          preempt_disable_notrace()
>          [...]
>          preempt_enable_notrace() -> sees NEED_RESCHED set
>              preempt_schedule()
>                 preempt_count_add() -> gets traced
>                    function_trace_call()
>                        preempt_disable_notrace()
>                        preempt_enable_notrace() -> sees NEED_RESCHED set
> 
>                           [etc etc until BOOM!]
> 
> > Perhaps if you can find a way to clear NEED_RESCHED before disabling
> preemption, then it would work. But I don't see that in the patch
> series.

Something like this in function tracing?

prev_resched = need_resched();
do_function_tracing()
    preempt_disable()
    .....
    preempt_enable_no_resched()

if (!prev_resched && need_resched())
    preempt_schedule()

Sounds like a good idea. More overhead but maybe more stability.

> 
> -- Steve
> 


* Re: [RFC PATCH 2/4] sched: Use traced preempt count operations to toggle PREEMPT_ACTIVE
  2015-01-28 13:59     ` Frederic Weisbecker
@ 2015-01-28 15:04       ` Steven Rostedt
  0 siblings, 0 replies; 18+ messages in thread
From: Steven Rostedt @ 2015-01-28 15:04 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: Ingo Molnar, Peter Zijlstra, LKML, Linus Torvalds,
	Thomas Gleixner, Peter Zijlstra

On Wed, 28 Jan 2015 14:59:43 +0100
Frederic Weisbecker <fweisbec@gmail.com> wrote:

> On Tue, Jan 27, 2015 at 08:42:39PM -0500, Steven Rostedt wrote:
> > On Wed, 28 Jan 2015 01:24:10 +0100
> > Frederic Weisbecker <fweisbec@gmail.com> wrote:
> > 
> > > Commit d1f74e20b5b064a130cd0743a256c2d3cfe84010 turned the PREEMPT_ACTIVE
> > > modifiers into raw untraced preempt count operations. But this prevents us
> > > from debugging and tracing preemption disablement if we pull that
> > > responsibility to the schedule() callers (see the following patches).
> > > 
> > > Is there anything we can do about that?
> > > 
> > 
> > I'm trying to understand how you solved the recursion issue with
> > preempt_schedule()?
> 
> I don't solve it; rather, I outline the issue to make sure it's still
> a problem today. I can keep the non-traced API, but we'll lose debuggability
> and latency measurement in preempt_schedule(). But I think this was already
> the case before my patchset.
> 
> > 
> > Here's what happens:
> > 
> > preempt_schedule()
> >   preempt_count_add() -> gets traced by function tracer
> >      function_trace_call()
> >          preempt_disable_notrace()
> >          [...]
> >          preempt_enable_notrace() -> sees NEED_RESCHED set
> >              preempt_schedule()
> >                 preempt_count_add() -> gets traced
> >                    function_trace_call()
> >                        preempt_disable_notrace()
> >                        preempt_enable_notrace() -> sees NEED_RESCHED set
> > 
> >                           [etc etc until BOOM!]
> > 
> > > Perhaps if you can find a way to clear NEED_RESCHED before disabling
> > preemption, then it would work. But I don't see that in the patch
> > series.
> 
> Something like this in function tracing?
> 
> prev_resched = need_resched();
> do_function_tracing()
>     preempt_disable()
>     .....
>     preempt_enable_no_resched()
> 
> if (!prev_resched && need_resched())
>     preempt_schedule()
> 
> Sounds like a good idea. More overhead but maybe more stability.
> 

HAHAHAHAH! Yeah, if I want another pounding by Thomas.

That's exactly what I had originally, when Thomas pointed out that
there's a race there that could have need_resched set just before
calling the function tracer, and we miss the preemption check and never
schedule on time. That took Thomas some time to figure out, and he was
really pissed when he discovered it was because of the function tracer.

So, no, that wont work.

Maybe we could just have a per_cpu variable that we set in
preempt_schedule(), which the function tracer can read and use to say
"do not resched here".
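
Roughly something like this (completely untested sketch, names made up):

    static DEFINE_PER_CPU(int, in_preempt_schedule);

    /* in preempt_schedule(), around the PREEMPT_ACTIVE section */
    this_cpu_write(in_preempt_schedule, 1);
    preempt_count_add(PREEMPT_ACTIVE);
    __schedule();
    preempt_count_sub(PREEMPT_ACTIVE);
    this_cpu_write(in_preempt_schedule, 0);

    /* in the function tracer's preempt_enable path */
    if (this_cpu_read(in_preempt_schedule))
            return;         /* don't recurse back into preempt_schedule() */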

Another solution is to hard code the preempt trace just after the
__preempt_disable(), to simulate what the tracing would have done if it
were enabled. I had patches to do that a long time ago, but Peter didn't
like them. Although I think that was because I used a trick with
stop_critical_timings; a custom version made for this exact purpose
would be a cleaner approach.
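
Something in this direction perhaps (again just a sketch; the
trace_preempt_*() calls are no-ops unless PREEMPT_TRACER is enabled):

    __preempt_count_add(PREEMPT_ACTIVE);
    /* open-code roughly what the traced preempt_count_add() would record */
    trace_preempt_off(CALLER_ADDR0, CALLER_ADDR1);
    __schedule();
    trace_preempt_on(CALLER_ADDR0, CALLER_ADDR1);
    __preempt_count_sub(PREEMPT_ACTIVE);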

-- Steve


* Re: [RFC PATCH 2/4] sched: Use traced preempt count operations to toggle PREEMPT_ACTIVE
  2015-01-28  0:24 ` [RFC PATCH 2/4] sched: Use traced preempt count operations to toggle PREEMPT_ACTIVE Frederic Weisbecker
  2015-01-28  1:42   ` Steven Rostedt
@ 2015-01-28 15:42   ` Peter Zijlstra
  2015-02-02 17:22     ` Frederic Weisbecker
  1 sibling, 1 reply; 18+ messages in thread
From: Peter Zijlstra @ 2015-01-28 15:42 UTC (permalink / raw)
  To: Frederic Weisbecker; +Cc: Ingo Molnar, LKML, Steven Rostedt, Linus Torvalds

On Wed, Jan 28, 2015 at 01:24:10AM +0100, Frederic Weisbecker wrote:
> Commit d1f74e20b5b064a130cd0743a256c2d3cfe84010 turned the PREEMPT_ACTIVE
> modifiers into raw untraced preempt count operations. But this prevents us
> from debugging and tracing preemption disablement if we pull that
> responsibility to the schedule() callers (see the following patches).
> 
Why do you care anyhow? The trace format has the preempt count in, so
any change is obvious.


* Re: [RFC PATCH 4/4] sched: Account PREEMPT_ACTIVE context as atomic
  2015-01-28  0:24 ` [RFC PATCH 4/4] sched: Account PREEMPT_ACTIVE context as atomic Frederic Weisbecker
@ 2015-01-28 15:46   ` Peter Zijlstra
  2015-02-02 17:29     ` Frederic Weisbecker
  0 siblings, 1 reply; 18+ messages in thread
From: Peter Zijlstra @ 2015-01-28 15:46 UTC (permalink / raw)
  To: Frederic Weisbecker; +Cc: Ingo Molnar, LKML, Steven Rostedt, Linus Torvalds

On Wed, Jan 28, 2015 at 01:24:12AM +0100, Frederic Weisbecker wrote:
> PREEMPT_ACTIVE implies a non-preemptible, and thus atomic, context
> despite what the in_atomic*() APIs report about it. These functions
> shouldn't ignore this value as they currently do.
> 
> It appears that these APIs were ignoring PREEMPT_ACTIVE in order to
> ease the check in schedule_debug(). However, it is sufficient to rely
> on PREEMPT_ACTIVE alone in order to disable preemption in __schedule().
> 
> So let's fix the in_atomic*() APIs and simplify the preempt count ops
> on __schedule() callers.

I think the history here is that PREEMPT_ACTIVE is/was seen as a flag
protecting against recursion, not so much as a preempt-disable.

By doing this, you lose that separation.

Note that (at least on x86) we have another flag in the preempt count.

And I don't think the generated code really changes, the only difference
is the value added/subtracted and that's an encoded immediate I think.


* Re: [PATCH 3/4] sched: Pull preemption disablement to __schedule() caller
  2015-01-28  0:24 ` [PATCH 3/4] sched: Pull preemption disablement to __schedule() caller Frederic Weisbecker
@ 2015-01-28 15:50   ` Peter Zijlstra
  2015-02-02 17:53     ` Frederic Weisbecker
  0 siblings, 1 reply; 18+ messages in thread
From: Peter Zijlstra @ 2015-01-28 15:50 UTC (permalink / raw)
  To: Frederic Weisbecker; +Cc: Ingo Molnar, LKML, Steven Rostedt, Linus Torvalds

On Wed, Jan 28, 2015 at 01:24:11AM +0100, Frederic Weisbecker wrote:
> +++ b/kernel/sched/core.c
> @@ -2760,7 +2760,6 @@ static void __sched __schedule(void)
>  	struct rq *rq;
>  	int cpu;
>  
> -	preempt_disable();

Implies barrier();

>  	cpu = smp_processor_id();
>  	rq = cpu_rq(cpu);
>  	rcu_note_context_switch();
> @@ -2822,8 +2821,6 @@ static void __sched __schedule(void)
>  		raw_spin_unlock_irq(&rq->lock);
>  
>  	post_schedule(rq);

implies barrier();

> -
> -	sched_preempt_enable_no_resched();
>  }
>  
>  static inline void sched_submit_work(struct task_struct *tsk)

> @@ -2883,9 +2882,9 @@ void __sched schedule_preempt_disabled(void)
>  static void preempt_schedule_common(void)
>  {
>  	do {
> -		preempt_count_add(PREEMPT_ACTIVE);
> +		preempt_count_add(PREEMPT_ACTIVE + PREEMPT_CHECK_OFFSET);

Does _NOT_ imply barrier();

>  		__schedule();
> -		preempt_count_sub(PREEMPT_ACTIVE);

idem.

> +		preempt_count_sub(PREEMPT_ACTIVE + PREEMPT_CHECK_OFFSET);
>  
>  		/*
>  		 * Check again in case we missed a preemption opportunity


* Re: [RFC PATCH 2/4] sched: Use traced preempt count operations to toggle PREEMPT_ACTIVE
  2015-01-28 15:42   ` Peter Zijlstra
@ 2015-02-02 17:22     ` Frederic Weisbecker
  0 siblings, 0 replies; 18+ messages in thread
From: Frederic Weisbecker @ 2015-02-02 17:22 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Ingo Molnar, LKML, Steven Rostedt, Linus Torvalds

On Wed, Jan 28, 2015 at 04:42:30PM +0100, Peter Zijlstra wrote:
> On Wed, Jan 28, 2015 at 01:24:10AM +0100, Frederic Weisbecker wrote:
> > Commit d1f74e20b5b064a130cd0743a256c2d3cfe84010 turned the PREEMPT_ACTIVE
> > modifiers into raw untraced preempt count operations. But this prevents us
> > from debugging and tracing preemption disablement if we pull that
> > responsibility to the schedule() callers (see the following patches).
> > 
> Why do you care anyhow? The trace format has the preempt count in, so
> any change is obvious.

Yeah, OK, let's keep things that way. Inc/dec of PREEMPT_ACTIVE + PREEMPT_OFFSET from the same
place doesn't change anything anyway wrt preempt tracing. It wasn't traced before, so let's
keep the raw preempt operations. If we want to trace this place,
that's another problem.


* Re: [RFC PATCH 4/4] sched: Account PREEMPT_ACTIVE context as atomic
  2015-01-28 15:46   ` Peter Zijlstra
@ 2015-02-02 17:29     ` Frederic Weisbecker
  0 siblings, 0 replies; 18+ messages in thread
From: Frederic Weisbecker @ 2015-02-02 17:29 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Ingo Molnar, LKML, Steven Rostedt, Linus Torvalds

On Wed, Jan 28, 2015 at 04:46:37PM +0100, Peter Zijlstra wrote:
> On Wed, Jan 28, 2015 at 01:24:12AM +0100, Frederic Weisbecker wrote:
> > PREEMPT_ACTIVE implies a non-preemptible, and thus atomic, context
> > despite what the in_atomic*() APIs report about it. These functions
> > shouldn't ignore this value as they currently do.
> > 
> > It appears that these APIs were ignoring PREEMPT_ACTIVE in order to
> > ease the check in schedule_debug(). However, it is sufficient to rely
> > on PREEMPT_ACTIVE alone in order to disable preemption in __schedule().
> > 
> > So let's fix the in_atomic*() APIs and simplify the preempt count ops
> > on __schedule() callers.
> 
> I think the history here is that PREEMPT_ACTIVE is/was seen as a flag
> protecting against recursion, not so much as a preempt-disable.
> 
> By doing this, you lose that separation.

Indeed, preemption disablement is a side effect.

> 
> Note that (at least on x86) we have another flag in the preempt count.
> 
> And I don't think the generated code really changes, the only difference
> is the value added/subtracted and that's an encoded immediate I think.

Right, the resulting code isn't optimized at all by this patch. Only the C code
was deemed to be simpler, but actually it isn't, since we are abusing a
side-effect property.

I'm dropping this patch then.

Thanks.


* Re: [PATCH 3/4] sched: Pull preemption disablement to __schedule() caller
  2015-01-28 15:50   ` Peter Zijlstra
@ 2015-02-02 17:53     ` Frederic Weisbecker
  2015-02-03 10:53       ` Peter Zijlstra
  0 siblings, 1 reply; 18+ messages in thread
From: Frederic Weisbecker @ 2015-02-02 17:53 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Ingo Molnar, LKML, Steven Rostedt, Linus Torvalds

On Wed, Jan 28, 2015 at 04:50:44PM +0100, Peter Zijlstra wrote:
> On Wed, Jan 28, 2015 at 01:24:11AM +0100, Frederic Weisbecker wrote:
> > +++ b/kernel/sched/core.c
> > @@ -2760,7 +2760,6 @@ static void __sched __schedule(void)
> >  	struct rq *rq;
> >  	int cpu;
> >  
> > -	preempt_disable();
> 
> Implies barrier();
> 
> >  	cpu = smp_processor_id();
> >  	rq = cpu_rq(cpu);
> >  	rcu_note_context_switch();
> > @@ -2822,8 +2821,6 @@ static void __sched __schedule(void)
> >  		raw_spin_unlock_irq(&rq->lock);
> >  
> >  	post_schedule(rq);
> 
> implies barrier();
> 
> > -
> > -	sched_preempt_enable_no_resched();
> >  }
> >  
> >  static inline void sched_submit_work(struct task_struct *tsk)
> 
> > @@ -2883,9 +2882,9 @@ void __sched schedule_preempt_disabled(void)
> >  static void preempt_schedule_common(void)
> >  {
> >  	do {
> > -		preempt_count_add(PREEMPT_ACTIVE);
> > +		preempt_count_add(PREEMPT_ACTIVE + PREEMPT_CHECK_OFFSET);
> 
> Does _NOT_ imply barrier();
> 
> >  		__schedule();
> > -		preempt_count_sub(PREEMPT_ACTIVE);
> 
> idem.

It looks like preempt_count_add/inc() mostly imply entering a context that we want
to be seen right away (thus want barrier() after) and preempt_count_sub/dec() mostly
want previous work to be visible before re-enabling interrupts, preemption, etc...
(thus want barrier() before).

So maybe these functions (the non-underscored ones) should imply a barrier() rather
than their callers (preempt_disable() and others). Inline functions instead of macros
would do the trick (if the headers hell let us do that).
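
I.e. something like this (rough sketch, ignoring the debug/tracing
variants):

    static __always_inline void preempt_count_add(int val)
    {
            __preempt_count_add(val);
            barrier();      /* make the new context visible right away */
    }

    static __always_inline void preempt_count_sub(int val)
    {
            barrier();      /* previous work visible before the decrement */
            __preempt_count_sub(val);
    }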

Note the underscored implementations are all inline currently so this happens to
work by chance for direct calls to preempt_count_add/sub() outside preempt_disable().
If the non-underscored caller is turned into inline too I don't expect performance issues.

What do you think, does it make sense?


> 
> > +		preempt_count_sub(PREEMPT_ACTIVE + PREEMPT_CHECK_OFFSET);
> >  
> >  		/*
> >  		 * Check again in case we missed a preemption opportunity


* Re: [PATCH 3/4] sched: Pull preemption disablement to __schedule() caller
  2015-02-02 17:53     ` Frederic Weisbecker
@ 2015-02-03 10:53       ` Peter Zijlstra
  2015-02-04 17:31         ` Frederic Weisbecker
  0 siblings, 1 reply; 18+ messages in thread
From: Peter Zijlstra @ 2015-02-03 10:53 UTC (permalink / raw)
  To: Frederic Weisbecker; +Cc: Ingo Molnar, LKML, Steven Rostedt, Linus Torvalds

On Mon, Feb 02, 2015 at 06:53:45PM +0100, Frederic Weisbecker wrote:
> It looks like preempt_count_add/inc() mostly imply entering a context that we want
> to be seen right away (thus want barrier() after) and preempt_count_sub/dec() mostly
> want previous work to be visible before re-enabling interrupts, preemption, etc...
> (thus want barrier() before).
> 
> So maybe these functions (the non-underscored ones) should imply a barrier() rather
> than their callers (preempt_disable() and others). Inline functions instead of macros
> would do the trick (if the headers hell let us do that).
> 
> Note the underscored implementations are all inline currently so this happens to
> work by chance for direct calls to preempt_count_add/sub() outside preempt_disable().
> If the non-underscored caller is turned into inline too I don't expect performance issues.
> 
> What do you think, does it make sense?

AFAIK inline does _not_ guarantee a compiler barrier, only an actual
function call does.

When inlining the compiler creates visibility into the 'call' and can
avoid the constraint -- teh interweb seems to agree and also pointed out
that 'pure' function calls, even when actual function calls, can avoid
being a compiler barrier.
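
For reference, the constraint we rely on here is just the memory clobber
from barrier():

    /* include/linux/compiler-gcc.h */
    #define barrier() __asm__ __volatile__("" : : : "memory")

Once a call is inlined (or known to be pure), the compiler no longer has
to assume it clobbers memory, so nothing forces it to flush state around
it the way this clobber does.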

The below blog seems to do a fair job of explaining things; in
particular the 'implied compiler barriers' section is relevant here:

  http://preshing.com/20120625/memory-ordering-at-compile-time/

As it stands the difference between the non underscore and the
underscore version is debug/tracing muck. The underscore ops are the raw
operations without fancy bits on.

I think I would prefer keeping it that way; this means that
preempt_count_$op() is a pure op and when we want to build stuff with it
like preempt_{en,dis}able() they add the extra semantics on top.

In any case, if we make __schedule() noinline (I think that might make
sense) that function call would itself imply the compiler barrier and
something like:

	__preempt_count_add(PREEMPT_ACTIVE + PREEMPT_CHECK_OFFSET);
	__schedule();
	__preempt_count_sub(PREEMPT_ACTIVE + PREEMPT_CHECK_OFFSET);

Would actually be safe/correct.

As it stands I think __schedule() would fail the GCC inline static
criteria for being too large, but you never know, noinline guarantees it
will not.


* [tip:sched/core] sched: Pull resched loop to __schedule() callers
  2015-01-28  0:24 ` [PATCH 1/4] sched: Pull resched loop to __schedule() callers Frederic Weisbecker
@ 2015-02-04 14:36   ` tip-bot for Frederic Weisbecker
  0 siblings, 0 replies; 18+ messages in thread
From: tip-bot for Frederic Weisbecker @ 2015-02-04 14:36 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: fweisbec, rostedt, peterz, torvalds, tglx, linux-kernel, mingo, hpa

Commit-ID:  bfd9b2b5f80e7289fdd50210afe4d9ca5952a865
Gitweb:     http://git.kernel.org/tip/bfd9b2b5f80e7289fdd50210afe4d9ca5952a865
Author:     Frederic Weisbecker <fweisbec@gmail.com>
AuthorDate: Wed, 28 Jan 2015 01:24:09 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 4 Feb 2015 07:52:30 +0100

sched: Pull resched loop to __schedule() callers

__schedule() disables preemption during its job and re-enables it
afterward without doing a preemption check to avoid recursion.

But if an event that requires rescheduling happens after the context
switch, we need to check again whether a higher priority task needs the
CPU. A preempt irq can raise such a situation. To handle that,
__schedule() loops on need_resched().

But preempt_schedule_*() functions, which call __schedule(), also loop
on need_resched() to handle missed preempt irqs. Hence we end up with
the same loop happening twice.

Let's simplify that by attributing the need_resched() loop responsibility
to all __schedule() callers.

There is a risk that the outer loop now handles reschedules that used
to be handled by the inner loop, with the added overhead of caller details
(inc/dec of PREEMPT_ACTIVE, irq save/restore), but assuming those inner
rescheduling loops weren't too frequent, this shouldn't matter. Especially
since the whole preemption path is now losing one loop in any case.

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Link: http://lkml.kernel.org/r/1422404652-29067-2-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/core.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 5091fd4..b269e8a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2765,6 +2765,10 @@ again:
  *          - explicit schedule() call
  *          - return from syscall or exception to user-space
  *          - return from interrupt-handler to user-space
+ *
+ * WARNING: all callers must re-check need_resched() afterward and reschedule
+ * accordingly in case an event triggered the need for rescheduling (such as
+ * an interrupt waking up a task) while preemption was disabled in __schedule().
  */
 static void __sched __schedule(void)
 {
@@ -2773,7 +2777,6 @@ static void __sched __schedule(void)
 	struct rq *rq;
 	int cpu;
 
-need_resched:
 	preempt_disable();
 	cpu = smp_processor_id();
 	rq = cpu_rq(cpu);
@@ -2840,8 +2843,6 @@ need_resched:
 	post_schedule(rq);
 
 	sched_preempt_enable_no_resched();
-	if (need_resched())
-		goto need_resched;
 }
 
 static inline void sched_submit_work(struct task_struct *tsk)
@@ -2861,7 +2862,9 @@ asmlinkage __visible void __sched schedule(void)
 	struct task_struct *tsk = current;
 
 	sched_submit_work(tsk);
-	__schedule();
+	do {
+		__schedule();
+	} while (need_resched());
 }
 EXPORT_SYMBOL(schedule);
 


* Re: [PATCH 3/4] sched: Pull preemption disablement to __schedule() caller
  2015-02-03 10:53       ` Peter Zijlstra
@ 2015-02-04 17:31         ` Frederic Weisbecker
  2015-02-04 17:48           ` Peter Zijlstra
  0 siblings, 1 reply; 18+ messages in thread
From: Frederic Weisbecker @ 2015-02-04 17:31 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Ingo Molnar, LKML, Steven Rostedt, Linus Torvalds

On Tue, Feb 03, 2015 at 11:53:03AM +0100, Peter Zijlstra wrote:
> On Mon, Feb 02, 2015 at 06:53:45PM +0100, Frederic Weisbecker wrote:
> > It looks like preempt_count_add/inc() mostly imply entering a context that we want
> > to be seen right away (thus want barrier() after) and preempt_count_sub/dec() mostly
> > want previous work to be visible before re-enabling interrupts, preemption, etc...
> > (thus want barrier() before).
> > 
> > So maybe these functions (the non-underscored ones) should imply a barrier() rather
> > than their callers (preempt_disable() and others). Inline functions instead of macros
> > would do the trick (if the headers hell let us do that).
> > 
> > Note the underscored implementations are all inline currently so this happens to
> > work by chance for direct calls to preempt_count_add/sub() outside preempt_disable().
> > If the non-underscored caller is turned into inline too I don't expect performance issues.
> > 
> > What do you think, does it make sense?
> 
> AFAIK inline does _not_ guarantee a compiler barrier, only an actual
> function call does.
> 
> When inlining the compiler creates visibility into the 'call' and can
> avoid the constraint -- teh interweb seems to agree and also pointed out
> that 'pure' function calls, even when actual function calls, can avoid
> being a compiler barrier.
> 
> The below blog seems to do a fair job of explaining things; in
> particular the 'implied compiler barriers' section is relevant here:
> 
>   http://preshing.com/20120625/memory-ordering-at-compile-time/

Ok, ok then.
 
> As it stands the difference between the non underscore and the
> underscore version is debug/tracing muck. The underscore ops are the raw
> operations without fancy bits on.
> 
> I think I would prefer keeping it that way; this means that
> preempt_count_$op() is a pure op and when we want to build stuff with it
> like preempt_{en,dis}able() they add the extra semantics on top.
> 
> > In any case, if we make __schedule() noinline (I think that might make
> sense) that function call would itself imply the compiler barrier and
> something like:
> 
> 	__preempt_count_add(PREEMPT_ACTIVE + PREEMPT_CHECK_OFFSET);
> 	__schedule();
> 	__preempt_count_sub(PREEMPT_ACTIVE + PREEMPT_CHECK_OFFSET);
> 
> Would actually be safe/correct.
> 
> As it stands I think __schedule() would fail the GCC inline static
> criteria for being too large, but you never know, noinline guarantees it
> will not.

Right, although relying only on the __schedule() function call is perhaps
error-prone in case we later add things to the preempt_schedule*() APIs,
before the call to __schedule(), that need the preempt count to be visible.

I can create preempt_active_enter() / preempt_active_exit() helpers that
take care of the preempt count op and the barrier(), for example.
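
Something along these lines (sketch only, using the offsets from this
series):

    #define preempt_active_enter() \
    do { \
            preempt_count_add(PREEMPT_ACTIVE + PREEMPT_CHECK_OFFSET); \
            barrier(); \
    } while (0)

    #define preempt_active_exit() \
    do { \
            barrier(); \
            preempt_count_sub(PREEMPT_ACTIVE + PREEMPT_CHECK_OFFSET); \
    } while (0)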



* Re: [PATCH 3/4] sched: Pull preemption disablement to __schedule() caller
  2015-02-04 17:31         ` Frederic Weisbecker
@ 2015-02-04 17:48           ` Peter Zijlstra
  0 siblings, 0 replies; 18+ messages in thread
From: Peter Zijlstra @ 2015-02-04 17:48 UTC (permalink / raw)
  To: Frederic Weisbecker; +Cc: Ingo Molnar, LKML, Steven Rostedt, Linus Torvalds

On Wed, Feb 04, 2015 at 06:31:57PM +0100, Frederic Weisbecker wrote:

> > In any case; if we make __schedule() noinline (I think that might make
> > sense) that function call would itself imply the compiler barrier and
> > something like:
> > 
> > 	__preempt_count_add(PREEMPT_ACTIVE + PREEMPT_CHECK_OFFSET);
> > 	__schedule();
> > 	__preempt_count_sub(PREEMPT_ACTIVE + PREEMPT_CHECK_OFFSET);
> > 
> > Would actually be safe/correct.
> > 
> > As it stands I think __schedule() would fail the GCC inline static
> > criteria for being too large, but you never know, noinline guarantees it
> > will not.
> 
> Right, although relying only on __schedule() as a function call is perhaps
> error-prone in case we add things in preempt_schedule*() APIs later, before
> the call to __schedule(), that need the preempt count to be visible.
> 
> I can create preempt_active_enter() / preempt_active_exit() that take care
> of the preempt op and the barrier() for example.

Sure, like that exception_enter() in preempt_schedule_context() for
instance?

