* [PATCH 0/4] sched: Make sleep inside atomic detection work on !PREEMPT
@ 2011-06-08 17:48 Frederic Weisbecker
  2011-06-08 17:48 ` [PATCH 1/4] sched: Remove pointless in_atomic() definition check Frederic Weisbecker
                   ` (4 more replies)
  0 siblings, 5 replies; 16+ messages in thread
From: Frederic Weisbecker @ 2011-06-08 17:48 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Paul E. McKenney, Peter Zijlstra,
	Randy Dunlap, Ingo Molnar

As a side effect, it may mostly avoid the need for a specific
PROVE_RCU check when we sleep inside an RCU read-side critical
section.

Better to make sleeping inside atomic sections detectable everywhere.

Frederic Weisbecker (4):
  sched: Remove pointless in_atomic() definition check
  sched: Isolate preempt counting in its own config option
  sched: Make sleeping inside spinlock detection working in
    !CONFIG_PREEMPT
  sched: Generalize sleep inside spinlock detection

 Documentation/DocBook/kernel-hacking.tmpl  |    2 +-
 Documentation/SubmitChecklist              |    2 +-
 Documentation/development-process/4.Coding |    2 +-
 Documentation/ja_JP/SubmitChecklist        |    2 +-
 Documentation/zh_CN/SubmitChecklist        |    2 +-
 include/linux/bit_spinlock.h               |    2 +-
 include/linux/hardirq.h                    |    4 ++--
 include/linux/kernel.h                     |    2 +-
 include/linux/pagemap.h                    |    4 ++--
 include/linux/preempt.h                    |   26 +++++++++++++++++---------
 include/linux/rcupdate.h                   |   12 ++++++------
 include/linux/sched.h                      |    2 +-
 kernel/Kconfig.preempt                     |    3 +++
 kernel/sched.c                             |    6 ++----
 lib/Kconfig.debug                          |    9 ++++++---
 15 files changed, 46 insertions(+), 34 deletions(-)

-- 
1.7.5.4


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH 1/4] sched: Remove pointless in_atomic() definition check
  2011-06-08 17:48 [PATCH 0/4] sched: Make sleep inside atomic detection work on !PREEMPT Frederic Weisbecker
@ 2011-06-08 17:48 ` Frederic Weisbecker
  2011-06-08 17:48 ` [PATCH 2/4] sched: Isolate preempt counting in its own config option Frederic Weisbecker
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 16+ messages in thread
From: Frederic Weisbecker @ 2011-06-08 17:48 UTC (permalink / raw)
  To: LKML; +Cc: Frederic Weisbecker, Ingo Molnar, Peter Zijlstra

in_atomic() is really supposed to be defined here. If it's not,
we actually want the build to fail so that we notice,
rather than keeping the breakage silent.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 kernel/sched.c |    2 --
 1 files changed, 0 insertions(+), 2 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index fd18f39..01d9536 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -8028,7 +8028,6 @@ static inline int preempt_count_equals(int preempt_offset)
 
 void __might_sleep(const char *file, int line, int preempt_offset)
 {
-#ifdef in_atomic
 	static unsigned long prev_jiffy;	/* ratelimiting */
 
 	if ((preempt_count_equals(preempt_offset) && !irqs_disabled()) ||
@@ -8050,7 +8049,6 @@ void __might_sleep(const char *file, int line, int preempt_offset)
 	if (irqs_disabled())
 		print_irqtrace_events(current);
 	dump_stack();
-#endif
 }
 EXPORT_SYMBOL(__might_sleep);
 #endif
-- 
1.7.5.4


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 2/4] sched: Isolate preempt counting in its own config option
  2011-06-08 17:48 [PATCH 0/4] sched: Make sleep inside atomic detection work on !PREEMPT Frederic Weisbecker
  2011-06-08 17:48 ` [PATCH 1/4] sched: Remove pointless in_atomic() definition check Frederic Weisbecker
@ 2011-06-08 17:48 ` Frederic Weisbecker
  2011-06-08 19:40   ` Paul E. McKenney
  2011-06-08 19:47   ` Peter Zijlstra
  2011-06-08 17:48 ` [PATCH 3/4] sched: Make sleeping inside spinlock detection working in !CONFIG_PREEMPT Frederic Weisbecker
                   ` (2 subsequent siblings)
  4 siblings, 2 replies; 16+ messages in thread
From: Frederic Weisbecker @ 2011-06-08 17:48 UTC (permalink / raw)
  To: LKML; +Cc: Frederic Weisbecker, Paul E. McKenney, Ingo Molnar, Peter Zijlstra

Create a new CONFIG_PREEMPT_COUNT that handles the inc/dec
of the preempt count independently, so that the count can be
updated by preempt_disable() and preempt_enable() even without
CONFIG_PREEMPT being set.

This prepares for making CONFIG_DEBUG_SPINLOCK_SLEEP work
with !CONFIG_PREEMPT, where it currently doesn't detect
code that sleeps inside explicit preemption-disabled
sections.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 include/linux/bit_spinlock.h |    2 +-
 include/linux/hardirq.h      |    4 ++--
 include/linux/pagemap.h      |    4 ++--
 include/linux/preempt.h      |   26 +++++++++++++++++---------
 include/linux/rcupdate.h     |   12 ++++++------
 include/linux/sched.h        |    2 +-
 kernel/Kconfig.preempt       |    3 +++
 kernel/sched.c               |    2 +-
 8 files changed, 33 insertions(+), 22 deletions(-)

diff --git a/include/linux/bit_spinlock.h b/include/linux/bit_spinlock.h
index b4326bf..564d997 100644
--- a/include/linux/bit_spinlock.h
+++ b/include/linux/bit_spinlock.h
@@ -88,7 +88,7 @@ static inline int bit_spin_is_locked(int bitnum, unsigned long *addr)
 {
 #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
 	return test_bit(bitnum, addr);
-#elif defined CONFIG_PREEMPT
+#elif defined CONFIG_PREEMPT_COUNT
 	return preempt_count();
 #else
 	return 1;
diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h
index ba36217..f743883f 100644
--- a/include/linux/hardirq.h
+++ b/include/linux/hardirq.h
@@ -93,7 +93,7 @@
  */
 #define in_nmi()	(preempt_count() & NMI_MASK)
 
-#if defined(CONFIG_PREEMPT)
+#if defined(CONFIG_PREEMPT_COUNT)
 # define PREEMPT_CHECK_OFFSET 1
 #else
 # define PREEMPT_CHECK_OFFSET 0
@@ -115,7 +115,7 @@
 #define in_atomic_preempt_off() \
 		((preempt_count() & ~PREEMPT_ACTIVE) != PREEMPT_CHECK_OFFSET)
 
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPT_COUNT
 # define preemptible()	(preempt_count() == 0 && !irqs_disabled())
 # define IRQ_EXIT_OFFSET (HARDIRQ_OFFSET-1)
 #else
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 716875e..8e38d4c 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -134,7 +134,7 @@ static inline int page_cache_get_speculative(struct page *page)
 	VM_BUG_ON(in_interrupt());
 
 #if !defined(CONFIG_SMP) && defined(CONFIG_TREE_RCU)
-# ifdef CONFIG_PREEMPT
+# ifdef CONFIG_PREEMPT_COUNT
 	VM_BUG_ON(!in_atomic());
 # endif
 	/*
@@ -172,7 +172,7 @@ static inline int page_cache_add_speculative(struct page *page, int count)
 	VM_BUG_ON(in_interrupt());
 
 #if !defined(CONFIG_SMP) && defined(CONFIG_TREE_RCU)
-# ifdef CONFIG_PREEMPT
+# ifdef CONFIG_PREEMPT_COUNT
 	VM_BUG_ON(!in_atomic());
 # endif
 	VM_BUG_ON(page_count(page) == 0);
diff --git a/include/linux/preempt.h b/include/linux/preempt.h
index 2e681d9..58969b2 100644
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -27,6 +27,21 @@
 
 asmlinkage void preempt_schedule(void);
 
+#define preempt_check_resched() \
+do { \
+	if (unlikely(test_thread_flag(TIF_NEED_RESCHED))) \
+		preempt_schedule(); \
+} while (0)
+
+#else /* !CONFIG_PREEMPT */
+
+#define preempt_check_resched()		do { } while (0)
+
+#endif /* CONFIG_PREEMPT */
+
+
+#ifdef CONFIG_PREEMPT_COUNT
+
 #define preempt_disable() \
 do { \
 	inc_preempt_count(); \
@@ -39,12 +54,6 @@ do { \
 	dec_preempt_count(); \
 } while (0)
 
-#define preempt_check_resched() \
-do { \
-	if (unlikely(test_thread_flag(TIF_NEED_RESCHED))) \
-		preempt_schedule(); \
-} while (0)
-
 #define preempt_enable() \
 do { \
 	preempt_enable_no_resched(); \
@@ -80,18 +89,17 @@ do { \
 	preempt_check_resched(); \
 } while (0)
 
-#else
+#else /* !CONFIG_PREEMPT_COUNT */
 
 #define preempt_disable()		do { } while (0)
 #define preempt_enable_no_resched()	do { } while (0)
 #define preempt_enable()		do { } while (0)
-#define preempt_check_resched()		do { } while (0)
 
 #define preempt_disable_notrace()		do { } while (0)
 #define preempt_enable_no_resched_notrace()	do { } while (0)
 #define preempt_enable_notrace()		do { } while (0)
 
-#endif
+#endif /* CONFIG_PREEMPT_COUNT */
 
 #ifdef CONFIG_PREEMPT_NOTIFIERS
 
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 99f9aa7..8f4f881 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -239,7 +239,7 @@ extern int rcu_read_lock_bh_held(void);
  * Check debug_lockdep_rcu_enabled() to prevent false positives during boot
  * and while lockdep is disabled.
  */
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPT_COUNT
 static inline int rcu_read_lock_sched_held(void)
 {
 	int lockdep_opinion = 0;
@@ -250,12 +250,12 @@ static inline int rcu_read_lock_sched_held(void)
 		lockdep_opinion = lock_is_held(&rcu_sched_lock_map);
 	return lockdep_opinion || preempt_count() != 0 || irqs_disabled();
 }
-#else /* #ifdef CONFIG_PREEMPT */
+#else /* #ifdef CONFIG_PREEMPT_COUNT */
 static inline int rcu_read_lock_sched_held(void)
 {
 	return 1;
 }
-#endif /* #else #ifdef CONFIG_PREEMPT */
+#endif /* #else #ifdef CONFIG_PREEMPT_COUNT */
 
 #else /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */
 
@@ -276,17 +276,17 @@ static inline int rcu_read_lock_bh_held(void)
 	return 1;
 }
 
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPT_COUNT
 static inline int rcu_read_lock_sched_held(void)
 {
 	return preempt_count() != 0 || irqs_disabled();
 }
-#else /* #ifdef CONFIG_PREEMPT */
+#else /* #ifdef CONFIG_PREEMPT_COUNT */
 static inline int rcu_read_lock_sched_held(void)
 {
 	return 1;
 }
-#endif /* #else #ifdef CONFIG_PREEMPT */
+#endif /* #else #ifdef CONFIG_PREEMPT_COUNT */
 
 #endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */
 
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 483c1ed..4ecd5cb 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2502,7 +2502,7 @@ extern int _cond_resched(void);
 
 extern int __cond_resched_lock(spinlock_t *lock);
 
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPT_COUNT
 #define PREEMPT_LOCK_OFFSET	PREEMPT_OFFSET
 #else
 #define PREEMPT_LOCK_OFFSET	0
diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt
index bf987b9..24e7cb0 100644
--- a/kernel/Kconfig.preempt
+++ b/kernel/Kconfig.preempt
@@ -35,6 +35,7 @@ config PREEMPT_VOLUNTARY
 
 config PREEMPT
 	bool "Preemptible Kernel (Low-Latency Desktop)"
+	select PREEMPT_COUNT
 	help
 	  This option reduces the latency of the kernel by making
 	  all kernel code (that is not executing in a critical section)
@@ -52,3 +53,5 @@ config PREEMPT
 
 endchoice
 
+config PREEMPT_COUNT
+       bool
\ No newline at end of file
diff --git a/kernel/sched.c b/kernel/sched.c
index 01d9536..90ad7cf 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2843,7 +2843,7 @@ void sched_fork(struct task_struct *p)
 #if defined(CONFIG_SMP)
 	p->on_cpu = 0;
 #endif
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPT_COUNT
 	/* Want to start with kernel preemption disabled. */
 	task_thread_info(p)->preempt_count = 1;
 #endif
-- 
1.7.5.4


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 3/4] sched: Make sleeping inside spinlock detection working in !CONFIG_PREEMPT
  2011-06-08 17:48 [PATCH 0/4] sched: Make sleep inside atomic detection work on !PREEMPT Frederic Weisbecker
  2011-06-08 17:48 ` [PATCH 1/4] sched: Remove pointless in_atomic() definition check Frederic Weisbecker
  2011-06-08 17:48 ` [PATCH 2/4] sched: Isolate preempt counting in its own config option Frederic Weisbecker
@ 2011-06-08 17:48 ` Frederic Weisbecker
  2011-06-08 19:40   ` Paul E. McKenney
  2011-06-08 17:48 ` [PATCH 4/4] sched: Generalize sleep inside spinlock detection Frederic Weisbecker
  2011-06-08 22:49 ` Bug: ACPI, scheduling while atomic (was Re: [PATCH 0/4] sched: Make sleep inside atomic detection work on !PREEMPT) Frederic Weisbecker
  4 siblings, 1 reply; 16+ messages in thread
From: Frederic Weisbecker @ 2011-06-08 17:48 UTC (permalink / raw)
  To: LKML; +Cc: Frederic Weisbecker, Paul E. McKenney, Ingo Molnar, Peter Zijlstra

Select CONFIG_PREEMPT_COUNT when we enable the sleeping inside
spinlock detection, so that the preempt offset gets correctly
incremented/decremented from preempt_disable()/preempt_enable().

This makes the preempt count finally work in !CONFIG_PREEMPT
when that debug option is set, and thus fixes the detection of
explicit preemption-disabled sections under such a config. Code
that sleeps in explicitly preempt-disabled sections can at last
be spotted in non-preemptible kernels.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 lib/Kconfig.debug |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 28afa4c..a7dd7b5 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -650,6 +650,7 @@ config TRACE_IRQFLAGS
 
 config DEBUG_SPINLOCK_SLEEP
 	bool "Spinlock debugging: sleep-inside-spinlock checking"
+	select PREEMPT_COUNT
 	depends on DEBUG_KERNEL
 	help
 	  If you say Y here, various routines which may sleep will become very
-- 
1.7.5.4


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 4/4] sched: Generalize sleep inside spinlock detection
  2011-06-08 17:48 [PATCH 0/4] sched: Make sleep inside atomic detection work on !PREEMPT Frederic Weisbecker
                   ` (2 preceding siblings ...)
  2011-06-08 17:48 ` [PATCH 3/4] sched: Make sleeping inside spinlock detection working in !CONFIG_PREEMPT Frederic Weisbecker
@ 2011-06-08 17:48 ` Frederic Weisbecker
  2011-06-08 19:41   ` Paul E. McKenney
  2011-06-08 22:49 ` Bug: ACPI, scheduling while atomic (was Re: [PATCH 0/4] sched: Make sleep inside atomic detection work on !PREEMPT) Frederic Weisbecker
  4 siblings, 1 reply; 16+ messages in thread
From: Frederic Weisbecker @ 2011-06-08 17:48 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Paul E. McKenney, Peter Zijlstra,
	Ingo Molnar, Randy Dunlap


The sleeping-inside-spinlock detection is actually used for more
general sleeping-inside-atomic-section debugging: preemption
disabled, RCU read-side critical sections, interrupts,
interrupts disabled, etc.

Rename the config option and update its help text to reflect
this more general role.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Randy Dunlap <randy.dunlap@oracle.com>
---
 Documentation/DocBook/kernel-hacking.tmpl  |    2 +-
 Documentation/SubmitChecklist              |    2 +-
 Documentation/development-process/4.Coding |    2 +-
 Documentation/ja_JP/SubmitChecklist        |    2 +-
 Documentation/zh_CN/SubmitChecklist        |    2 +-
 include/linux/kernel.h                     |    2 +-
 kernel/sched.c                             |    2 +-
 lib/Kconfig.debug                          |    8 +++++---
 8 files changed, 12 insertions(+), 10 deletions(-)

diff --git a/Documentation/DocBook/kernel-hacking.tmpl b/Documentation/DocBook/kernel-hacking.tmpl
index 7b3f493..07a9c48 100644
--- a/Documentation/DocBook/kernel-hacking.tmpl
+++ b/Documentation/DocBook/kernel-hacking.tmpl
@@ -409,7 +409,7 @@ cond_resched(); /* Will sleep */
 
   <para>
    You should always compile your kernel
-   <symbol>CONFIG_DEBUG_SPINLOCK_SLEEP</symbol> on, and it will warn
+   <symbol>CONFIG_DEBUG_ATOMIC_SLEEP</symbol> on, and it will warn
    you if you break these rules.  If you <emphasis>do</emphasis> break
    the rules, you will eventually lock up your box.
   </para>
diff --git a/Documentation/SubmitChecklist b/Documentation/SubmitChecklist
index da0382d..7b13be4 100644
--- a/Documentation/SubmitChecklist
+++ b/Documentation/SubmitChecklist
@@ -53,7 +53,7 @@ kernel patches.
 
 12: Has been tested with CONFIG_PREEMPT, CONFIG_DEBUG_PREEMPT,
     CONFIG_DEBUG_SLAB, CONFIG_DEBUG_PAGEALLOC, CONFIG_DEBUG_MUTEXES,
-    CONFIG_DEBUG_SPINLOCK, CONFIG_DEBUG_SPINLOCK_SLEEP all simultaneously
+    CONFIG_DEBUG_SPINLOCK, CONFIG_DEBUG_ATOMIC_SLEEP all simultaneously
     enabled.
 
 13: Has been build- and runtime tested with and without CONFIG_SMP and
diff --git a/Documentation/development-process/4.Coding b/Documentation/development-process/4.Coding
index f3f1a46..83f5f5b 100644
--- a/Documentation/development-process/4.Coding
+++ b/Documentation/development-process/4.Coding
@@ -244,7 +244,7 @@ testing purposes.  In particular, you should turn on:
  - DEBUG_SLAB can find a variety of memory allocation and use errors; it
    should be used on most development kernels.
 
- - DEBUG_SPINLOCK, DEBUG_SPINLOCK_SLEEP, and DEBUG_MUTEXES will find a
+ - DEBUG_SPINLOCK, DEBUG_ATOMIC_SLEEP, and DEBUG_MUTEXES will find a
    number of common locking errors.
 
 There are quite a few other debugging options, some of which will be
diff --git a/Documentation/ja_JP/SubmitChecklist b/Documentation/ja_JP/SubmitChecklist
index 2df4576..cb5507b 100644
--- a/Documentation/ja_JP/SubmitChecklist
+++ b/Documentation/ja_JP/SubmitChecklist
@@ -68,7 +68,7 @@ Linux カーネルパッチ投稿者向けチェックリスト
 
 12: CONFIG_PREEMPT, CONFIG_DEBUG_PREEMPT, CONFIG_DEBUG_SLAB,
     CONFIG_DEBUG_PAGEALLOC, CONFIG_DEBUG_MUTEXES, CONFIG_DEBUG_SPINLOCK,
-    CONFIG_DEBUG_SPINLOCK_SLEEP これら全てを同時に有効にして動作確認を
+    CONFIG_DEBUG_ATOMIC_SLEEP これら全てを同時に有効にして動作確認を
     行ってください。
 
 13: CONFIG_SMP, CONFIG_PREEMPT を有効にした場合と無効にした場合の両方で
diff --git a/Documentation/zh_CN/SubmitChecklist b/Documentation/zh_CN/SubmitChecklist
index 951415b..4c741d6 100644
--- a/Documentation/zh_CN/SubmitChecklist
+++ b/Documentation/zh_CN/SubmitChecklist
@@ -67,7 +67,7 @@ Linux
 
 12：已经通过CONFIG_PREEMPT, CONFIG_DEBUG_PREEMPT,
     CONFIG_DEBUG_SLAB, CONFIG_DEBUG_PAGEALLOC, CONFIG_DEBUG_MUTEXES,
-    CONFIG_DEBUG_SPINLOCK, CONFIG_DEBUG_SPINLOCK_SLEEP测试，并且同时都
+    CONFIG_DEBUG_SPINLOCK, CONFIG_DEBUG_ATOMIC_SLEEP测试，并且同时都
     使能。
 
 13：已经都构建并且使用或者不使用 CONFIG_SMP 和 CONFIG_PREEMPT测试执行时间。
diff --git a/include/linux/kernel.h b/include/linux/kernel.h
index fb0e732..24b489f 100644
--- a/include/linux/kernel.h
+++ b/include/linux/kernel.h
@@ -121,7 +121,7 @@ extern int _cond_resched(void);
 # define might_resched() do { } while (0)
 #endif
 
-#ifdef CONFIG_DEBUG_SPINLOCK_SLEEP
+#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
   void __might_sleep(const char *file, int line, int preempt_offset);
 /**
  * might_sleep - annotation for functions that can sleep
diff --git a/kernel/sched.c b/kernel/sched.c
index 90ad7cf..a5f318b 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -8018,7 +8018,7 @@ void __init sched_init(void)
 	scheduler_running = 1;
 }
 
-#ifdef CONFIG_DEBUG_SPINLOCK_SLEEP
+#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
 static inline int preempt_count_equals(int preempt_offset)
 {
 	int nested = (preempt_count() & ~PREEMPT_ACTIVE) + rcu_preempt_depth();
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index a7dd7b5..81a4f33 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -648,13 +648,15 @@ config TRACE_IRQFLAGS
 	  Enables hooks to interrupt enabling and disabling for
 	  either tracing or lock debugging.
 
-config DEBUG_SPINLOCK_SLEEP
-	bool "Spinlock debugging: sleep-inside-spinlock checking"
+config DEBUG_ATOMIC_SLEEP
+	bool "Sleep inside atomic section checking"
 	select PREEMPT_COUNT
 	depends on DEBUG_KERNEL
 	help
 	  If you say Y here, various routines which may sleep will become very
-	  noisy if they are called with a spinlock held.
+	  noisy if they are called inside atomic sections: when a spinlock is
+	  held, inside an rcu read side critical section, inside preempt disabled
+	  sections, inside an interrupt, etc...
 
 config DEBUG_LOCKING_API_SELFTESTS
 	bool "Locking API boot-time self-tests"
-- 
1.7.5.4


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH 2/4] sched: Isolate preempt counting in its own config option
  2011-06-08 17:48 ` [PATCH 2/4] sched: Isolate preempt counting in its own config option Frederic Weisbecker
@ 2011-06-08 19:40   ` Paul E. McKenney
  2011-06-08 19:47   ` Peter Zijlstra
  1 sibling, 0 replies; 16+ messages in thread
From: Paul E. McKenney @ 2011-06-08 19:40 UTC (permalink / raw)
  To: Frederic Weisbecker; +Cc: LKML, Ingo Molnar, Peter Zijlstra

On Wed, Jun 08, 2011 at 07:48:33PM +0200, Frederic Weisbecker wrote:
> Create a new CONFIG_PREEMPT_COUNT that handles the inc/dec
> of the preempt count independently, so that the count can be
> updated by preempt_disable() and preempt_enable() even without
> CONFIG_PREEMPT being set.
> 
> This prepares for making CONFIG_DEBUG_SPINLOCK_SLEEP work
> with !CONFIG_PREEMPT, where it currently doesn't detect
> code that sleeps inside explicit preemption-disabled
> sections.
> 
> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

> Cc: Ingo Molnar <mingo@elte.hu>
> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
> ---
>  include/linux/bit_spinlock.h |    2 +-
>  include/linux/hardirq.h      |    4 ++--
>  include/linux/pagemap.h      |    4 ++--
>  include/linux/preempt.h      |   26 +++++++++++++++++---------
>  include/linux/rcupdate.h     |   12 ++++++------
>  include/linux/sched.h        |    2 +-
>  kernel/Kconfig.preempt       |    3 +++
>  kernel/sched.c               |    2 +-
>  8 files changed, 33 insertions(+), 22 deletions(-)
> 
> diff --git a/include/linux/bit_spinlock.h b/include/linux/bit_spinlock.h
> index b4326bf..564d997 100644
> --- a/include/linux/bit_spinlock.h
> +++ b/include/linux/bit_spinlock.h
> @@ -88,7 +88,7 @@ static inline int bit_spin_is_locked(int bitnum, unsigned long *addr)
>  {
>  #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
>  	return test_bit(bitnum, addr);
> -#elif defined CONFIG_PREEMPT
> +#elif defined CONFIG_PREEMPT_COUNT
>  	return preempt_count();
>  #else
>  	return 1;
> diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h
> index ba36217..f743883f 100644
> --- a/include/linux/hardirq.h
> +++ b/include/linux/hardirq.h
> @@ -93,7 +93,7 @@
>   */
>  #define in_nmi()	(preempt_count() & NMI_MASK)
> 
> -#if defined(CONFIG_PREEMPT)
> +#if defined(CONFIG_PREEMPT_COUNT)
>  # define PREEMPT_CHECK_OFFSET 1
>  #else
>  # define PREEMPT_CHECK_OFFSET 0
> @@ -115,7 +115,7 @@
>  #define in_atomic_preempt_off() \
>  		((preempt_count() & ~PREEMPT_ACTIVE) != PREEMPT_CHECK_OFFSET)
> 
> -#ifdef CONFIG_PREEMPT
> +#ifdef CONFIG_PREEMPT_COUNT
>  # define preemptible()	(preempt_count() == 0 && !irqs_disabled())
>  # define IRQ_EXIT_OFFSET (HARDIRQ_OFFSET-1)
>  #else
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index 716875e..8e38d4c 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -134,7 +134,7 @@ static inline int page_cache_get_speculative(struct page *page)
>  	VM_BUG_ON(in_interrupt());
> 
>  #if !defined(CONFIG_SMP) && defined(CONFIG_TREE_RCU)
> -# ifdef CONFIG_PREEMPT
> +# ifdef CONFIG_PREEMPT_COUNT
>  	VM_BUG_ON(!in_atomic());
>  # endif
>  	/*
> @@ -172,7 +172,7 @@ static inline int page_cache_add_speculative(struct page *page, int count)
>  	VM_BUG_ON(in_interrupt());
> 
>  #if !defined(CONFIG_SMP) && defined(CONFIG_TREE_RCU)
> -# ifdef CONFIG_PREEMPT
> +# ifdef CONFIG_PREEMPT_COUNT
>  	VM_BUG_ON(!in_atomic());
>  # endif
>  	VM_BUG_ON(page_count(page) == 0);
> diff --git a/include/linux/preempt.h b/include/linux/preempt.h
> index 2e681d9..58969b2 100644
> --- a/include/linux/preempt.h
> +++ b/include/linux/preempt.h
> @@ -27,6 +27,21 @@
> 
>  asmlinkage void preempt_schedule(void);
> 
> +#define preempt_check_resched() \
> +do { \
> +	if (unlikely(test_thread_flag(TIF_NEED_RESCHED))) \
> +		preempt_schedule(); \
> +} while (0)
> +
> +#else /* !CONFIG_PREEMPT */
> +
> +#define preempt_check_resched()		do { } while (0)
> +
> +#endif /* CONFIG_PREEMPT */
> +
> +
> +#ifdef CONFIG_PREEMPT_COUNT
> +
>  #define preempt_disable() \
>  do { \
>  	inc_preempt_count(); \
> @@ -39,12 +54,6 @@ do { \
>  	dec_preempt_count(); \
>  } while (0)
> 
> -#define preempt_check_resched() \
> -do { \
> -	if (unlikely(test_thread_flag(TIF_NEED_RESCHED))) \
> -		preempt_schedule(); \
> -} while (0)
> -
>  #define preempt_enable() \
>  do { \
>  	preempt_enable_no_resched(); \
> @@ -80,18 +89,17 @@ do { \
>  	preempt_check_resched(); \
>  } while (0)
> 
> -#else
> +#else /* !CONFIG_PREEMPT_COUNT */
> 
>  #define preempt_disable()		do { } while (0)
>  #define preempt_enable_no_resched()	do { } while (0)
>  #define preempt_enable()		do { } while (0)
> -#define preempt_check_resched()		do { } while (0)
> 
>  #define preempt_disable_notrace()		do { } while (0)
>  #define preempt_enable_no_resched_notrace()	do { } while (0)
>  #define preempt_enable_notrace()		do { } while (0)
> 
> -#endif
> +#endif /* CONFIG_PREEMPT_COUNT */
> 
>  #ifdef CONFIG_PREEMPT_NOTIFIERS
> 
> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> index 99f9aa7..8f4f881 100644
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -239,7 +239,7 @@ extern int rcu_read_lock_bh_held(void);
>   * Check debug_lockdep_rcu_enabled() to prevent false positives during boot
>   * and while lockdep is disabled.
>   */
> -#ifdef CONFIG_PREEMPT
> +#ifdef CONFIG_PREEMPT_COUNT
>  static inline int rcu_read_lock_sched_held(void)
>  {
>  	int lockdep_opinion = 0;
> @@ -250,12 +250,12 @@ static inline int rcu_read_lock_sched_held(void)
>  		lockdep_opinion = lock_is_held(&rcu_sched_lock_map);
>  	return lockdep_opinion || preempt_count() != 0 || irqs_disabled();
>  }
> -#else /* #ifdef CONFIG_PREEMPT */
> +#else /* #ifdef CONFIG_PREEMPT_COUNT */
>  static inline int rcu_read_lock_sched_held(void)
>  {
>  	return 1;
>  }
> -#endif /* #else #ifdef CONFIG_PREEMPT */
> +#endif /* #else #ifdef CONFIG_PREEMPT_COUNT */
> 
>  #else /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */
> 
> @@ -276,17 +276,17 @@ static inline int rcu_read_lock_bh_held(void)
>  	return 1;
>  }
> 
> -#ifdef CONFIG_PREEMPT
> +#ifdef CONFIG_PREEMPT_COUNT
>  static inline int rcu_read_lock_sched_held(void)
>  {
>  	return preempt_count() != 0 || irqs_disabled();
>  }
> -#else /* #ifdef CONFIG_PREEMPT */
> +#else /* #ifdef CONFIG_PREEMPT_COUNT */
>  static inline int rcu_read_lock_sched_held(void)
>  {
>  	return 1;
>  }
> -#endif /* #else #ifdef CONFIG_PREEMPT */
> +#endif /* #else #ifdef CONFIG_PREEMPT_COUNT */
> 
>  #endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */
> 
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 483c1ed..4ecd5cb 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -2502,7 +2502,7 @@ extern int _cond_resched(void);
> 
>  extern int __cond_resched_lock(spinlock_t *lock);
> 
> -#ifdef CONFIG_PREEMPT
> +#ifdef CONFIG_PREEMPT_COUNT
>  #define PREEMPT_LOCK_OFFSET	PREEMPT_OFFSET
>  #else
>  #define PREEMPT_LOCK_OFFSET	0
> diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt
> index bf987b9..24e7cb0 100644
> --- a/kernel/Kconfig.preempt
> +++ b/kernel/Kconfig.preempt
> @@ -35,6 +35,7 @@ config PREEMPT_VOLUNTARY
> 
>  config PREEMPT
>  	bool "Preemptible Kernel (Low-Latency Desktop)"
> +	select PREEMPT_COUNT
>  	help
>  	  This option reduces the latency of the kernel by making
>  	  all kernel code (that is not executing in a critical section)
> @@ -52,3 +53,5 @@ config PREEMPT
> 
>  endchoice
> 
> +config PREEMPT_COUNT
> +       bool
> \ No newline at end of file
> diff --git a/kernel/sched.c b/kernel/sched.c
> index 01d9536..90ad7cf 100644
> --- a/kernel/sched.c
> +++ b/kernel/sched.c
> @@ -2843,7 +2843,7 @@ void sched_fork(struct task_struct *p)
>  #if defined(CONFIG_SMP)
>  	p->on_cpu = 0;
>  #endif
> -#ifdef CONFIG_PREEMPT
> +#ifdef CONFIG_PREEMPT_COUNT
>  	/* Want to start with kernel preemption disabled. */
>  	task_thread_info(p)->preempt_count = 1;
>  #endif
> -- 
> 1.7.5.4
> 

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 3/4] sched: Make sleeping inside spinlock detection working in !CONFIG_PREEMPT
  2011-06-08 17:48 ` [PATCH 3/4] sched: Make sleeping inside spinlock detection working in !CONFIG_PREEMPT Frederic Weisbecker
@ 2011-06-08 19:40   ` Paul E. McKenney
  0 siblings, 0 replies; 16+ messages in thread
From: Paul E. McKenney @ 2011-06-08 19:40 UTC (permalink / raw)
  To: Frederic Weisbecker; +Cc: LKML, Ingo Molnar, Peter Zijlstra

On Wed, Jun 08, 2011 at 07:48:34PM +0200, Frederic Weisbecker wrote:
> Select CONFIG_PREEMPT_COUNT when we enable the sleeping inside
> spinlock detection, so that the preempt offset gets correctly
> incremented/decremented from preempt_disable()/preempt_enable().
> 
> This makes the preempt count finally work in !CONFIG_PREEMPT
> when that debug option is set, and thus fixes the detection of
> explicit preemption-disabled sections under such a config. Code
> that sleeps in explicitly preempt-disabled sections can at last
> be spotted in non-preemptible kernels.
> 
> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

> Cc: Ingo Molnar <mingo@elte.hu>
> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
> ---
>  lib/Kconfig.debug |    1 +
>  1 files changed, 1 insertions(+), 0 deletions(-)
> 
> diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> index 28afa4c..a7dd7b5 100644
> --- a/lib/Kconfig.debug
> +++ b/lib/Kconfig.debug
> @@ -650,6 +650,7 @@ config TRACE_IRQFLAGS
> 
>  config DEBUG_SPINLOCK_SLEEP
>  	bool "Spinlock debugging: sleep-inside-spinlock checking"
> +	select PREEMPT_COUNT
>  	depends on DEBUG_KERNEL
>  	help
>  	  If you say Y here, various routines which may sleep will become very
> -- 
> 1.7.5.4
> 

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 4/4] sched: Generalize sleep inside spinlock detection
  2011-06-08 17:48 ` [PATCH 4/4] sched: Generalize sleep inside spinlock detection Frederic Weisbecker
@ 2011-06-08 19:41   ` Paul E. McKenney
  0 siblings, 0 replies; 16+ messages in thread
From: Paul E. McKenney @ 2011-06-08 19:41 UTC (permalink / raw)
  To: Frederic Weisbecker; +Cc: LKML, Peter Zijlstra, Ingo Molnar, Randy Dunlap

On Wed, Jun 08, 2011 at 07:48:35PM +0200, Frederic Weisbecker wrote:
> The sleeping-inside-spinlock detection is actually used for more
> general sleeping-inside-atomic-section debugging: preemption
> disabled, RCU read-side critical sections, interrupts,
> interrupts disabled, etc.
> 
> Rename the config option and update its help text to reflect
> this more general role.
> 
> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
> Cc: Ingo Molnar <mingo@elte.hu>
> Cc: Randy Dunlap <randy.dunlap@oracle.com>
> ---
>  Documentation/DocBook/kernel-hacking.tmpl  |    2 +-
>  Documentation/SubmitChecklist              |    2 +-
>  Documentation/development-process/4.Coding |    2 +-
>  Documentation/ja_JP/SubmitChecklist        |    2 +-
>  Documentation/zh_CN/SubmitChecklist        |    2 +-
>  include/linux/kernel.h                     |    2 +-
>  kernel/sched.c                             |    2 +-
>  lib/Kconfig.debug                          |    8 +++++---
>  8 files changed, 12 insertions(+), 10 deletions(-)
> 
> diff --git a/Documentation/DocBook/kernel-hacking.tmpl b/Documentation/DocBook/kernel-hacking.tmpl
> index 7b3f493..07a9c48 100644
> --- a/Documentation/DocBook/kernel-hacking.tmpl
> +++ b/Documentation/DocBook/kernel-hacking.tmpl
> @@ -409,7 +409,7 @@ cond_resched(); /* Will sleep */
> 
>    <para>
>     You should always compile your kernel
> -   <symbol>CONFIG_DEBUG_SPINLOCK_SLEEP</symbol> on, and it will warn
> +   <symbol>CONFIG_DEBUG_ATOMIC_SLEEP</symbol> on, and it will warn
>     you if you break these rules.  If you <emphasis>do</emphasis> break
>     the rules, you will eventually lock up your box.
>    </para>
> diff --git a/Documentation/SubmitChecklist b/Documentation/SubmitChecklist
> index da0382d..7b13be4 100644
> --- a/Documentation/SubmitChecklist
> +++ b/Documentation/SubmitChecklist
> @@ -53,7 +53,7 @@ kernel patches.
> 
>  12: Has been tested with CONFIG_PREEMPT, CONFIG_DEBUG_PREEMPT,
>      CONFIG_DEBUG_SLAB, CONFIG_DEBUG_PAGEALLOC, CONFIG_DEBUG_MUTEXES,
> -    CONFIG_DEBUG_SPINLOCK, CONFIG_DEBUG_SPINLOCK_SLEEP all simultaneously
> +    CONFIG_DEBUG_SPINLOCK, CONFIG_DEBUG_ATOMIC_SLEEP all simultaneously
>      enabled.
> 
>  13: Has been build- and runtime tested with and without CONFIG_SMP and
> diff --git a/Documentation/development-process/4.Coding b/Documentation/development-process/4.Coding
> index f3f1a46..83f5f5b 100644
> --- a/Documentation/development-process/4.Coding
> +++ b/Documentation/development-process/4.Coding
> @@ -244,7 +244,7 @@ testing purposes.  In particular, you should turn on:
>   - DEBUG_SLAB can find a variety of memory allocation and use errors; it
>     should be used on most development kernels.
> 
> - - DEBUG_SPINLOCK, DEBUG_SPINLOCK_SLEEP, and DEBUG_MUTEXES will find a
> + - DEBUG_SPINLOCK, DEBUG_ATOMIC_SLEEP, and DEBUG_MUTEXES will find a
>     number of common locking errors.
> 
>  There are quite a few other debugging options, some of which will be
> diff --git a/Documentation/ja_JP/SubmitChecklist b/Documentation/ja_JP/SubmitChecklist
> index 2df4576..cb5507b 100644
> --- a/Documentation/ja_JP/SubmitChecklist
> +++ b/Documentation/ja_JP/SubmitChecklist
> @@ -68,7 +68,7 @@ Linux カーネルパッチ投稿者向けチェックリスト
> 
>  12: CONFIG_PREEMPT, CONFIG_DEBUG_PREEMPT, CONFIG_DEBUG_SLAB,
>      CONFIG_DEBUG_PAGEALLOC, CONFIG_DEBUG_MUTEXES, CONFIG_DEBUG_SPINLOCK,
> -    CONFIG_DEBUG_SPINLOCK_SLEEP これら全てを同時に有効にして動作確認を
> +    CONFIG_DEBUG_ATOMIC_SLEEP これら全てを同時に有効にして動作確認を
>      行ってください。
> 
>  13: CONFIG_SMP, CONFIG_PREEMPT を有効にした場合と無効にした場合の両方で
> diff --git a/Documentation/zh_CN/SubmitChecklist b/Documentation/zh_CN/SubmitChecklist
> index 951415b..4c741d6 100644
> --- a/Documentation/zh_CN/SubmitChecklist
> +++ b/Documentation/zh_CN/SubmitChecklist
> @@ -67,7 +67,7 @@ Linux
> 
>  12：已经通过CONFIG_PREEMPT, CONFIG_DEBUG_PREEMPT,
>      CONFIG_DEBUG_SLAB, CONFIG_DEBUG_PAGEALLOC, CONFIG_DEBUG_MUTEXES,
> -    CONFIG_DEBUG_SPINLOCK, CONFIG_DEBUG_SPINLOCK_SLEEP测试，并且同时都
> +    CONFIG_DEBUG_SPINLOCK, CONFIG_DEBUG_ATOMIC_SLEEP测试，并且同时都
>      使能。
> 
>  13：已经都构建并且使用或者不使用 CONFIG_SMP 和 CONFIG_PREEMPT测试执行时间。
> diff --git a/include/linux/kernel.h b/include/linux/kernel.h
> index fb0e732..24b489f 100644
> --- a/include/linux/kernel.h
> +++ b/include/linux/kernel.h
> @@ -121,7 +121,7 @@ extern int _cond_resched(void);
>  # define might_resched() do { } while (0)
>  #endif
> 
> -#ifdef CONFIG_DEBUG_SPINLOCK_SLEEP
> +#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
>    void __might_sleep(const char *file, int line, int preempt_offset);
>  /**
>   * might_sleep - annotation for functions that can sleep
> diff --git a/kernel/sched.c b/kernel/sched.c
> index 90ad7cf..a5f318b 100644
> --- a/kernel/sched.c
> +++ b/kernel/sched.c
> @@ -8018,7 +8018,7 @@ void __init sched_init(void)
>  	scheduler_running = 1;
>  }
> 
> -#ifdef CONFIG_DEBUG_SPINLOCK_SLEEP
> +#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
>  static inline int preempt_count_equals(int preempt_offset)
>  {
>  	int nested = (preempt_count() & ~PREEMPT_ACTIVE) + rcu_preempt_depth();
> diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> index a7dd7b5..81a4f33 100644
> --- a/lib/Kconfig.debug
> +++ b/lib/Kconfig.debug
> @@ -648,13 +648,15 @@ config TRACE_IRQFLAGS
>  	  Enables hooks to interrupt enabling and disabling for
>  	  either tracing or lock debugging.
> 
> -config DEBUG_SPINLOCK_SLEEP
> -	bool "Spinlock debugging: sleep-inside-spinlock checking"
> +config DEBUG_ATOMIC_SLEEP
> +	bool "Sleep inside atomic section checking"
>  	select PREEMPT_COUNT
>  	depends on DEBUG_KERNEL
>  	help
>  	  If you say Y here, various routines which may sleep will become very
> -	  noisy if they are called with a spinlock held.
> +	  noisy if they are called inside atomic sections: when a spinlock is
> +	  held, inside an rcu read side critical section, inside preempt disabled
> +	  sections, inside an interrupt, etc...
> 
>  config DEBUG_LOCKING_API_SELFTESTS
>  	bool "Locking API boot-time self-tests"
> -- 
> 1.7.5.4
> 

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 2/4] sched: Isolate preempt counting in its own config option
  2011-06-08 17:48 ` [PATCH 2/4] sched: Isolate preempt counting in its own config option Frederic Weisbecker
  2011-06-08 19:40   ` Paul E. McKenney
@ 2011-06-08 19:47   ` Peter Zijlstra
  2011-06-08 19:58     ` Paul E. McKenney
  1 sibling, 1 reply; 16+ messages in thread
From: Peter Zijlstra @ 2011-06-08 19:47 UTC (permalink / raw)
  To: Frederic Weisbecker; +Cc: LKML, Paul E. McKenney, Ingo Molnar

On Wed, 2011-06-08 at 19:48 +0200, Frederic Weisbecker wrote:
> 
> Create a new CONFIG_PREEMPT_COUNT that handles the inc/dec
> of the preempt count offset independently, so that the offset
> can be updated by preempt_disable() and preempt_enable()
> even without CONFIG_PREEMPT being set.
> 
> This prepares to make CONFIG_DEBUG_SPINLOCK_SLEEP work
> with !CONFIG_PREEMPT, where it currently doesn't detect
> code that sleeps inside explicit preemption disabled
> sections.

The last time this got proposed it got shot down due to the extra
inc/dec stuff all over the place increasing overhead significantly.



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 2/4] sched: Isolate preempt counting in its own config option
  2011-06-08 19:47   ` Peter Zijlstra
@ 2011-06-08 19:58     ` Paul E. McKenney
  2011-06-08 20:09       ` Peter Zijlstra
  0 siblings, 1 reply; 16+ messages in thread
From: Paul E. McKenney @ 2011-06-08 19:58 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Frederic Weisbecker, LKML, Ingo Molnar

On Wed, Jun 08, 2011 at 09:47:14PM +0200, Peter Zijlstra wrote:
> On Wed, 2011-06-08 at 19:48 +0200, Frederic Weisbecker wrote:
> > 
> > Create a new CONFIG_PREEMPT_COUNT that handles the inc/dec
> > of the preempt count offset independently, so that the offset
> > can be updated by preempt_disable() and preempt_enable()
> > even without CONFIG_PREEMPT being set.
> > 
> > This prepares to make CONFIG_DEBUG_SPINLOCK_SLEEP work
> > with !CONFIG_PREEMPT, where it currently doesn't detect
> > code that sleeps inside explicit preemption disabled
> > sections.
> 
> The last time this got proposed it got shot down due to the extra
> inc/dec stuff all over the place increasing overhead significantly.

Even given that the extra inc/dec stuff only happens in kernels built
with DEBUG_SPINLOCK_SLEEP=y (DEBUG_ATOMIC_SLEEP=y after patch 4/4)?

							Thanx, Paul

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 2/4] sched: Isolate preempt counting in its own config option
  2011-06-08 19:58     ` Paul E. McKenney
@ 2011-06-08 20:09       ` Peter Zijlstra
  0 siblings, 0 replies; 16+ messages in thread
From: Peter Zijlstra @ 2011-06-08 20:09 UTC (permalink / raw)
  To: paulmck; +Cc: Frederic Weisbecker, LKML, Ingo Molnar

On Wed, 2011-06-08 at 12:58 -0700, Paul E. McKenney wrote:
> 
> On Wed, Jun 08, 2011 at 09:47:14PM +0200, Peter Zijlstra wrote:
> > On Wed, 2011-06-08 at 19:48 +0200, Frederic Weisbecker wrote:
> > > 
> > > Create a new CONFIG_PREEMPT_COUNT that handles the inc/dec
> > > of the preempt count offset independently, so that the offset
> > > can be updated by preempt_disable() and preempt_enable()
> > > even without CONFIG_PREEMPT being set.
> > > 
> > > This prepares to make CONFIG_DEBUG_SPINLOCK_SLEEP work
> > > with !CONFIG_PREEMPT, where it currently doesn't detect
> > > code that sleeps inside explicit preemption disabled
> > > sections.
> > 
> > The last time this got proposed it got shot down due to the extra
> > inc/dec stuff all over the place increasing overhead significantly.
> 
> Even given that the extra inc/dec stuff only happens in kernels built
> with DEBUG_SPINLOCK_SLEEP=y (DEBUG_ATOMIC_SLEEP=y after patch 4/4)? 

Ah, no that might be ok. That's what I get for trying to read email in
no time.



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Bug: ACPI, scheduling while atomic (was Re: [PATCH 0/4] sched: Make sleep inside atomic detection work on !PREEMPT)
  2011-06-08 17:48 [PATCH 0/4] sched: Make sleep inside atomic detection work on !PREEMPT Frederic Weisbecker
                   ` (3 preceding siblings ...)
  2011-06-08 17:48 ` [PATCH 4/4] sched: Generalize sleep inside spinlock detection Frederic Weisbecker
@ 2011-06-08 22:49 ` Frederic Weisbecker
  2011-08-25  3:57   ` Randy Dunlap
  4 siblings, 1 reply; 16+ messages in thread
From: Frederic Weisbecker @ 2011-06-08 22:49 UTC (permalink / raw)
  To: LKML, Len Brown
  Cc: Paul E. McKenney, Peter Zijlstra, Randy Dunlap, Ingo Molnar

On Wed, Jun 08, 2011 at 07:48:31PM +0200, Frederic Weisbecker wrote:
> Aside it may mostly avoid the need for a specific PROVE_RCU
> check when we sleep inside an rcu read side critical section.
> 
> Better make sleeping inside atomic sections work everywhere.

BTW, it has helped detect a bug in the ACPI code. It happens with
!CONFIG_PREEMPT:

[    0.160187] BUG: scheduling while atomic: swapper/0/0x10000002
[    0.166016] no locks held by swapper/0.
[    0.170014] Modules linked in:
[    0.173107] Pid: 0, comm: swapper Not tainted 2.6.39+ #124
[    0.180014] Call Trace:
[    0.182481]  [<ffffffff81048685>] __schedule_bug+0x85/0x90
[    0.187967]  [<ffffffff817da98c>] schedule+0x75c/0xa40
[    0.190022]  [<ffffffff8109a1fd>] ? trace_hardirqs_on+0xd/0x10
[    0.200023]  [<ffffffff813879c0>] ? acpi_ps_free_op+0x22/0x24
[    0.205776]  [<ffffffff810554a5>] __cond_resched+0x25/0x40
[    0.210022]  [<ffffffff817daf3b>] _cond_resched+0x2b/0x40
[    0.215420]  [<ffffffff81386cbe>] acpi_ps_complete_op+0x262/0x278
[    0.220023]  [<ffffffff813874df>] acpi_ps_parse_loop+0x80b/0x960
[    0.230023]  [<ffffffff81386607>] acpi_ps_parse_aml+0x98/0x274
[    0.235859]  [<ffffffff81384cbb>] acpi_ns_one_complete_parse+0x103/0x120
[    0.240021]  [<ffffffff810886da>] ? up+0x2a/0x50
[    0.244641]  [<ffffffff81384cf3>] acpi_ns_parse_table+0x1b/0x34
[    0.250022]  [<ffffffff8138242a>] acpi_ns_load_table+0x4a/0x8c
[    0.260023]  [<ffffffff8138947c>] acpi_load_tables+0x9c/0x15d
[    0.265774]  [<ffffffff81d0b0f8>] acpi_early_init+0x6c/0xf7
[    0.270022]  [<ffffffff81cd8d31>] start_kernel+0x400/0x415
[    0.275508]  [<ffffffff81cd8346>] x86_64_start_reservations+0x131/0x135
[    0.280022]  [<ffffffff81cd844d>] x86_64_start_kernel+0x103/0x112

ACPI_PREEMPTION_POINT() is called from acpi_ps_complete_op() and schedules
if !PREEMPT. But preemption is disabled as we are in early bootup.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Bug: ACPI, scheduling while atomic (was Re: [PATCH 0/4] sched: Make sleep inside atomic detection work on !PREEMPT)
  2011-06-08 22:49 ` Bug: ACPI, scheduling while atomic (was Re: [PATCH 0/4] sched: Make sleep inside atomic detection work on !PREEMPT) Frederic Weisbecker
@ 2011-08-25  3:57   ` Randy Dunlap
  2011-08-27 15:32     ` Frederic Weisbecker
  2011-09-26 22:33     ` Davidlohr Bueso
  0 siblings, 2 replies; 16+ messages in thread
From: Randy Dunlap @ 2011-08-25  3:57 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Len Brown, Paul E. McKenney, Peter Zijlstra, Ingo Molnar

On Thu, 9 Jun 2011 00:49:41 +0200 Frederic Weisbecker wrote:

> On Wed, Jun 08, 2011 at 07:48:31PM +0200, Frederic Weisbecker wrote:
> > Aside it may mostly avoid the need for a specific PROVE_RCU
> > check when we sleep inside an rcu read side critical section.
> > 
> > Better make sleeping inside atomic sections work everywhere.
> 
> BTW, it has helped detect a bug in the ACPI code. It happens with
> !CONFIG_PREEMPT:
> 
> [    0.160187] BUG: scheduling while atomic: swapper/0/0x10000002
> [    0.166016] no locks held by swapper/0.
> [    0.170014] Modules linked in:
> [    0.173107] Pid: 0, comm: swapper Not tainted 2.6.39+ #124
> [    0.180014] Call Trace:
> [    0.182481]  [<ffffffff81048685>] __schedule_bug+0x85/0x90
> [    0.187967]  [<ffffffff817da98c>] schedule+0x75c/0xa40
> [    0.190022]  [<ffffffff8109a1fd>] ? trace_hardirqs_on+0xd/0x10
> [    0.200023]  [<ffffffff813879c0>] ? acpi_ps_free_op+0x22/0x24
> [    0.205776]  [<ffffffff810554a5>] __cond_resched+0x25/0x40
> [    0.210022]  [<ffffffff817daf3b>] _cond_resched+0x2b/0x40
> [    0.215420]  [<ffffffff81386cbe>] acpi_ps_complete_op+0x262/0x278
> [    0.220023]  [<ffffffff813874df>] acpi_ps_parse_loop+0x80b/0x960
> [    0.230023]  [<ffffffff81386607>] acpi_ps_parse_aml+0x98/0x274
> [    0.235859]  [<ffffffff81384cbb>] acpi_ns_one_complete_parse+0x103/0x120
> [    0.240021]  [<ffffffff810886da>] ? up+0x2a/0x50
> [    0.244641]  [<ffffffff81384cf3>] acpi_ns_parse_table+0x1b/0x34
> [    0.250022]  [<ffffffff8138242a>] acpi_ns_load_table+0x4a/0x8c
> [    0.260023]  [<ffffffff8138947c>] acpi_load_tables+0x9c/0x15d
> [    0.265774]  [<ffffffff81d0b0f8>] acpi_early_init+0x6c/0xf7
> [    0.270022]  [<ffffffff81cd8d31>] start_kernel+0x400/0x415
> [    0.275508]  [<ffffffff81cd8346>] x86_64_start_reservations+0x131/0x135
> [    0.280022]  [<ffffffff81cd844d>] x86_64_start_kernel+0x103/0x112
> 
> ACPI_PREEMPTION_POINT() is called from acpi_ps_complete_op() and schedules
> if !PREEMPT. But preemption is disabled as we are in early bootup.

This still happens in 3.1-rc[123].
Was there a patch for it?

---
~Randy
*** Remember to use Documentation/SubmitChecklist when testing your code ***

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Bug: ACPI, scheduling while atomic (was Re: [PATCH 0/4] sched: Make sleep inside atomic detection work on !PREEMPT)
  2011-08-25  3:57   ` Randy Dunlap
@ 2011-08-27 15:32     ` Frederic Weisbecker
  2011-09-26 22:33     ` Davidlohr Bueso
  1 sibling, 0 replies; 16+ messages in thread
From: Frederic Weisbecker @ 2011-08-27 15:32 UTC (permalink / raw)
  To: Randy Dunlap
  Cc: LKML, Len Brown, Paul E. McKenney, Peter Zijlstra, Ingo Molnar

On Wed, Aug 24, 2011 at 08:57:45PM -0700, Randy Dunlap wrote:
> On Thu, 9 Jun 2011 00:49:41 +0200 Frederic Weisbecker wrote:
> 
> > On Wed, Jun 08, 2011 at 07:48:31PM +0200, Frederic Weisbecker wrote:
> > > Aside it may mostly avoid the need for a specific PROVE_RCU
> > > check when we sleep inside an rcu read side critical section.
> > > 
> > > Better make sleeping inside atomic sections work everywhere.
> > 
> > BTW, it has helped detect a bug in the ACPI code. It happens with
> > !CONFIG_PREEMPT:
> > 
> > [    0.160187] BUG: scheduling while atomic: swapper/0/0x10000002
> > [    0.166016] no locks held by swapper/0.
> > [    0.170014] Modules linked in:
> > [    0.173107] Pid: 0, comm: swapper Not tainted 2.6.39+ #124
> > [    0.180014] Call Trace:
> > [    0.182481]  [<ffffffff81048685>] __schedule_bug+0x85/0x90
> > [    0.187967]  [<ffffffff817da98c>] schedule+0x75c/0xa40
> > [    0.190022]  [<ffffffff8109a1fd>] ? trace_hardirqs_on+0xd/0x10
> > [    0.200023]  [<ffffffff813879c0>] ? acpi_ps_free_op+0x22/0x24
> > [    0.205776]  [<ffffffff810554a5>] __cond_resched+0x25/0x40
> > [    0.210022]  [<ffffffff817daf3b>] _cond_resched+0x2b/0x40
> > [    0.215420]  [<ffffffff81386cbe>] acpi_ps_complete_op+0x262/0x278
> > [    0.220023]  [<ffffffff813874df>] acpi_ps_parse_loop+0x80b/0x960
> > [    0.230023]  [<ffffffff81386607>] acpi_ps_parse_aml+0x98/0x274
> > [    0.235859]  [<ffffffff81384cbb>] acpi_ns_one_complete_parse+0x103/0x120
> > [    0.240021]  [<ffffffff810886da>] ? up+0x2a/0x50
> > [    0.244641]  [<ffffffff81384cf3>] acpi_ns_parse_table+0x1b/0x34
> > [    0.250022]  [<ffffffff8138242a>] acpi_ns_load_table+0x4a/0x8c
> > [    0.260023]  [<ffffffff8138947c>] acpi_load_tables+0x9c/0x15d
> > [    0.265774]  [<ffffffff81d0b0f8>] acpi_early_init+0x6c/0xf7
> > [    0.270022]  [<ffffffff81cd8d31>] start_kernel+0x400/0x415
> > [    0.275508]  [<ffffffff81cd8346>] x86_64_start_reservations+0x131/0x135
> > [    0.280022]  [<ffffffff81cd844d>] x86_64_start_kernel+0x103/0x112
> > 
> > ACPI_PREEMPTION_POINT() is called from acpi_ps_complete_op() and schedules
> > if !PREEMPT. But preemption is disabled as we are in early bootup.
> 
> This still happens in 3.1-rc[123].
> Was there a patch for it?

I think this patch should fix it, and may make it to mainline soon:
http://git.kernel.org/?p=linux/kernel/git/paulmck/linux-2.6-rcu.git;a=commitdiff;h=61b12d4413c866abbf0d0cf18f5a2bd4420ca15e

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Bug: ACPI, scheduling while atomic (was Re: [PATCH 0/4] sched: Make sleep inside atomic detection work on !PREEMPT)
  2011-08-25  3:57   ` Randy Dunlap
  2011-08-27 15:32     ` Frederic Weisbecker
@ 2011-09-26 22:33     ` Davidlohr Bueso
  2011-09-26 22:54       ` Paul E. McKenney
  1 sibling, 1 reply; 16+ messages in thread
From: Davidlohr Bueso @ 2011-09-26 22:33 UTC (permalink / raw)
  To: Randy Dunlap
  Cc: Frederic Weisbecker, LKML, Len Brown, Paul E. McKenney,
	Peter Zijlstra, Ingo Molnar

On Wed, 2011-08-24 at 20:57 -0700, Randy Dunlap wrote:
> On Thu, 9 Jun 2011 00:49:41 +0200 Frederic Weisbecker wrote:
> 
> > On Wed, Jun 08, 2011 at 07:48:31PM +0200, Frederic Weisbecker wrote:
> > > Aside it may mostly avoid the need for a specific PROVE_RCU
> > > check when we sleep inside an rcu read side critical section.
> > > 
> > > Better make sleeping inside atomic sections work everywhere.
> > 
> > BTW, it has helped detect a bug in the ACPI code. It happens with
> > !CONFIG_PREEMPT:
> > 
> > [    0.160187] BUG: scheduling while atomic: swapper/0/0x10000002
> > [    0.166016] no locks held by swapper/0.
> > [    0.170014] Modules linked in:
> > [    0.173107] Pid: 0, comm: swapper Not tainted 2.6.39+ #124
> > [    0.180014] Call Trace:
> > [    0.182481]  [<ffffffff81048685>] __schedule_bug+0x85/0x90
> > [    0.187967]  [<ffffffff817da98c>] schedule+0x75c/0xa40
> > [    0.190022]  [<ffffffff8109a1fd>] ? trace_hardirqs_on+0xd/0x10
> > [    0.200023]  [<ffffffff813879c0>] ? acpi_ps_free_op+0x22/0x24
> > [    0.205776]  [<ffffffff810554a5>] __cond_resched+0x25/0x40
> > [    0.210022]  [<ffffffff817daf3b>] _cond_resched+0x2b/0x40
> > [    0.215420]  [<ffffffff81386cbe>] acpi_ps_complete_op+0x262/0x278
> > [    0.220023]  [<ffffffff813874df>] acpi_ps_parse_loop+0x80b/0x960
> > [    0.230023]  [<ffffffff81386607>] acpi_ps_parse_aml+0x98/0x274
> > [    0.235859]  [<ffffffff81384cbb>] acpi_ns_one_complete_parse+0x103/0x120
> > [    0.240021]  [<ffffffff810886da>] ? up+0x2a/0x50
> > [    0.244641]  [<ffffffff81384cf3>] acpi_ns_parse_table+0x1b/0x34
> > [    0.250022]  [<ffffffff8138242a>] acpi_ns_load_table+0x4a/0x8c
> > [    0.260023]  [<ffffffff8138947c>] acpi_load_tables+0x9c/0x15d
> > [    0.265774]  [<ffffffff81d0b0f8>] acpi_early_init+0x6c/0xf7
> > [    0.270022]  [<ffffffff81cd8d31>] start_kernel+0x400/0x415
> > [    0.275508]  [<ffffffff81cd8346>] x86_64_start_reservations+0x131/0x135
> > [    0.280022]  [<ffffffff81cd844d>] x86_64_start_kernel+0x103/0x112
> > 
> > ACPI_PREEMPTION_POINT() is called from acpi_ps_complete_op() and schedules
> > if !PREEMPT. But preemption is disabled as we are in early bootup.
> 
> This still happens in 3.1-rc[123].
> Was there a patch for it?
> 

And still in -rc7. I was looking at Paul's github
(https://github.com/paulmckrcu/linux/) but can't find it.




^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Bug: ACPI, scheduling while atomic (was Re: [PATCH 0/4] sched: Make sleep inside atomic detection work on !PREEMPT)
  2011-09-26 22:33     ` Davidlohr Bueso
@ 2011-09-26 22:54       ` Paul E. McKenney
  0 siblings, 0 replies; 16+ messages in thread
From: Paul E. McKenney @ 2011-09-26 22:54 UTC (permalink / raw)
  To: Davidlohr Bueso
  Cc: Randy Dunlap, Frederic Weisbecker, LKML, Len Brown,
	Peter Zijlstra, Ingo Molnar

On Mon, Sep 26, 2011 at 07:33:34PM -0300, Davidlohr Bueso wrote:
> On Wed, 2011-08-24 at 20:57 -0700, Randy Dunlap wrote:
> > On Thu, 9 Jun 2011 00:49:41 +0200 Frederic Weisbecker wrote:
> > 
> > > On Wed, Jun 08, 2011 at 07:48:31PM +0200, Frederic Weisbecker wrote:
> > > > Aside it may mostly avoid the need for a specific PROVE_RCU
> > > > check when we sleep inside an rcu read side critical section.
> > > > 
> > > > Better make sleeping inside atomic sections work everywhere.
> > > 
> > > BTW, it has helped detect a bug in the ACPI code. It happens with
> > > !CONFIG_PREEMPT:
> > > 
> > > [    0.160187] BUG: scheduling while atomic: swapper/0/0x10000002
> > > [    0.166016] no locks held by swapper/0.
> > > [    0.170014] Modules linked in:
> > > [    0.173107] Pid: 0, comm: swapper Not tainted 2.6.39+ #124
> > > [    0.180014] Call Trace:
> > > [    0.182481]  [<ffffffff81048685>] __schedule_bug+0x85/0x90
> > > [    0.187967]  [<ffffffff817da98c>] schedule+0x75c/0xa40
> > > [    0.190022]  [<ffffffff8109a1fd>] ? trace_hardirqs_on+0xd/0x10
> > > [    0.200023]  [<ffffffff813879c0>] ? acpi_ps_free_op+0x22/0x24
> > > [    0.205776]  [<ffffffff810554a5>] __cond_resched+0x25/0x40
> > > [    0.210022]  [<ffffffff817daf3b>] _cond_resched+0x2b/0x40
> > > [    0.215420]  [<ffffffff81386cbe>] acpi_ps_complete_op+0x262/0x278
> > > [    0.220023]  [<ffffffff813874df>] acpi_ps_parse_loop+0x80b/0x960
> > > [    0.230023]  [<ffffffff81386607>] acpi_ps_parse_aml+0x98/0x274
> > > [    0.235859]  [<ffffffff81384cbb>] acpi_ns_one_complete_parse+0x103/0x120
> > > [    0.240021]  [<ffffffff810886da>] ? up+0x2a/0x50
> > > [    0.244641]  [<ffffffff81384cf3>] acpi_ns_parse_table+0x1b/0x34
> > > [    0.250022]  [<ffffffff8138242a>] acpi_ns_load_table+0x4a/0x8c
> > > [    0.260023]  [<ffffffff8138947c>] acpi_load_tables+0x9c/0x15d
> > > [    0.265774]  [<ffffffff81d0b0f8>] acpi_early_init+0x6c/0xf7
> > > [    0.270022]  [<ffffffff81cd8d31>] start_kernel+0x400/0x415
> > > [    0.275508]  [<ffffffff81cd8346>] x86_64_start_reservations+0x131/0x135
> > > [    0.280022]  [<ffffffff81cd844d>] x86_64_start_kernel+0x103/0x112
> > > 
> > > ACPI_PREEMPTION_POINT() is called from acpi_ps_complete_op() and schedules
> > > if !PREEMPT. But preemption is disabled as we are in early bootup.
> > 
> > This still happens in 3.1-rc[123].
> > Was there a patch for it?
> > 
> 
> And still in -rc7. I was looking at Paul's github
> (https://github.com/paulmckrcu/linux/) but can't find it.

I just updated this with Frederic's new patch series.  Or do you mean
that you can't access my github tree at all?

							Thanx, Paul

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2011-09-26 22:54 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-06-08 17:48 [PATCH 0/4] sched: Make sleep inside atomic detection work on !PREEMPT Frederic Weisbecker
2011-06-08 17:48 ` [PATCH 1/4] sched: Remove pointless in_atomic() definition check Frederic Weisbecker
2011-06-08 17:48 ` [PATCH 2/4] sched: Isolate preempt counting in its own config option Frederic Weisbecker
2011-06-08 19:40   ` Paul E. McKenney
2011-06-08 19:47   ` Peter Zijlstra
2011-06-08 19:58     ` Paul E. McKenney
2011-06-08 20:09       ` Peter Zijlstra
2011-06-08 17:48 ` [PATCH 3/4] sched: Make sleeping inside spinlock detection working in !CONFIG_PREEMPT Frederic Weisbecker
2011-06-08 19:40   ` Paul E. McKenney
2011-06-08 17:48 ` [PATCH 4/4] sched: Generalize sleep inside spinlock detection Frederic Weisbecker
2011-06-08 19:41   ` Paul E. McKenney
2011-06-08 22:49 ` Bug: ACPI, scheduling while atomic (was Re: [PATCH 0/4] sched: Make sleep inside atomic detection work on !PREEMPT) Frederic Weisbecker
2011-08-25  3:57   ` Randy Dunlap
2011-08-27 15:32     ` Frederic Weisbecker
2011-09-26 22:33     ` Davidlohr Bueso
2011-09-26 22:54       ` Paul E. McKenney
