linux-kernel.vger.kernel.org archive mirror
* [PATCH -rt 1/9] preempt rcu: check for underflow
@ 2007-07-30  2:45 Daniel Walker
  2007-07-30  2:45 ` [PATCH -rt 2/9] Dont allow non-threaded softirqs and threaded hardirqs Daniel Walker
                   ` (9 more replies)
  0 siblings, 10 replies; 26+ messages in thread
From: Daniel Walker @ 2007-07-30  2:45 UTC (permalink / raw)
  To: mingo; +Cc: linux-kernel, linux-rt-users

[-- Attachment #1: add-warn-on-rcu-read-unlock-imbalance.patch --]
[-- Type: text/plain, Size: 656 bytes --]

Simple WARN_ON to catch any underflow in rcu_read_lock_nesting.

Signed-off-by: Daniel Walker <dwalker@mvista.com>

---
 kernel/rcupreempt.c |    6 ++++++
 1 file changed, 6 insertions(+)

Index: linux-2.6.22/kernel/rcupreempt.c
===================================================================
--- linux-2.6.22.orig/kernel/rcupreempt.c
+++ linux-2.6.22/kernel/rcupreempt.c
@@ -157,6 +157,12 @@ void __rcu_read_unlock(void)
 	}
 
 	local_irq_restore(oldirq);
+
+	/*
+	 * If our rcu_read_lock_nesting went negative, likely
+	 * something is wrong..
+	 */
+	WARN_ON(current->rcu_read_lock_nesting < 0);
 }
 
 static void __rcu_advance_callbacks(void)

-- 

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH -rt 2/9] Dont allow non-threaded softirqs and threaded hardirqs
  2007-07-30  2:45 [PATCH -rt 1/9] preempt rcu: check for underflow Daniel Walker
@ 2007-07-30  2:45 ` Daniel Walker
  2007-07-30  9:23   ` Ingo Molnar
  2007-07-30  2:45 ` [PATCH -rt 3/9] Fix jiffies wrap issue in update_times Daniel Walker
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 26+ messages in thread
From: Daniel Walker @ 2007-07-30  2:45 UTC (permalink / raw)
  To: mingo; +Cc: linux-kernel, linux-rt-users, Steven Rostedt

[-- Attachment #1: preempt-hardirqs-selects-preempt-softirqs.patch --]
[-- Type: text/plain, Size: 991 bytes --]

From: Steven Rostedt <rostedt@goodmis.org>

Ingo,

I think this was sent before, and it did cause problems before. Would
there be *any* reason to have non-threaded softirqs but threaded hardirqs?
I can see lots of issues with that.

This patch makes selecting threaded hardirqs also select threaded softirqs.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

---
 kernel/Kconfig.preempt |    1 +
 1 file changed, 1 insertion(+)

Index: linux-2.6.22/kernel/Kconfig.preempt
===================================================================
--- linux-2.6.22.orig/kernel/Kconfig.preempt	2007-07-26 14:59:11.000000000 +0000
+++ linux-2.6.22/kernel/Kconfig.preempt	2007-07-26 14:59:48.000000000 +0000
@@ -106,6 +106,7 @@ config PREEMPT_HARDIRQS
 	bool "Thread Hardirqs"
 	default n
 	depends on !GENERIC_HARDIRQS_NO__DO_IRQ
+	select PREEMPT_SOFTIRQS
 	help
 	  This option reduces the latency of the kernel by 'threading'
           hardirqs. This means that all (or selected) hardirqs will run

-- 


* [PATCH -rt 3/9] Fix jiffies wrap issue in update_times
  2007-07-30  2:45 [PATCH -rt 1/9] preempt rcu: check for underflow Daniel Walker
  2007-07-30  2:45 ` [PATCH -rt 2/9] Dont allow non-threaded softirqs and threaded hardirqs Daniel Walker
@ 2007-07-30  2:45 ` Daniel Walker
  2007-07-30  9:25   ` Ingo Molnar
  2007-07-30  2:45 ` [PATCH -rt 4/9] ifdef raise_softirq_irqoff wakeup Daniel Walker
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 26+ messages in thread
From: Daniel Walker @ 2007-07-30  2:45 UTC (permalink / raw)
  To: mingo; +Cc: linux-kernel, linux-rt-users

[-- Attachment #1: initialize-last_tick-in-calc_load.patch --]
[-- Type: text/plain, Size: 775 bytes --]

In prior -rt versions the last_tick value was called wall_jiffies
and was initialized in the same way as below. If this value isn't
initialized, calc_load() is skewed for several minutes right after
boot — skewed meaning it always reports zero.

Signed-off-by: Daniel Walker <dwalker@mvista.com>

---
 kernel/timer.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Index: linux-2.6.22/kernel/timer.c
===================================================================
--- linux-2.6.22.orig/kernel/timer.c
+++ linux-2.6.22/kernel/timer.c
@@ -987,7 +987,7 @@ void run_local_timers(void)
  */
 static inline void update_times(void)
 {
-	static unsigned long last_tick;
+	static unsigned long last_tick = INITIAL_JIFFIES;
 	unsigned long ticks, flags;
 
 	/*

-- 


* [PATCH -rt 4/9] ifdef raise_softirq_irqoff wakeup
  2007-07-30  2:45 [PATCH -rt 1/9] preempt rcu: check for underflow Daniel Walker
  2007-07-30  2:45 ` [PATCH -rt 2/9] Dont allow non-threaded softirqs and threaded hardirqs Daniel Walker
  2007-07-30  2:45 ` [PATCH -rt 3/9] Fix jiffies wrap issue in update_times Daniel Walker
@ 2007-07-30  2:45 ` Daniel Walker
  2007-07-30  9:27   ` Ingo Molnar
  2007-07-30  2:45 ` [PATCH -rt 5/9] net: fix mis-merge in qdisc_restart Daniel Walker
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 26+ messages in thread
From: Daniel Walker @ 2007-07-30  2:45 UTC (permalink / raw)
  To: mingo; +Cc: linux-kernel, linux-rt-users

[-- Attachment #1: softirq-raise-wakeup-fix.patch --]
[-- Type: text/plain, Size: 846 bytes --]

raise_softirq is called on every timer interrupt from run_local_timers(),
which causes a thread wakeup on every timer interrupt. This happens
even with !CONFIG_PREEMPT_SOFTIRQS, where the wakeup is most likely
not needed. In addition it also fouls calc_load(), since calc_load()
again observes at least one runnable thread on every invocation.

Signed-off-by: Daniel Walker <dwalker@mvista.com>
 
---
 kernel/softirq.c |    2 ++
 1 file changed, 2 insertions(+)

Index: linux-2.6.22/kernel/softirq.c
===================================================================
--- linux-2.6.22.orig/kernel/softirq.c
+++ linux-2.6.22/kernel/softirq.c
@@ -508,7 +508,9 @@ inline fastcall void raise_softirq_irqof
 {
 	__do_raise_softirq_irqoff(nr);
 
+#ifdef CONFIG_PREEMPT_SOFTIRQS
 	wakeup_softirqd(nr);
+#endif
 }
 
 EXPORT_SYMBOL(raise_softirq_irqoff);

-- 


* [PATCH -rt 5/9] net: fix mis-merge in qdisc_restart
  2007-07-30  2:45 [PATCH -rt 1/9] preempt rcu: check for underflow Daniel Walker
                   ` (2 preceding siblings ...)
  2007-07-30  2:45 ` [PATCH -rt 4/9] ifdef raise_softirq_irqoff wakeup Daniel Walker
@ 2007-07-30  2:45 ` Daniel Walker
  2007-07-30  9:30   ` Ingo Molnar
  2007-07-30  2:45 ` [PATCH -rt 6/9] spinlock/rt_lock random cleanups Daniel Walker
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 26+ messages in thread
From: Daniel Walker @ 2007-07-30  2:45 UTC (permalink / raw)
  To: mingo; +Cc: linux-kernel, linux-rt-users

[-- Attachment #1: preempt-realtime-net-mismerge.patch --]
[-- Type: text/plain, Size: 747 bytes --]

This mismerge caused my networking to malfunction. The interface would
come up, but no traffic would make it in or out.

Signed-off-by: Daniel Walker <dwalker@mvista.com>

---
 net/sched/sch_generic.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Index: linux-2.6.22/net/sched/sch_generic.c
===================================================================
--- linux-2.6.22.orig/net/sched/sch_generic.c
+++ linux-2.6.22/net/sched/sch_generic.c
@@ -156,7 +156,7 @@ static inline int qdisc_restart(struct n
 #ifdef CONFIG_PREEMPT_RT
 		netif_tx_lock(dev);
 #else
-		if (netif_tx_trylock(dev))
+		if (!netif_tx_trylock(dev))
 			/* Another CPU grabbed the driver tx lock */
 			return handle_dev_cpu_collision(skb, dev, q);
 #endif

-- 


* [PATCH -rt 6/9] spinlock/rt_lock random cleanups
  2007-07-30  2:45 [PATCH -rt 1/9] preempt rcu: check for underflow Daniel Walker
                   ` (3 preceding siblings ...)
  2007-07-30  2:45 ` [PATCH -rt 5/9] net: fix mis-merge in qdisc_restart Daniel Walker
@ 2007-07-30  2:45 ` Daniel Walker
  2007-07-30  4:58   ` Ankita Garg
  2007-07-30  9:31   ` Ingo Molnar
  2007-07-30  2:45 ` [PATCH -rt 7/9] introduce PICK_FUNCTION Daniel Walker
                   ` (4 subsequent siblings)
  9 siblings, 2 replies; 26+ messages in thread
From: Daniel Walker @ 2007-07-30  2:45 UTC (permalink / raw)
  To: mingo; +Cc: linux-kernel, linux-rt-users

[-- Attachment #1: locking-cleanup.patch --]
[-- Type: text/plain, Size: 2135 bytes --]

Signed-off-by: Daniel Walker <dwalker@mvista.com>

---
 include/linux/rt_lock.h  |    6 ++++--
 include/linux/spinlock.h |    5 +++--
 2 files changed, 7 insertions(+), 4 deletions(-)

Index: linux-2.6.22/include/linux/rt_lock.h
===================================================================
--- linux-2.6.22.orig/include/linux/rt_lock.h
+++ linux-2.6.22/include/linux/rt_lock.h
@@ -128,12 +128,14 @@ struct semaphore name = \
  */
 #define DECLARE_MUTEX_LOCKED COMPAT_DECLARE_MUTEX_LOCKED
 
-extern void fastcall __sema_init(struct semaphore *sem, int val, char *name, char *file, int line);
+extern void fastcall
+__sema_init(struct semaphore *sem, int val, char *name, char *file, int line);
 
 #define rt_sema_init(sem, val) \
 		__sema_init(sem, val, #sem, __FILE__, __LINE__)
 
-extern void fastcall __init_MUTEX(struct semaphore *sem, char *name, char *file, int line);
+extern void fastcall
+__init_MUTEX(struct semaphore *sem, char *name, char *file, int line);
 #define rt_init_MUTEX(sem) \
 		__init_MUTEX(sem, #sem, __FILE__, __LINE__)
 
Index: linux-2.6.22/include/linux/spinlock.h
===================================================================
--- linux-2.6.22.orig/include/linux/spinlock.h
+++ linux-2.6.22/include/linux/spinlock.h
@@ -126,7 +126,7 @@ extern int __lockfunc generic__raw_read_
 
 #ifdef CONFIG_DEBUG_SPINLOCK
  extern __lockfunc void _raw_spin_lock(raw_spinlock_t *lock);
-#define _raw_spin_lock_flags(lock, flags) _raw_spin_lock(lock)
+# define _raw_spin_lock_flags(lock, flags) _raw_spin_lock(lock)
  extern __lockfunc int _raw_spin_trylock(raw_spinlock_t *lock);
  extern __lockfunc void _raw_spin_unlock(raw_spinlock_t *lock);
  extern __lockfunc void _raw_read_lock(raw_rwlock_t *lock);
@@ -325,7 +325,8 @@ do {							\
 
 # define _read_trylock(rwl)	rt_read_trylock(rwl)
 # define _write_trylock(rwl)	rt_write_trylock(rwl)
-#define _write_trylock_irqsave(rwl, flags)  rt_write_trylock_irqsave(rwl, flags)
+# define _write_trylock_irqsave(rwl, flags) \
+	rt_write_trylock_irqsave(rwl, flags)
 
 # define _read_lock(rwl)	rt_read_lock(rwl)
 # define _write_lock(rwl)	rt_write_lock(rwl)

-- 


* [PATCH -rt 7/9] introduce PICK_FUNCTION
  2007-07-30  2:45 [PATCH -rt 1/9] preempt rcu: check for underflow Daniel Walker
                   ` (4 preceding siblings ...)
  2007-07-30  2:45 ` [PATCH -rt 6/9] spinlock/rt_lock random cleanups Daniel Walker
@ 2007-07-30  2:45 ` Daniel Walker
  2007-07-30  9:39   ` Peter Zijlstra
  2007-07-30  2:45 ` [PATCH -rt 8/9] spinlocks/rwlocks: use PICK_FUNCTION() Daniel Walker
                   ` (3 subsequent siblings)
  9 siblings, 1 reply; 26+ messages in thread
From: Daniel Walker @ 2007-07-30  2:45 UTC (permalink / raw)
  To: mingo; +Cc: linux-kernel, linux-rt-users

[-- Attachment #1: pickop-rt_lock_h.patch --]
[-- Type: text/plain, Size: 9072 bytes --]

PICK_FUNCTION() is similar to the other PICK_OP-style macros, and was
created to replace them all. I used variadic macros to handle both the
PICK_FUNC_2ARG and PICK_FUNC_1ARG cases; otherwise the macros are
similar to the originals used for semaphores. The whole mechanism does
a compile-time switch between two different locking APIs — for example,
real spinlocks (raw_spinlock_t) and mutexes (i.e. sleeping spinlocks).

This new macro replaces all the duplication from lock type to lock type.
The result of this patch, and the next two, is a fairly nice simplification
and consolidation. Although the seqlock changes are larger than the originals,
I think the patchset is worthwhile overall.

Signed-off-by: Daniel Walker <dwalker@mvista.com>

---
 include/linux/pickop.h  |   32 +++++++++++
 include/linux/rt_lock.h |  129 +++++++++++++++---------------------------------
 2 files changed, 73 insertions(+), 88 deletions(-)

Index: linux-2.6.22/include/linux/pickop.h
===================================================================
--- /dev/null
+++ linux-2.6.22/include/linux/pickop.h
@@ -0,0 +1,32 @@
+#ifndef _LINUX_PICKOP_H
+#define _LINUX_PICKOP_H
+
+#undef TYPE_EQUAL
+#define TYPE_EQUAL(var, type) \
+		__builtin_types_compatible_p(typeof(var), type *)
+
+extern int __bad_func_type(void);
+
+#define PICK_FUNCTION(type1, type2, func1, func2, arg0, ...)		\
+do {									\
+	if (TYPE_EQUAL((arg0), type1))					\
+		func1((type1 *)(arg0), ##__VA_ARGS__);			\
+	else if (TYPE_EQUAL((arg0), type2))				\
+		func2((type2 *)(arg0), ##__VA_ARGS__);			\
+	else __bad_func_type();						\
+} while (0)
+
+#define PICK_FUNCTION_RET(type1, type2, func1, func2, arg0, ...)	\
+({									\
+	unsigned long __ret;						\
+									\
+	if (TYPE_EQUAL((arg0), type1))					\
+		__ret = func1((type1 *)(arg0), ##__VA_ARGS__);		\
+	else if (TYPE_EQUAL((arg0), type2))				\
+		__ret = func2((type2 *)(arg0), ##__VA_ARGS__);		\
+	else __ret = __bad_func_type();					\
+									\
+	__ret;								\
+})
+
+#endif /* _LINUX_PICKOP_H */
Index: linux-2.6.22/include/linux/rt_lock.h
===================================================================
--- linux-2.6.22.orig/include/linux/rt_lock.h
+++ linux-2.6.22/include/linux/rt_lock.h
@@ -156,76 +156,40 @@ extern void fastcall rt_up(struct semaph
 
 extern int __bad_func_type(void);
 
-#undef TYPE_EQUAL
-#define TYPE_EQUAL(var, type) \
-		__builtin_types_compatible_p(typeof(var), type *)
-
-#define PICK_FUNC_1ARG(type1, type2, func1, func2, arg)			\
-do {									\
-	if (TYPE_EQUAL((arg), type1))					\
-		func1((type1 *)(arg));					\
-	else if (TYPE_EQUAL((arg), type2))				\
-		func2((type2 *)(arg));					\
-	else __bad_func_type();						\
-} while (0)
+#include <linux/pickop.h>
 
-#define PICK_FUNC_1ARG_RET(type1, type2, func1, func2, arg)		\
-({									\
-	unsigned long __ret;						\
-									\
-	if (TYPE_EQUAL((arg), type1))					\
-		__ret = func1((type1 *)(arg));				\
-	else if (TYPE_EQUAL((arg), type2))				\
-		__ret = func2((type2 *)(arg));				\
-	else __ret = __bad_func_type();					\
-									\
-	__ret;								\
-})
-
-#define PICK_FUNC_2ARG(type1, type2, func1, func2, arg0, arg1)		\
-do {									\
-	if (TYPE_EQUAL((arg0), type1))					\
-		func1((type1 *)(arg0), arg1);				\
-	else if (TYPE_EQUAL((arg0), type2))				\
-		func2((type2 *)(arg0), arg1);				\
-	else __bad_func_type();						\
-} while (0)
+/*
+ * PICK_SEM_OP() is a small redirector to allow less typing of the lock
+ * types struct compat_semaphore, struct semaphore, at the front of the
+ * PICK_FUNCTION macro.
+ */
+#define PICK_SEM_OP(...) PICK_FUNCTION(struct compat_semaphore,	\
+	struct semaphore, ##__VA_ARGS__)
+#define PICK_SEM_OP_RET(...) PICK_FUNCTION_RET(struct compat_semaphore,\
+	struct semaphore, ##__VA_ARGS__)
 
 #define sema_init(sem, val) \
-	PICK_FUNC_2ARG(struct compat_semaphore, struct semaphore, \
-		compat_sema_init, rt_sema_init, sem, val)
+	PICK_SEM_OP(compat_sema_init, rt_sema_init, sem, val)
 
-#define init_MUTEX(sem) \
-	PICK_FUNC_1ARG(struct compat_semaphore, struct semaphore, \
-		compat_init_MUTEX, rt_init_MUTEX, sem)
+#define init_MUTEX(sem) PICK_SEM_OP(compat_init_MUTEX, rt_init_MUTEX, sem)
 
 #define init_MUTEX_LOCKED(sem) \
-	PICK_FUNC_1ARG(struct compat_semaphore, struct semaphore, \
-		compat_init_MUTEX_LOCKED, rt_init_MUTEX_LOCKED, sem)
+	PICK_SEM_OP(compat_init_MUTEX_LOCKED, rt_init_MUTEX_LOCKED, sem)
 
-#define down(sem) \
-	PICK_FUNC_1ARG(struct compat_semaphore, struct semaphore, \
-		compat_down, rt_down, sem)
+#define down(sem) PICK_SEM_OP(compat_down, rt_down, sem)
 
 #define down_interruptible(sem) \
-	PICK_FUNC_1ARG_RET(struct compat_semaphore, struct semaphore, \
-		compat_down_interruptible, rt_down_interruptible, sem)
+	PICK_SEM_OP_RET(compat_down_interruptible, rt_down_interruptible, sem)
 
 #define down_trylock(sem) \
-	PICK_FUNC_1ARG_RET(struct compat_semaphore, struct semaphore, \
-		compat_down_trylock, rt_down_trylock, sem)
+	PICK_SEM_OP_RET(compat_down_trylock, rt_down_trylock, sem)
 
-#define up(sem) \
-	PICK_FUNC_1ARG(struct compat_semaphore, struct semaphore, \
-		compat_up, rt_up, sem)
+#define up(sem) PICK_SEM_OP(compat_up, rt_up, sem)
 
 #define sem_is_locked(sem) \
-	PICK_FUNC_1ARG_RET(struct compat_semaphore, struct semaphore, \
-		compat_sem_is_locked, rt_sem_is_locked, sem)
+	PICK_SEM_OP_RET(compat_sem_is_locked, rt_sem_is_locked, sem)
 
-#define sema_count(sem) \
-	PICK_FUNC_1ARG_RET(struct compat_semaphore, struct semaphore, \
-		compat_sema_count, rt_sema_count, sem)
+#define sema_count(sem) PICK_SEM_OP_RET(compat_sema_count, rt_sema_count, sem)
 
 /*
  * rwsems:
@@ -272,58 +236,47 @@ extern void fastcall rt_downgrade_write(
 
 # define rt_rwsem_is_locked(rws)	(rt_mutex_is_locked(&(rws)->lock))
 
-#define init_rwsem(rwsem) \
-	PICK_FUNC_1ARG(struct compat_rw_semaphore, struct rw_semaphore, \
-		compat_init_rwsem, rt_init_rwsem, rwsem)
-
-#define down_read(rwsem) \
-	PICK_FUNC_1ARG(struct compat_rw_semaphore, struct rw_semaphore, \
-		compat_down_read, rt_down_read, rwsem)
+#define PICK_RWSEM_OP(...) PICK_FUNCTION(struct compat_rw_semaphore,	\
+	struct rw_semaphore, ##__VA_ARGS__)
+#define PICK_RWSEM_OP_RET(...) PICK_FUNCTION_RET(struct compat_rw_semaphore,\
+	struct rw_semaphore, ##__VA_ARGS__)
+
+#define init_rwsem(rwsem) PICK_RWSEM_OP(compat_init_rwsem, rt_init_rwsem, rwsem)
+
+#define down_read(rwsem) PICK_RWSEM_OP(compat_down_read, rt_down_read, rwsem)
 
 #define down_read_non_owner(rwsem) \
-	PICK_FUNC_1ARG(struct compat_rw_semaphore, struct rw_semaphore, \
-		compat_down_read_non_owner, rt_down_read_non_owner, rwsem)
+	PICK_RWSEM_OP(compat_down_read_non_owner, rt_down_read_non_owner, rwsem)
 
 #define down_read_trylock(rwsem) \
-	PICK_FUNC_1ARG_RET(struct compat_rw_semaphore, struct rw_semaphore, \
-		compat_down_read_trylock, rt_down_read_trylock, rwsem)
+	PICK_RWSEM_OP_RET(compat_down_read_trylock, rt_down_read_trylock, rwsem)
 
-#define down_write(rwsem) \
-	PICK_FUNC_1ARG(struct compat_rw_semaphore, struct rw_semaphore, \
-		compat_down_write, rt_down_write, rwsem)
+#define down_write(rwsem) PICK_RWSEM_OP(compat_down_write, rt_down_write, rwsem)
 
 #define down_read_nested(rwsem, subclass) \
-	PICK_FUNC_2ARG(struct compat_rw_semaphore, struct rw_semaphore, \
-		compat_down_read_nested, rt_down_read_nested, rwsem, subclass)
-
+	PICK_RWSEM_OP(compat_down_read_nested, rt_down_read_nested,	\
+		rwsem, subclass)
 
 #define down_write_nested(rwsem, subclass) \
-	PICK_FUNC_2ARG(struct compat_rw_semaphore, struct rw_semaphore, \
-		compat_down_write_nested, rt_down_write_nested, rwsem, subclass)
+	PICK_RWSEM_OP(compat_down_write_nested, rt_down_write_nested,	\
+		rwsem, subclass)
 
 #define down_write_trylock(rwsem) \
-	PICK_FUNC_1ARG_RET(struct compat_rw_semaphore, struct rw_semaphore, \
-		compat_down_write_trylock, rt_down_write_trylock, rwsem)
+	PICK_RWSEM_OP_RET(compat_down_write_trylock, rt_down_write_trylock,\
+		rwsem)
 
-#define up_read(rwsem) \
-	PICK_FUNC_1ARG(struct compat_rw_semaphore, struct rw_semaphore, \
-		compat_up_read, rt_up_read, rwsem)
+#define up_read(rwsem) PICK_RWSEM_OP(compat_up_read, rt_up_read, rwsem)
 
 #define up_read_non_owner(rwsem) \
-	PICK_FUNC_1ARG(struct compat_rw_semaphore, struct rw_semaphore, \
-		compat_up_read_non_owner, rt_up_read_non_owner, rwsem)
+	PICK_RWSEM_OP(compat_up_read_non_owner, rt_up_read_non_owner, rwsem)
 
-#define up_write(rwsem) \
-	PICK_FUNC_1ARG(struct compat_rw_semaphore, struct rw_semaphore, \
-		compat_up_write, rt_up_write, rwsem)
+#define up_write(rwsem) PICK_RWSEM_OP(compat_up_write, rt_up_write, rwsem)
 
 #define downgrade_write(rwsem) \
-	PICK_FUNC_1ARG(struct compat_rw_semaphore, struct rw_semaphore, \
-		compat_downgrade_write, rt_downgrade_write, rwsem)
+	PICK_RWSEM_OP(compat_downgrade_write, rt_downgrade_write, rwsem)
 
 #define rwsem_is_locked(rwsem) \
-	PICK_FUNC_1ARG_RET(struct compat_rw_semaphore, struct rw_semaphore, \
-		compat_rwsem_is_locked, rt_rwsem_is_locked, rwsem)
+	PICK_RWSEM_OP_RET(compat_rwsem_is_locked, rt_rwsem_is_locked, rwsem)
 
 #endif /* CONFIG_PREEMPT_RT */
 

-- 


* [PATCH -rt 8/9] spinlocks/rwlocks: use PICK_FUNCTION()
  2007-07-30  2:45 [PATCH -rt 1/9] preempt rcu: check for underflow Daniel Walker
                   ` (5 preceding siblings ...)
  2007-07-30  2:45 ` [PATCH -rt 7/9] introduce PICK_FUNCTION Daniel Walker
@ 2007-07-30  2:45 ` Daniel Walker
  2007-07-30  2:45 ` [PATCH -rt 9/9] seqlocks: use PICK_FUNCTION Daniel Walker
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 26+ messages in thread
From: Daniel Walker @ 2007-07-30  2:45 UTC (permalink / raw)
  To: mingo; +Cc: linux-kernel, linux-rt-users

[-- Attachment #1: pickop-spinlock-rwlocks.patch --]
[-- Type: text/plain, Size: 20137 bytes --]

Replace the old PICK_OP-style macros with the new PICK_FUNCTION() macro.

Signed-off-by: Daniel Walker <dwalker@mvista.com>

---
 include/linux/sched.h    |   13 -
 include/linux/spinlock.h |  345 ++++++++++++++---------------------------------
 kernel/rtmutex.c         |    2 
 lib/dec_and_lock.c       |    2 
 4 files changed, 111 insertions(+), 251 deletions(-)

Index: linux-2.6.22/include/linux/sched.h
===================================================================
--- linux-2.6.22.orig/include/linux/sched.h
+++ linux-2.6.22/include/linux/sched.h
@@ -1997,17 +1997,8 @@ extern int __cond_resched_raw_spinlock(r
 extern int __cond_resched_spinlock(spinlock_t *spinlock);
 
 #define cond_resched_lock(lock) \
-({								\
-	int __ret;						\
-								\
-	if (TYPE_EQUAL((lock), raw_spinlock_t))	 		\
-		__ret = __cond_resched_raw_spinlock((raw_spinlock_t *)lock);\
-	else if (TYPE_EQUAL(lock, spinlock_t))			\
-		__ret = __cond_resched_spinlock((spinlock_t *)lock); \
-	else __ret = __bad_spinlock_type();			\
-								\
-	__ret;							\
-})
+	PICK_SPIN_OP_RET(__cond_resched_raw_spinlock, __cond_resched_spinlock,\
+		 lock)
 
 extern int cond_resched_softirq(void);
 extern int cond_resched_softirq_context(void);
Index: linux-2.6.22/include/linux/spinlock.h
===================================================================
--- linux-2.6.22.orig/include/linux/spinlock.h
+++ linux-2.6.22/include/linux/spinlock.h
@@ -91,6 +91,7 @@
 #include <linux/stringify.h>
 #include <linux/bottom_half.h>
 #include <linux/irqflags.h>
+#include <linux/pickop.h>
 
 #include <asm/system.h>
 
@@ -162,7 +163,7 @@ extern void __lockfunc rt_spin_unlock_wa
 extern int __lockfunc
 rt_spin_trylock_irqsave(spinlock_t *lock, unsigned long *flags);
 extern int __lockfunc rt_spin_trylock(spinlock_t *lock);
-extern int _atomic_dec_and_spin_lock(atomic_t *atomic, spinlock_t *lock);
+extern int _atomic_dec_and_spin_lock(spinlock_t *lock, atomic_t *atomic);
 
 /*
  * lockdep-less calls, for derived types like rwlock:
@@ -243,54 +244,6 @@ do {							\
 #  define _spin_trylock_irqsave(l,f)	TSNBCONRT(l)
 #endif
 
-#undef TYPE_EQUAL
-#define TYPE_EQUAL(lock, type) \
-		__builtin_types_compatible_p(typeof(lock), type *)
-
-#define PICK_OP(op, lock)						\
-do {									\
-	if (TYPE_EQUAL((lock), raw_spinlock_t))				\
-		__spin##op((raw_spinlock_t *)(lock));			\
-	else if (TYPE_EQUAL(lock, spinlock_t))				\
-		_spin##op((spinlock_t *)(lock));			\
-	else __bad_spinlock_type();					\
-} while (0)
-
-#define PICK_OP_RET(op, lock...)					\
-({									\
-	unsigned long __ret;						\
-									\
-	if (TYPE_EQUAL((lock), raw_spinlock_t))	 			\
-		__ret = __spin##op((raw_spinlock_t *)(lock));		\
-	else if (TYPE_EQUAL(lock, spinlock_t))				\
-		__ret = _spin##op((spinlock_t *)(lock));		\
-	else __ret = __bad_spinlock_type();				\
-									\
-	__ret;								\
-})
-
-#define PICK_OP2(op, lock, flags)					\
-do {									\
-	if (TYPE_EQUAL((lock), raw_spinlock_t))				\
-		__spin##op((raw_spinlock_t *)(lock), flags);		\
-	else if (TYPE_EQUAL(lock, spinlock_t))				\
-		_spin##op((spinlock_t *)(lock), flags);			\
-	else __bad_spinlock_type();					\
-} while (0)
-
-#define PICK_OP2_RET(op, lock, flags)					\
-({									\
-	unsigned long __ret;						\
-									\
-	if (TYPE_EQUAL((lock), raw_spinlock_t))				\
-		__ret = __spin##op((raw_spinlock_t *)(lock), flags);	\
-	else if (TYPE_EQUAL(lock, spinlock_t))				\
-		__ret = _spin##op((spinlock_t *)(lock), flags);		\
-	else __bad_spinlock_type();					\
-									\
-	__ret;								\
-})
-
 extern void __lockfunc rt_write_lock(rwlock_t *rwlock);
 extern void __lockfunc rt_read_lock(rwlock_t *rwlock);
 extern int __lockfunc rt_write_trylock(rwlock_t *rwlock);
@@ -349,76 +302,10 @@ do {							\
 # define _read_unlock_irqrestore(rwl, f)	rt_read_unlock(rwl)
 # define _write_unlock_irqrestore(rwl, f)	rt_write_unlock(rwl)
 
-#define __PICK_RW_OP(optype, op, lock)					\
-do {									\
-	if (TYPE_EQUAL((lock), raw_rwlock_t))				\
-		__##optype##op((raw_rwlock_t *)(lock));			\
-	else if (TYPE_EQUAL(lock, rwlock_t))				\
-		##op((rwlock_t *)(lock));				\
-	else __bad_rwlock_type();					\
-} while (0)
-
-#define PICK_RW_OP(optype, op, lock)					\
-do {									\
-	if (TYPE_EQUAL((lock), raw_rwlock_t))				\
-		__##optype##op((raw_rwlock_t *)(lock));			\
-	else if (TYPE_EQUAL(lock, rwlock_t))				\
-		_##optype##op((rwlock_t *)(lock));			\
-	else __bad_rwlock_type();					\
-} while (0)
-
-#define __PICK_RW_OP_RET(optype, op, lock...)				\
-({									\
-	unsigned long __ret;						\
-									\
-	if (TYPE_EQUAL((lock), raw_rwlock_t))		  		\
-		__ret = __##optype##op((raw_rwlock_t *)(lock));		\
-	else if (TYPE_EQUAL(lock, rwlock_t))				\
-		__ret = _##optype##op((rwlock_t *)(lock));		\
-	else __ret = __bad_rwlock_type();				\
-									\
-	__ret;								\
-})
-
-#define PICK_RW_OP_RET(optype, op, lock...)				\
-({									\
-	unsigned long __ret;						\
-									\
-	if (TYPE_EQUAL((lock), raw_rwlock_t))				\
-		__ret = __##optype##op((raw_rwlock_t *)(lock));		\
-	else if (TYPE_EQUAL(lock, rwlock_t))				\
-		__ret = _##optype##op((rwlock_t *)(lock));		\
-	else __ret = __bad_rwlock_type();				\
-									\
-	__ret;								\
-})
-
-#define PICK_RW_OP2(optype, op, lock, flags)				\
-do {									\
-	if (TYPE_EQUAL((lock), raw_rwlock_t))				\
-		__##optype##op((raw_rwlock_t *)(lock), flags);		\
-	else if (TYPE_EQUAL(lock, rwlock_t))				\
-		_##optype##op((rwlock_t *)(lock), flags);		\
-	else __bad_rwlock_type();					\
-} while (0)
-
-#define PICK_RW_OP2_RET(optype, op, lock, flags)			\
-({									\
-	unsigned long __ret;						\
-									\
-	if (TYPE_EQUAL((lock), raw_rwlock_t))				\
-		__ret = __##optype##op((raw_rwlock_t *)(lock), flags);	\
-	else if (TYPE_EQUAL(lock, rwlock_t))				\
-		__ret = _##optype##op((rwlock_t *)(lock), flags);	\
-	else __bad_rwlock_type();					\
-									\
-	__ret;								\
-})
-
 #ifdef CONFIG_DEBUG_SPINLOCK
   extern void __raw_spin_lock_init(raw_spinlock_t *lock, const char *name,
 				   struct lock_class_key *key);
-# define _raw_spin_lock_init(lock)				\
+# define _raw_spin_lock_init(lock, name, file, line)		\
 do {								\
 	static struct lock_class_key __key;			\
 								\
@@ -428,25 +315,28 @@ do {								\
 #else
 #define __raw_spin_lock_init(lock) \
 	do { *(lock) = RAW_SPIN_LOCK_UNLOCKED(lock); } while (0)
-# define _raw_spin_lock_init(lock) __raw_spin_lock_init(lock)
+# define _raw_spin_lock_init(lock, name, file, line) __raw_spin_lock_init(lock)
 #endif
 
-#define PICK_OP_INIT(op, lock)						\
-do {									\
-	if (TYPE_EQUAL((lock), raw_spinlock_t))				\
-		_raw_spin##op((raw_spinlock_t *)(lock));		\
-	else if (TYPE_EQUAL(lock, spinlock_t))				\
-		_spin##op((spinlock_t *)(lock), #lock, __FILE__, __LINE__); \
-	else __bad_spinlock_type();					\
-} while (0)
-
+/*
+ * PICK_SPIN_OP()/PICK_RW_OP() are simple redirectors for PICK_FUNCTION
+ */
+#define PICK_SPIN_OP(...)	\
+	PICK_FUNCTION(raw_spinlock_t, spinlock_t, ##__VA_ARGS__)
+#define PICK_SPIN_OP_RET(...)	\
+	PICK_FUNCTION_RET(raw_spinlock_t, spinlock_t, ##__VA_ARGS__)
+#define PICK_RW_OP(...)	PICK_FUNCTION(raw_rwlock_t, rwlock_t, ##__VA_ARGS__)
+#define PICK_RW_OP_RET(...)	\
+	PICK_FUNCTION_RET(raw_rwlock_t, rwlock_t, ##__VA_ARGS__)
 
-#define spin_lock_init(lock)		PICK_OP_INIT(_lock_init, lock)
+#define spin_lock_init(lock) \
+	PICK_SPIN_OP(_raw_spin_lock_init, _spin_lock_init, lock, #lock,	\
+		__FILE__, __LINE__)
 
 #ifdef CONFIG_DEBUG_SPINLOCK
   extern void __raw_rwlock_init(raw_rwlock_t *lock, const char *name,
 				struct lock_class_key *key);
-# define _raw_rwlock_init(lock)					\
+# define _raw_rwlock_init(lock, name, file, line)		\
 do {								\
 	static struct lock_class_key __key;			\
 								\
@@ -455,83 +345,82 @@ do {								\
 #else
 #define __raw_rwlock_init(lock) \
 	do { *(lock) = RAW_RW_LOCK_UNLOCKED(lock); } while (0)
-# define _raw_rwlock_init(lock) __raw_rwlock_init(lock)
+# define _raw_rwlock_init(lock, name, file, line) __raw_rwlock_init(lock)
 #endif
 
-#define __PICK_RW_OP_INIT(optype, op, lock)				\
-do {									\
-	if (TYPE_EQUAL((lock), raw_rwlock_t))				\
-		_raw_##optype##op((raw_rwlock_t *)(lock));		\
-	else if (TYPE_EQUAL(lock, rwlock_t))				\
-		_##optype##op((rwlock_t *)(lock), #lock, __FILE__, __LINE__);\
-	else __bad_spinlock_type();					\
-} while (0)
-
-#define rwlock_init(lock)	__PICK_RW_OP_INIT(rwlock, _init, lock)
+#define rwlock_init(lock) \
+	PICK_RW_OP(_raw_rwlock_init, _rwlock_init, lock, #lock,	\
+		__FILE__, __LINE__)
 
 #define __spin_is_locked(lock)	__raw_spin_is_locked(&(lock)->raw_lock)
 
-#define spin_is_locked(lock)	PICK_OP_RET(_is_locked, lock)
+#define spin_is_locked(lock)	\
+	PICK_SPIN_OP_RET(__spin_is_locked, _spin_is_locked, lock)
 
 #define __spin_unlock_wait(lock) __raw_spin_unlock_wait(&(lock)->raw_lock)
 
-#define spin_unlock_wait(lock)	PICK_OP(_unlock_wait, lock)
+#define spin_unlock_wait(lock) \
+	PICK_SPIN_OP(__spin_unlock_wait, _spin_unlock_wait, lock)
+
 /*
  * Define the various spin_lock and rw_lock methods.  Note we define these
  * regardless of whether CONFIG_SMP or CONFIG_PREEMPT are set. The various
  * methods are defined as nops in the case they are not required.
  */
-// #define spin_trylock(lock)	_spin_trylock(lock)
-#define spin_trylock(lock)	__cond_lock(lock, PICK_OP_RET(_trylock, lock))
+#define spin_trylock(lock)	\
+	__cond_lock(lock, PICK_SPIN_OP_RET(__spin_trylock, _spin_trylock, lock))
 
-//#define read_trylock(lock)	_read_trylock(lock)
-#define read_trylock(lock)	__cond_lock(lock, PICK_RW_OP_RET(read, _trylock, lock))
+#define read_trylock(lock)	\
+	__cond_lock(lock, PICK_RW_OP_RET(__read_trylock, _read_trylock, lock))
 
-//#define write_trylock(lock)	_write_trylock(lock)
-#define write_trylock(lock)	__cond_lock(lock, PICK_RW_OP_RET(write, _trylock, lock))
+#define write_trylock(lock)	\
+	__cond_lock(lock, PICK_RW_OP_RET(__write_trylock, _write_trylock, lock))
 
 #define write_trylock_irqsave(lock, flags) \
-	__cond_lock(lock, PICK_RW_OP2_RET(write, _trylock_irqsave, lock, &flags))
+	__cond_lock(lock, PICK_RW_OP_RET(__write_trylock_irqsave, 	\
+		_write_trylock_irqsave, lock, &flags))
 
 #define __spin_can_lock(lock)	__raw_spin_can_lock(&(lock)->raw_lock)
 #define __read_can_lock(lock)	__raw_read_can_lock(&(lock)->raw_lock)
 #define __write_can_lock(lock)	__raw_write_can_lock(&(lock)->raw_lock)
 
 #define spin_can_lock(lock) \
-	__cond_lock(lock, PICK_OP_RET(_can_lock, lock))
+	__cond_lock(lock, PICK_SPIN_OP_RET(__spin_can_lock, _spin_can_lock,\
+		lock))
 
 #define read_can_lock(lock) \
-	__cond_lock(lock, PICK_RW_OP_RET(read, _can_lock, lock))
+	__cond_lock(lock, PICK_RW_OP_RET(__read_can_lock, _read_can_lock, lock))
 
 #define write_can_lock(lock) \
-	__cond_lock(lock, PICK_RW_OP_RET(write, _can_lock, lock))
+	__cond_lock(lock, PICK_RW_OP_RET(__write_can_lock, _write_can_lock,\
+		lock))
 
-// #define spin_lock(lock)	_spin_lock(lock)
-#define spin_lock(lock)		PICK_OP(_lock, lock)
+#define spin_lock(lock) PICK_SPIN_OP(__spin_lock, _spin_lock, lock)
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
-# define spin_lock_nested(lock, subclass) PICK_OP2(_lock_nested, lock, subclass)
+# define spin_lock_nested(lock, subclass)	\
+	PICK_SPIN_OP(__spin_lock_nested, _spin_lock_nested, lock, subclass)
 #else
 # define spin_lock_nested(lock, subclass) spin_lock(lock)
 #endif
 
-//#define write_lock(lock)	_write_lock(lock)
-#define write_lock(lock)	PICK_RW_OP(write, _lock, lock)
+#define write_lock(lock) PICK_RW_OP(__write_lock, _write_lock, lock)
 
-// #define read_lock(lock)	_read_lock(lock)
-#define read_lock(lock)		PICK_RW_OP(read, _lock, lock)
+#define read_lock(lock)	PICK_RW_OP(__read_lock, _read_lock, lock)
 
 # define spin_lock_irqsave(lock, flags)				\
 do {								\
 	BUILD_CHECK_IRQ_FLAGS(flags);				\
-	flags = PICK_OP_RET(_lock_irqsave, lock);		\
+	flags = PICK_SPIN_OP_RET(__spin_lock_irqsave, _spin_lock_irqsave, \
+			lock);						\
 } while (0)
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 # define spin_lock_irqsave_nested(lock, flags, subclass)		\
 do {									\
 	BUILD_CHECK_IRQ_FLAGS(flags);					\
-	flags = PICK_OP2_RET(_lock_irqsave_nested, lock, subclass);	\
+	flags = PICK_SPIN_OP_RET(__spin_lock_irqsave_nested, 		\
+		_spin_lock_irqsave_nested, lock, subclass);		\
 } while (0)
 #else
 # define spin_lock_irqsave_nested(lock, flags, subclass) \
@@ -541,112 +430,92 @@ do {									\
 # define read_lock_irqsave(lock, flags)				\
 do {								\
 	BUILD_CHECK_IRQ_FLAGS(flags);				\
-	flags = PICK_RW_OP_RET(read, _lock_irqsave, lock);	\
+	flags = PICK_RW_OP_RET(__read_lock_irqsave, _read_lock_irqsave, lock);\
 } while (0)
 
 # define write_lock_irqsave(lock, flags)			\
 do {								\
 	BUILD_CHECK_IRQ_FLAGS(flags);				\
-	flags = PICK_RW_OP_RET(write, _lock_irqsave, lock);	\
+	flags = PICK_RW_OP_RET(__write_lock_irqsave, _write_lock_irqsave,lock);\
 } while (0)
 
-// #define spin_lock_irq(lock)	_spin_lock_irq(lock)
-// #define spin_lock_bh(lock)	_spin_lock_bh(lock)
-#define spin_lock_irq(lock)	PICK_OP(_lock_irq, lock)
-#define spin_lock_bh(lock)	PICK_OP(_lock_bh, lock)
-
-// #define read_lock_irq(lock)	_read_lock_irq(lock)
-// #define read_lock_bh(lock)	_read_lock_bh(lock)
-#define read_lock_irq(lock)	PICK_RW_OP(read, _lock_irq, lock)
-#define read_lock_bh(lock)	PICK_RW_OP(read, _lock_bh, lock)
-
-// #define write_lock_irq(lock)		_write_lock_irq(lock)
-// #define write_lock_bh(lock)		_write_lock_bh(lock)
-#define write_lock_irq(lock)	PICK_RW_OP(write, _lock_irq, lock)
-#define write_lock_bh(lock)	PICK_RW_OP(write, _lock_bh, lock)
-
-// #define spin_unlock(lock)	_spin_unlock(lock)
-// #define write_unlock(lock)	_write_unlock(lock)
-// #define read_unlock(lock)	_read_unlock(lock)
-#define spin_unlock(lock)	PICK_OP(_unlock, lock)
-#define read_unlock(lock)	PICK_RW_OP(read, _unlock, lock)
-#define write_unlock(lock)	PICK_RW_OP(write, _unlock, lock)
+#define spin_lock_irq(lock) PICK_SPIN_OP(__spin_lock_irq, _spin_lock_irq, lock)
 
-// #define spin_unlock(lock)	_spin_unlock_no_resched(lock)
-#define spin_unlock_no_resched(lock) \
-				PICK_OP(_unlock_no_resched, lock)
+#define spin_lock_bh(lock) PICK_SPIN_OP(__spin_lock_bh, _spin_lock_bh, lock)
 
-//#define spin_unlock_irqrestore(lock, flags)
-//		_spin_unlock_irqrestore(lock, flags)
-//#define spin_unlock_irq(lock)	_spin_unlock_irq(lock)
-//#define spin_unlock_bh(lock)	_spin_unlock_bh(lock)
-#define spin_unlock_irqrestore(lock, flags)		\
-do {							\
-	BUILD_CHECK_IRQ_FLAGS(flags);			\
-	PICK_OP2(_unlock_irqrestore, lock, flags);	\
-} while (0)
+#define read_lock_irq(lock) PICK_RW_OP(__read_lock_irq, _read_lock_irq, lock)
 
-#define spin_unlock_irq(lock)	PICK_OP(_unlock_irq, lock)
-#define spin_unlock_bh(lock)	PICK_OP(_unlock_bh, lock)
+#define read_lock_bh(lock) PICK_RW_OP(__read_lock_bh, _read_lock_bh, lock)
 
-// #define read_unlock_irqrestore(lock, flags)
-// 		_read_unlock_irqrestore(lock, flags)
-// #define read_unlock_irq(lock)	_read_unlock_irq(lock)
-// #define read_unlock_bh(lock)	_read_unlock_bh(lock)
-#define read_unlock_irqrestore(lock, flags)			\
-do {								\
-	BUILD_CHECK_IRQ_FLAGS(flags);				\
-	PICK_RW_OP2(read, _unlock_irqrestore, lock, flags);	\
+#define write_lock_irq(lock) PICK_RW_OP(__write_lock_irq, _write_lock_irq, lock)
+
+#define write_lock_bh(lock) PICK_RW_OP(__write_lock_bh, _write_lock_bh, lock)
+
+#define spin_unlock(lock) PICK_SPIN_OP(__spin_unlock, _spin_unlock, lock)
+
+#define read_unlock(lock) PICK_RW_OP(__read_unlock, _read_unlock, lock)
+
+#define write_unlock(lock) PICK_RW_OP(__write_unlock, _write_unlock, lock)
+
+#define spin_unlock_no_resched(lock) \
+	PICK_SPIN_OP(__spin_unlock_no_resched, _spin_unlock_no_resched, lock)
+
+#define spin_unlock_irqrestore(lock, flags)				\
+do {									\
+	BUILD_CHECK_IRQ_FLAGS(flags);					\
+	PICK_SPIN_OP(__spin_unlock_irqrestore, _spin_unlock_irqrestore,	\
+		     lock, flags);					\
 } while (0)
 
-#define read_unlock_irq(lock)	PICK_RW_OP(read, _unlock_irq, lock)
-#define read_unlock_bh(lock)	PICK_RW_OP(read, _unlock_bh, lock)
+#define spin_unlock_irq(lock)	\
+	PICK_SPIN_OP(__spin_unlock_irq, _spin_unlock_irq, lock)
+#define spin_unlock_bh(lock)	\
+	PICK_SPIN_OP(__spin_unlock_bh, _spin_unlock_bh, lock)
 
-// #define write_unlock_irqrestore(lock, flags)
-// 	_write_unlock_irqrestore(lock, flags)
-// #define write_unlock_irq(lock)			_write_unlock_irq(lock)
-// #define write_unlock_bh(lock)			_write_unlock_bh(lock)
-#define write_unlock_irqrestore(lock, flags)			\
-do {								\
-	BUILD_CHECK_IRQ_FLAGS(flags);				\
-	PICK_RW_OP2(write, _unlock_irqrestore, lock, flags);	\
+#define read_unlock_irqrestore(lock, flags)				\
+do {									\
+	BUILD_CHECK_IRQ_FLAGS(flags);					\
+	PICK_RW_OP(__read_unlock_irqrestore, _read_unlock_irqrestore,	\
+		lock, flags);						\
 } while (0)
-#define write_unlock_irq(lock)	PICK_RW_OP(write, _unlock_irq, lock)
-#define write_unlock_bh(lock)	PICK_RW_OP(write, _unlock_bh, lock)
 
-// #define spin_trylock_bh(lock)	_spin_trylock_bh(lock)
-#define spin_trylock_bh(lock)	__cond_lock(lock, PICK_OP_RET(_trylock_bh, lock))
+#define read_unlock_irq(lock)	\
+	PICK_RW_OP(__read_unlock_irq, _read_unlock_irq, lock)
+#define read_unlock_bh(lock) PICK_RW_OP(__read_unlock_bh, _read_unlock_bh, lock)
 
-// #define spin_trylock_irq(lock)
+#define write_unlock_irqrestore(lock, flags)				\
+do {									\
+	BUILD_CHECK_IRQ_FLAGS(flags);					\
+	PICK_RW_OP(__write_unlock_irqrestore, _write_unlock_irqrestore, \
+		lock, flags);						\
+} while (0)
+#define write_unlock_irq(lock)	\
+	PICK_RW_OP(__write_unlock_irq, _write_unlock_irq, lock)
 
-#define spin_trylock_irq(lock)	__cond_lock(lock, PICK_OP_RET(_trylock_irq, lock))
+#define write_unlock_bh(lock)	\
+	PICK_RW_OP(__write_unlock_bh, _write_unlock_bh, lock)
 
-// #define spin_trylock_irqsave(lock, flags)
+#define spin_trylock_bh(lock)	\
+	__cond_lock(lock, PICK_SPIN_OP_RET(__spin_trylock_bh, _spin_trylock_bh,\
+		lock))
+
+#define spin_trylock_irq(lock)	\
+	__cond_lock(lock, PICK_SPIN_OP_RET(__spin_trylock_irq,		\
+		_spin_trylock_irq, lock))
 
 #define spin_trylock_irqsave(lock, flags) \
-		__cond_lock(lock, PICK_OP2_RET(_trylock_irqsave, lock, &flags))
+	__cond_lock(lock, PICK_SPIN_OP_RET(__spin_trylock_irqsave, 	\
+		_spin_trylock_irqsave, lock, &flags))
 
 /* "lock on reference count zero" */
 #ifndef ATOMIC_DEC_AND_LOCK
 # include <asm/atomic.h>
-  extern int __atomic_dec_and_spin_lock(atomic_t *atomic, raw_spinlock_t *lock);
+  extern int __atomic_dec_and_spin_lock(raw_spinlock_t *lock, atomic_t *atomic);
 #endif
 
 #define atomic_dec_and_lock(atomic, lock)				\
-__cond_lock(lock, ({							\
-	unsigned long __ret;						\
-									\
-	if (TYPE_EQUAL(lock, raw_spinlock_t))				\
-		__ret = __atomic_dec_and_spin_lock(atomic,		\
-					(raw_spinlock_t *)(lock));	\
-	else if (TYPE_EQUAL(lock, spinlock_t))				\
-		__ret = _atomic_dec_and_spin_lock(atomic,		\
-					(spinlock_t *)(lock));		\
-	else __ret = __bad_spinlock_type();				\
-									\
-	__ret;								\
-}))
-
+	__cond_lock(lock, PICK_SPIN_OP_RET(__atomic_dec_and_spin_lock,	\
+		_atomic_dec_and_spin_lock, lock, atomic))
 
 /*
  *  bit-based spin_lock()
Index: linux-2.6.22/kernel/rtmutex.c
===================================================================
--- linux-2.6.22.orig/kernel/rtmutex.c
+++ linux-2.6.22/kernel/rtmutex.c
@@ -857,7 +857,7 @@ int __lockfunc rt_spin_trylock_irqsave(s
 }
 EXPORT_SYMBOL(rt_spin_trylock_irqsave);
 
-int _atomic_dec_and_spin_lock(atomic_t *atomic, spinlock_t *lock)
+int _atomic_dec_and_spin_lock(spinlock_t *lock, atomic_t *atomic)
 {
 	/* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
 	if (atomic_add_unless(atomic, -1, 1))
Index: linux-2.6.22/lib/dec_and_lock.c
===================================================================
--- linux-2.6.22.orig/lib/dec_and_lock.c
+++ linux-2.6.22/lib/dec_and_lock.c
@@ -17,7 +17,7 @@
  * because the spin-lock and the decrement must be
  * "atomic".
  */
-int __atomic_dec_and_spin_lock(atomic_t *atomic, raw_spinlock_t *lock)
+int __atomic_dec_and_spin_lock(raw_spinlock_t *lock, atomic_t *atomic)
 {
 #ifdef CONFIG_SMP
 	/* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */

-- 


* [PATCH -rt 9/9] seqlocks: use PICK_FUNCTION
  2007-07-30  2:45 [PATCH -rt 1/9] preempt rcu: check for underflow Daniel Walker
                   ` (6 preceding siblings ...)
  2007-07-30  2:45 ` [PATCH -rt 8/9] spinlocks/rwlocks: use PICK_FUNCTION() Daniel Walker
@ 2007-07-30  2:45 ` Daniel Walker
  2007-08-06  7:21   ` Ingo Molnar
  2007-07-30  5:26 ` [PATCH -rt 1/9] preempt rcu: check for underflow Paul E. McKenney
  2007-07-30  9:22 ` Ingo Molnar
  9 siblings, 1 reply; 26+ messages in thread
From: Daniel Walker @ 2007-07-30  2:45 UTC (permalink / raw)
  To: mingo; +Cc: linux-kernel, linux-rt-users

[-- Attachment #1: pickop-seqlocks.patch --]
[-- Type: text/plain, Size: 8562 bytes --]

Replace the old PICK_OP style macros with PICK_FUNCTION. The seqlock code
also contains some alien constructs of its own, which I replaced as well,
as can be seen from the line count below.

Signed-off-by: Daniel Walker <dwalker@mvista.com>

---
 include/linux/seqlock.h |  234 +++++++++++++++++++++++++++---------------------
 1 file changed, 134 insertions(+), 100 deletions(-)

Index: linux-2.6.22/include/linux/seqlock.h
===================================================================
--- linux-2.6.22.orig/include/linux/seqlock.h
+++ linux-2.6.22/include/linux/seqlock.h
@@ -90,6 +90,12 @@ static inline void __write_seqlock(seqlo
 	smp_wmb();
 }
 
+static __always_inline unsigned long __write_seqlock_irqsave(seqlock_t *sl)
+{
+	__write_seqlock(sl);
+	return 0;
+}
+
 static inline void __write_sequnlock(seqlock_t *sl)
 {
 	smp_wmb();
@@ -97,6 +103,8 @@ static inline void __write_sequnlock(seq
 	spin_unlock(&sl->lock);
 }
 
+#define __write_sequnlock_irqrestore(sl, flags)	__write_sequnlock(sl)
+
 static inline int __write_tryseqlock(seqlock_t *sl)
 {
 	int ret = spin_trylock(&sl->lock);
@@ -149,6 +157,28 @@ static __always_inline void __write_seql
 	smp_wmb();
 }
 
+static __always_inline unsigned long
+__write_seqlock_irqsave_raw(raw_seqlock_t *sl)
+{
+	unsigned long flags;
+
+	local_irq_save(flags);
+	__write_seqlock_raw(sl);
+	return flags;
+}
+
+static __always_inline void __write_seqlock_irq_raw(raw_seqlock_t *sl)
+{
+	local_irq_disable();
+	__write_seqlock_raw(sl);
+}
+
+static __always_inline void __write_seqlock_bh_raw(raw_seqlock_t *sl)
+{
+	local_bh_disable();
+	__write_seqlock_raw(sl);
+}
+
 static __always_inline void __write_sequnlock_raw(raw_seqlock_t *sl)
 {
 	smp_wmb();
@@ -156,6 +186,27 @@ static __always_inline void __write_sequ
 	spin_unlock(&sl->lock);
 }
 
+static __always_inline void
+__write_sequnlock_irqrestore_raw(raw_seqlock_t *sl, unsigned long flags)
+{
+	__write_sequnlock_raw(sl);
+	local_irq_restore(flags);
+	preempt_check_resched();
+}
+
+static __always_inline void __write_sequnlock_irq_raw(raw_seqlock_t *sl)
+{
+	__write_sequnlock_raw(sl);
+	local_irq_enable();
+	preempt_check_resched();
+}
+
+static __always_inline void __write_sequnlock_bh_raw(raw_seqlock_t *sl)
+{
+	__write_sequnlock_raw(sl);
+	local_bh_enable();
+}
+
 static __always_inline int __write_tryseqlock_raw(raw_seqlock_t *sl)
 {
 	int ret = spin_trylock(&sl->lock);
@@ -182,60 +233,92 @@ static __always_inline int __read_seqret
 
 extern int __bad_seqlock_type(void);
 
-#define PICK_SEQOP(op, lock)					\
+/*
+ * PICK_SEQ_OP() is a small redirector to allow less typing of the lock
+ * types raw_seqlock_t, seqlock_t, at the front of the PICK_FUNCTION
+ * macro.
+ */
+#define PICK_SEQ_OP(...) PICK_FUNCTION(raw_seqlock_t, seqlock_t, ##__VA_ARGS__)
+#define PICK_SEQ_OP_RET(...) \
+	PICK_FUNCTION_RET(raw_seqlock_t, seqlock_t, ##__VA_ARGS__)
+
+#define write_seqlock(sl) PICK_SEQ_OP(__write_seqlock_raw, __write_seqlock, sl)
+
+#define write_sequnlock(sl)	\
+	PICK_SEQ_OP(__write_sequnlock_raw, __write_sequnlock, sl)
+
+#define write_tryseqlock(sl)	\
+	PICK_SEQ_OP_RET(__write_tryseqlock_raw, __write_tryseqlock, sl)
+
+#define read_seqbegin(sl) 	\
+	PICK_SEQ_OP_RET(__read_seqbegin_raw, __read_seqbegin, sl)
+
+#define read_seqretry(sl, iv)	\
+	PICK_SEQ_OP_RET(__read_seqretry_raw, __read_seqretry, sl, iv)
+
+#define write_seqlock_irqsave(lock, flags)			\
 do {								\
-	if (TYPE_EQUAL((lock), raw_seqlock_t))			\
-		op##_raw((raw_seqlock_t *)(lock));		\
-	else if (TYPE_EQUAL((lock), seqlock_t))			\
-		op((seqlock_t *)(lock));			\
-	else __bad_seqlock_type();				\
+	flags = PICK_SEQ_OP_RET(__write_seqlock_irqsave_raw,	\
+		__write_seqlock_irqsave, lock);			\
 } while (0)
 
-#define PICK_SEQOP_RET(op, lock)				\
-({								\
-	unsigned long __ret;					\
-								\
-	if (TYPE_EQUAL((lock), raw_seqlock_t))			\
-		__ret = op##_raw((raw_seqlock_t *)(lock));	\
-	else if (TYPE_EQUAL((lock), seqlock_t))			\
-		__ret = op((seqlock_t *)(lock));		\
-	else __ret = __bad_seqlock_type();			\
-								\
-	__ret;							\
-})
-
-#define PICK_SEQOP_CONST_RET(op, lock)				\
-({								\
-	unsigned long __ret;					\
-								\
-	if (TYPE_EQUAL((lock), raw_seqlock_t))			\
-		__ret = op##_raw((const raw_seqlock_t *)(lock));\
-	else if (TYPE_EQUAL((lock), seqlock_t))			\
-		__ret = op((seqlock_t *)(lock));		\
-	else __ret = __bad_seqlock_type();			\
-								\
-	__ret;							\
-})
-
-#define PICK_SEQOP2_CONST_RET(op, lock, arg)				\
- ({									\
-	unsigned long __ret;						\
-									\
-	if (TYPE_EQUAL((lock), raw_seqlock_t))				\
-		__ret = op##_raw((const raw_seqlock_t *)(lock), (arg));	\
-	else if (TYPE_EQUAL((lock), seqlock_t))				\
-		__ret = op((seqlock_t *)(lock), (arg));			\
-	else __ret = __bad_seqlock_type();				\
-									\
-	__ret;								\
-})
-
-
-#define write_seqlock(sl)	PICK_SEQOP(__write_seqlock, sl)
-#define write_sequnlock(sl)	PICK_SEQOP(__write_sequnlock, sl)
-#define write_tryseqlock(sl)	PICK_SEQOP_RET(__write_tryseqlock, sl)
-#define read_seqbegin(sl)	PICK_SEQOP_CONST_RET(__read_seqbegin, sl)
-#define read_seqretry(sl, iv)	PICK_SEQOP2_CONST_RET(__read_seqretry, sl, iv)
+#define write_seqlock_irq(lock)	\
+	PICK_SEQ_OP(__write_seqlock_irq_raw, __write_seqlock, lock)
+
+#define write_seqlock_bh(lock)	\
+	PICK_SEQ_OP(__write_seqlock_bh_raw, __write_seqlock, lock)
+
+#define write_sequnlock_irqrestore(lock, flags)		\
+	PICK_SEQ_OP(__write_sequnlock_irqrestore_raw,	\
+		__write_sequnlock_irqrestore, lock, flags)
+
+#define write_sequnlock_bh(lock)	\
+	PICK_SEQ_OP(__write_sequnlock_bh_raw, __write_sequnlock, lock)
+
+#define write_sequnlock_irq(lock)	\
+	PICK_SEQ_OP(__write_sequnlock_irq_raw, __write_sequnlock, lock)
+
+static __always_inline
+unsigned long __read_seqbegin_irqsave_raw(raw_seqlock_t *sl)
+{
+	unsigned long flags;
+
+	local_irq_save(flags);
+	__read_seqbegin_raw(sl);
+	return flags;
+}
+
+static __always_inline unsigned long __read_seqbegin_irqsave(seqlock_t *sl)
+{
+	__read_seqbegin(sl);
+	return 0;
+}
+
+#define read_seqbegin_irqsave(lock, flags)			\
+do {								\
+	flags = PICK_SEQ_OP_RET(__read_seqbegin_irqsave_raw,	\
+		__read_seqbegin_irqsave, lock);			\
+} while (0)
+
+static __always_inline int
+__read_seqretry_irqrestore(seqlock_t *sl, unsigned iv, unsigned long flags)
+{
+	return __read_seqretry(sl, iv);
+}
+
+static __always_inline int
+__read_seqretry_irqrestore_raw(raw_seqlock_t *sl, unsigned iv,
+			       unsigned long flags)
+{
+	int ret = read_seqretry(sl, iv);
+	local_irq_restore(flags);
+	preempt_check_resched();
+	return ret;
+}
+
+#define read_seqretry_irqrestore(lock, iv, flags)			\
+	PICK_SEQ_OP_RET(__read_seqretry_irqrestore_raw, 		\
+		__read_seqretry_irqrestore, lock, iv, flags)
 
 /*
  * Version using sequence counter only.
@@ -286,53 +369,4 @@ static inline void write_seqcount_end(se
 	smp_wmb();
 	s->sequence++;
 }
-
-#define PICK_IRQOP(op, lock)					\
-do {								\
-	if (TYPE_EQUAL((lock), raw_seqlock_t))			\
-		op();						\
-	else if (TYPE_EQUAL((lock), seqlock_t))			\
-		{ /* nothing */ }				\
-	else __bad_seqlock_type();				\
-} while (0)
-
-#define PICK_IRQOP2(op, arg, lock)				\
-do {								\
-	if (TYPE_EQUAL((lock), raw_seqlock_t))			\
-		op(arg);					\
-	else if (TYPE_EQUAL(lock, seqlock_t))			\
-		{ /* nothing */ }				\
-	else __bad_seqlock_type();				\
-} while (0)
-
-
-
-/*
- * Possible sw/hw IRQ protected versions of the interfaces.
- */
-#define write_seqlock_irqsave(lock, flags)				\
-	do { PICK_IRQOP2(local_irq_save, flags, lock); write_seqlock(lock); } while (0)
-#define write_seqlock_irq(lock)						\
-	do { PICK_IRQOP(local_irq_disable, lock); write_seqlock(lock); } while (0)
-#define write_seqlock_bh(lock)						\
-        do { PICK_IRQOP(local_bh_disable, lock); write_seqlock(lock); } while (0)
-
-#define write_sequnlock_irqrestore(lock, flags)				\
-	do { write_sequnlock(lock); PICK_IRQOP2(local_irq_restore, flags, lock); preempt_check_resched(); } while(0)
-#define write_sequnlock_irq(lock)					\
-	do { write_sequnlock(lock); PICK_IRQOP(local_irq_enable, lock); preempt_check_resched(); } while(0)
-#define write_sequnlock_bh(lock)					\
-	do { write_sequnlock(lock); PICK_IRQOP(local_bh_enable, lock); } while(0)
-
-#define read_seqbegin_irqsave(lock, flags)				\
-	({ PICK_IRQOP2(local_irq_save, flags, lock); read_seqbegin(lock); })
-
-#define read_seqretry_irqrestore(lock, iv, flags)			\
-	({								\
-		int ret = read_seqretry(lock, iv);			\
-		PICK_IRQOP2(local_irq_restore, flags, lock);		\
-		preempt_check_resched(); 				\
-		ret;							\
-	})
-
 #endif /* __LINUX_SEQLOCK_H */

-- 


* Re: [PATCH -rt 6/9] spinlock/rt_lock random cleanups
  2007-07-30  2:45 ` [PATCH -rt 6/9] spinlock/rt_lock random cleanups Daniel Walker
@ 2007-07-30  4:58   ` Ankita Garg
  2007-07-30 15:48     ` Daniel Walker
  2007-07-30  9:31   ` Ingo Molnar
  1 sibling, 1 reply; 26+ messages in thread
From: Ankita Garg @ 2007-07-30  4:58 UTC (permalink / raw)
  To: Daniel Walker; +Cc: mingo, linux-kernel, linux-rt-users

On Sun, Jul 29, 2007 at 07:45:40PM -0700, Daniel Walker wrote:
> Signed-off-by: Daniel Walker <dwalker@mvista.com>
> 
> ---
>  include/linux/rt_lock.h  |    6 ++++--
>  include/linux/spinlock.h |    5 +++--
>  2 files changed, 7 insertions(+), 4 deletions(-)
> 
> Index: linux-2.6.22/include/linux/rt_lock.h
> ===================================================================
> --- linux-2.6.22.orig/include/linux/rt_lock.h
> +++ linux-2.6.22/include/linux/rt_lock.h
> @@ -128,12 +128,14 @@ struct semaphore name = \
>   */
>  #define DECLARE_MUTEX_LOCKED COMPAT_DECLARE_MUTEX_LOCKED
> 
> -extern void fastcall __sema_init(struct semaphore *sem, int val, char *name, char *file, int line);
> +extern void fastcall
> +__sema_init(struct semaphore *sem, int val, char *name, char *file, int line);
> 
>  #define rt_sema_init(sem, val) \
>  		__sema_init(sem, val, #sem, __FILE__, __LINE__)
> 
> -extern void fastcall __init_MUTEX(struct semaphore *sem, char *name, char *file, int line);
> +extern void fastcall
> +__init_MUTEX(struct semaphore *sem, char *name, char *file, int line);
>  #define rt_init_MUTEX(sem) \
>  		__init_MUTEX(sem, #sem, __FILE__, __LINE__)
> 
> Index: linux-2.6.22/include/linux/spinlock.h
> ===================================================================
> --- linux-2.6.22.orig/include/linux/spinlock.h
> +++ linux-2.6.22/include/linux/spinlock.h
> @@ -126,7 +126,7 @@ extern int __lockfunc generic__raw_read_
> 
>  #ifdef CONFIG_DEBUG_SPINLOCK
>   extern __lockfunc void _raw_spin_lock(raw_spinlock_t *lock);
> -#define _raw_spin_lock_flags(lock, flags) _raw_spin_lock(lock)
> +# define _raw_spin_lock_flags(lock, flags) _raw_spin_lock(lock)

Any reason behind including a space here?

>   extern __lockfunc int _raw_spin_trylock(raw_spinlock_t *lock);
>   extern __lockfunc void _raw_spin_unlock(raw_spinlock_t *lock);
>   extern __lockfunc void _raw_read_lock(raw_rwlock_t *lock);
> @@ -325,7 +325,8 @@ do {							\
> 
>  # define _read_trylock(rwl)	rt_read_trylock(rwl)
>  # define _write_trylock(rwl)	rt_write_trylock(rwl)
> -#define _write_trylock_irqsave(rwl, flags)  rt_write_trylock_irqsave(rwl, flags)
> +# define _write_trylock_irqsave(rwl, flags) \
> +	rt_write_trylock_irqsave(rwl, flags)
> 
>  # define _read_lock(rwl)	rt_read_lock(rwl)
>  # define _write_lock(rwl)	rt_write_lock(rwl)
> 

-- 
Regards,
Ankita Garg (ankita@in.ibm.com)
Linux Technology Center
IBM India Systems & Technology Labs, 
Bangalore, India   


* Re: [PATCH -rt 1/9] preempt rcu: check for underflow
  2007-07-30  2:45 [PATCH -rt 1/9] preempt rcu: check for underflow Daniel Walker
                   ` (7 preceding siblings ...)
  2007-07-30  2:45 ` [PATCH -rt 9/9] seqlocks: use PICK_FUNCTION Daniel Walker
@ 2007-07-30  5:26 ` Paul E. McKenney
  2007-07-30  9:22 ` Ingo Molnar
  9 siblings, 0 replies; 26+ messages in thread
From: Paul E. McKenney @ 2007-07-30  5:26 UTC (permalink / raw)
  To: Daniel Walker; +Cc: mingo, linux-kernel, linux-rt-users

On Sun, Jul 29, 2007 at 07:45:35PM -0700, Daniel Walker wrote:
> Simple WARN_ON to catch any underflow in rcu_read_lock_nesting.
> 
> Signed-off-by: Daniel Walker <dwalker@mvista.com>

Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

> ---
>  kernel/rcupreempt.c |    6 ++++++
>  1 file changed, 6 insertions(+)
> 
> Index: linux-2.6.22/kernel/rcupreempt.c
> ===================================================================
> --- linux-2.6.22.orig/kernel/rcupreempt.c
> +++ linux-2.6.22/kernel/rcupreempt.c
> @@ -157,6 +157,12 @@ void __rcu_read_unlock(void)
>  	}
> 
>  	local_irq_restore(oldirq);
> +
> +	/*
> +	 * If our rcu_read_lock_nesting went negative, likely
> +	 * something is wrong..
> +	 */
> +	WARN_ON(current->rcu_read_lock_nesting < 0);
>  }
> 
>  static void __rcu_advance_callbacks(void)
> 
> -- 
> -
> To unsubscribe from this list: send the line "unsubscribe linux-rt-users" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: [PATCH -rt 1/9] preempt rcu: check for underflow
  2007-07-30  2:45 [PATCH -rt 1/9] preempt rcu: check for underflow Daniel Walker
                   ` (8 preceding siblings ...)
  2007-07-30  5:26 ` [PATCH -rt 1/9] preempt rcu: check for underflow Paul E. McKenney
@ 2007-07-30  9:22 ` Ingo Molnar
  2007-07-30 15:48   ` Daniel Walker
  9 siblings, 1 reply; 26+ messages in thread
From: Ingo Molnar @ 2007-07-30  9:22 UTC (permalink / raw)
  To: Daniel Walker; +Cc: linux-kernel, linux-rt-users


* Daniel Walker <dwalker@mvista.com> wrote:

> +	/*
> +	 * If our rcu_read_lock_nesting went negative, likely
> +	 * something is wrong..
> +	 */
> +	WARN_ON(current->rcu_read_lock_nesting < 0);

have you actually caught any rcu locking problem this way? Double 
unlocks should be caught by lockdep already, at a higher level.

in any case i've added a slightly different form of this change to the 
-rt queue that will also check for counter overflows. But i'm not sure 
we want to litter the code with trivial checks like this, so i'm keeping 
it separate and if it does not trigger anything real i'll remove it.

	Ingo


* Re: [PATCH -rt 2/9] Dont allow non-threaded softirqs and threaded hardirqs
  2007-07-30  2:45 ` [PATCH -rt 2/9] Dont allow non-threaded softirqs and threaded hardirqs Daniel Walker
@ 2007-07-30  9:23   ` Ingo Molnar
  2007-07-30 11:28     ` Steven Rostedt
  0 siblings, 1 reply; 26+ messages in thread
From: Ingo Molnar @ 2007-07-30  9:23 UTC (permalink / raw)
  To: Daniel Walker; +Cc: linux-kernel, linux-rt-users, Steven Rostedt


* Daniel Walker <dwalker@mvista.com> wrote:

> From: Steven Rostedt <rostedt@goodmis.org>
> 
> I think this was sent before, and it did cause problems before. Would 
> there be *any* reason to have non-threaded softirqs but threaded 
> hardirqs. I can see lots of issues with that.

please elaborate in precise terms: what issues can you see?

	Ingo


* Re: [PATCH -rt 3/9] Fix jiffies wrap issue in update_times
  2007-07-30  2:45 ` [PATCH -rt 3/9] Fix jiffies wrap issue in update_times Daniel Walker
@ 2007-07-30  9:25   ` Ingo Molnar
  0 siblings, 0 replies; 26+ messages in thread
From: Ingo Molnar @ 2007-07-30  9:25 UTC (permalink / raw)
  To: Daniel Walker; +Cc: linux-kernel, linux-rt-users


* Daniel Walker <dwalker@mvista.com> wrote:

> In prior -rt versions the last_tick value was called wall_jiffies and 
> was initialized in this same way as below. If this value isn't 
> initialized the calc_load function gets skewed for several minutes 
> right after boot up. Skewed meaning always zero.

thanks, applied.

	Ingo


* Re: [PATCH -rt 4/9] ifdef raise_softirq_irqoff wakeup
  2007-07-30  2:45 ` [PATCH -rt 4/9] ifdef raise_softirq_irqoff wakeup Daniel Walker
@ 2007-07-30  9:27   ` Ingo Molnar
  2007-07-30 15:48     ` Daniel Walker
  0 siblings, 1 reply; 26+ messages in thread
From: Ingo Molnar @ 2007-07-30  9:27 UTC (permalink / raw)
  To: Daniel Walker; +Cc: linux-kernel, linux-rt-users


* Daniel Walker <dwalker@mvista.com> wrote:

> @@ -508,7 +508,9 @@ inline fastcall void raise_softirq_irqof
>  {
>  	__do_raise_softirq_irqoff(nr);
>  
> +#ifdef CONFIG_PREEMPT_SOFTIRQS
>  	wakeup_softirqd(nr);
> +#endif

thanks, applied. People rarely run the -rt kernel just to turn off 
PREEMPT_RT, that's why this bug was there ;-)

	Ingo


* Re: [PATCH -rt 5/9] net: fix mis-merge in qdisc_restart
  2007-07-30  2:45 ` [PATCH -rt 5/9] net: fix mis-merge in qdisc_restart Daniel Walker
@ 2007-07-30  9:30   ` Ingo Molnar
  0 siblings, 0 replies; 26+ messages in thread
From: Ingo Molnar @ 2007-07-30  9:30 UTC (permalink / raw)
  To: Daniel Walker; +Cc: linux-kernel, linux-rt-users


* Daniel Walker <dwalker@mvista.com> wrote:

> This mismerge caused my networking to malfunction. The interface would 
> come up, but no traffic would make it in/out ..

your patch only affects the !CONFIG_PREEMPT_RT case, so with that 
qualification indeed that happened, and i've applied your fix.

	Ingo


* Re: [PATCH -rt 6/9] spinlock/rt_lock random cleanups
  2007-07-30  2:45 ` [PATCH -rt 6/9] spinlock/rt_lock random cleanups Daniel Walker
  2007-07-30  4:58   ` Ankita Garg
@ 2007-07-30  9:31   ` Ingo Molnar
  1 sibling, 0 replies; 26+ messages in thread
From: Ingo Molnar @ 2007-07-30  9:31 UTC (permalink / raw)
  To: Daniel Walker; +Cc: linux-kernel, linux-rt-users


* Daniel Walker <dwalker@mvista.com> wrote:

> spinlock/rt_lock random cleanups

thanks, applied.

	Ingo


* Re: [PATCH -rt 7/9] introduce PICK_FUNCTION
  2007-07-30  2:45 ` [PATCH -rt 7/9] introduce PICK_FUNCTION Daniel Walker
@ 2007-07-30  9:39   ` Peter Zijlstra
  2007-07-30 16:16     ` Daniel Walker
  0 siblings, 1 reply; 26+ messages in thread
From: Peter Zijlstra @ 2007-07-30  9:39 UTC (permalink / raw)
  To: Daniel Walker; +Cc: mingo, linux-kernel, linux-rt-users

On Sun, 2007-07-29 at 19:45 -0700, Daniel Walker wrote:

> +#undef TYPE_EQUAL
> +#define TYPE_EQUAL(var, type) \
> +		__builtin_types_compatible_p(typeof(var), type *)
> +

If you're going to touch this code, could you perhaps change TYPE_EQUAL
to do as it says and read like so:

	__builtin_types_compatible_p(typeof(var), type)





* Re: [PATCH -rt 2/9] Dont allow non-threaded softirqs and threaded hardirqs
  2007-07-30  9:23   ` Ingo Molnar
@ 2007-07-30 11:28     ` Steven Rostedt
  0 siblings, 0 replies; 26+ messages in thread
From: Steven Rostedt @ 2007-07-30 11:28 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Daniel Walker, linux-kernel, linux-rt-users


--
On Mon, 30 Jul 2007, Ingo Molnar wrote:
> > From: Steven Rostedt <rostedt@goodmis.org>
> >
> > I think this was sent before, and it did cause problems before. Would
> > there be *any* reason to have non-threaded softirqs but threaded
> > hardirqs. I can see lots of issues with that.
>
> please elaborate in precise terms: what issues can you see?
>

Hi Ingo,

I don't remember the exact details, but I can try to find the thread. I
remember someone was having their system lock up strangely. We later found
that they had softirqs as normal softirqs and interrupts as threads.  I
think there was some driver that didn't expect the softirq to preempt the
irq handler.  Perhaps the softirq was using spin_lock_irq while the irq
thread was just using spin_lock, which I can see as being something
normal.

Standard Linux does not expect an interrupt to be preempted by a
softirq, and with interrupts as threads but not softirqs, I can see that
happening a lot.
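
[Editorial note: the hang described above can be sketched in kernel-style
pseudocode — a hypothetical driver, for illustration only:]

```c
/* hardirq handler, running as a preemptible irq thread: */
static void irq_handler(struct toy_dev *dev)
{
	spin_lock(&dev->lock);	/* no irq masking: we are already a thread */
	/* ... */
	/* <-- preempted here by a non-threaded softirq on the same CPU */
	spin_unlock(&dev->lock);
}

/* softirq, running in traditional non-threaded softirq context: */
static void dev_softirq(struct toy_dev *dev)
{
	spin_lock_irq(&dev->lock);	/* spins forever: the holder is a
					 * preempted thread that cannot run
					 * again, since we never yield */
	/* ... */
	spin_unlock_irq(&dev->lock);
}
```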

-- Steve



* Re: [PATCH -rt 6/9] spinlock/rt_lock random cleanups
  2007-07-30  4:58   ` Ankita Garg
@ 2007-07-30 15:48     ` Daniel Walker
  0 siblings, 0 replies; 26+ messages in thread
From: Daniel Walker @ 2007-07-30 15:48 UTC (permalink / raw)
  To: Ankita Garg; +Cc: mingo, linux-kernel, linux-rt-users

On Mon, 2007-07-30 at 10:28 +0530, Ankita Garg wrote:
> On Sun, Jul 29, 2007 at 07:45:40PM -0700, Daniel Walker wrote:
> > Signed-off-by: Daniel Walker <dwalker@mvista.com>
> > 
> > ---
> >  include/linux/rt_lock.h  |    6 ++++--
> >  include/linux/spinlock.h |    5 +++--
> >  2 files changed, 7 insertions(+), 4 deletions(-)
> > 
> > Index: linux-2.6.22/include/linux/rt_lock.h
> > ===================================================================
> > --- linux-2.6.22.orig/include/linux/rt_lock.h
> > +++ linux-2.6.22/include/linux/rt_lock.h
> > @@ -128,12 +128,14 @@ struct semaphore name = \
> >   */
> >  #define DECLARE_MUTEX_LOCKED COMPAT_DECLARE_MUTEX_LOCKED
> > 
> > -extern void fastcall __sema_init(struct semaphore *sem, int val, char *name, char *file, int line);
> > +extern void fastcall
> > +__sema_init(struct semaphore *sem, int val, char *name, char *file, int line);
> > 
> >  #define rt_sema_init(sem, val) \
> >  		__sema_init(sem, val, #sem, __FILE__, __LINE__)
> > 
> > -extern void fastcall __init_MUTEX(struct semaphore *sem, char *name, char *file, int line);
> > +extern void fastcall
> > +__init_MUTEX(struct semaphore *sem, char *name, char *file, int line);
> >  #define rt_init_MUTEX(sem) \
> >  		__init_MUTEX(sem, #sem, __FILE__, __LINE__)
> > 
> > Index: linux-2.6.22/include/linux/spinlock.h
> > ===================================================================
> > --- linux-2.6.22.orig/include/linux/spinlock.h
> > +++ linux-2.6.22/include/linux/spinlock.h
> > @@ -126,7 +126,7 @@ extern int __lockfunc generic__raw_read_
> > 
> >  #ifdef CONFIG_DEBUG_SPINLOCK
> >   extern __lockfunc void _raw_spin_lock(raw_spinlock_t *lock);
> > -#define _raw_spin_lock_flags(lock, flags) _raw_spin_lock(lock)
> > +# define _raw_spin_lock_flags(lock, flags) _raw_spin_lock(lock)
> 
> Any reason behind including a space here?

Yes. A space is sometimes added when a define is embedded inside
#ifdefs, for example:

#ifdef CONFIG_DEBUG_SPINLOCK
# define DEBUG_MACRO do_somedebug_here()
#endif

That's usually the method Ingo uses, and it matches the code surrounding
it in this particular patch.

Daniel


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH -rt 1/9] preempt rcu: check for underflow
  2007-07-30  9:22 ` Ingo Molnar
@ 2007-07-30 15:48   ` Daniel Walker
  0 siblings, 0 replies; 26+ messages in thread
From: Daniel Walker @ 2007-07-30 15:48 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: linux-kernel, linux-rt-users

On Mon, 2007-07-30 at 11:22 +0200, Ingo Molnar wrote:
> * Daniel Walker <dwalker@mvista.com> wrote:
> 
> > +	/*
> > +	 * If our rcu_read_lock_nesting went negative, likely
> > +	 * something is wrong..
> > +	 */
> > +	WARN_ON(current->rcu_read_lock_nesting < 0);
> 
> have you actually caught any rcu locking problem this way? Double 
> unlocks should be caught by lockdep already, at a higher level.
> 
> in any case i've added a slightly different form of this change to the 
> -rt queue that will also check for counter overflows. But i'm not sure 
> we want to litter the code with trivial checks like this, so i'm keeping 
> it separate and if it does not trigger anything real i'll remove it.

I haven't caught anything with it, but this code would have made it much
easier to catch the single rcu unlock in sys_sched_yield(), which was
silent in PREEMPT_RT and hung !PREEMPT_RT.

It's fine with me if you have another method.

Daniel


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH -rt 4/9] ifdef raise_softirq_irqoff wakeup
  2007-07-30  9:27   ` Ingo Molnar
@ 2007-07-30 15:48     ` Daniel Walker
  2007-08-06  7:20       ` Ingo Molnar
  0 siblings, 1 reply; 26+ messages in thread
From: Daniel Walker @ 2007-07-30 15:48 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: linux-kernel, linux-rt-users

On Mon, 2007-07-30 at 11:27 +0200, Ingo Molnar wrote:
> * Daniel Walker <dwalker@mvista.com> wrote:
> 
> > @@ -508,7 +508,9 @@ inline fastcall void raise_softirq_irqof
> >  {
> >  	__do_raise_softirq_irqoff(nr);
> >  
> > +#ifdef CONFIG_PREEMPT_SOFTIRQS
> >  	wakeup_softirqd(nr);
> > +#endif
> 
> thanks, applied. People rarely run the -rt kernel just to turn off 
> PREEMPT_RT, that's why this bug was there ;-)

Ultimately, all modes should function correctly, right?

Daniel


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH -rt 7/9] introduce PICK_FUNCTION
  2007-07-30  9:39   ` Peter Zijlstra
@ 2007-07-30 16:16     ` Daniel Walker
  0 siblings, 0 replies; 26+ messages in thread
From: Daniel Walker @ 2007-07-30 16:16 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: mingo, linux-kernel, linux-rt-users

On Mon, 2007-07-30 at 11:39 +0200, Peter Zijlstra wrote:
> On Sun, 2007-07-29 at 19:45 -0700, Daniel Walker wrote:
> 
> > +#undef TYPE_EQUAL
> > +#define TYPE_EQUAL(var, type) \
> > +		__builtin_types_compatible_p(typeof(var), type *)
> > +
> 
> If you're going to touch this code, could you perhaps change TYPE_EQUAL
> to do as it says and read like so:
> 
> 	__builtin_types_compatible_p(typeof(var), type)
> 

Yeah, that makes sense. I'll add it to my tree.

Daniel


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH -rt 4/9] ifdef raise_softirq_irqoff wakeup
  2007-07-30 15:48     ` Daniel Walker
@ 2007-08-06  7:20       ` Ingo Molnar
  0 siblings, 0 replies; 26+ messages in thread
From: Ingo Molnar @ 2007-08-06  7:20 UTC (permalink / raw)
  To: Daniel Walker; +Cc: linux-kernel, linux-rt-users


* Daniel Walker <dwalker@mvista.com> wrote:

> On Mon, 2007-07-30 at 11:27 +0200, Ingo Molnar wrote:
> > * Daniel Walker <dwalker@mvista.com> wrote:
> > 
> > > @@ -508,7 +508,9 @@ inline fastcall void raise_softirq_irqof
> > >  {
> > >  	__do_raise_softirq_irqoff(nr);
> > >  
> > > +#ifdef CONFIG_PREEMPT_SOFTIRQS
> > >  	wakeup_softirqd(nr);
> > > +#endif
> > 
> > thanks, applied. People rarely run the -rt kernel just to turn off 
> > PREEMPT_RT, that's why this bug was there ;-)
> 
> Ultimately, all modes should function correctly right?

yes, of course - especially once any of the components nears upstream 
integration ;-)

	Ingo

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH -rt 9/9] seqlocks: use PICK_FUNCTION
  2007-07-30  2:45 ` [PATCH -rt 9/9] seqlocks: use PICK_FUNCTION Daniel Walker
@ 2007-08-06  7:21   ` Ingo Molnar
  2007-08-08 19:40     ` Daniel Walker
  0 siblings, 1 reply; 26+ messages in thread
From: Ingo Molnar @ 2007-08-06  7:21 UTC (permalink / raw)
  To: Daniel Walker; +Cc: linux-kernel, linux-rt-users


* Daniel Walker <dwalker@mvista.com> wrote:

> Replace the old PICK_OP style macros with PICK_FUNCTION. Although, 
> seqlocks has some alien code, which I also replaced as can be seen 
> from the line count below.

ok, i very much like the cleanup effects here, could you resend your 
latest version of this (with Peter's suggested cleanup) against -rc2-rt2 
so that i can apply it?

	Ingo

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH -rt 9/9] seqlocks: use PICK_FUNCTION
  2007-08-06  7:21   ` Ingo Molnar
@ 2007-08-08 19:40     ` Daniel Walker
  0 siblings, 0 replies; 26+ messages in thread
From: Daniel Walker @ 2007-08-08 19:40 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: linux-kernel, linux-rt-users

On Mon, 2007-08-06 at 09:21 +0200, Ingo Molnar wrote:
> * Daniel Walker <dwalker@mvista.com> wrote:
> 
> > Replace the old PICK_OP style macros with PICK_FUNCTION. Although, 
> > seqlocks has some alien code, which I also replaced as can be seen 
> > from the line count below.
> 
> ok, i very much like the cleanup effects here, could you resend your 
> latest version of this (with Peter's suggested cleanup) against -rc2-rt2 
> so that i can apply it?

Ok, sent them privately. Updated to 2.6.23-rc2-rt2 with Peter's
suggestion. You'll get two sets; the first had some unrefreshed hunks in
the 3/3 patch.

There is one thing I was wondering about in seqlock.h. There was a
class of macros like the one below,

#define PICK_SEQOP_CONST_RET(op, lock)                          \
({                                                              \
        unsigned long __ret;                                    \
                                                                \
        if (TYPE_EQUAL((lock), raw_seqlock_t))                  \
                __ret = op##_raw((const raw_seqlock_t *)(lock));\
        else if (TYPE_EQUAL((lock), seqlock_t))                 \
                __ret = op((seqlock_t *)(lock));                \
        else __ret = __bad_seqlock_type();                      \
                                                                \
        __ret;                                                  \
})

where the variable is specifically cast to "const raw_seqlock_t *". I
ended up dropping these altogether, and I'm wondering what the adverse
effects of that are. The casting seems superfluous to me, since you
already know the types are compatible at that point. Any thoughts?

Daniel


^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2007-08-08 19:46 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2007-07-30  2:45 [PATCH -rt 1/9] preempt rcu: check for underflow Daniel Walker
2007-07-30  2:45 ` [PATCH -rt 2/9] Dont allow non-threaded softirqs and threaded hardirqs Daniel Walker
2007-07-30  9:23   ` Ingo Molnar
2007-07-30 11:28     ` Steven Rostedt
2007-07-30  2:45 ` [PATCH -rt 3/9] Fix jiffies wrap issue in update_times Daniel Walker
2007-07-30  9:25   ` Ingo Molnar
2007-07-30  2:45 ` [PATCH -rt 4/9] ifdef raise_softirq_irqoff wakeup Daniel Walker
2007-07-30  9:27   ` Ingo Molnar
2007-07-30 15:48     ` Daniel Walker
2007-08-06  7:20       ` Ingo Molnar
2007-07-30  2:45 ` [PATCH -rt 5/9] net: fix mis-merge in qdisc_restart Daniel Walker
2007-07-30  9:30   ` Ingo Molnar
2007-07-30  2:45 ` [PATCH -rt 6/9] spinlock/rt_lock random cleanups Daniel Walker
2007-07-30  4:58   ` Ankita Garg
2007-07-30 15:48     ` Daniel Walker
2007-07-30  9:31   ` Ingo Molnar
2007-07-30  2:45 ` [PATCH -rt 7/9] introduce PICK_FUNCTION Daniel Walker
2007-07-30  9:39   ` Peter Zijlstra
2007-07-30 16:16     ` Daniel Walker
2007-07-30  2:45 ` [PATCH -rt 8/9] spinlocks/rwlocks: use PICK_FUNCTION() Daniel Walker
2007-07-30  2:45 ` [PATCH -rt 9/9] seqlocks: use PICK_FUNCTION Daniel Walker
2007-08-06  7:21   ` Ingo Molnar
2007-08-08 19:40     ` Daniel Walker
2007-07-30  5:26 ` [PATCH -rt 1/9] preempt rcu: check for underflow Paul E. McKenney
2007-07-30  9:22 ` Ingo Molnar
2007-07-30 15:48   ` Daniel Walker
