* [patch 1/3] spinlock fix #1
From: Ingo Molnar @ 2005-01-20 11:43 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux-kernel

i've split up spinlocking-fixes.patch into 3 parts and reworked them.
This is the first one, against BK-curr: it fixes the BUILD_LOCK_OPS()
bug by introducing the following 3 new locking primitives:

	spin_trylock_test(lock)
	read_trylock_test(lock)
	write_trylock_test(lock)

this is what is needed by BUILD_LOCK_OPS(): a nonintrusive test to
check whether the real (intrusive) trylock op would succeed or not.
Semantics and naming are completely symmetric to the trylock
counterpart. No changes to exit.c. build/boot-tested on x86.
Architectures that want to support PREEMPT need to add these
definitions.

	Ingo

Signed-off-by: Ingo Molnar <mingo@elte.hu>

--- linux/kernel/spinlock.c.orig
+++ linux/kernel/spinlock.c
@@ -173,8 +173,8 @@ EXPORT_SYMBOL(_write_lock);
  * (We do this in a function because inlining it would be excessive.)
*/ -#define BUILD_LOCK_OPS(op, locktype, is_locked_fn) \ -void __lockfunc _##op##_lock(locktype *lock) \ +#define BUILD_LOCK_OPS(op, locktype) \ +void __lockfunc _##op##_lock(locktype##_t *lock) \ { \ preempt_disable(); \ for (;;) { \ @@ -183,7 +183,7 @@ void __lockfunc _##op##_lock(locktype *l preempt_enable(); \ if (!(lock)->break_lock) \ (lock)->break_lock = 1; \ - while (is_locked_fn(lock) && (lock)->break_lock) \ + while (!op##_trylock_test(lock) && (lock)->break_lock) \ cpu_relax(); \ preempt_disable(); \ } \ @@ -191,7 +191,7 @@ void __lockfunc _##op##_lock(locktype *l \ EXPORT_SYMBOL(_##op##_lock); \ \ -unsigned long __lockfunc _##op##_lock_irqsave(locktype *lock) \ +unsigned long __lockfunc _##op##_lock_irqsave(locktype##_t *lock) \ { \ unsigned long flags; \ \ @@ -205,7 +205,7 @@ unsigned long __lockfunc _##op##_lock_ir preempt_enable(); \ if (!(lock)->break_lock) \ (lock)->break_lock = 1; \ - while (is_locked_fn(lock) && (lock)->break_lock) \ + while (!op##_trylock_test(lock) && (lock)->break_lock) \ cpu_relax(); \ preempt_disable(); \ } \ @@ -214,14 +214,14 @@ unsigned long __lockfunc _##op##_lock_ir \ EXPORT_SYMBOL(_##op##_lock_irqsave); \ \ -void __lockfunc _##op##_lock_irq(locktype *lock) \ +void __lockfunc _##op##_lock_irq(locktype##_t *lock) \ { \ _##op##_lock_irqsave(lock); \ } \ \ EXPORT_SYMBOL(_##op##_lock_irq); \ \ -void __lockfunc _##op##_lock_bh(locktype *lock) \ +void __lockfunc _##op##_lock_bh(locktype##_t *lock) \ { \ unsigned long flags; \ \ @@ -246,9 +246,9 @@ EXPORT_SYMBOL(_##op##_lock_bh) * _[spin|read|write]_lock_irqsave() * _[spin|read|write]_lock_bh() */ -BUILD_LOCK_OPS(spin, spinlock_t, spin_is_locked); -BUILD_LOCK_OPS(read, rwlock_t, rwlock_is_locked); -BUILD_LOCK_OPS(write, rwlock_t, spin_is_locked); +BUILD_LOCK_OPS(spin, spinlock); +BUILD_LOCK_OPS(read, rwlock); +BUILD_LOCK_OPS(write, rwlock); #endif /* CONFIG_PREEMPT */ --- linux/include/linux/spinlock.h.orig +++ linux/include/linux/spinlock.h @@ -584,4 +584,10 @@ static inline 
int bit_spin_is_locked(int #define DEFINE_SPINLOCK(x) spinlock_t x = SPIN_LOCK_UNLOCKED #define DEFINE_RWLOCK(x) rwlock_t x = RW_LOCK_UNLOCKED +/** + * spin_trylock_test - would spin_trylock() succeed? + * @lock: the spinlock in question. + */ +#define spin_trylock_test(lock) (!spin_is_locked(lock)) + #endif /* __LINUX_SPINLOCK_H */ --- linux/include/asm-i386/spinlock.h.orig +++ linux/include/asm-i386/spinlock.h @@ -188,6 +188,18 @@ typedef struct { #define rwlock_is_locked(x) ((x)->lock != RW_LOCK_BIAS) +/** + * read_trylock_test - would read_trylock() succeed? + * @lock: the rwlock in question. + */ +#define read_trylock_test(x) (atomic_read((atomic_t *)&(x)->lock) > 0) + +/** + * write_trylock_test - would write_trylock() succeed? + * @lock: the rwlock in question. + */ +#define write_trylock_test(x) ((x)->lock == RW_LOCK_BIAS) + /* * On x86, we implement read-write locks as a 32-bit counter * with the high bit (sign) being the "contended" bit. ^ permalink raw reply [flat|nested] 7+ messages in thread
* [patch 2/3] spinlock fix #2: generalize [spin|rw]lock yielding
From: Ingo Molnar @ 2005-01-20 11:59 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux-kernel, Paul Mackerras

this patch generalizes the facility of targeted lock-yielding
originally implemented for ppc64. This facility enables a virtual CPU
to indicate to the hypervisor which other virtual CPU this CPU is
'blocked on', and hence which CPU the hypervisor should yield the
current timeslice to, in order to make progress on releasing the lock.

On physical platforms these functions default to cpu_relax(). Since
physical platforms are in the overwhelming majority, i've added the two
new functions to the new asm-generic/spinlock.h include file - into
which i hope we can later move more generic spinlock bits as well.

this patch solves the ppc64/PREEMPT functionality regression reported
by Paul Mackerras. I only tested it on x86 - Paul, would you mind
testing it on ppc64?

	Ingo

Signed-off-by: Ingo Molnar <mingo@elte.hu>

--- linux/kernel/spinlock.c.orig
+++ linux/kernel/spinlock.c
@@ -184,7 +184,7 @@ void __lockfunc _##op##_lock(locktype##_
 		if (!(lock)->break_lock)			\
 			(lock)->break_lock = 1;			\
 		while (!op##_trylock_test(lock) && (lock)->break_lock)\
-			cpu_relax();				\
+			locktype##_yield(lock);			\
 		preempt_disable();				\
 	}							\
 }								\
@@ -206,7 +206,7 @@ unsigned long __lockfunc _##op##_lock_ir
 		if (!(lock)->break_lock)			\
 			(lock)->break_lock = 1;			\
 		while (!op##_trylock_test(lock) && (lock)->break_lock)\
-			cpu_relax();				\
+			locktype##_yield(lock);			\
 		preempt_disable();				\
 	}							\
 	return flags;						\
--- linux/arch/ppc64/lib/locks.c.orig
+++ linux/arch/ppc64/lib/locks.c
@@ -23,7 +23,7 @@
 /* waiting for a spinlock...
*/ #if defined(CONFIG_PPC_SPLPAR) || defined(CONFIG_PPC_ISERIES) -void __spin_yield(spinlock_t *lock) +void spinlock_yield(spinlock_t *lock) { unsigned int lock_value, holder_cpu, yield_count; struct paca_struct *holder_paca; @@ -54,7 +54,7 @@ void __spin_yield(spinlock_t *lock) * This turns out to be the same for read and write locks, since * we only know the holder if it is write-locked. */ -void __rw_yield(rwlock_t *rw) +void rwlock_yield(rwlock_t *rw) { int lock_value; unsigned int holder_cpu, yield_count; @@ -87,7 +87,7 @@ void spin_unlock_wait(spinlock_t *lock) while (lock->lock) { HMT_low(); if (SHARED_PROCESSOR) - __spin_yield(lock); + spinlock_yield(lock); } HMT_medium(); } --- linux/include/asm-ia64/spinlock.h.orig +++ linux/include/asm-ia64/spinlock.h @@ -17,6 +17,8 @@ #include <asm/intrinsics.h> #include <asm/system.h> +#include <asm-generic/spinlock.h> + typedef struct { volatile unsigned int lock; #ifdef CONFIG_PREEMPT --- linux/include/asm-generic/spinlock.h.orig +++ linux/include/asm-generic/spinlock.h @@ -0,0 +1,11 @@ +#ifndef _ASM_GENERIC_SPINLOCK_H +#define _ASM_GENERIC_SPINLOCK_H + +/* + * Virtual platforms might use these to + * yield to specific virtual CPUs: + */ +#define spinlock_yield(lock) cpu_relax() +#define rwlock_yield(lock) cpu_relax() + +#endif /* _ASM_GENERIC_SPINLOCK_H */ --- linux/include/linux/spinlock.h.orig +++ linux/include/linux/spinlock.h @@ -206,6 +206,8 @@ typedef struct { #define _raw_spin_unlock(lock) do { (void)(lock); } while(0) #endif /* CONFIG_DEBUG_SPINLOCK */ +#define spinlock_yield(lock) (void)(lock) + /* RW spinlocks: No debug version */ #if (__GNUC__ > 2) @@ -224,6 +226,8 @@ typedef struct { #define _raw_read_trylock(lock) ({ (void)(lock); (1); }) #define _raw_write_trylock(lock) ({ (void)(lock); (1); }) +#define rwlock_yield(lock) (void)(lock) + #define _spin_trylock(lock) ({preempt_disable(); _raw_spin_trylock(lock) ? 
\ 1 : ({preempt_enable(); 0;});}) --- linux/include/asm-i386/spinlock.h.orig +++ linux/include/asm-i386/spinlock.h @@ -7,6 +7,8 @@ #include <linux/config.h> #include <linux/compiler.h> +#include <asm-generic/spinlock.h> + asmlinkage int printk(const char * fmt, ...) __attribute__ ((format (printf, 1, 2))); --- linux/include/asm-ppc64/spinlock.h.orig +++ linux/include/asm-ppc64/spinlock.h @@ -64,11 +64,11 @@ static __inline__ void _raw_spin_unlock( #if defined(CONFIG_PPC_SPLPAR) || defined(CONFIG_PPC_ISERIES) /* We only yield to the hypervisor if we are in shared processor mode */ #define SHARED_PROCESSOR (get_paca()->lppaca.shared_proc) -extern void __spin_yield(spinlock_t *lock); -extern void __rw_yield(rwlock_t *lock); +extern void spinlock_yield(spinlock_t *lock); +extern void rwlock_yield(rwlock_t *lock); #else /* SPLPAR || ISERIES */ -#define __spin_yield(x) barrier() -#define __rw_yield(x) barrier() +#define spinlock_yield(x) barrier() +#define rwlock_yield(x) barrier() #define SHARED_PROCESSOR 0 #endif extern void spin_unlock_wait(spinlock_t *lock); @@ -109,7 +109,7 @@ static void __inline__ _raw_spin_lock(sp do { HMT_low(); if (SHARED_PROCESSOR) - __spin_yield(lock); + spinlock_yield(lock); } while (likely(lock->lock != 0)); HMT_medium(); } @@ -127,7 +127,7 @@ static void __inline__ _raw_spin_lock_fl do { HMT_low(); if (SHARED_PROCESSOR) - __spin_yield(lock); + spinlock_yield(lock); } while (likely(lock->lock != 0)); HMT_medium(); local_irq_restore(flags_dis); @@ -201,7 +201,7 @@ static void __inline__ _raw_read_lock(rw do { HMT_low(); if (SHARED_PROCESSOR) - __rw_yield(rw); + rwlock_yield(rw); } while (likely(rw->lock < 0)); HMT_medium(); } @@ -258,7 +258,7 @@ static void __inline__ _raw_write_lock(r do { HMT_low(); if (SHARED_PROCESSOR) - __rw_yield(rw); + rwlock_yield(rw); } while (likely(rw->lock != 0)); HMT_medium(); } --- linux/include/asm-x86_64/spinlock.h.orig +++ linux/include/asm-x86_64/spinlock.h @@ -6,6 +6,8 @@ #include <asm/page.h> #include 
<linux/config.h> +#include <asm-generic/spinlock.h> + extern int printk(const char * fmt, ...) __attribute__ ((format (printf, 1, 2))); ^ permalink raw reply [flat|nested] 7+ messages in thread
* [patch 3/3] spinlock fix #3: type-checking spinlock primitives, x86
From: Ingo Molnar @ 2005-01-20 12:09 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux-kernel

this patch would have caught the bug in -BK-curr (that patch #1 fixes),
via a compiler warning. Test-built/booted on x86.

	Ingo

Signed-off-by: Ingo Molnar <mingo@elte.hu>

--- linux/include/asm-i386/spinlock.h.orig
+++ linux/include/asm-i386/spinlock.h
@@ -36,7 +36,10 @@ typedef struct {

 #define SPIN_LOCK_UNLOCKED (spinlock_t) { 1 SPINLOCK_MAGIC_INIT }

-#define spin_lock_init(x)	do { *(x) = SPIN_LOCK_UNLOCKED; } while(0)
+static inline void spin_lock_init(spinlock_t *lock)
+{
+	*lock = SPIN_LOCK_UNLOCKED;
+}

 /*
  * Simple spin lock operations.  There are two variants, one clears IRQ's
@@ -45,8 +48,17 @@ typedef struct {
  * We make no fairness assumptions. They have a cost.
  */

-#define spin_is_locked(x)	(*(volatile signed char *)(&(x)->lock) <= 0)
-#define spin_unlock_wait(x)	do { barrier(); } while(spin_is_locked(x))
+static inline int spin_is_locked(spinlock_t *lock)
+{
+	return *(volatile signed char *)(&lock->lock) <= 0;
+}
+
+static inline void spin_unlock_wait(spinlock_t *lock)
+{
+	do {
+		barrier();
+	} while (spin_is_locked(lock));
+}

 #define spin_lock_string \
 	"\n1:\t" \
* [patch] stricter type-checking rwlock primitives, x86 2005-01-20 12:09 ` [patch 3/3] spinlock fix #3: type-checking spinlock primitives, x86 Ingo Molnar @ 2005-01-20 12:18 ` Ingo Molnar 2005-01-20 12:22 ` [patch] minor spinlock cleanups Ingo Molnar 2005-01-20 22:51 ` [patch 3/3] spinlock fix #3: type-checking spinlock primitives, x86 J.A. Magallon 1 sibling, 1 reply; 7+ messages in thread From: Ingo Molnar @ 2005-01-20 12:18 UTC (permalink / raw) To: Andrew Morton; +Cc: linux-kernel turn x86 rwlock macros into inline functions, to get stricter type-checking. Test-built/booted on x86. (patch comes after all previous spinlock patches.) Ingo Signed-off-by: Ingo Molnar <mingo@elte.hu> --- linux/include/asm-i386/spinlock.h.orig +++ linux/include/asm-i386/spinlock.h @@ -198,21 +198,33 @@ typedef struct { #define RW_LOCK_UNLOCKED (rwlock_t) { RW_LOCK_BIAS RWLOCK_MAGIC_INIT } -#define rwlock_init(x) do { *(x) = RW_LOCK_UNLOCKED; } while(0) +static inline void rwlock_init(rwlock_t *rw) +{ + *rw = RW_LOCK_UNLOCKED; +} -#define rwlock_is_locked(x) ((x)->lock != RW_LOCK_BIAS) +static inline int rwlock_is_locked(rwlock_t *rw) +{ + return rw->lock != RW_LOCK_BIAS; +} /** * read_trylock_test - would read_trylock() succeed? * @lock: the rwlock in question. */ -#define read_trylock_test(x) (atomic_read((atomic_t *)&(x)->lock) > 0) +static inline int read_trylock_test(rwlock_t *rw) +{ + return atomic_read((atomic_t *)&rw->lock) > 0; +} /** * write_trylock_test - would write_trylock() succeed? * @lock: the rwlock in question. 
*/ -#define write_trylock_test(x) ((x)->lock == RW_LOCK_BIAS) +static inline int write_trylock_test(rwlock_t *rw) +{ + return atomic_read((atomic_t *)&rw->lock) == RW_LOCK_BIAS; +} /* * On x86, we implement read-write locks as a 32-bit counter @@ -241,8 +253,16 @@ static inline void _raw_write_lock(rwloc __build_write_lock(rw, "__write_lock_failed"); } -#define _raw_read_unlock(rw) asm volatile("lock ; incl %0" :"=m" ((rw)->lock) : : "memory") -#define _raw_write_unlock(rw) asm volatile("lock ; addl $" RW_LOCK_BIAS_STR ",%0":"=m" ((rw)->lock) : : "memory") +static inline void _raw_read_unlock(rwlock_t *rw) +{ + asm volatile("lock ; incl %0" :"=m" (rw->lock) : : "memory"); +} + +static inline void _raw_write_unlock(rwlock_t *rw) +{ + asm volatile("lock ; addl $" RW_LOCK_BIAS_STR + ",%0":"=m" (rw->lock) : : "memory"); +} static inline int _raw_read_trylock(rwlock_t *lock) { ^ permalink raw reply [flat|nested] 7+ messages in thread
* [patch] minor spinlock cleanups
From: Ingo Molnar @ 2005-01-20 12:22 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux-kernel

cleanup: remove stale semicolon from linux/spinlock.h and stale space
from asm-i386/spinlock.h.

	Ingo

Signed-off-by: Ingo Molnar <mingo@elte.hu>

--- linux/include/linux/spinlock.h.orig
+++ linux/include/linux/spinlock.h
@@ -202,7 +202,7 @@ typedef struct {
 #define _raw_spin_lock(lock)	do { (void)(lock); } while(0)
 #define spin_is_locked(lock)	((void)(lock), 0)
 #define _raw_spin_trylock(lock)	(((void)(lock), 1))
-#define spin_unlock_wait(lock)	(void)(lock);
+#define spin_unlock_wait(lock)	(void)(lock)
 #define _raw_spin_unlock(lock)	do { (void)(lock); } while(0)

 #endif /* CONFIG_DEBUG_SPINLOCK */
--- linux/include/asm-i386/spinlock.h.orig
+++ linux/include/asm-i386/spinlock.h
@@ -92,7 +92,7 @@ static inline void spin_unlock_wait(spin
  * (except on PPro SMP or if we are using OOSTORE)
  * (PPro errata 66, 92)
  */
-
+
 #if !defined(CONFIG_X86_OOSTORE) && !defined(CONFIG_X86_PPRO_FENCE)

 #define spin_unlock_string \
* Re: [patch 3/3] spinlock fix #3: type-checking spinlock primitives, x86
From: J.A. Magallon @ 2005-01-20 22:51 UTC (permalink / raw)
To: linux-kernel

On 2005.01.20, Ingo Molnar wrote:
>
> this patch would have caught the bug in -BK-curr (that patch #1 fixes),
> via a compiler warning. Test-built/booted on x86.
>
> 	Ingo
>
> Signed-off-by: Ingo Molnar <mingo@elte.hu>
>
> --- linux/include/asm-i386/spinlock.h.orig
> +++ linux/include/asm-i386/spinlock.h
> @@ -36,7 +36,10 @@ typedef struct {
>
>  #define SPIN_LOCK_UNLOCKED (spinlock_t) { 1 SPINLOCK_MAGIC_INIT }
>
> -#define spin_lock_init(x)	do { *(x) = SPIN_LOCK_UNLOCKED; } while(0)
> +static inline void spin_lock_init(spinlock_t *lock)

Will it have any real effect if you add things like:

+static inline void spin_lock_init(spinlock_t *lock) __attribute__((__pure__));

??

TIA

--
J.A. Magallon <jamagallon()able!es>     \               Software is like sex:
werewolf!able!es                         \         It's better when it's free
Mandrakelinux release 10.2 (Cooker) for i586
Linux 2.6.10-jam4 (gcc 3.4.3 (Mandrakelinux 10.2 3.4.3-3mdk)) #2
* Re: Horrible regression with -CURRENT from "Don't busy-lock-loop in preemptable spinlocks" patch
From: Peter Chubb @ 2005-01-19 9:18 UTC (permalink / raw)
To: Ingo Molnar
Cc: Peter Chubb, Tony Luck, Darren Williams, Andrew Morton, Chris Wedgwood, torvalds, benh, linux-kernel, Ia64 Linux, Christoph Hellwig

>>>>> "Ingo" == Ingo Molnar <mingo@elte.hu> writes:

Ingo> * Peter Chubb <peterc@gelato.unsw.edu.au> wrote:
>> Here's a patch that adds the missing read_is_locked() and
>> write_is_locked() macros for IA64.  When combined with Ingo's
>> patch, I can boot an SMP kernel with CONFIG_PREEMPT on.
>>
>> However, I feel these macros are misnamed: read_is_locked() returns
>> true if the lock is held for writing; write_is_locked() returns
>> true if the lock is held for reading or writing.

Ingo> well, 'read_is_locked()' means: "will a read_lock() succeed"

Fail, surely?

--
Dr Peter Chubb  http://www.gelato.unsw.edu.au  peterc AT gelato.unsw.edu.au
The technical we do immediately,  the political takes *forever*
* Re: Horrible regression with -CURRENT from "Don't busy-lock-loop in preemptable spinlocks" patch
From: Ingo Molnar @ 2005-01-19 9:20 UTC (permalink / raw)
To: Peter Chubb
Cc: Tony Luck, Darren Williams, Andrew Morton, Chris Wedgwood, torvalds, benh, linux-kernel, Ia64 Linux, Christoph Hellwig

* Peter Chubb <peterc@gelato.unsw.edu.au> wrote:

> >> Here's a patch that adds the missing read_is_locked() and
> >> write_is_locked() macros for IA64.  When combined with Ingo's
> >> patch, I can boot an SMP kernel with CONFIG_PREEMPT on.
> >>
> >> However, I feel these macros are misnamed: read_is_locked() returns
> >> true if the lock is held for writing; write_is_locked() returns
> >> true if the lock is held for reading or writing.
>
> Ingo> well, 'read_is_locked()' means: "will a read_lock() succeed"
>
> Fail, surely?

yeah ... and with that i proved beyond doubt that the naming is indeed
unintuitive :-)

	Ingo
* Re: Horrible regression with -CURRENT from "Don't busy-lock-loop in preemptable spinlocks" patch
From: Paul Mackerras @ 2005-01-19 21:43 UTC (permalink / raw)
To: Ingo Molnar
Cc: Peter Chubb, Tony Luck, Darren Williams, Andrew Morton, Chris Wedgwood, torvalds, benh, linux-kernel, Ia64 Linux, Christoph Hellwig

Ingo Molnar writes:

> * Peter Chubb <peterc@gelato.unsw.edu.au> wrote:
>
> > >> Here's a patch that adds the missing read_is_locked() and
> > >> write_is_locked() macros for IA64.  When combined with Ingo's
> > >> patch, I can boot an SMP kernel with CONFIG_PREEMPT on.
> > >>
> > >> However, I feel these macros are misnamed: read_is_locked() returns
> > >> true if the lock is held for writing; write_is_locked() returns
> > >> true if the lock is held for reading or writing.
> >
> > Ingo> well, 'read_is_locked()' means: "will a read_lock() succeed"
> >
> > Fail, surely?
>
> yeah ... and with that i proved beyond doubt that the naming is indeed
> unintuitive :-)

Yes.  Intuitively read_is_locked() is true when someone has done a
read_lock and write_is_locked() is true when someone has done a write
lock.

I suggest read_poll(), write_poll(), spin_poll(), which are like
{read,write,spin}_trylock but don't do the atomic op to get the lock,
that is, they don't change the lock value but return true if the
trylock would succeed, assuming no other cpu takes the lock in the
meantime.

Regards,
Paul.
* [PATCH RFC] 'spinlock/rwlock fixes' V3 [1/1]
From: Chris Wedgwood @ 2005-01-20 2:34 UTC (permalink / raw)
To: Paul Mackerras
Cc: linux-kernel, Ingo Molnar, Peter Chubb, Tony Luck, Darren Williams, Andrew Morton, torvalds, Benjamin Herrenschmidt, Ia64 Linux, Christoph Hellwig, William Lee Irwin III, Jesse Barnes

On Thu, Jan 20, 2005 at 08:43:30AM +1100, Paul Mackerras wrote:

> I suggest read_poll(), write_poll(), spin_poll(), which are like
> {read,write,spin}_trylock but don't do the atomic op to get the
> lock, that is, they don't change the lock value but return true if
> the trylock would succeed, assuming no other cpu takes the lock in
> the meantime.

I'm not personally convinced *_poll is any clearer really; if the
naming is vague I would prefer longer, more obvious names - but that's
just me.

Because spin_is_locked is used in quite a few places I would leave
that one alone for now --- I'm not saying we can't change this name,
but it should be a separate issue IMO.  Because rwlock_is_locked isn't
used in many places changing that isn't a big deal.

As a compromise I have the following patch in my quilt tree based upon
what a few people have said in this thread already.  This is against
the "-CURRENT bk" tree as of a few minutes ago and seems to be working
as expected.

  * i386: rename the spinlock_t member 'lock' to 'slock' to catch
    possible macro abuse problems

  * i386, ia64: rename rwlock_is_locked to rwlock_write_locked as this
    is IMO a better name

  * i386, ia64: add rwlock_read_locked (if people are OK with these,
    I can do the other architectures)

  * generic: fix kernel/exit.c to use rwlock_write_locked

  * generic: fix kernel/spinlock.c

Comments?
--- include/asm-i386/spinlock.h | 26 ++++++++++++++++++-------- include/asm-ia64/spinlock.h | 12 +++++++++++- kernel/exit.c | 2 +- kernel/spinlock.c | 4 ++-- 4 files changed, 32 insertions(+), 12 deletions(-) ===== include/asm-i386/spinlock.h 1.16 vs edited ===== Index: cw-current/include/asm-i386/spinlock.h =================================================================== --- cw-current.orig/include/asm-i386/spinlock.h 2005-01-19 17:37:27.497810394 -0800 +++ cw-current/include/asm-i386/spinlock.h 2005-01-19 17:37:30.044914512 -0800 @@ -15,7 +15,7 @@ */ typedef struct { - volatile unsigned int lock; + volatile unsigned int slock; #ifdef CONFIG_DEBUG_SPINLOCK unsigned magic; #endif @@ -43,7 +43,7 @@ * We make no fairness assumptions. They have a cost. */ -#define spin_is_locked(x) (*(volatile signed char *)(&(x)->lock) <= 0) +#define spin_is_locked(x) (*(volatile signed char *)(&(x)->slock) <= 0) #define spin_unlock_wait(x) do { barrier(); } while(spin_is_locked(x)) #define spin_lock_string \ @@ -83,7 +83,7 @@ #define spin_unlock_string \ "movb $1,%0" \ - :"=m" (lock->lock) : : "memory" + :"=m" (lock->slock) : : "memory" static inline void _raw_spin_unlock(spinlock_t *lock) @@ -101,7 +101,7 @@ #define spin_unlock_string \ "xchgb %b0, %1" \ - :"=q" (oldval), "=m" (lock->lock) \ + :"=q" (oldval), "=m" (lock->slock) \ :"0" (oldval) : "memory" static inline void _raw_spin_unlock(spinlock_t *lock) @@ -123,7 +123,7 @@ char oldval; __asm__ __volatile__( "xchgb %b0,%1" - :"=q" (oldval), "=m" (lock->lock) + :"=q" (oldval), "=m" (lock->slock) :"0" (0) : "memory"); return oldval > 0; } @@ -138,7 +138,7 @@ #endif __asm__ __volatile__( spin_lock_string - :"=m" (lock->lock) : : "memory"); + :"=m" (lock->slock) : : "memory"); } static inline void _raw_spin_lock_flags (spinlock_t *lock, unsigned long flags) @@ -151,7 +151,7 @@ #endif __asm__ __volatile__( spin_lock_string_flags - :"=m" (lock->lock) : "r" (flags) : "memory"); + :"=m" (lock->slock) : "r" (flags) : "memory"); } /* 
@@ -186,7 +186,17 @@ #define rwlock_init(x) do { *(x) = RW_LOCK_UNLOCKED; } while(0) -#define rwlock_is_locked(x) ((x)->lock != RW_LOCK_BIAS) +/** + * rwlock_read_locked - would read_trylock() fail? + * @lock: the rwlock in question. + */ +#define rwlock_read_locked(x) (atomic_read((atomic_t *)&(x)->lock) <= 0) + +/** + * rwlock_write_locked - would write_trylock() fail? + * @lock: the rwlock in question. + */ +#define rwlock_write_locked(x) ((x)->lock != RW_LOCK_BIAS) /* * On x86, we implement read-write locks as a 32-bit counter Index: cw-current/include/asm-ia64/spinlock.h =================================================================== --- cw-current.orig/include/asm-ia64/spinlock.h 2005-01-19 17:37:27.498810435 -0800 +++ cw-current/include/asm-ia64/spinlock.h 2005-01-19 17:37:30.044914512 -0800 @@ -126,7 +126,17 @@ #define RW_LOCK_UNLOCKED (rwlock_t) { 0, 0 } #define rwlock_init(x) do { *(x) = RW_LOCK_UNLOCKED; } while(0) -#define rwlock_is_locked(x) (*(volatile int *) (x) != 0) + +/* rwlock_read_locked - would read_trylock() fail? + * @lock: the rwlock in question. + */ +#define rwlock_read_locked(x) (*(volatile int *) (x) < 0) + +/** + * rwlock_write_locked - would write_trylock() fail? + * @lock: the rwlock in question. 
+ */ +#define rwlock_write_locked(x) (*(volatile int *) (x) != 0) #define _raw_read_lock(rw) \ do { \ Index: cw-current/kernel/exit.c =================================================================== --- cw-current.orig/kernel/exit.c 2005-01-19 17:37:27.498810435 -0800 +++ cw-current/kernel/exit.c 2005-01-19 18:14:21.601934388 -0800 @@ -862,7 +862,7 @@ if (!p->sighand) BUG(); if (!spin_is_locked(&p->sighand->siglock) && - !rwlock_is_locked(&tasklist_lock)) + !rwlock_write_locked(&tasklist_lock)) BUG(); #endif return pid_task(p->pids[PIDTYPE_TGID].pid_list.next, PIDTYPE_TGID); Index: cw-current/kernel/spinlock.c =================================================================== --- cw-current.orig/kernel/spinlock.c 2005-01-19 17:37:27.498810435 -0800 +++ cw-current/kernel/spinlock.c 2005-01-19 17:37:30.048914675 -0800 @@ -247,8 +247,8 @@ * _[spin|read|write]_lock_bh() */ BUILD_LOCK_OPS(spin, spinlock_t, spin_is_locked); -BUILD_LOCK_OPS(read, rwlock_t, rwlock_is_locked); -BUILD_LOCK_OPS(write, rwlock_t, spin_is_locked); +BUILD_LOCK_OPS(read, rwlock_t, rwlock_read_locked); +BUILD_LOCK_OPS(write, rwlock_t, rwlock_write_locked); #endif /* CONFIG_PREEMPT */ ^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [PATCH RFC] 'spinlock/rwlock fixes' V3 [1/1]
From: Andrew Morton @ 2005-01-20 3:01 UTC (permalink / raw)
To: Chris Wedgwood
Cc: paulus, linux-kernel, mingo, peterc, tony.luck, dsw, torvalds, benh, linux-ia64, hch, wli, jbarnes

Given the general confusion and the difficulty of defining and
understanding the semantics of these predicates, and given that the
foo_is_locked() predicates have a history of being used to implement
ghastly kludges, how about we simply nuke this statement:

Chris Wedgwood <cw@f00f.org> wrote:
>
>  	if (!spin_is_locked(&p->sighand->siglock) &&
> -	    !rwlock_is_locked(&tasklist_lock))
> +	    !rwlock_write_locked(&tasklist_lock))

and be done with the whole thing?

I mean, do we really want these things in the kernel anyway?  We've
never needed them before.  If we reeeealy need the debug check, just
do BUG_ON(read_trylock(...))
* Re: [PATCH RFC] 'spinlock/rwlock fixes' V3 [1/1]
From: Chris Wedgwood @ 2005-01-20 3:18 UTC (permalink / raw)
To: Andrew Morton
Cc: paulus, linux-kernel, mingo, peterc, tony.luck, dsw, torvalds, benh, linux-ia64, hch, wli, jbarnes

On Wed, Jan 19, 2005 at 07:01:04PM -0800, Andrew Morton wrote:

> ... how about we simply nuke this statement:
>
> Chris Wedgwood <cw@f00f.org> wrote:
> >
> >  	if (!spin_is_locked(&p->sighand->siglock) &&
> > -	    !rwlock_is_locked(&tasklist_lock))
> > +	    !rwlock_write_locked(&tasklist_lock))
>
> and be done with the whole thing?

I'm all for killing that.  I'll happily send a patch once the dust
settles.

It still isn't enough to get rid of the rwlock_read_locked and
rwlock_write_locked usage in kernel/spinlock.c, as those are needed
for the cpu_relax() calls, so we have to decide on suitable names
still...
* Re: [PATCH RFC] 'spinlock/rwlock fixes' V3 [1/1]
From: Peter Chubb @ 2005-01-20 8:59 UTC (permalink / raw)
To: Chris Wedgwood
Cc: Andrew Morton, paulus, linux-kernel, mingo, peterc, tony.luck, dsw, torvalds, benh, linux-ia64, hch, wli, jbarnes

>>>>> "Chris" == Chris Wedgwood <cw@f00f.org> writes:

Chris> On Wed, Jan 19, 2005 at 07:01:04PM -0800, Andrew Morton wrote:

Chris> It still isn't enough to rid of the rwlock_read_locked and
Chris> rwlock_write_locked usage in kernel/spinlock.c as those are
Chris> needed for the cpu_relax() calls so we have to decide on
Chris> suitable names still...

I suggest reversing the sense of the macros, and having
read_can_lock() and write_can_lock()

Meaning:
	read_can_lock()  --- a read_lock() would have succeeded
	write_can_lock() --- a write_lock() would have succeeded.

IA64 implementation:

#define read_can_lock(x)	(*(volatile int *)x >= 0)
#define write_can_lock(x)	(*(volatile int *)x == 0)

Then use them as !read_can_lock(x) where you want the old semantics.
The compiler ought to be smart enough to optimise the boolean ops.

--
Dr Peter Chubb  http://www.gelato.unsw.edu.au  peterc AT gelato.unsw.edu.au
The technical we do immediately,  the political takes *forever*
* Re: [PATCH RFC] 'spinlock/rwlock fixes' V3 [1/1]
From: Linus Torvalds @ 2005-01-20 15:51 UTC (permalink / raw)
To: Peter Chubb
Cc: Chris Wedgwood, Andrew Morton, paulus, linux-kernel, mingo, tony.luck, dsw, benh, linux-ia64, hch, wli, jbarnes

On Thu, 20 Jan 2005, Peter Chubb wrote:
>
> I suggest reversing the sense of the macros, and having read_can_lock()
> and write_can_lock()
>
> Meaning:
> 	read_can_lock()  --- a read_lock() would have succeeded
> 	write_can_lock() --- a write_lock() would have succeeded.

Yes.  This has the advantage of being readable, and the "sense" of the
test always being obvious.

We have a sense problem with the "trylock()" cases - some return "it
was locked" (semaphores), and some return "I succeeded" (spinlocks),
so not only is the sense not immediately obvious from the usage, it's
actually _different_ for semaphores and for spinlocks.

So I like "read_can_lock()", since it's also obvious what it returns.

(And yes, we should fix the semaphore trylock return code, dammit.)

		Linus
* [patch 1/3] spinlock fix #1, *_can_lock() primitives 2005-01-20 15:51 ` Linus Torvalds @ 2005-01-20 16:08 ` Ingo Molnar 2005-01-20 16:11 ` [patch 2/3] spinlock fix #2: generalize [spin|rw]lock yielding Ingo Molnar 0 siblings, 1 reply; 7+ messages in thread From: Ingo Molnar @ 2005-01-20 16:08 UTC (permalink / raw) To: Linus Torvalds Cc: Peter Chubb, Chris Wedgwood, Andrew Morton, paulus, linux-kernel, tony.luck, dsw, benh, linux-ia64, hch, wli, jbarnes * Linus Torvalds <torvalds@osdl.org> wrote: > We have a sense problem with the "trylock()" cases - some return "it > was locked" (semaphores), and some return "I succeeded" (spinlocks), > so not only is the sense not immediately obvious from the usage, it's > actually _different_ for semaphores and for spinlocks. well, this is primarily a problem of the semaphore primitives. anyway, here's my first patch again, with s/trylock_test/can_lock/. Ingo -- it fixes the BUILD_LOCK_OPS() bug by introducing the following 3 new locking primitives: spin_can_lock(lock) read_can_lock(lock) write_can_lock(lock) this is what is needed by BUILD_LOCK_OPS(): a nonintrusive test to check whether the real (intrusive) trylock op would succeed or not. Semantics and naming is completely symmetric to the trylock counterpart. No changes to exit.c. build/boot-tested on x86. Architectures that want to support PREEMPT need to add these definitions. Signed-off-by: Ingo Molnar <mingo@elte.hu> --- linux/kernel/spinlock.c.orig +++ linux/kernel/spinlock.c @@ -173,8 +173,8 @@ EXPORT_SYMBOL(_write_lock); * (We do this in a function because inlining it would be excessive.) 
*/ -#define BUILD_LOCK_OPS(op, locktype, is_locked_fn) \ -void __lockfunc _##op##_lock(locktype *lock) \ +#define BUILD_LOCK_OPS(op, locktype) \ +void __lockfunc _##op##_lock(locktype##_t *lock) \ { \ preempt_disable(); \ for (;;) { \ @@ -183,7 +183,7 @@ void __lockfunc _##op##_lock(locktype *l preempt_enable(); \ if (!(lock)->break_lock) \ (lock)->break_lock = 1; \ - while (is_locked_fn(lock) && (lock)->break_lock) \ + while (!op##_can_lock(lock) && (lock)->break_lock) \ cpu_relax(); \ preempt_disable(); \ } \ @@ -191,7 +191,7 @@ void __lockfunc _##op##_lock(locktype *l \ EXPORT_SYMBOL(_##op##_lock); \ \ -unsigned long __lockfunc _##op##_lock_irqsave(locktype *lock) \ +unsigned long __lockfunc _##op##_lock_irqsave(locktype##_t *lock) \ { \ unsigned long flags; \ \ @@ -205,7 +205,7 @@ unsigned long __lockfunc _##op##_lock_ir preempt_enable(); \ if (!(lock)->break_lock) \ (lock)->break_lock = 1; \ - while (is_locked_fn(lock) && (lock)->break_lock) \ + while (!op##_can_lock(lock) && (lock)->break_lock) \ cpu_relax(); \ preempt_disable(); \ } \ @@ -214,14 +214,14 @@ unsigned long __lockfunc _##op##_lock_ir \ EXPORT_SYMBOL(_##op##_lock_irqsave); \ \ -void __lockfunc _##op##_lock_irq(locktype *lock) \ +void __lockfunc _##op##_lock_irq(locktype##_t *lock) \ { \ _##op##_lock_irqsave(lock); \ } \ \ EXPORT_SYMBOL(_##op##_lock_irq); \ \ -void __lockfunc _##op##_lock_bh(locktype *lock) \ +void __lockfunc _##op##_lock_bh(locktype##_t *lock) \ { \ unsigned long flags; \ \ @@ -246,9 +246,9 @@ EXPORT_SYMBOL(_##op##_lock_bh) * _[spin|read|write]_lock_irqsave() * _[spin|read|write]_lock_bh() */ -BUILD_LOCK_OPS(spin, spinlock_t, spin_is_locked); -BUILD_LOCK_OPS(read, rwlock_t, rwlock_is_locked); -BUILD_LOCK_OPS(write, rwlock_t, spin_is_locked); +BUILD_LOCK_OPS(spin, spinlock); +BUILD_LOCK_OPS(read, rwlock); +BUILD_LOCK_OPS(write, rwlock); #endif /* CONFIG_PREEMPT */ --- linux/include/linux/spinlock.h.orig +++ linux/include/linux/spinlock.h @@ -584,4 +584,10 @@ static inline int 
bit_spin_is_locked(int #define DEFINE_SPINLOCK(x) spinlock_t x = SPIN_LOCK_UNLOCKED #define DEFINE_RWLOCK(x) rwlock_t x = RW_LOCK_UNLOCKED +/** + * spin_can_lock - would spin_trylock() succeed? + * @lock: the spinlock in question. + */ +#define spin_can_lock(lock) (!spin_is_locked(lock)) + #endif /* __LINUX_SPINLOCK_H */ --- linux/include/asm-i386/spinlock.h.orig +++ linux/include/asm-i386/spinlock.h @@ -188,6 +188,18 @@ typedef struct { #define rwlock_is_locked(x) ((x)->lock != RW_LOCK_BIAS) +/** + * read_can_lock - would read_trylock() succeed? + * @lock: the rwlock in question. + */ +#define read_can_lock(x) (atomic_read((atomic_t *)&(x)->lock) > 0) + +/** + * write_can_lock - would write_trylock() succeed? + * @lock: the rwlock in question. + */ +#define write_can_lock(x) ((x)->lock == RW_LOCK_BIAS) + /* * On x86, we implement read-write locks as a 32-bit counter * with the high bit (sign) being the "contended" bit. ^ permalink raw reply [flat|nested] 7+ messages in thread
* [patch 2/3] spinlock fix #2: generalize [spin|rw]lock yielding 2005-01-20 16:08 ` [patch 1/3] spinlock fix #1, *_can_lock() primitives Ingo Molnar @ 2005-01-20 16:11 ` Ingo Molnar 2005-01-20 16:12 ` [patch 3/3] spinlock fix #3: type-checking spinlock primitives, x86 Ingo Molnar 0 siblings, 1 reply; 7+ messages in thread From: Ingo Molnar @ 2005-01-20 16:11 UTC (permalink / raw) To: Linus Torvalds Cc: Peter Chubb, Chris Wedgwood, Andrew Morton, paulus, linux-kernel, tony.luck, dsw, benh, linux-ia64, hch, wli, jbarnes [patch respun with s/trylock_test/can_lock/] -- this patch generalizes the facility of targeted lock-yielding originally implemented for ppc64. This facility enables a virtual CPU to indicate towards the hypervisor which other virtual CPU this CPU is 'blocked on', and hence which CPU the hypervisor should yield the current timeslice to, in order to make progress on releasing the lock. On physical platforms these functions default to cpu_relax(). Since physical platforms are in the overwhelming majority i've added the two new functions to the new asm-generic/spinlock.h include file - here i hope we can later on move more generic spinlock bits as well. this patch solves the ppc64/PREEMPT functionality-regression reported by Paul Mackerras. I only tested it on x86, Paul, would you mind to test it on ppc64? 
Ingo Signed-off-by: Ingo Molnar <mingo@elte.hu> --- linux/kernel/spinlock.c.orig +++ linux/kernel/spinlock.c @@ -184,7 +184,7 @@ void __lockfunc _##op##_lock(locktype##_ if (!(lock)->break_lock) \ (lock)->break_lock = 1; \ while (!op##_can_lock(lock) && (lock)->break_lock) \ - cpu_relax(); \ + locktype##_yield(lock); \ preempt_disable(); \ } \ } \ @@ -206,7 +206,7 @@ unsigned long __lockfunc _##op##_lock_ir if (!(lock)->break_lock) \ (lock)->break_lock = 1; \ while (!op##_can_lock(lock) && (lock)->break_lock) \ - cpu_relax(); \ + locktype##_yield(lock); \ preempt_disable(); \ } \ return flags; \ --- linux/arch/ppc64/lib/locks.c.orig +++ linux/arch/ppc64/lib/locks.c @@ -23,7 +23,7 @@ /* waiting for a spinlock... */ #if defined(CONFIG_PPC_SPLPAR) || defined(CONFIG_PPC_ISERIES) -void __spin_yield(spinlock_t *lock) +void spinlock_yield(spinlock_t *lock) { unsigned int lock_value, holder_cpu, yield_count; struct paca_struct *holder_paca; @@ -54,7 +54,7 @@ void __spin_yield(spinlock_t *lock) * This turns out to be the same for read and write locks, since * we only know the holder if it is write-locked. 
*/ -void __rw_yield(rwlock_t *rw) +void rwlock_yield(rwlock_t *rw) { int lock_value; unsigned int holder_cpu, yield_count; @@ -87,7 +87,7 @@ void spin_unlock_wait(spinlock_t *lock) while (lock->lock) { HMT_low(); if (SHARED_PROCESSOR) - __spin_yield(lock); + spinlock_yield(lock); } HMT_medium(); } --- linux/include/asm-ia64/spinlock.h.orig +++ linux/include/asm-ia64/spinlock.h @@ -17,6 +17,8 @@ #include <asm/intrinsics.h> #include <asm/system.h> +#include <asm-generic/spinlock.h> + typedef struct { volatile unsigned int lock; #ifdef CONFIG_PREEMPT --- linux/include/asm-generic/spinlock.h.orig +++ linux/include/asm-generic/spinlock.h @@ -0,0 +1,11 @@ +#ifndef _ASM_GENERIC_SPINLOCK_H +#define _ASM_GENERIC_SPINLOCK_H + +/* + * Virtual platforms might use these to + * yield to specific virtual CPUs: + */ +#define spinlock_yield(lock) cpu_relax() +#define rwlock_yield(lock) cpu_relax() + +#endif /* _ASM_GENERIC_SPINLOCK_H */ --- linux/include/linux/spinlock.h.orig +++ linux/include/linux/spinlock.h @@ -206,6 +206,8 @@ typedef struct { #define _raw_spin_unlock(lock) do { (void)(lock); } while(0) #endif /* CONFIG_DEBUG_SPINLOCK */ +#define spinlock_yield(lock) (void)(lock) + /* RW spinlocks: No debug version */ #if (__GNUC__ > 2) @@ -224,6 +226,8 @@ typedef struct { #define _raw_read_trylock(lock) ({ (void)(lock); (1); }) #define _raw_write_trylock(lock) ({ (void)(lock); (1); }) +#define rwlock_yield(lock) (void)(lock) + #define _spin_trylock(lock) ({preempt_disable(); _raw_spin_trylock(lock) ? \ 1 : ({preempt_enable(); 0;});}) --- linux/include/asm-i386/spinlock.h.orig +++ linux/include/asm-i386/spinlock.h @@ -7,6 +7,8 @@ #include <linux/config.h> #include <linux/compiler.h> +#include <asm-generic/spinlock.h> + asmlinkage int printk(const char * fmt, ...) 
__attribute__ ((format (printf, 1, 2))); --- linux/include/asm-ppc64/spinlock.h.orig +++ linux/include/asm-ppc64/spinlock.h @@ -64,11 +64,11 @@ static __inline__ void _raw_spin_unlock( #if defined(CONFIG_PPC_SPLPAR) || defined(CONFIG_PPC_ISERIES) /* We only yield to the hypervisor if we are in shared processor mode */ #define SHARED_PROCESSOR (get_paca()->lppaca.shared_proc) -extern void __spin_yield(spinlock_t *lock); -extern void __rw_yield(rwlock_t *lock); +extern void spinlock_yield(spinlock_t *lock); +extern void rwlock_yield(rwlock_t *lock); #else /* SPLPAR || ISERIES */ -#define __spin_yield(x) barrier() -#define __rw_yield(x) barrier() +#define spinlock_yield(x) barrier() +#define rwlock_yield(x) barrier() #define SHARED_PROCESSOR 0 #endif extern void spin_unlock_wait(spinlock_t *lock); @@ -109,7 +109,7 @@ static void __inline__ _raw_spin_lock(sp do { HMT_low(); if (SHARED_PROCESSOR) - __spin_yield(lock); + spinlock_yield(lock); } while (likely(lock->lock != 0)); HMT_medium(); } @@ -127,7 +127,7 @@ static void __inline__ _raw_spin_lock_fl do { HMT_low(); if (SHARED_PROCESSOR) - __spin_yield(lock); + spinlock_yield(lock); } while (likely(lock->lock != 0)); HMT_medium(); local_irq_restore(flags_dis); @@ -201,7 +201,7 @@ static void __inline__ _raw_read_lock(rw do { HMT_low(); if (SHARED_PROCESSOR) - __rw_yield(rw); + rwlock_yield(rw); } while (likely(rw->lock < 0)); HMT_medium(); } @@ -258,7 +258,7 @@ static void __inline__ _raw_write_lock(r do { HMT_low(); if (SHARED_PROCESSOR) - __rw_yield(rw); + rwlock_yield(rw); } while (likely(rw->lock != 0)); HMT_medium(); } --- linux/include/asm-x86_64/spinlock.h.orig +++ linux/include/asm-x86_64/spinlock.h @@ -6,6 +6,8 @@ #include <asm/page.h> #include <linux/config.h> +#include <asm-generic/spinlock.h> + extern int printk(const char * fmt, ...) __attribute__ ((format (printf, 1, 2))); ^ permalink raw reply [flat|nested] 7+ messages in thread
* [patch 3/3] spinlock fix #3: type-checking spinlock primitives, x86 2005-01-20 16:11 ` [patch 2/3] spinlock fix #2: generalize [spin|rw]lock yielding Ingo Molnar @ 2005-01-20 16:12 ` Ingo Molnar 0 siblings, 0 replies; 7+ messages in thread From: Ingo Molnar @ 2005-01-20 16:12 UTC (permalink / raw) To: Linus Torvalds Cc: Peter Chubb, Chris Wedgwood, Andrew Morton, paulus, linux-kernel, tony.luck, dsw, benh, linux-ia64, hch, wli, jbarnes [this patch didnt change due to can_lock but i've resent it so that the patch stream is complete.] -- this patch would have caught the bug in -BK-curr (that patch #1 fixes), via a compiler warning. Test-built/booted on x86. Ingo Signed-off-by: Ingo Molnar <mingo@elte.hu> --- linux/include/asm-i386/spinlock.h.orig +++ linux/include/asm-i386/spinlock.h @@ -36,7 +36,10 @@ typedef struct { #define SPIN_LOCK_UNLOCKED (spinlock_t) { 1 SPINLOCK_MAGIC_INIT } -#define spin_lock_init(x) do { *(x) = SPIN_LOCK_UNLOCKED; } while(0) +static inline void spin_lock_init(spinlock_t *lock) +{ + *lock = SPIN_LOCK_UNLOCKED; +} /* * Simple spin lock operations. There are two variants, one clears IRQ's @@ -45,8 +48,17 @@ typedef struct { * We make no fairness assumptions. They have a cost. */ -#define spin_is_locked(x) (*(volatile signed char *)(&(x)->lock) <= 0) -#define spin_unlock_wait(x) do { barrier(); } while(spin_is_locked(x)) +static inline int spin_is_locked(spinlock_t *lock) +{ + return *(volatile signed char *)(&lock->lock) <= 0; +} + +static inline void spin_unlock_wait(spinlock_t *lock) +{ + do { + barrier(); + } while (spin_is_locked(lock)); +} #define spin_lock_string \ "\n1:\t" \ ^ permalink raw reply [flat|nested] 7+ messages in thread
end of thread, other threads:[~2005-01-20 22:51 UTC | newest] Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed) -- links below jump to the message on this page -- 2005-01-20 11:43 [patch 1/3] spinlock fix #1 Ingo Molnar 2005-01-20 11:59 ` [patch 2/3] spinlock fix #2: generalize [spin|rw]lock yielding Ingo Molnar 2005-01-20 12:09 ` [patch 3/3] spinlock fix #3: type-checking spinlock primitives, x86 Ingo Molnar 2005-01-20 12:18 ` [patch] stricter type-checking rwlock " Ingo Molnar 2005-01-20 12:22 ` [patch] minor spinlock cleanups Ingo Molnar 2005-01-20 22:51 ` [patch 3/3] spinlock fix #3: type-checking spinlock primitives, x86 J.A. Magallon -- strict thread matches above, loose matches on Subject: below -- 2005-01-19 9:18 Horrible regression with -CURRENT from "Don't busy-lock-loop in preemptable spinlocks" patch Peter Chubb 2005-01-19 9:20 ` Ingo Molnar 2005-01-19 21:43 ` Paul Mackerras 2005-01-20 2:34 ` [PATCH RFC] 'spinlock/rwlock fixes' V3 [1/1] Chris Wedgwood 2005-01-20 3:01 ` Andrew Morton 2005-01-20 3:18 ` Chris Wedgwood 2005-01-20 8:59 ` Peter Chubb 2005-01-20 15:51 ` Linus Torvalds 2005-01-20 16:08 ` [patch 1/3] spinlock fix #1, *_can_lock() primitives Ingo Molnar 2005-01-20 16:11 ` [patch 2/3] spinlock fix #2: generalize [spin|rw]lock yielding Ingo Molnar 2005-01-20 16:12 ` [patch 3/3] spinlock fix #3: type-checking spinlock primitives, x86 Ingo Molnar
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox; as well as URLs for NNTP newsgroup(s).