* [PATCHv4 01/11] atomic: add *_dec_not_zero
@ 2011-07-27  9:47 ` Sven Eckelmann
  0 siblings, 0 replies; 38+ messages in thread
From: Sven Eckelmann @ 2011-07-27  9:47 UTC (permalink / raw)
  To: linux-arch
  Cc: linux-m32r-ja, linux-mips, linux-ia64, linux-doc,
	Benjamin Herrenschmidt, H. Peter Anvin, Heiko Carstens,
	Randy Dunlap, Paul Mackerras, Helge Deller, sparclinux,
	Sven Eckelmann, linux-s390, Russell King, user-mode-linux-devel,
	Richard Weinberger, Hirokazu Takata, x86, James E.J. Bottomley,
	Ingo Molnar, Matt Turner, Fenghua Yu, Arnd Bergmann, Jeff Dike,
	Chris Metcalf, linux-m32r

Introduce an *_dec_not_zero operation, implemented as a special case of
*_add_unless. batman-adv uses atomic_dec_not_zero in several places,
such as its re-broadcast queue and aggregation queue management, and
other not-yet-merged patches may also want to use this macro.
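For context (not part of the patch itself), here is a minimal userspace
sketch of the add_unless/dec_not_zero semantics, using GCC __atomic
builtins in place of the kernel atomic_t API:

/*
 * Illustrative model only; the kernel implementation is the
 * per-architecture code added by this patch.
 */
#include <stdio.h>

/* add @a to *v unless *v == @u; return non-zero if the add happened */
static int model_add_unless(int *v, int a, int u)
{
	int c = __atomic_load_n(v, __ATOMIC_RELAXED);

	while (c != u) {
		if (__atomic_compare_exchange_n(v, &c, c + a, 0,
						__ATOMIC_SEQ_CST,
						__ATOMIC_RELAXED))
			return 1;
		/* c was refreshed with the observed value; retry */
	}
	return 0;
}

/* dec_not_zero is just add_unless(v, -1, 0) */
#define model_dec_not_zero(v) model_add_unless((v), -1, 0)

int main(void)
{
	int refs = 2;

	/* drops 2 -> 1 -> 0, then refuses further decrements */
	while (model_dec_not_zero(&refs))
		printf("decremented, refs now %d\n", refs);

	printf("final value: %d\n", refs);	/* 0 */
	return 0;
}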

Reported-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sven Eckelmann <sven@narfation.org>
Cc: Randy Dunlap <rdunlap@xenotime.net>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Hirokazu Takata <takata@linux-m32r.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Helge Deller <deller@gmx.de>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: linux390@de.ibm.com
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: linux-doc@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-alpha@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-ia64@vger.kernel.org
Cc: linux-m32r@ml.linux-m32r.org
Cc: linux-m32r-ja@ml.linux-m32r.org
Cc: linux-mips@linux-mips.org
Cc: linux-parisc@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s390@vger.kernel.org
Cc: sparclinux@vger.kernel.org
Cc: user-mode-linux-devel@lists.sourceforge.net
---
David S. Miller recommended this change in
 https://lists.open-mesh.org/pipermail/b.a.t.m.a.n/2011-May/004560.html

Arnd Bergmann wanted to apply it in 201106172320.26476.arnd@arndb.de

... and then Arun Sharma created a big merge conflict with
https://lkml.org/lkml/2011/6/6/430

I don't think it is a good idea to assume that everyone still agrees
with the patch after I've rewritten it.

 Documentation/atomic_ops.txt       |    1 +
 arch/alpha/include/asm/atomic.h    |    1 +
 arch/alpha/include/asm/local.h     |    1 +
 arch/arm/include/asm/atomic.h      |    1 +
 arch/ia64/include/asm/atomic.h     |    1 +
 arch/m32r/include/asm/local.h      |    1 +
 arch/mips/include/asm/atomic.h     |    1 +
 arch/mips/include/asm/local.h      |    1 +
 arch/parisc/include/asm/atomic.h   |    1 +
 arch/powerpc/include/asm/atomic.h  |    1 +
 arch/powerpc/include/asm/local.h   |    1 +
 arch/s390/include/asm/atomic.h     |    1 +
 arch/sparc/include/asm/atomic_64.h |    1 +
 arch/tile/include/asm/atomic_32.h  |    1 +
 arch/tile/include/asm/atomic_64.h  |    1 +
 arch/um/sys-i386/atomic64_cx8_32.S |   28 ++++++++++++++++++++++++++++
 arch/x86/include/asm/atomic64_32.h |   12 ++++++++++++
 arch/x86/include/asm/atomic64_64.h |    1 +
 arch/x86/include/asm/local.h       |    1 +
 arch/x86/lib/atomic64_32.c         |    4 ++++
 arch/x86/lib/atomic64_386_32.S     |   21 +++++++++++++++++++++
 arch/x86/lib/atomic64_cx8_32.S     |   28 ++++++++++++++++++++++++++++
 include/asm-generic/atomic-long.h  |    2 ++
 include/asm-generic/atomic64.h     |    1 +
 include/asm-generic/local.h        |    1 +
 include/asm-generic/local64.h      |    2 ++
 include/linux/atomic.h             |    9 +++++++++
 27 files changed, 125 insertions(+), 0 deletions(-)

diff --git a/Documentation/atomic_ops.txt b/Documentation/atomic_ops.txt
index 3bd585b..1eec221 100644
--- a/Documentation/atomic_ops.txt
+++ b/Documentation/atomic_ops.txt
@@ -190,6 +190,7 @@ atomic_add_unless requires explicit memory barriers around the operation
 unless it fails (returns 0).
 
 atomic_inc_not_zero, equivalent to atomic_add_unless(v, 1, 0)
+atomic_dec_not_zero, equivalent to atomic_add_unless(v, -1, 0)
 
 
 If a caller requires memory barrier semantics around an atomic_t
diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h
index 640f909..09d1571 100644
--- a/arch/alpha/include/asm/atomic.h
+++ b/arch/alpha/include/asm/atomic.h
@@ -225,6 +225,7 @@ static __inline__ int atomic64_add_unless(atomic64_t *v, long a, long u)
 }
 
 #define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0)
+#define atomic64_dec_not_zero(v) atomic64_add_unless((v), -1, 0)
 
 #define atomic_add_negative(a, v) (atomic_add_return((a), (v)) < 0)
 #define atomic64_add_negative(a, v) (atomic64_add_return((a), (v)) < 0)
diff --git a/arch/alpha/include/asm/local.h b/arch/alpha/include/asm/local.h
index 9c94b84..51eb678 100644
--- a/arch/alpha/include/asm/local.h
+++ b/arch/alpha/include/asm/local.h
@@ -79,6 +79,7 @@ static __inline__ long local_sub_return(long i, local_t * l)
 	c != (u);						\
 })
 #define local_inc_not_zero(l) local_add_unless((l), 1, 0)
+#define local_dec_not_zero(l) local_add_unless((l), -1, 0)
 
 #define local_add_negative(a, l) (local_add_return((a), (l)) < 0)
 
diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h
index 86976d0..80ed975 100644
--- a/arch/arm/include/asm/atomic.h
+++ b/arch/arm/include/asm/atomic.h
@@ -458,6 +458,7 @@ static inline int atomic64_add_unless(atomic64_t *v, u64 a, u64 u)
 #define atomic64_dec_return(v)		atomic64_sub_return(1LL, (v))
 #define atomic64_dec_and_test(v)	(atomic64_dec_return((v)) == 0)
 #define atomic64_inc_not_zero(v)	atomic64_add_unless((v), 1LL, 0LL)
+#define atomic64_dec_not_zero(v)	atomic64_add_unless((v), -1LL, 0LL)
 
 #endif /* !CONFIG_GENERIC_ATOMIC64 */
 #endif
diff --git a/arch/ia64/include/asm/atomic.h b/arch/ia64/include/asm/atomic.h
index 3fad89e..af6e9b2 100644
--- a/arch/ia64/include/asm/atomic.h
+++ b/arch/ia64/include/asm/atomic.h
@@ -122,6 +122,7 @@ static __inline__ long atomic64_add_unless(atomic64_t *v, long a, long u)
 }
 
 #define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0)
+#define atomic64_dec_not_zero(v) atomic64_add_unless((v), -1, 0)
 
 #define atomic_add_return(i,v)						\
 ({									\
diff --git a/arch/m32r/include/asm/local.h b/arch/m32r/include/asm/local.h
index 734bca8..d536082 100644
--- a/arch/m32r/include/asm/local.h
+++ b/arch/m32r/include/asm/local.h
@@ -272,6 +272,7 @@ static inline int local_add_unless(local_t *l, long a, long u)
 }
 
 #define local_inc_not_zero(l) local_add_unless((l), 1, 0)
+#define local_dec_not_zero(l) local_add_unless((l), -1, 0)
 
 static inline void local_clear_mask(unsigned long  mask, local_t *addr)
 {
diff --git a/arch/mips/include/asm/atomic.h b/arch/mips/include/asm/atomic.h
index 1d93f81..babb043 100644
--- a/arch/mips/include/asm/atomic.h
+++ b/arch/mips/include/asm/atomic.h
@@ -697,6 +697,7 @@ static __inline__ int atomic64_add_unless(atomic64_t *v, long a, long u)
 }
 
 #define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0)
+#define atomic64_dec_not_zero(v) atomic64_add_unless((v), -1, 0)
 
 #define atomic64_dec_return(v) atomic64_sub_return(1, (v))
 #define atomic64_inc_return(v) atomic64_add_return(1, (v))
diff --git a/arch/mips/include/asm/local.h b/arch/mips/include/asm/local.h
index 94fde8d..0242256 100644
--- a/arch/mips/include/asm/local.h
+++ b/arch/mips/include/asm/local.h
@@ -137,6 +137,7 @@ static __inline__ long local_sub_return(long i, local_t * l)
 	c != (u);						\
 })
 #define local_inc_not_zero(l) local_add_unless((l), 1, 0)
+#define local_dec_not_zero(l) local_add_unless((l), -1, 0)
 
 #define local_dec_return(l) local_sub_return(1, (l))
 #define local_inc_return(l) local_add_return(1, (l))
diff --git a/arch/parisc/include/asm/atomic.h b/arch/parisc/include/asm/atomic.h
index b1dc71f..8a50234 100644
--- a/arch/parisc/include/asm/atomic.h
+++ b/arch/parisc/include/asm/atomic.h
@@ -334,6 +334,7 @@ static __inline__ int atomic64_add_unless(atomic64_t *v, long a, long u)
 }
 
 #define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0)
+#define atomic64_dec_not_zero(v) atomic64_add_unless((v), -1, 0)
 
 #endif /* !CONFIG_64BIT */
 
diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h
index e2a4c26..c0131a6 100644
--- a/arch/powerpc/include/asm/atomic.h
+++ b/arch/powerpc/include/asm/atomic.h
@@ -468,6 +468,7 @@ static __inline__ int atomic64_add_unless(atomic64_t *v, long a, long u)
 }
 
 #define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0)
+#define atomic64_dec_not_zero(v) atomic64_add_unless((v), -1, 0)
 
 #endif /* __powerpc64__ */
 
diff --git a/arch/powerpc/include/asm/local.h b/arch/powerpc/include/asm/local.h
index b8da913..d182e34 100644
--- a/arch/powerpc/include/asm/local.h
+++ b/arch/powerpc/include/asm/local.h
@@ -134,6 +134,7 @@ static __inline__ int local_add_unless(local_t *l, long a, long u)
 }
 
 #define local_inc_not_zero(l) local_add_unless((l), 1, 0)
+#define local_dec_not_zero(l) local_add_unless((l), -1, 0)
 
 #define local_sub_and_test(a, l)	(local_sub_return((a), (l)) == 0)
 #define local_dec_and_test(l)		(local_dec_return((l)) == 0)
diff --git a/arch/s390/include/asm/atomic.h b/arch/s390/include/asm/atomic.h
index 8517d2a..92e7d5d 100644
--- a/arch/s390/include/asm/atomic.h
+++ b/arch/s390/include/asm/atomic.h
@@ -325,6 +325,7 @@ static inline long long atomic64_dec_if_positive(atomic64_t *v)
 #define atomic64_dec_return(_v)		atomic64_sub_return(1, _v)
 #define atomic64_dec_and_test(_v)	(atomic64_sub_return(1, _v) == 0)
 #define atomic64_inc_not_zero(v)	atomic64_add_unless((v), 1, 0)
+#define atomic64_dec_not_zero(v)	atomic64_add_unless((v), -1, 0)
 
 #define smp_mb__before_atomic_dec()	smp_mb()
 #define smp_mb__after_atomic_dec()	smp_mb()
diff --git a/arch/sparc/include/asm/atomic_64.h b/arch/sparc/include/asm/atomic_64.h
index 9f421df..94cf160 100644
--- a/arch/sparc/include/asm/atomic_64.h
+++ b/arch/sparc/include/asm/atomic_64.h
@@ -106,6 +106,7 @@ static inline long atomic64_add_unless(atomic64_t *v, long a, long u)
 }
 
 #define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0)
+#define atomic64_dec_not_zero(v) atomic64_add_unless((v), -1, 0)
 
 /* Atomic operations are already serializing */
 #define smp_mb__before_atomic_dec()	barrier()
diff --git a/arch/tile/include/asm/atomic_32.h b/arch/tile/include/asm/atomic_32.h
index c03349e..9cfafb3 100644
--- a/arch/tile/include/asm/atomic_32.h
+++ b/arch/tile/include/asm/atomic_32.h
@@ -233,6 +233,7 @@ static inline void atomic64_set(atomic64_t *v, u64 n)
 #define atomic64_dec_return(v)		atomic64_sub_return(1LL, (v))
 #define atomic64_dec_and_test(v)	(atomic64_dec_return((v)) == 0)
 #define atomic64_inc_not_zero(v)	atomic64_add_unless((v), 1LL, 0LL)
+#define atomic64_dec_not_zero(v)	atomic64_add_unless((v), -1LL, 0LL)
 
 /*
  * We need to barrier before modifying the word, since the _atomic_xxx()
diff --git a/arch/tile/include/asm/atomic_64.h b/arch/tile/include/asm/atomic_64.h
index 27fe667..9c22f50 100644
--- a/arch/tile/include/asm/atomic_64.h
+++ b/arch/tile/include/asm/atomic_64.h
@@ -141,6 +141,7 @@ static inline long atomic64_add_unless(atomic64_t *v, long a, long u)
 #define atomic64_add_negative(i, v)	(atomic64_add_return((i), (v)) < 0)
 
 #define atomic64_inc_not_zero(v)	atomic64_add_unless((v), 1, 0)
+#define atomic64_dec_not_zero(v)	atomic64_add_unless((v), -1, 0)
 
 /* Atomic dec and inc don't implement barrier, so provide them if needed. */
 #define smp_mb__before_atomic_dec()	smp_mb()
diff --git a/arch/um/sys-i386/atomic64_cx8_32.S b/arch/um/sys-i386/atomic64_cx8_32.S
index 1e901d3..a58a1d4 100644
--- a/arch/um/sys-i386/atomic64_cx8_32.S
+++ b/arch/um/sys-i386/atomic64_cx8_32.S
@@ -223,3 +223,31 @@ ENTRY(atomic64_inc_not_zero_cx8)
 	jmp 3b
 	CFI_ENDPROC
 ENDPROC(atomic64_inc_not_zero_cx8)
+
+ENTRY(atomic64_dec_not_zero_cx8)
+	CFI_STARTPROC
+	SAVE ebx
+
+	read64 %esi
+1:
+	testl %eax, %eax
+	je 4f
+2:
+	movl %eax, %ebx
+	movl %edx, %ecx
+	subl $1, %ebx
+	sbbl $0, %ecx
+	LOCK_PREFIX
+	cmpxchg8b (%esi)
+	jne 1b
+
+	movl $1, %eax
+3:
+	RESTORE ebx
+	ret
+4:
+	testl %edx, %edx
+	jne 2b
+	jmp 3b
+	CFI_ENDPROC
+ENDPROC(atomic64_dec_not_zero_cx8)
diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h
index 24098aa..3cd4431 100644
--- a/arch/x86/include/asm/atomic64_32.h
+++ b/arch/x86/include/asm/atomic64_32.h
@@ -287,6 +287,18 @@ static inline int atomic64_inc_not_zero(atomic64_t *v)
 	return r;
 }
 
+
+static inline int atomic64_dec_not_zero(atomic64_t *v)
+{
+	int r;
+	asm volatile(ATOMIC64_ALTERNATIVE(dec_not_zero)
+		     : "=a" (r)
+		     : "S" (v)
+		     : "ecx", "edx", "memory"
+		     );
+	return r;
+}
+
 static inline long long atomic64_dec_if_positive(atomic64_t *v)
 {
 	long long r;
diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h
index 017594d..93c9d8b 100644
--- a/arch/x86/include/asm/atomic64_64.h
+++ b/arch/x86/include/asm/atomic64_64.h
@@ -220,6 +220,7 @@ static inline int atomic64_add_unless(atomic64_t *v, long a, long u)
 }
 
 #define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0)
+#define atomic64_dec_not_zero(v) atomic64_add_unless((v), -1, 0)
 
 /*
  * atomic64_dec_if_positive - decrement by 1 if old value positive
diff --git a/arch/x86/include/asm/local.h b/arch/x86/include/asm/local.h
index 9cdae5d..2c8c92d 100644
--- a/arch/x86/include/asm/local.h
+++ b/arch/x86/include/asm/local.h
@@ -185,6 +185,7 @@ static inline long local_sub_return(long i, local_t *l)
 	c != (u);						\
 })
 #define local_inc_not_zero(l) local_add_unless((l), 1, 0)
+#define local_dec_not_zero(l) local_add_unless((l), -1, 0)
 
 /* On x86_32, these are no better than the atomic variants.
  * On x86-64 these are better than the atomic variants on SMP kernels
diff --git a/arch/x86/lib/atomic64_32.c b/arch/x86/lib/atomic64_32.c
index 042f682..7da05c3 100644
--- a/arch/x86/lib/atomic64_32.c
+++ b/arch/x86/lib/atomic64_32.c
@@ -24,6 +24,8 @@ long long atomic64_dec_if_positive_cx8(atomic64_t *v);
 EXPORT_SYMBOL(atomic64_dec_if_positive_cx8);
 int atomic64_inc_not_zero_cx8(atomic64_t *v);
 EXPORT_SYMBOL(atomic64_inc_not_zero_cx8);
+int atomic64_dec_not_zero_cx8(atomic64_t *v);
+EXPORT_SYMBOL(atomic64_dec_not_zero_cx8);
 int atomic64_add_unless_cx8(atomic64_t *v, long long a, long long u);
 EXPORT_SYMBOL(atomic64_add_unless_cx8);
 
@@ -54,6 +56,8 @@ long long atomic64_dec_if_positive_386(atomic64_t *v);
 EXPORT_SYMBOL(atomic64_dec_if_positive_386);
 int atomic64_inc_not_zero_386(atomic64_t *v);
 EXPORT_SYMBOL(atomic64_inc_not_zero_386);
+int atomic64_dec_not_zero_386(atomic64_t *v);
+EXPORT_SYMBOL(atomic64_dec_not_zero_386);
 int atomic64_add_unless_386(atomic64_t *v, long long a, long long u);
 EXPORT_SYMBOL(atomic64_add_unless_386);
 #endif
diff --git a/arch/x86/lib/atomic64_386_32.S b/arch/x86/lib/atomic64_386_32.S
index e8e7e0d..c78337b 100644
--- a/arch/x86/lib/atomic64_386_32.S
+++ b/arch/x86/lib/atomic64_386_32.S
@@ -181,6 +181,27 @@ ENDP
 #undef v
 
 #define v %esi
+BEGIN(dec_not_zero)
+	movl  (v), %eax
+	movl 4(v), %edx
+	testl %eax, %eax
+	je 3f
+1:
+	subl $1, %eax
+	sbbl $0, %edx
+	movl %eax,  (v)
+	movl %edx, 4(v)
+	movl $1, %eax
+2:
+	RET
+3:
+	testl %edx, %edx
+	jne 1b
+	jmp 2b
+ENDP
+#undef v
+
+#define v %esi
 BEGIN(dec_if_positive)
 	movl  (v), %eax
 	movl 4(v), %edx
diff --git a/arch/x86/lib/atomic64_cx8_32.S b/arch/x86/lib/atomic64_cx8_32.S
index 391a083..989638c 100644
--- a/arch/x86/lib/atomic64_cx8_32.S
+++ b/arch/x86/lib/atomic64_cx8_32.S
@@ -220,3 +220,31 @@ ENTRY(atomic64_inc_not_zero_cx8)
 	jmp 3b
 	CFI_ENDPROC
 ENDPROC(atomic64_inc_not_zero_cx8)
+
+ENTRY(atomic64_dec_not_zero_cx8)
+	CFI_STARTPROC
+	SAVE ebx
+
+	read64 %esi
+1:
+	testl %eax, %eax
+	je 4f
+2:
+	movl %eax, %ebx
+	movl %edx, %ecx
+	subl $1, %ebx
+	sbbl $0, %ecx
+	LOCK_PREFIX
+	cmpxchg8b (%esi)
+	jne 1b
+
+	movl $1, %eax
+3:
+	RESTORE ebx
+	ret
+4:
+	testl %edx, %edx
+	jne 2b
+	jmp 3b
+	CFI_ENDPROC
+ENDPROC(atomic64_dec_not_zero_cx8)
diff --git a/include/asm-generic/atomic-long.h b/include/asm-generic/atomic-long.h
index b7babf0..0fe75ab 100644
--- a/include/asm-generic/atomic-long.h
+++ b/include/asm-generic/atomic-long.h
@@ -130,6 +130,7 @@ static inline long atomic_long_add_unless(atomic_long_t *l, long a, long u)
 }
 
 #define atomic_long_inc_not_zero(l) atomic64_inc_not_zero((atomic64_t *)(l))
+#define atomic_long_dec_not_zero(l) atomic64_dec_not_zero((atomic64_t *)(l))
 
 #define atomic_long_cmpxchg(l, old, new) \
 	(atomic64_cmpxchg((atomic64_t *)(l), (old), (new)))
@@ -247,6 +248,7 @@ static inline long atomic_long_add_unless(atomic_long_t *l, long a, long u)
 }
 
 #define atomic_long_inc_not_zero(l) atomic_inc_not_zero((atomic_t *)(l))
+#define atomic_long_dec_not_zero(l) atomic_dec_not_zero((atomic_t *)(l))
 
 #define atomic_long_cmpxchg(l, old, new) \
 	(atomic_cmpxchg((atomic_t *)(l), (old), (new)))
diff --git a/include/asm-generic/atomic64.h b/include/asm-generic/atomic64.h
index b18ce4f..90ff9b1 100644
--- a/include/asm-generic/atomic64.h
+++ b/include/asm-generic/atomic64.h
@@ -38,5 +38,6 @@ extern int	 atomic64_add_unless(atomic64_t *v, long long a, long long u);
 #define atomic64_dec_return(v)		atomic64_sub_return(1LL, (v))
 #define atomic64_dec_and_test(v)	(atomic64_dec_return((v)) == 0)
 #define atomic64_inc_not_zero(v) 	atomic64_add_unless((v), 1LL, 0LL)
+#define atomic64_dec_not_zero(v)	atomic64_add_unless((v), -1LL, 0LL)
 
 #endif  /*  _ASM_GENERIC_ATOMIC64_H  */
diff --git a/include/asm-generic/local.h b/include/asm-generic/local.h
index 9ceb03b..fabf4f3 100644
--- a/include/asm-generic/local.h
+++ b/include/asm-generic/local.h
@@ -44,6 +44,7 @@ typedef struct
 #define local_xchg(l, n) atomic_long_xchg((&(l)->a), (n))
 #define local_add_unless(l, _a, u) atomic_long_add_unless((&(l)->a), (_a), (u))
 #define local_inc_not_zero(l) atomic_long_inc_not_zero(&(l)->a)
+#define local_dec_not_zero(l) atomic_long_dec_not_zero(&(l)->a)
 
 /* Non-atomic variants, ie. preemption disabled and won't be touched
  * in interrupt, etc.  Some archs can optimize this case well. */
diff --git a/include/asm-generic/local64.h b/include/asm-generic/local64.h
index 5980002..76acbe2 100644
--- a/include/asm-generic/local64.h
+++ b/include/asm-generic/local64.h
@@ -45,6 +45,7 @@ typedef struct {
 #define local64_xchg(l, n)	local_xchg((&(l)->a), (n))
 #define local64_add_unless(l, _a, u) local_add_unless((&(l)->a), (_a), (u))
 #define local64_inc_not_zero(l)	local_inc_not_zero(&(l)->a)
+#define local64_dec_not_zero(l)	local_dec_not_zero(&(l)->a)
 
 /* Non-atomic variants, ie. preemption disabled and won't be touched
  * in interrupt, etc.  Some archs can optimize this case well. */
@@ -83,6 +84,7 @@ typedef struct {
 #define local64_xchg(l, n)	atomic64_xchg((&(l)->a), (n))
 #define local64_add_unless(l, _a, u) atomic64_add_unless((&(l)->a), (_a), (u))
 #define local64_inc_not_zero(l)	atomic64_inc_not_zero(&(l)->a)
+#define local64_dec_not_zero(l)	atomic64_dec_not_zero(&(l)->a)
 
 /* Non-atomic variants, ie. preemption disabled and won't be touched
  * in interrupt, etc.  Some archs can optimize this case well. */
diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 42b77b5..ad2b750 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -27,6 +27,15 @@ static inline int atomic_add_unless(atomic_t *v, int a, int u)
 #define atomic_inc_not_zero(v)		atomic_add_unless((v), 1, 0)
 
 /**
+ * atomic_dec_not_zero - decrement unless the number is zero
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrements @v by 1, so long as @v is non-zero.
+ * Returns non-zero if @v was non-zero, and zero otherwise.
+ */
+#define atomic_dec_not_zero(v)		atomic_add_unless((v), -1, 0)
+
+/**
  * atomic_inc_not_zero_hint - increment if not null
  * @v: pointer of type atomic_t
  * @hint: probable value of the atomic before the increment
-- 
1.7.5.4

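For readers less familiar with the cx8 assembly above, the following is
a rough C-level equivalent of the atomic64_dec_not_zero_cx8 loop
(illustrative only, again using GCC __atomic builtins rather than the
kernel API; the authoritative code is the assembly in the patch):

/* Returns non-zero if the 64-bit value was decremented. */
static int dec_not_zero_cx8_model(unsigned long long *v)
{
	unsigned long long old = __atomic_load_n(v, __ATOMIC_RELAXED);

	while (old != 0) {
		/*
		 * cmpxchg8b: install old - 1 only if *v still equals old;
		 * on failure, old is refreshed with the current value.
		 */
		if (__atomic_compare_exchange_n(v, &old, old - 1, 0,
						__ATOMIC_SEQ_CST,
						__ATOMIC_RELAXED))
			return 1;
	}
	return 0;
}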
 #define local64_add_unless(l, _a, u) local_add_unless((&(l)->a), (_a), (u))
 #define local64_inc_not_zero(l)	local_inc_not_zero(&(l)->a)
+#define local64_dec_not_zero(l)	local_dec_not_zero(&(l)->a)
 
 /* Non-atomic variants, ie. preemption disabled and won't be touched
  * in interrupt, etc.  Some archs can optimize this case well. */
@@ -83,6 +84,7 @@ typedef struct {
 #define local64_xchg(l, n)	atomic64_xchg((&(l)->a), (n))
 #define local64_add_unless(l, _a, u) atomic64_add_unless((&(l)->a), (_a), (u))
 #define local64_inc_not_zero(l)	atomic64_inc_not_zero(&(l)->a)
+#define local64_dec_not_zero(l)	atomic64_dec_not_zero(&(l)->a)
 
 /* Non-atomic variants, ie. preemption disabled and won't be touched
  * in interrupt, etc.  Some archs can optimize this case well. */
diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 42b77b5..ad2b750 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -27,6 +27,15 @@ static inline int atomic_add_unless(atomic_t *v, int a, int u)
 #define atomic_inc_not_zero(v)		atomic_add_unless((v), 1, 0)
 
 /**
+ * atomic_dec_not_zero - decrement unless the number is zero
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrements @v by 1, so long as @v is non-zero.
+ * Returns non-zero if @v was non-zero, and zero otherwise.
+ */
+#define atomic_dec_not_zero(v)		atomic_add_unless((v), -1, 0)
+
+/**
  * atomic_inc_not_zero_hint - increment if not null
  * @v: pointer of type atomic_t
  * @hint: probable value of the atomic before the increment
-- 
1.7.5.4

^ permalink raw reply related	[flat|nested] 38+ messages in thread
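
For readers skimming the archive: the helper added above is nothing more than
the usual compare-and-swap loop behind atomic_add_unless(), specialised to
"subtract one unless already zero".  A minimal userspace sketch of that
behaviour (a local dec_not_zero() helper using GCC's __atomic builtins and a
plain int instead of the kernel's atomic_t, purely for illustration):

#include <stdio.h>

/* Sketch only: decrement *v unless it is already zero, mirroring
 * atomic_dec_not_zero(v) == atomic_add_unless(v, -1, 0).
 * Returns non-zero if the decrement happened, zero otherwise.
 */
static int dec_not_zero(int *v)
{
	int old = __atomic_load_n(v, __ATOMIC_RELAXED);

	while (old != 0) {
		/* try to install old - 1; on failure 'old' is refreshed
		 * with the current value and the loop retries */
		if (__atomic_compare_exchange_n(v, &old, old - 1, 0,
						__ATOMIC_SEQ_CST,
						__ATOMIC_RELAXED))
			return 1;
	}
	return 0;	/* already zero, left untouched */
}

int main(void)
{
	int refs = 2;

	printf("%d\n", dec_not_zero(&refs));	/* 1, refs is now 1 */
	printf("%d\n", dec_not_zero(&refs));	/* 1, refs is now 0 */
	printf("%d\n", dec_not_zero(&refs));	/* 0, refs stays 0  */
	return 0;
}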

* [PATCHv4 01/11] atomic: add *_dec_not_zero
@ 2011-07-27  9:47 ` Sven Eckelmann
  0 siblings, 0 replies; 38+ messages in thread
From: Sven Eckelmann @ 2011-07-27  9:47 UTC (permalink / raw)
  To: linux-arm-kernel

Introduce an *_dec_not_zero operation.  Make this a special case of
*_add_unless because batman-adv uses atomic_dec_not_zero in different
places like re-broadcast queue or aggregation queue management. There
are other non-final patches which may also want to use this macro.

Reported-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sven Eckelmann <sven@narfation.org>
Cc: Randy Dunlap <rdunlap@xenotime.net>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Hirokazu Takata <takata@linux-m32r.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Helge Deller <deller@gmx.de>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: linux390 at de.ibm.com
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86 at kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: linux-doc at vger.kernel.org
Cc: linux-kernel at vger.kernel.org
Cc: linux-alpha at vger.kernel.org
Cc: linux-arm-kernel at lists.infradead.org
Cc: linux-ia64 at vger.kernel.org
Cc: linux-m32r at ml.linux-m32r.org
Cc: linux-m32r-ja at ml.linux-m32r.org
Cc: linux-mips at linux-mips.org
Cc: linux-parisc at vger.kernel.org
Cc: linuxppc-dev at lists.ozlabs.org
Cc: linux-s390 at vger.kernel.org
Cc: sparclinux at vger.kernel.org
Cc: user-mode-linux-devel at lists.sourceforge.net
---
David S. Miller recommended this change in
 https://lists.open-mesh.org/pipermail/b.a.t.m.a.n/2011-May/004560.html

Arnd Bergmann wanted to apply it in 201106172320.26476.arnd at arndb.de

... and then Arun Sharma created a big merge conflict with
https://lkml.org/lkml/2011/6/6/430

I don't think that it is a good idea to assume that everyone still agrees
with the patch after I've rewritten it.

 Documentation/atomic_ops.txt       |    1 +
 arch/alpha/include/asm/atomic.h    |    1 +
 arch/alpha/include/asm/local.h     |    1 +
 arch/arm/include/asm/atomic.h      |    1 +
 arch/ia64/include/asm/atomic.h     |    1 +
 arch/m32r/include/asm/local.h      |    1 +
 arch/mips/include/asm/atomic.h     |    1 +
 arch/mips/include/asm/local.h      |    1 +
 arch/parisc/include/asm/atomic.h   |    1 +
 arch/powerpc/include/asm/atomic.h  |    1 +
 arch/powerpc/include/asm/local.h   |    1 +
 arch/s390/include/asm/atomic.h     |    1 +
 arch/sparc/include/asm/atomic_64.h |    1 +
 arch/tile/include/asm/atomic_32.h  |    1 +
 arch/tile/include/asm/atomic_64.h  |    1 +
 arch/um/sys-i386/atomic64_cx8_32.S |   28 ++++++++++++++++++++++++++++
 arch/x86/include/asm/atomic64_32.h |   12 ++++++++++++
 arch/x86/include/asm/atomic64_64.h |    1 +
 arch/x86/include/asm/local.h       |    1 +
 arch/x86/lib/atomic64_32.c         |    4 ++++
 arch/x86/lib/atomic64_386_32.S     |   21 +++++++++++++++++++++
 arch/x86/lib/atomic64_cx8_32.S     |   28 ++++++++++++++++++++++++++++
 include/asm-generic/atomic-long.h  |    2 ++
 include/asm-generic/atomic64.h     |    1 +
 include/asm-generic/local.h        |    1 +
 include/asm-generic/local64.h      |    2 ++
 include/linux/atomic.h             |    9 +++++++++
 27 files changed, 125 insertions(+), 0 deletions(-)

diff --git a/Documentation/atomic_ops.txt b/Documentation/atomic_ops.txt
index 3bd585b..1eec221 100644
--- a/Documentation/atomic_ops.txt
+++ b/Documentation/atomic_ops.txt
@@ -190,6 +190,7 @@ atomic_add_unless requires explicit memory barriers around the operation
 unless it fails (returns 0).
 
 atomic_inc_not_zero, equivalent to atomic_add_unless(v, 1, 0)
+atomic_dec_not_zero, equivalent to atomic_add_unless(v, -1, 0)
 
 
 If a caller requires memory barrier semantics around an atomic_t
diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h
index 640f909..09d1571 100644
--- a/arch/alpha/include/asm/atomic.h
+++ b/arch/alpha/include/asm/atomic.h
@@ -225,6 +225,7 @@ static __inline__ int atomic64_add_unless(atomic64_t *v, long a, long u)
 }
 
 #define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0)
+#define atomic64_dec_not_zero(v) atomic64_add_unless((v), -1, 0)
 
 #define atomic_add_negative(a, v) (atomic_add_return((a), (v)) < 0)
 #define atomic64_add_negative(a, v) (atomic64_add_return((a), (v)) < 0)
diff --git a/arch/alpha/include/asm/local.h b/arch/alpha/include/asm/local.h
index 9c94b84..51eb678 100644
--- a/arch/alpha/include/asm/local.h
+++ b/arch/alpha/include/asm/local.h
@@ -79,6 +79,7 @@ static __inline__ long local_sub_return(long i, local_t * l)
 	c != (u);						\
 })
 #define local_inc_not_zero(l) local_add_unless((l), 1, 0)
+#define local_dec_not_zero(l) local_add_unless((l), -1, 0)
 
 #define local_add_negative(a, l) (local_add_return((a), (l)) < 0)
 
diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h
index 86976d0..80ed975 100644
--- a/arch/arm/include/asm/atomic.h
+++ b/arch/arm/include/asm/atomic.h
@@ -458,6 +458,7 @@ static inline int atomic64_add_unless(atomic64_t *v, u64 a, u64 u)
 #define atomic64_dec_return(v)		atomic64_sub_return(1LL, (v))
 #define atomic64_dec_and_test(v)	(atomic64_dec_return((v)) == 0)
 #define atomic64_inc_not_zero(v)	atomic64_add_unless((v), 1LL, 0LL)
+#define atomic64_dec_not_zero(v)	atomic64_add_unless((v), -1LL, 0LL)
 
 #endif /* !CONFIG_GENERIC_ATOMIC64 */
 #endif
diff --git a/arch/ia64/include/asm/atomic.h b/arch/ia64/include/asm/atomic.h
index 3fad89e..af6e9b2 100644
--- a/arch/ia64/include/asm/atomic.h
+++ b/arch/ia64/include/asm/atomic.h
@@ -122,6 +122,7 @@ static __inline__ long atomic64_add_unless(atomic64_t *v, long a, long u)
 }
 
 #define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0)
+#define atomic64_dec_not_zero(v) atomic64_add_unless((v), -1, 0)
 
 #define atomic_add_return(i,v)						\
 ({									\
diff --git a/arch/m32r/include/asm/local.h b/arch/m32r/include/asm/local.h
index 734bca8..d536082 100644
--- a/arch/m32r/include/asm/local.h
+++ b/arch/m32r/include/asm/local.h
@@ -272,6 +272,7 @@ static inline int local_add_unless(local_t *l, long a, long u)
 }
 
 #define local_inc_not_zero(l) local_add_unless((l), 1, 0)
+#define local_dec_not_zero(l) local_add_unless((l), -1, 0)
 
 static inline void local_clear_mask(unsigned long  mask, local_t *addr)
 {
diff --git a/arch/mips/include/asm/atomic.h b/arch/mips/include/asm/atomic.h
index 1d93f81..babb043 100644
--- a/arch/mips/include/asm/atomic.h
+++ b/arch/mips/include/asm/atomic.h
@@ -697,6 +697,7 @@ static __inline__ int atomic64_add_unless(atomic64_t *v, long a, long u)
 }
 
 #define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0)
+#define atomic64_dec_not_zero(v) atomic64_add_unless((v), -1, 0)
 
 #define atomic64_dec_return(v) atomic64_sub_return(1, (v))
 #define atomic64_inc_return(v) atomic64_add_return(1, (v))
diff --git a/arch/mips/include/asm/local.h b/arch/mips/include/asm/local.h
index 94fde8d..0242256 100644
--- a/arch/mips/include/asm/local.h
+++ b/arch/mips/include/asm/local.h
@@ -137,6 +137,7 @@ static __inline__ long local_sub_return(long i, local_t * l)
 	c != (u);						\
 })
 #define local_inc_not_zero(l) local_add_unless((l), 1, 0)
+#define local_dec_not_zero(l) local_add_unless((l), -1, 0)
 
 #define local_dec_return(l) local_sub_return(1, (l))
 #define local_inc_return(l) local_add_return(1, (l))
diff --git a/arch/parisc/include/asm/atomic.h b/arch/parisc/include/asm/atomic.h
index b1dc71f..8a50234 100644
--- a/arch/parisc/include/asm/atomic.h
+++ b/arch/parisc/include/asm/atomic.h
@@ -334,6 +334,7 @@ static __inline__ int atomic64_add_unless(atomic64_t *v, long a, long u)
 }
 
 #define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0)
+#define atomic64_dec_not_zero(v) atomic64_add_unless((v), -1, 0)
 
 #endif /* !CONFIG_64BIT */
 
diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h
index e2a4c26..c0131a6 100644
--- a/arch/powerpc/include/asm/atomic.h
+++ b/arch/powerpc/include/asm/atomic.h
@@ -468,6 +468,7 @@ static __inline__ int atomic64_add_unless(atomic64_t *v, long a, long u)
 }
 
 #define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0)
+#define atomic64_dec_not_zero(v) atomic64_add_unless((v), -1, 0)
 
 #endif /* __powerpc64__ */
 
diff --git a/arch/powerpc/include/asm/local.h b/arch/powerpc/include/asm/local.h
index b8da913..d182e34 100644
--- a/arch/powerpc/include/asm/local.h
+++ b/arch/powerpc/include/asm/local.h
@@ -134,6 +134,7 @@ static __inline__ int local_add_unless(local_t *l, long a, long u)
 }
 
 #define local_inc_not_zero(l) local_add_unless((l), 1, 0)
+#define local_dec_not_zero(l) local_add_unless((l), -1, 0)
 
 #define local_sub_and_test(a, l)	(local_sub_return((a), (l)) == 0)
 #define local_dec_and_test(l)		(local_dec_return((l)) == 0)
diff --git a/arch/s390/include/asm/atomic.h b/arch/s390/include/asm/atomic.h
index 8517d2a..92e7d5d 100644
--- a/arch/s390/include/asm/atomic.h
+++ b/arch/s390/include/asm/atomic.h
@@ -325,6 +325,7 @@ static inline long long atomic64_dec_if_positive(atomic64_t *v)
 #define atomic64_dec_return(_v)		atomic64_sub_return(1, _v)
 #define atomic64_dec_and_test(_v)	(atomic64_sub_return(1, _v) == 0)
 #define atomic64_inc_not_zero(v)	atomic64_add_unless((v), 1, 0)
+#define atomic64_dec_not_zero(v)	atomic64_add_unless((v), -1, 0)
 
 #define smp_mb__before_atomic_dec()	smp_mb()
 #define smp_mb__after_atomic_dec()	smp_mb()
diff --git a/arch/sparc/include/asm/atomic_64.h b/arch/sparc/include/asm/atomic_64.h
index 9f421df..94cf160 100644
--- a/arch/sparc/include/asm/atomic_64.h
+++ b/arch/sparc/include/asm/atomic_64.h
@@ -106,6 +106,7 @@ static inline long atomic64_add_unless(atomic64_t *v, long a, long u)
 }
 
 #define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0)
+#define atomic64_dec_not_zero(v) atomic64_add_unless((v), -1, 0)
 
 /* Atomic operations are already serializing */
 #define smp_mb__before_atomic_dec()	barrier()
diff --git a/arch/tile/include/asm/atomic_32.h b/arch/tile/include/asm/atomic_32.h
index c03349e..9cfafb3 100644
--- a/arch/tile/include/asm/atomic_32.h
+++ b/arch/tile/include/asm/atomic_32.h
@@ -233,6 +233,7 @@ static inline void atomic64_set(atomic64_t *v, u64 n)
 #define atomic64_dec_return(v)		atomic64_sub_return(1LL, (v))
 #define atomic64_dec_and_test(v)	(atomic64_dec_return((v)) == 0)
 #define atomic64_inc_not_zero(v)	atomic64_add_unless((v), 1LL, 0LL)
+#define atomic64_dec_not_zero(v)	atomic64_add_unless((v), -1LL, 0LL)
 
 /*
  * We need to barrier before modifying the word, since the _atomic_xxx()
diff --git a/arch/tile/include/asm/atomic_64.h b/arch/tile/include/asm/atomic_64.h
index 27fe667..9c22f50 100644
--- a/arch/tile/include/asm/atomic_64.h
+++ b/arch/tile/include/asm/atomic_64.h
@@ -141,6 +141,7 @@ static inline long atomic64_add_unless(atomic64_t *v, long a, long u)
 #define atomic64_add_negative(i, v)	(atomic64_add_return((i), (v)) < 0)
 
 #define atomic64_inc_not_zero(v)	atomic64_add_unless((v), 1, 0)
+#define atomic64_dec_not_zero(v)	atomic64_add_unless((v), -1, 0)
 
 /* Atomic dec and inc don't implement barrier, so provide them if needed. */
 #define smp_mb__before_atomic_dec()	smp_mb()
diff --git a/arch/um/sys-i386/atomic64_cx8_32.S b/arch/um/sys-i386/atomic64_cx8_32.S
index 1e901d3..a58a1d4 100644
--- a/arch/um/sys-i386/atomic64_cx8_32.S
+++ b/arch/um/sys-i386/atomic64_cx8_32.S
@@ -223,3 +223,31 @@ ENTRY(atomic64_inc_not_zero_cx8)
 	jmp 3b
 	CFI_ENDPROC
 ENDPROC(atomic64_inc_not_zero_cx8)
+
+ENTRY(atomic64_dec_not_zero_cx8)
+	CFI_STARTPROC
+	SAVE ebx
+
+	read64 %esi
+1:
+	testl %eax, %eax
+	je 4f
+2:
+	movl %eax, %ebx
+	movl %edx, %ecx
+	subl $1, %ebx
+	sbbl $0, %ecx
+	LOCK_PREFIX
+	cmpxchg8b (%esi)
+	jne 1b
+
+	movl $1, %eax
+3:
+	RESTORE ebx
+	ret
+4:
+	testl %edx, %edx
+	jne 2b
+	jmp 3b
+	CFI_ENDPROC
+ENDPROC(atomic64_dec_not_zero_cx8)
diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h
index 24098aa..3cd4431 100644
--- a/arch/x86/include/asm/atomic64_32.h
+++ b/arch/x86/include/asm/atomic64_32.h
@@ -287,6 +287,18 @@ static inline int atomic64_inc_not_zero(atomic64_t *v)
 	return r;
 }
 
+
+static inline int atomic64_dec_not_zero(atomic64_t *v)
+{
+	int r;
+	asm volatile(ATOMIC64_ALTERNATIVE(dec_not_zero)
+		     : "=a" (r)
+		     : "S" (v)
+		     : "ecx", "edx", "memory"
+		     );
+	return r;
+}
+
 static inline long long atomic64_dec_if_positive(atomic64_t *v)
 {
 	long long r;
diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h
index 017594d..93c9d8b 100644
--- a/arch/x86/include/asm/atomic64_64.h
+++ b/arch/x86/include/asm/atomic64_64.h
@@ -220,6 +220,7 @@ static inline int atomic64_add_unless(atomic64_t *v, long a, long u)
 }
 
 #define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0)
+#define atomic64_dec_not_zero(v) atomic64_add_unless((v), -1, 0)
 
 /*
  * atomic64_dec_if_positive - decrement by 1 if old value positive
diff --git a/arch/x86/include/asm/local.h b/arch/x86/include/asm/local.h
index 9cdae5d..2c8c92d 100644
--- a/arch/x86/include/asm/local.h
+++ b/arch/x86/include/asm/local.h
@@ -185,6 +185,7 @@ static inline long local_sub_return(long i, local_t *l)
 	c != (u);						\
 })
 #define local_inc_not_zero(l) local_add_unless((l), 1, 0)
+#define local_dec_not_zero(l) local_add_unless((l), -1, 0)
 
 /* On x86_32, these are no better than the atomic variants.
  * On x86-64 these are better than the atomic variants on SMP kernels
diff --git a/arch/x86/lib/atomic64_32.c b/arch/x86/lib/atomic64_32.c
index 042f682..7da05c3 100644
--- a/arch/x86/lib/atomic64_32.c
+++ b/arch/x86/lib/atomic64_32.c
@@ -24,6 +24,8 @@ long long atomic64_dec_if_positive_cx8(atomic64_t *v);
 EXPORT_SYMBOL(atomic64_dec_if_positive_cx8);
 int atomic64_inc_not_zero_cx8(atomic64_t *v);
 EXPORT_SYMBOL(atomic64_inc_not_zero_cx8);
+int atomic64_dec_not_zero_cx8(atomic64_t *v);
+EXPORT_SYMBOL(atomic64_dec_not_zero_cx8);
 int atomic64_add_unless_cx8(atomic64_t *v, long long a, long long u);
 EXPORT_SYMBOL(atomic64_add_unless_cx8);
 
@@ -54,6 +56,8 @@ long long atomic64_dec_if_positive_386(atomic64_t *v);
 EXPORT_SYMBOL(atomic64_dec_if_positive_386);
 int atomic64_inc_not_zero_386(atomic64_t *v);
 EXPORT_SYMBOL(atomic64_inc_not_zero_386);
+int atomic64_dec_not_zero_386(atomic64_t *v);
+EXPORT_SYMBOL(atomic64_dec_not_zero_386);
 int atomic64_add_unless_386(atomic64_t *v, long long a, long long u);
 EXPORT_SYMBOL(atomic64_add_unless_386);
 #endif
diff --git a/arch/x86/lib/atomic64_386_32.S b/arch/x86/lib/atomic64_386_32.S
index e8e7e0d..c78337b 100644
--- a/arch/x86/lib/atomic64_386_32.S
+++ b/arch/x86/lib/atomic64_386_32.S
@@ -181,6 +181,27 @@ ENDP
 #undef v
 
 #define v %esi
+BEGIN(dec_not_zero)
+	movl  (v), %eax
+	movl 4(v), %edx
+	testl %eax, %eax
+	je 3f
+1:
+	subl $1, %eax
+	sbbl $0, %edx
+	movl %eax,  (v)
+	movl %edx, 4(v)
+	movl $1, %eax
+2:
+	RET
+3:
+	testl %edx, %edx
+	jne 1b
+	jmp 2b
+ENDP
+#undef v
+
+#define v %esi
 BEGIN(dec_if_positive)
 	movl  (v), %eax
 	movl 4(v), %edx
diff --git a/arch/x86/lib/atomic64_cx8_32.S b/arch/x86/lib/atomic64_cx8_32.S
index 391a083..989638c 100644
--- a/arch/x86/lib/atomic64_cx8_32.S
+++ b/arch/x86/lib/atomic64_cx8_32.S
@@ -220,3 +220,31 @@ ENTRY(atomic64_inc_not_zero_cx8)
 	jmp 3b
 	CFI_ENDPROC
 ENDPROC(atomic64_inc_not_zero_cx8)
+
+ENTRY(atomic64_dec_not_zero_cx8)
+	CFI_STARTPROC
+	SAVE ebx
+
+	read64 %esi
+1:
+	testl %eax, %eax
+	je 4f
+2:
+	movl %eax, %ebx
+	movl %edx, %ecx
+	subl $1, %ebx
+	sbbl $0, %ecx
+	LOCK_PREFIX
+	cmpxchg8b (%esi)
+	jne 1b
+
+	movl $1, %eax
+3:
+	RESTORE ebx
+	ret
+4:
+	testl %edx, %edx
+	jne 2b
+	jmp 3b
+	CFI_ENDPROC
+ENDPROC(atomic64_dec_not_zero_cx8)
diff --git a/include/asm-generic/atomic-long.h b/include/asm-generic/atomic-long.h
index b7babf0..0fe75ab 100644
--- a/include/asm-generic/atomic-long.h
+++ b/include/asm-generic/atomic-long.h
@@ -130,6 +130,7 @@ static inline long atomic_long_add_unless(atomic_long_t *l, long a, long u)
 }
 
 #define atomic_long_inc_not_zero(l) atomic64_inc_not_zero((atomic64_t *)(l))
+#define atomic_long_dec_not_zero(l) atomic64_dec_not_zero((atomic64_t *)(l))
 
 #define atomic_long_cmpxchg(l, old, new) \
 	(atomic64_cmpxchg((atomic64_t *)(l), (old), (new)))
@@ -247,6 +248,7 @@ static inline long atomic_long_add_unless(atomic_long_t *l, long a, long u)
 }
 
 #define atomic_long_inc_not_zero(l) atomic_inc_not_zero((atomic_t *)(l))
+#define atomic_long_dec_not_zero(l) atomic_dec_not_zero((atomic_t *)(l))
 
 #define atomic_long_cmpxchg(l, old, new) \
 	(atomic_cmpxchg((atomic_t *)(l), (old), (new)))
diff --git a/include/asm-generic/atomic64.h b/include/asm-generic/atomic64.h
index b18ce4f..90ff9b1 100644
--- a/include/asm-generic/atomic64.h
+++ b/include/asm-generic/atomic64.h
@@ -38,5 +38,6 @@ extern int	 atomic64_add_unless(atomic64_t *v, long long a, long long u);
 #define atomic64_dec_return(v)		atomic64_sub_return(1LL, (v))
 #define atomic64_dec_and_test(v)	(atomic64_dec_return((v)) == 0)
 #define atomic64_inc_not_zero(v) 	atomic64_add_unless((v), 1LL, 0LL)
+#define atomic64_dec_not_zero(v)	atomic64_add_unless((v), -1LL, 0LL)
 
 #endif  /*  _ASM_GENERIC_ATOMIC64_H  */
diff --git a/include/asm-generic/local.h b/include/asm-generic/local.h
index 9ceb03b..fabf4f3 100644
--- a/include/asm-generic/local.h
+++ b/include/asm-generic/local.h
@@ -44,6 +44,7 @@ typedef struct
 #define local_xchg(l, n) atomic_long_xchg((&(l)->a), (n))
 #define local_add_unless(l, _a, u) atomic_long_add_unless((&(l)->a), (_a), (u))
 #define local_inc_not_zero(l) atomic_long_inc_not_zero(&(l)->a)
+#define local_dec_not_zero(l) atomic_long_dec_not_zero(&(l)->a)
 
 /* Non-atomic variants, ie. preemption disabled and won't be touched
  * in interrupt, etc.  Some archs can optimize this case well. */
diff --git a/include/asm-generic/local64.h b/include/asm-generic/local64.h
index 5980002..76acbe2 100644
--- a/include/asm-generic/local64.h
+++ b/include/asm-generic/local64.h
@@ -45,6 +45,7 @@ typedef struct {
 #define local64_xchg(l, n)	local_xchg((&(l)->a), (n))
 #define local64_add_unless(l, _a, u) local_add_unless((&(l)->a), (_a), (u))
 #define local64_inc_not_zero(l)	local_inc_not_zero(&(l)->a)
+#define local64_dec_not_zero(l)	local_dec_not_zero(&(l)->a)
 
 /* Non-atomic variants, ie. preemption disabled and won't be touched
  * in interrupt, etc.  Some archs can optimize this case well. */
@@ -83,6 +84,7 @@ typedef struct {
 #define local64_xchg(l, n)	atomic64_xchg((&(l)->a), (n))
 #define local64_add_unless(l, _a, u) atomic64_add_unless((&(l)->a), (_a), (u))
 #define local64_inc_not_zero(l)	atomic64_inc_not_zero(&(l)->a)
+#define local64_dec_not_zero(l)	atomic64_dec_not_zero(&(l)->a)
 
 /* Non-atomic variants, ie. preemption disabled and won't be touched
  * in interrupt, etc.  Some archs can optimize this case well. */
diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 42b77b5..ad2b750 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -27,6 +27,15 @@ static inline int atomic_add_unless(atomic_t *v, int a, int u)
 #define atomic_inc_not_zero(v)		atomic_add_unless((v), 1, 0)
 
 /**
+ * atomic_dec_not_zero - decrement unless the number is zero
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrements @v by 1, so long as @v is non-zero.
+ * Returns non-zero if @v was non-zero, and zero otherwise.
+ */
+#define atomic_dec_not_zero(v)		atomic_add_unless((v), -1, 0)
+
+/**
  * atomic_inc_not_zero_hint - increment if not null
  * @v: pointer of type atomic_t
  * @hint: probable value of the atomic before the increment
-- 
1.7.5.4

^ permalink raw reply related	[flat|nested] 38+ messages in thread
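
For 32-bit architectures that fall back to the generic atomic64 code, the
asm-generic hunk above only adds the macro; the real work stays in the
out-of-line atomic64_add_unless() (declared extern in
include/asm-generic/atomic64.h), which is essentially a lock-protected
read-modify-write.  A simplified userspace sketch of that fallback, with one
pthread mutex standing in for the kernel's internal locking (illustration
only, not the actual generic implementation; build with -pthread):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* add 'a' to *v unless *v == u; returns non-zero if the add happened */
static int add_unless64(long long *v, long long a, long long u)
{
	int ret = 0;

	pthread_mutex_lock(&lock);
	if (*v != u) {
		*v += a;
		ret = 1;
	}
	pthread_mutex_unlock(&lock);
	return ret;
}

/* the new macro is then only a thin wrapper around the helper above */
#define dec_not_zero64(v)	add_unless64((v), -1LL, 0LL)

int main(void)
{
	long long counter = 1;

	printf("%d %lld\n", dec_not_zero64(&counter), counter);	/* 1 0 */
	printf("%d %lld\n", dec_not_zero64(&counter), counter);	/* 0 0 */
	return 0;
}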

* [PATCHv4 02/11] batman-adv: Remove private define of atomic_dec_not_zero
@ 2011-07-27  9:47   ` Sven Eckelmann
  0 siblings, 0 replies; 38+ messages in thread
From: Sven Eckelmann @ 2011-07-27  9:47 UTC (permalink / raw)
  To: linux-arch
  Cc: linux-kernel, Sven Eckelmann, Marek Lindner, Simon Wunderlich,
	b.a.t.m.a.n

atomic_dec_not_zero is defined through <linux/atomic.h> for all
architectures and batman-adv doesn't need an extra define which may
collide with the global one.

Signed-off-by: Sven Eckelmann <sven@narfation.org>
Cc: Marek Lindner <lindner_marek@yahoo.de>
Cc: Simon Wunderlich <siwu@hrz.tu-chemnitz.de>
Cc: b.a.t.m.a.n@lists.open-mesh.org
---
 net/batman-adv/main.h |    2 --
 1 files changed, 0 insertions(+), 2 deletions(-)

diff --git a/net/batman-adv/main.h b/net/batman-adv/main.h
index a6df61a..d986f34 100644
--- a/net/batman-adv/main.h
+++ b/net/batman-adv/main.h
@@ -201,8 +201,6 @@ static inline int compare_eth(const void *data1, const void *data2)
 }
 
 
-#define atomic_dec_not_zero(v)	atomic_add_unless((v), -1, 0)
-
 /* Returns the smallest signed integer in two's complement with the sizeof x */
 #define smallest_signed_int(x) (1u << (7u + 8u * (sizeof(x) - 1u)))
 
-- 
1.7.5.4


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCHv4 02/11] batman-adv: Remove private define of atomic_dec_not_zero
@ 2011-07-27  9:47   ` Sven Eckelmann
  0 siblings, 0 replies; 38+ messages in thread
From: Sven Eckelmann @ 2011-07-27  9:47 UTC (permalink / raw)
  To: linux-arch-u79uwXL29TY76Z2rM5mHXA
  Cc: Simon Wunderlich, Marek Lindner,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	b.a.t.m.a.n-ZwoEplunGu2X36UT3dwllkB+6BGkLq7r

atomic_dec_not_zero is defined through <linux/atomic.h> for all
architectures and batman-adv doesn't need an extra define which may
collide with the global one.

Signed-off-by: Sven Eckelmann <sven-KaDOiPu9UxWEi8DpZVb4nw@public.gmane.org>
Cc: Marek Lindner <lindner_marek-LWAfsSFWpa4@public.gmane.org>
Cc: Simon Wunderlich <siwu-MaAgPAbsBIVS8oHt8HbXEIQuADTiUCJX@public.gmane.org>
Cc: b.a.t.m.a.n-ZwoEplunGu2X36UT3dwllkB+6BGkLq7r@public.gmane.org
---
 net/batman-adv/main.h |    2 --
 1 files changed, 0 insertions(+), 2 deletions(-)

diff --git a/net/batman-adv/main.h b/net/batman-adv/main.h
index a6df61a..d986f34 100644
--- a/net/batman-adv/main.h
+++ b/net/batman-adv/main.h
@@ -201,8 +201,6 @@ static inline int compare_eth(const void *data1, const void *data2)
 }
 
 
-#define atomic_dec_not_zero(v)	atomic_add_unless((v), -1, 0)
-
 /* Returns the smallest signed integer in two's complement with the sizeof x */
 #define smallest_signed_int(x) (1u << (7u + 8u * (sizeof(x) - 1u)))
 
-- 
1.7.5.4

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [B.A.T.M.A.N.] [PATCHv4 02/11] batman-adv: Remove private define of atomic_dec_not_zero
@ 2011-07-27  9:47   ` Sven Eckelmann
  0 siblings, 0 replies; 38+ messages in thread
From: Sven Eckelmann @ 2011-07-27  9:47 UTC (permalink / raw)
  To: linux-arch; +Cc: Simon Wunderlich, Marek Lindner, linux-kernel, b.a.t.m.a.n

atomic_dec_not_zero is defined through <linux/atomic.h> for all
architectures and batman-adv doesn't need an extra define which may
collide with the global one.

Signed-off-by: Sven Eckelmann <sven@narfation.org>
Cc: Marek Lindner <lindner_marek@yahoo.de>
Cc: Simon Wunderlich <siwu@hrz.tu-chemnitz.de>
Cc: b.a.t.m.a.n@lists.open-mesh.org
---
 net/batman-adv/main.h |    2 --
 1 files changed, 0 insertions(+), 2 deletions(-)

diff --git a/net/batman-adv/main.h b/net/batman-adv/main.h
index a6df61a..d986f34 100644
--- a/net/batman-adv/main.h
+++ b/net/batman-adv/main.h
@@ -201,8 +201,6 @@ static inline int compare_eth(const void *data1, const void *data2)
 }
 
 
-#define atomic_dec_not_zero(v)	atomic_add_unless((v), -1, 0)
-
 /* Returns the smallest signed integer in two's complement with the sizeof x */
 #define smallest_signed_int(x) (1u << (7u + 8u * (sizeof(x) - 1u)))
 
-- 
1.7.5.4


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCHv4 03/11] fault_inject: Remove private define of atomic_dec_not_zero
  2011-07-27  9:47 ` Sven Eckelmann
                   ` (3 preceding siblings ...)
  (?)
@ 2011-07-27  9:47 ` Sven Eckelmann
  -1 siblings, 0 replies; 38+ messages in thread
From: Sven Eckelmann @ 2011-07-27  9:47 UTC (permalink / raw)
  To: linux-arch; +Cc: linux-kernel, Sven Eckelmann, Akinobu Mita

atomic_dec_not_zero is defined through <linux/atomic.h> for all
architectures and fault_inject doesn't need an extra define which may
collide with the global one.

Signed-off-by: Sven Eckelmann <sven@narfation.org>
Cc: Akinobu Mita <akinobu.mita@gmail.com>
---
 lib/fault-inject.c |    2 --
 1 files changed, 0 insertions(+), 2 deletions(-)

diff --git a/lib/fault-inject.c b/lib/fault-inject.c
index 2577b12..8ce98c6 100644
--- a/lib/fault-inject.c
+++ b/lib/fault-inject.c
@@ -45,8 +45,6 @@ static void fail_dump(struct fault_attr *attr)
 		dump_stack();
 }
 
-#define atomic_dec_not_zero(v)		atomic_add_unless((v), -1, 0)
-
 static bool fail_task(struct fault_attr *attr, struct task_struct *task)
 {
 	return !in_interrupt() && task->make_it_fail;
-- 
1.7.5.4


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCHv4 04/11] PM: Use *_dec_not_zero instead of *_add_unless
  2011-07-27  9:47 ` Sven Eckelmann
                   ` (5 preceding siblings ...)
  (?)
@ 2011-07-27  9:47 ` Sven Eckelmann
  2011-07-27 19:50   ` Rafael J. Wysocki
  2011-07-27 19:50   ` Rafael J. Wysocki
  -1 siblings, 2 replies; 38+ messages in thread
From: Sven Eckelmann @ 2011-07-27  9:47 UTC (permalink / raw)
  To: linux-arch
  Cc: linux-kernel, Sven Eckelmann, Len Brown, Pavel Machek,
	Rafael J. Wysocki, linux-pm

atomic_dec_not_zero is defined for each architecture through
<linux/atomic.h> to provide the functionality of
atomic_add_unless(x, -1, 0).

Signed-off-by: Sven Eckelmann <sven@narfation.org>
Cc: Len Brown <len.brown@intel.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Rafael J. Wysocki <rjw@sisk.pl>
Cc: linux-pm@lists.linux-foundation.org
---
 drivers/base/power/runtime.c |    4 ++--
 include/linux/pm_runtime.h   |    2 +-
 kernel/power/hibernate.c     |    4 ++--
 kernel/power/user.c          |    2 +-
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
index 8dc247c..bda10d9 100644
--- a/drivers/base/power/runtime.c
+++ b/drivers/base/power/runtime.c
@@ -401,7 +401,7 @@ static int rpm_suspend(struct device *dev, int rpmflags)
 
 		if (dev->parent) {
 			parent = dev->parent;
-			atomic_add_unless(&parent->power.child_count, -1, 0);
+			atomic_dec_not_zero(&parent->power.child_count);
 		}
 	}
 	wake_up_all(&dev->power.wait_queue);
@@ -841,7 +841,7 @@ int __pm_runtime_set_status(struct device *dev, unsigned int status)
 	if (status == RPM_SUSPENDED) {
 		/* It always is possible to set the status to 'suspended'. */
 		if (parent) {
-			atomic_add_unless(&parent->power.child_count, -1, 0);
+			atomic_dec_not_zero(&parent->power.child_count);
 			notify_parent = !parent->power.ignore_children;
 		}
 		goto out_set;
diff --git a/include/linux/pm_runtime.h b/include/linux/pm_runtime.h
index daac05d..3b4931c 100644
--- a/include/linux/pm_runtime.h
+++ b/include/linux/pm_runtime.h
@@ -63,7 +63,7 @@ static inline void pm_runtime_get_noresume(struct device *dev)
 
 static inline void pm_runtime_put_noidle(struct device *dev)
 {
-	atomic_add_unless(&dev->power.usage_count, -1, 0);
+	atomic_dec_not_zero(&dev->power.usage_count);
 }
 
 static inline bool device_run_wake(struct device *dev)
diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
index 8f7b1db..0ba8d87 100644
--- a/kernel/power/hibernate.c
+++ b/kernel/power/hibernate.c
@@ -606,7 +606,7 @@ int hibernate(void)
 
 	mutex_lock(&pm_mutex);
 	/* The snapshot device should not be opened while we're running */
-	if (!atomic_add_unless(&snapshot_device_available, -1, 0)) {
+	if (!atomic_dec_not_zero(&snapshot_device_available)) {
 		error = -EBUSY;
 		goto Unlock;
 	}
@@ -756,7 +756,7 @@ static int software_resume(void)
 		goto Unlock;
 
 	/* The snapshot device should not be opened while we're running */
-	if (!atomic_add_unless(&snapshot_device_available, -1, 0)) {
+	if (!atomic_dec_not_zero(&snapshot_device_available)) {
 		error = -EBUSY;
 		swsusp_close(FMODE_READ);
 		goto Unlock;
diff --git a/kernel/power/user.c b/kernel/power/user.c
index 42ddbc6..1c1cc01 100644
--- a/kernel/power/user.c
+++ b/kernel/power/user.c
@@ -72,7 +72,7 @@ static int snapshot_open(struct inode *inode, struct file *filp)
 
 	mutex_lock(&pm_mutex);
 
-	if (!atomic_add_unless(&snapshot_device_available, -1, 0)) {
+	if (!atomic_dec_not_zero(&snapshot_device_available)) {
 		error = -EBUSY;
 		goto Unlock;
 	}
-- 
1.7.5.4


^ permalink raw reply related	[flat|nested] 38+ messages in thread
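
The snapshot_device_available conversions are the clearest readability win in
the series: the counter acts as a "may only be opened once" gate, taken with
a guarded decrement and released with an increment.  A kernel-style sketch of
that idiom (snapshot_try_acquire() and snapshot_release_gate() are
illustrative names, not functions from this patch, and the snippet is not
meant to compile on its own):

static atomic_t snapshot_device_available = ATOMIC_INIT(1);

static int snapshot_try_acquire(void)
{
	/* the first caller drops the gate from 1 to 0; later callers see 0 */
	if (!atomic_dec_not_zero(&snapshot_device_available))
		return -EBUSY;
	return 0;
}

static void snapshot_release_gate(void)
{
	/* hand the single slot back to the next opener */
	atomic_inc(&snapshot_device_available);
}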

* [PATCHv4 04/11] PM: Use *_dec_not_zero instead of *_add_unless
  2011-07-27  9:47 ` Sven Eckelmann
                   ` (4 preceding siblings ...)
  (?)
@ 2011-07-27  9:47 ` Sven Eckelmann
  -1 siblings, 0 replies; 38+ messages in thread
From: Sven Eckelmann @ 2011-07-27  9:47 UTC (permalink / raw)
  To: linux-arch; +Cc: Len Brown, linux-kernel, linux-pm, Sven Eckelmann

atomic_dec_not_zero is defined for each architecture through
<linux/atomic.h> to provide the functionality of
atomic_add_unless(x, -1, 0).

Signed-off-by: Sven Eckelmann <sven@narfation.org>
Cc: Len Brown <len.brown@intel.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Rafael J. Wysocki <rjw@sisk.pl>
Cc: linux-pm@lists.linux-foundation.org
---
 drivers/base/power/runtime.c |    4 ++--
 include/linux/pm_runtime.h   |    2 +-
 kernel/power/hibernate.c     |    4 ++--
 kernel/power/user.c          |    2 +-
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
index 8dc247c..bda10d9 100644
--- a/drivers/base/power/runtime.c
+++ b/drivers/base/power/runtime.c
@@ -401,7 +401,7 @@ static int rpm_suspend(struct device *dev, int rpmflags)
 
 		if (dev->parent) {
 			parent = dev->parent;
-			atomic_add_unless(&parent->power.child_count, -1, 0);
+			atomic_dec_not_zero(&parent->power.child_count);
 		}
 	}
 	wake_up_all(&dev->power.wait_queue);
@@ -841,7 +841,7 @@ int __pm_runtime_set_status(struct device *dev, unsigned int status)
 	if (status == RPM_SUSPENDED) {
 		/* It always is possible to set the status to 'suspended'. */
 		if (parent) {
-			atomic_add_unless(&parent->power.child_count, -1, 0);
+			atomic_dec_not_zero(&parent->power.child_count);
 			notify_parent = !parent->power.ignore_children;
 		}
 		goto out_set;
diff --git a/include/linux/pm_runtime.h b/include/linux/pm_runtime.h
index daac05d..3b4931c 100644
--- a/include/linux/pm_runtime.h
+++ b/include/linux/pm_runtime.h
@@ -63,7 +63,7 @@ static inline void pm_runtime_get_noresume(struct device *dev)
 
 static inline void pm_runtime_put_noidle(struct device *dev)
 {
-	atomic_add_unless(&dev->power.usage_count, -1, 0);
+	atomic_dec_not_zero(&dev->power.usage_count);
 }
 
 static inline bool device_run_wake(struct device *dev)
diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
index 8f7b1db..0ba8d87 100644
--- a/kernel/power/hibernate.c
+++ b/kernel/power/hibernate.c
@@ -606,7 +606,7 @@ int hibernate(void)
 
 	mutex_lock(&pm_mutex);
 	/* The snapshot device should not be opened while we're running */
-	if (!atomic_add_unless(&snapshot_device_available, -1, 0)) {
+	if (!atomic_dec_not_zero(&snapshot_device_available)) {
 		error = -EBUSY;
 		goto Unlock;
 	}
@@ -756,7 +756,7 @@ static int software_resume(void)
 		goto Unlock;
 
 	/* The snapshot device should not be opened while we're running */
-	if (!atomic_add_unless(&snapshot_device_available, -1, 0)) {
+	if (!atomic_dec_not_zero(&snapshot_device_available)) {
 		error = -EBUSY;
 		swsusp_close(FMODE_READ);
 		goto Unlock;
diff --git a/kernel/power/user.c b/kernel/power/user.c
index 42ddbc6..1c1cc01 100644
--- a/kernel/power/user.c
+++ b/kernel/power/user.c
@@ -72,7 +72,7 @@ static int snapshot_open(struct inode *inode, struct file *filp)
 
 	mutex_lock(&pm_mutex);
 
-	if (!atomic_add_unless(&snapshot_device_available, -1, 0)) {
+	if (!atomic_dec_not_zero(&snapshot_device_available)) {
 		error = -EBUSY;
 		goto Unlock;
 	}
-- 
1.7.5.4

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCHv4 05/11] omap3isp: Use *_dec_not_zero instead of *_add_unless
  2011-07-27  9:47 ` Sven Eckelmann
                   ` (6 preceding siblings ...)
  (?)
@ 2011-07-27  9:47 ` Sven Eckelmann
  2011-07-31 15:00   ` Laurent Pinchart
  -1 siblings, 1 reply; 38+ messages in thread
From: Sven Eckelmann @ 2011-07-27  9:47 UTC (permalink / raw)
  To: linux-arch; +Cc: linux-kernel, Sven Eckelmann, Laurent Pinchart, linux-media

atomic_dec_not_zero is defined for each architecture through
<linux/atomic.h> to provide the functionality of
atomic_add_unless(x, -1, 0).

Signed-off-by: Sven Eckelmann <sven@narfation.org>
Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Cc: linux-media@vger.kernel.org
---
 drivers/media/video/omap3isp/ispstat.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/media/video/omap3isp/ispstat.c b/drivers/media/video/omap3isp/ispstat.c
index b44cb68..81b1ec9 100644
--- a/drivers/media/video/omap3isp/ispstat.c
+++ b/drivers/media/video/omap3isp/ispstat.c
@@ -652,7 +652,7 @@ static int isp_stat_buf_process(struct ispstat *stat, int buf_state)
 {
 	int ret = STAT_NO_BUF;
 
-	if (!atomic_add_unless(&stat->buf_err, -1, 0) &&
+	if (!atomic_dec_not_zero(&stat->buf_err) &&
 	    buf_state == STAT_BUF_DONE && stat->state == ISPSTAT_ENABLED) {
 		ret = isp_stat_buf_queue(stat);
 		isp_stat_buf_next(stat);
-- 
1.7.5.4


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCHv4 06/11] qeth: Use *_dec_not_zero instead of *_add_unless
  2011-07-27  9:47 ` Sven Eckelmann
                   ` (7 preceding siblings ...)
  (?)
@ 2011-07-27  9:47 ` Sven Eckelmann
  -1 siblings, 0 replies; 38+ messages in thread
From: Sven Eckelmann @ 2011-07-27  9:47 UTC (permalink / raw)
  To: linux-arch
  Cc: linux-kernel, Sven Eckelmann, Ursula Braun, Frank Blaschka,
	linux390, linux-s390

atomic_dec_not_zero is defined for each architecture through
<linux/atomic.h> to provide the functionality of
atomic_add_unless(x, -1, 0).

Signed-off-by: Sven Eckelmann <sven@narfation.org>
Cc: Ursula Braun <ursula.braun@de.ibm.com>
Cc: Frank Blaschka <blaschka@linux.vnet.ibm.com>
Cc: linux390@de.ibm.com
Cc: linux-s390@vger.kernel.org
---
 drivers/s390/net/qeth_core_main.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
index 4550573..56ceed2 100644
--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -2765,7 +2765,7 @@ void qeth_queue_input_buffer(struct qeth_card *card, int index)
 			atomic_set(&card->force_alloc_skb, 3);
 			count = newcount;
 		} else {
-			atomic_add_unless(&card->force_alloc_skb, -1, 0);
+			atomic_dec_not_zero(&card->force_alloc_skb);
 		}
 
 		/*
-- 
1.7.5.4


^ permalink raw reply related	[flat|nested] 38+ messages in thread
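
force_alloc_skb shows another flavour of the same helper: a workaround that
is armed for a few rounds and then decays back to zero on its own.  A hedged
sketch of that decay pattern (the helper names and the initial value are
illustrative; only the "set to 3, then count down to zero" shape is taken
from the hunk above):

static atomic_t force_alloc = ATOMIC_INIT(0);

static void buffers_ran_low(void)
{
	/* arm the fallback behaviour for the next three passes */
	atomic_set(&force_alloc, 3);
}

static int fallback_still_armed(void)
{
	/* each pass burns one round; at zero the fallback switches
	 * itself off again without any extra bookkeeping */
	return atomic_dec_not_zero(&force_alloc);
}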

* [PATCHv4 07/11] ext4: Use *_dec_not_zero instead of *_add_unless
  2011-07-27  9:47 ` Sven Eckelmann
                   ` (8 preceding siblings ...)
  (?)
@ 2011-07-27  9:47 ` Sven Eckelmann
  -1 siblings, 0 replies; 38+ messages in thread
From: Sven Eckelmann @ 2011-07-27  9:47 UTC (permalink / raw)
  To: linux-arch
  Cc: linux-kernel, Sven Eckelmann, Theodore Ts'o, Andreas Dilger,
	linux-ext4

atomic_dec_not_zero is defined for each architecture through
<linux/atomic.h> to provide the functionality of
atomic_add_unless(x, -1, 0).

Signed-off-by: Sven Eckelmann <sven@narfation.org>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Cc: Andreas Dilger <adilger.kernel@dilger.ca>
Cc: linux-ext4@vger.kernel.org
---
 fs/ext4/ext4.h |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index fa44df8..29996e8 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -2098,7 +2098,7 @@ static inline void ext4_lock_group(struct super_block *sb, ext4_group_t group)
 		 * We're able to grab the lock right away, so drop the
 		 * lock contention counter.
 		 */
-		atomic_add_unless(&EXT4_SB(sb)->s_lock_busy, -1, 0);
+		atomic_dec_not_zero(&EXT4_SB(sb)->s_lock_busy);
 	else {
 		/*
 		 * The lock is busy, so bump the contention counter,
-- 
1.7.5.4


^ permalink raw reply related	[flat|nested] 38+ messages in thread
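
For context, s_lock_busy is a small contention-hysteresis counter: grabbing
the group lock without waiting drains it (never below zero, hence the new
helper), while hitting a busy lock bumps it.  A hedged sketch of that shape
(BUSY_LIMIT and the helper names are illustrative and not taken from ext4;
only the drain/bump behaviour comes from the comments in the hunk above):

static atomic_t lock_busy = ATOMIC_INIT(0);

#define BUSY_LIMIT	4	/* illustrative cap, not ext4's value */

static void note_uncontended(void)
{
	/* got the lock right away: bleed off recorded contention */
	atomic_dec_not_zero(&lock_busy);
}

static void note_contended(void)
{
	/* the lock was busy: remember it, but keep the counter bounded */
	if (atomic_read(&lock_busy) < BUSY_LIMIT)
		atomic_inc(&lock_busy);
}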

* [PATCHv4 08/11] xfs: Use *_dec_not_zero instead of *_add_unless
  2011-07-27  9:47 ` Sven Eckelmann
@ 2011-07-27  9:47   ` Sven Eckelmann
  -1 siblings, 0 replies; 38+ messages in thread
From: Sven Eckelmann @ 2011-07-27  9:47 UTC (permalink / raw)
  To: linux-arch; +Cc: linux-kernel, Sven Eckelmann, Alex Elder, xfs-masters, xfs

atomic_dec_not_zero is defined for each architecture through
<linux/atomic.h> to provide the functionality of
atomic_add_unless(x, -1, 0).

Signed-off-by: Sven Eckelmann <sven@narfation.org>
Cc: Alex Elder <aelder@sgi.com>
Cc: xfs-masters@oss.sgi.com
Cc: xfs@oss.sgi.com
---
 fs/xfs/linux-2.6/xfs_buf.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/fs/xfs/linux-2.6/xfs_buf.c b/fs/xfs/linux-2.6/xfs_buf.c
index b2b4119..a68d9bf 100644
--- a/fs/xfs/linux-2.6/xfs_buf.c
+++ b/fs/xfs/linux-2.6/xfs_buf.c
@@ -1429,7 +1429,7 @@ xfs_buftarg_shrink(
 		 * zero. If the value is already zero, we need to reclaim the
 		 * buffer, otherwise it gets another trip through the LRU.
 		 */
-		if (!atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
+		if (!atomic_dec_not_zero(&bp->b_lru_ref)) {
 			list_move_tail(&bp->b_lru, &btp->bt_lru);
 			continue;
 		}
-- 
1.7.5.4


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCHv4 08/11] xfs: Use *_dec_not_zero instead of *_add_unless
@ 2011-07-27  9:47   ` Sven Eckelmann
  0 siblings, 0 replies; 38+ messages in thread
From: Sven Eckelmann @ 2011-07-27  9:47 UTC (permalink / raw)
  To: linux-arch; +Cc: xfs-masters, xfs, linux-kernel, Sven Eckelmann, Alex Elder

atomic_dec_not_zero is defined for each architecture through
<linux/atomic.h> to provide the functionality of
atomic_add_unless(x, -1, 0).

Signed-off-by: Sven Eckelmann <sven@narfation.org>
Cc: Alex Elder <aelder@sgi.com>
Cc: xfs-masters@oss.sgi.com
Cc: xfs@oss.sgi.com
---
 fs/xfs/linux-2.6/xfs_buf.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/fs/xfs/linux-2.6/xfs_buf.c b/fs/xfs/linux-2.6/xfs_buf.c
index b2b4119..a68d9bf 100644
--- a/fs/xfs/linux-2.6/xfs_buf.c
+++ b/fs/xfs/linux-2.6/xfs_buf.c
@@ -1429,7 +1429,7 @@ xfs_buftarg_shrink(
 		 * zero. If the value is already zero, we need to reclaim the
 		 * buffer, otherwise it gets another trip through the LRU.
 		 */
-		if (!atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
+		if (!atomic_dec_not_zero(&bp->b_lru_ref)) {
 			list_move_tail(&bp->b_lru, &btp->bt_lru);
 			continue;
 		}
-- 
1.7.5.4


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCHv4 09/11] memcg: Use *_dec_not_zero instead of *_add_unless
  2011-07-27  9:47 ` Sven Eckelmann
@ 2011-07-27  9:47   ` Sven Eckelmann
  -1 siblings, 0 replies; 38+ messages in thread
From: Sven Eckelmann @ 2011-07-27  9:47 UTC (permalink / raw)
  To: linux-arch
  Cc: linux-kernel, Sven Eckelmann, Balbir Singh, Daisuke Nishimura,
	KAMEZAWA Hiroyuki, linux-mm

atomic_dec_not_zero is defined for each architecture through
<linux/atomic.h> to provide the functionality of
atomic_add_unless(x, -1, 0).

Signed-off-by: Sven Eckelmann <sven@narfation.org>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: linux-mm@kvack.org
---
 mm/memcontrol.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 5f84d23..00a7580 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1909,10 +1909,10 @@ static void mem_cgroup_unmark_under_oom(struct mem_cgroup *mem)
 	/*
 	 * When a new child is created while the hierarchy is under oom,
 	 * mem_cgroup_oom_lock() may not be called. We have to use
-	 * atomic_add_unless() here.
+	 * atomic_dec_not_zero() here.
 	 */
 	for_each_mem_cgroup_tree(iter, mem)
-		atomic_add_unless(&iter->under_oom, -1, 0);
+		atomic_dec_not_zero(&iter->under_oom);
 }
 
 static DEFINE_SPINLOCK(memcg_oom_lock);
-- 
1.7.5.4


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCHv4 09/11] memcg: Use *_dec_not_zero instead of *_add_unless
@ 2011-07-27  9:47   ` Sven Eckelmann
  0 siblings, 0 replies; 38+ messages in thread
From: Sven Eckelmann @ 2011-07-27  9:47 UTC (permalink / raw)
  To: linux-arch
  Cc: linux-kernel, Sven Eckelmann, Balbir Singh, Daisuke Nishimura,
	KAMEZAWA Hiroyuki, linux-mm

atomic_dec_not_zero is defined for each architecture through
<linux/atomic.h> to provide the functionality of
atomic_add_unless(x, -1, 0).

Signed-off-by: Sven Eckelmann <sven@narfation.org>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: linux-mm@kvack.org
---
 mm/memcontrol.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 5f84d23..00a7580 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1909,10 +1909,10 @@ static void mem_cgroup_unmark_under_oom(struct mem_cgroup *mem)
 	/*
 	 * When a new child is created while the hierarchy is under oom,
 	 * mem_cgroup_oom_lock() may not be called. We have to use
-	 * atomic_add_unless() here.
+	 * atomic_dec_not_zero() here.
 	 */
 	for_each_mem_cgroup_tree(iter, mem)
-		atomic_add_unless(&iter->under_oom, -1, 0);
+		atomic_dec_not_zero(&iter->under_oom);
 }
 
 static DEFINE_SPINLOCK(memcg_oom_lock);
-- 
1.7.5.4


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCHv4 10/11] drop_monitor: Use *_dec_not_zero instead of *_add_unless
  2011-07-27  9:47 ` Sven Eckelmann
                   ` (11 preceding siblings ...)
  (?)
@ 2011-07-27  9:47 ` Sven Eckelmann
  2011-07-27 10:59   ` Neil Horman
  -1 siblings, 1 reply; 38+ messages in thread
From: Sven Eckelmann @ 2011-07-27  9:47 UTC (permalink / raw)
  To: linux-arch; +Cc: linux-kernel, Sven Eckelmann, Neil Horman, netdev

atomic_dec_not_zero is defined for each architecture through
<linux/atomic.h> to provide the functionality of
atomic_add_unless(x, -1, 0).

Signed-off-by: Sven Eckelmann <sven@narfation.org>
Cc: Neil Horman <nhorman@tuxdriver.com>
Cc: netdev@vger.kernel.org
---
 net/core/drop_monitor.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/net/core/drop_monitor.c b/net/core/drop_monitor.c
index 7f36b38..ef4a05d 100644
--- a/net/core/drop_monitor.c
+++ b/net/core/drop_monitor.c
@@ -137,7 +137,7 @@ static void trace_drop_common(struct sk_buff *skb, void *location)
 	struct per_cpu_dm_data *data = &__get_cpu_var(dm_cpu_data);
 
 
-	if (!atomic_add_unless(&data->dm_hit_count, -1, 0)) {
+	if (!atomic_dec_not_zero(&data->dm_hit_count)) {
 		/*
 		 * we're already at zero, discard this hit
 		 */
-- 
1.7.5.4


^ permalink raw reply related	[flat|nested] 38+ messages in thread
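
dm_hit_count is the "consume a credit, or drop" shape: every traced packet
drop spends one unit of a budget, and once the budget is gone further hits
are discarded until the counter is refilled elsewhere.  A hedged sketch of
the idiom (record_hit() and the refill value are illustrative, not
drop_monitor's code):

static atomic_t hit_budget = ATOMIC_INIT(64);	/* 64 is an illustrative refill */

static void record_hit(void)
{
	if (!atomic_dec_not_zero(&hit_budget)) {
		/* budget already at zero: discard this hit */
		return;
	}
	/* ... account the hit and schedule the summary to be sent ... */
}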

* [PATCHv4 11/11] Phonet: Use *_dec_not_zero instead of *_add_unless
  2011-07-27  9:47 ` Sven Eckelmann
                   ` (12 preceding siblings ...)
  (?)
@ 2011-07-27  9:47 ` Sven Eckelmann
  2011-07-27 10:29   ` Rémi Denis-Courmont
  -1 siblings, 1 reply; 38+ messages in thread
From: Sven Eckelmann @ 2011-07-27  9:47 UTC (permalink / raw)
  To: linux-arch; +Cc: linux-kernel, Sven Eckelmann, Remi Denis-Courmont

atomic_dec_not_zero is defined for each architecture through
<linux/atomic.h> to provide the functionality of
atomic_add_unless(x, -1, 0).

Signed-off-by: Sven Eckelmann <sven@narfation.org>
Cc: Remi Denis-Courmont <remi.denis-courmont@nokia.com>
---
 net/phonet/pep.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/net/phonet/pep.c b/net/phonet/pep.c
index f17fd84..b192c83 100644
--- a/net/phonet/pep.c
+++ b/net/phonet/pep.c
@@ -1013,7 +1013,7 @@ static int pipe_skb_send(struct sock *sk, struct sk_buff *skb)
 	int err;
 
 	if (pn_flow_safe(pn->tx_fc) &&
-	    !atomic_add_unless(&pn->tx_credits, -1, 0)) {
+	    !atomic_dec_not_zero(&pn->tx_credits)) {
 		kfree_skb(skb);
 		return -ENOBUFS;
 	}
-- 
1.7.5.4


^ permalink raw reply related	[flat|nested] 38+ messages in thread
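
tx_credits is a plain flow-control budget: each transmitted pipe packet
consumes one credit, and running out maps to -ENOBUFS instead of blocking.
A commented restatement of the send-side check (same shape as the hunk in
pipe_skb_send() above, shown only to spell out the credit semantics; how the
credits are replenished is outside this patch):

	/* spend one credit per packet while flow control is armed */
	if (pn_flow_safe(pn->tx_fc) && !atomic_dec_not_zero(&pn->tx_credits)) {
		kfree_skb(skb);		/* no credit left for this packet */
		return -ENOBUFS;	/* tell the caller to back off */
	}
	/* credit consumed: safe to push the packet down the stack */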

* Re: [PATCHv4 11/11] Phonet: Use *_dec_not_zero instead of *_add_unless
  2011-07-27  9:47 ` [PATCHv4 11/11] Phonet: " Sven Eckelmann
@ 2011-07-27 10:29   ` Rémi Denis-Courmont
  0 siblings, 0 replies; 38+ messages in thread
From: Rémi Denis-Courmont @ 2011-07-27 10:29 UTC (permalink / raw)
  To: linux-arch; +Cc: linux-kernel

On Wednesday 27 July 2011 12:47:50 ext Sven Eckelmann, you wrote:
> atomic_dec_not_zero is defined for each architecture through
> <linux/atomic.h> to provide the functionality of
> atomic_add_unless(x, -1, 0).
> 
> Signed-off-by: Sven Eckelmann <sven@narfation.org>
> Cc: Remi Denis-Courmont <remi.denis-courmont@nokia.com>

Acked-by: Rémi Denis-Courmont <remi.denis-courmont@nokia.com>

> ---
>  net/phonet/pep.c |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
> 
> diff --git a/net/phonet/pep.c b/net/phonet/pep.c
> index f17fd84..b192c83 100644
> --- a/net/phonet/pep.c
> +++ b/net/phonet/pep.c
> @@ -1013,7 +1013,7 @@ static int pipe_skb_send(struct sock *sk, struct
> sk_buff *skb) int err;
> 
>  	if (pn_flow_safe(pn->tx_fc) &&
> -	    !atomic_add_unless(&pn->tx_credits, -1, 0)) {
> +	    !atomic_dec_not_zero(&pn->tx_credits)) {
>  		kfree_skb(skb);
>  		return -ENOBUFS;
>  	}


-- 
Rémi Denis-Courmont
http://www.remlab.net/

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCHv4 10/11] drop_monitor: Use *_dec_not_zero instead of *_add_unless
  2011-07-27  9:47 ` [PATCHv4 10/11] drop_monitor: " Sven Eckelmann
@ 2011-07-27 10:59   ` Neil Horman
  2011-07-27 11:52     ` Sven Eckelmann
  0 siblings, 1 reply; 38+ messages in thread
From: Neil Horman @ 2011-07-27 10:59 UTC (permalink / raw)
  To: Sven Eckelmann; +Cc: linux-arch, linux-kernel, netdev

On Wed, Jul 27, 2011 at 11:47:49AM +0200, Sven Eckelmann wrote:
> atomic_dec_not_zero is defined for each architecture through
> <linux/atomic.h> to provide the functionality of
> atomic_add_unless(x, -1, 0).
> 
> Signed-off-by: Sven Eckelmann <sven@narfation.org>
> Cc: Neil Horman <nhorman@tuxdriver.com>
> Cc: netdev@vger.kernel.org
> ---
>  net/core/drop_monitor.c |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
> 
> diff --git a/net/core/drop_monitor.c b/net/core/drop_monitor.c
> index 7f36b38..ef4a05d 100644
> --- a/net/core/drop_monitor.c
> +++ b/net/core/drop_monitor.c
> @@ -137,7 +137,7 @@ static void trace_drop_common(struct sk_buff *skb, void *location)
>  	struct per_cpu_dm_data *data = &__get_cpu_var(dm_cpu_data);
>  
>  
> -	if (!atomic_add_unless(&data->dm_hit_count, -1, 0)) {
> +	if (!atomic_dec_not_zero(&data->dm_hit_count)) {
>  		/*
>  		 * we're already at zero, discard this hit
>  		 */
> -- 
> 1.7.5.4
> 
> 
Where's the patch that creates the per-arch definition of this function?  I see
the other posts in this series went to lkml, but the archives don't have the
first in the series anywhere, which ostensibly adds the definition.
Neil


^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: Re: [PATCHv4 10/11] drop_monitor: Use *_dec_not_zero instead of *_add_unless
  2011-07-27 10:59   ` Neil Horman
@ 2011-07-27 11:52     ` Sven Eckelmann
  2011-07-27 14:25       ` Neil Horman
  0 siblings, 1 reply; 38+ messages in thread
From: Sven Eckelmann @ 2011-07-27 11:52 UTC (permalink / raw)
  To: Neil Horman; +Cc: linux-arch, linux-kernel, netdev

[-- Attachment #1: Type: text/plain, Size: 1659 bytes --]

On Wednesday 27 July 2011 06:59:07 Neil Horman wrote:
> On Wed, Jul 27, 2011 at 11:47:49AM +0200, Sven Eckelmann wrote:
> > atomic_dec_not_zero is defined for each architecture through
> > <linux/atomic.h> to provide the functionality of
> > atomic_add_unless(x, -1, 0).
> > 
> > Signed-off-by: Sven Eckelmann <sven@narfation.org>
> > Cc: Neil Horman <nhorman@tuxdriver.com>
> > Cc: netdev@vger.kernel.org
> > ---
> > 
> >  net/core/drop_monitor.c |    2 +-
> >  1 files changed, 1 insertions(+), 1 deletions(-)
> > 
> > diff --git a/net/core/drop_monitor.c b/net/core/drop_monitor.c
> > index 7f36b38..ef4a05d 100644
> > --- a/net/core/drop_monitor.c
> > +++ b/net/core/drop_monitor.c
> > @@ -137,7 +137,7 @@ static void trace_drop_common(struct sk_buff *skb,
> > void *location)
> > 
> >  	struct per_cpu_dm_data *data = &__get_cpu_var(dm_cpu_data);
> > 
> > -	if (!atomic_add_unless(&data->dm_hit_count, -1, 0)) {
> > +	if (!atomic_dec_not_zero(&data->dm_hit_count)) {
> > 
> >  		/*
> >  		
> >  		 * we're already at zero, discard this hit
> >  		 */
> 
> Wheres the patch that creates the per arch definition of this function?  I
> see the other posts in this series went to lkml, but the archives don't
> have the first in the series anywhere, which ostensibly adds the
> definition. Neil

Most architectures don't use a per-architecture definition anymore, but the
cross-architecture definition in include/linux/atomic.h.

The 01/11 can be found in different archives under the message id 
1311760070-21532-1-git-send-email-sven@narfation.org ... for example gmane: 
http://article.gmane.org/gmane.linux.ports.arm.kernel/126704

Kind regards,
	Sven

[-- Attachment #2: This is a digitally signed message part. --]
[-- Type: application/pgp-signature, Size: 836 bytes --]

^ permalink raw reply	[flat|nested] 38+ messages in thread
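
To make the layering described above concrete: an architecture only has to
supply atomic_cmpxchg() (or its own atomic_add_unless()); the shared header
then derives both *_not_zero helpers from it.  A simplified sketch of that
derivation, using my_* names so it does not pretend to be the verbatim
contents of include/linux/atomic.h:

/* roughly the derivation the shared header performs, simplified */
static inline int my_add_unless(atomic_t *v, int a, int u)
{
	int c = atomic_read(v);

	while (c != u) {
		int old = atomic_cmpxchg(v, c, c + a);

		if (old == c)
			return 1;	/* swapped in c + a */
		c = old;		/* raced with someone, retry */
	}
	return 0;			/* hit the forbidden value u */
}

#define my_inc_not_zero(v)	my_add_unless((v), 1, 0)
#define my_dec_not_zero(v)	my_add_unless((v), -1, 0)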

* Re: [PATCHv4 09/11] memcg: Use *_dec_not_zero instead of *_add_unless
  2011-07-27  9:47   ` Sven Eckelmann
@ 2011-07-27 11:58     ` Michal Hocko
  -1 siblings, 0 replies; 38+ messages in thread
From: Michal Hocko @ 2011-07-27 11:58 UTC (permalink / raw)
  To: Sven Eckelmann
  Cc: linux-arch, linux-kernel, Balbir Singh, Daisuke Nishimura,
	KAMEZAWA Hiroyuki, linux-mm

On Wed 27-07-11 11:47:48, Sven Eckelmann wrote:
> atomic_dec_not_zero is defined for each architecture through
> <linux/atomic.h> to provide the functionality of
> atomic_add_unless(x, -1, 0).

Yes, I like it because atomic_dec_* is more consistent (at least when
reading the code) with the atomic_inc used by mem_cgroup_mark_under_oom.

> 
> Signed-off-by: Sven Eckelmann <sven@narfation.org>
> Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
> Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> Cc: linux-mm@kvack.org

Acked-by: Michal Hocko <mhocko@suse.cz>

> ---
>  mm/memcontrol.c |    4 ++--
>  1 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 5f84d23..00a7580 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1909,10 +1909,10 @@ static void mem_cgroup_unmark_under_oom(struct mem_cgroup *mem)
>  	/*
>  	 * When a new child is created while the hierarchy is under oom,
>  	 * mem_cgroup_oom_lock() may not be called. We have to use
> -	 * atomic_add_unless() here.
> +	 * atomic_dec_not_zero() here.
>  	 */
>  	for_each_mem_cgroup_tree(iter, mem)
> -		atomic_add_unless(&iter->under_oom, -1, 0);
> +		atomic_dec_not_zero(&iter->under_oom);
>  }
>  
>  static DEFINE_SPINLOCK(memcg_oom_lock);
> -- 
> 1.7.5.4
> 
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Fight unfair telecom internet charges in Canada: sign http://stopthemeter.ca/
> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: Re: [PATCHv4 10/11] drop_monitor: Use *_dec_not_zero instead of *_add_unless
  2011-07-27 11:52     ` Sven Eckelmann
@ 2011-07-27 14:25       ` Neil Horman
  0 siblings, 0 replies; 38+ messages in thread
From: Neil Horman @ 2011-07-27 14:25 UTC (permalink / raw)
  To: Sven Eckelmann; +Cc: linux-arch, linux-kernel, netdev

On Wed, Jul 27, 2011 at 01:52:50PM +0200, Sven Eckelmann wrote:
> On Wednesday 27 July 2011 06:59:07 Neil Horman wrote:
> > On Wed, Jul 27, 2011 at 11:47:49AM +0200, Sven Eckelmann wrote:
> > > atomic_dec_not_zero is defined for each architecture through
> > > <linux/atomic.h> to provide the functionality of
> > > atomic_add_unless(x, -1, 0).
> > > 
> > > Signed-off-by: Sven Eckelmann <sven@narfation.org>
> > > Cc: Neil Horman <nhorman@tuxdriver.com>
> > > Cc: netdev@vger.kernel.org
> > > ---
> > > 
> > >  net/core/drop_monitor.c |    2 +-
> > >  1 files changed, 1 insertions(+), 1 deletions(-)
> > > 
> > > diff --git a/net/core/drop_monitor.c b/net/core/drop_monitor.c
> > > index 7f36b38..ef4a05d 100644
> > > --- a/net/core/drop_monitor.c
> > > +++ b/net/core/drop_monitor.c
> > > @@ -137,7 +137,7 @@ static void trace_drop_common(struct sk_buff *skb,
> > > void *location)
> > > 
> > >  	struct per_cpu_dm_data *data = &__get_cpu_var(dm_cpu_data);
> > > 
> > > -	if (!atomic_add_unless(&data->dm_hit_count, -1, 0)) {
> > > +	if (!atomic_dec_not_zero(&data->dm_hit_count)) {
> > > 
> > >  		/*
> > >  		
> > >  		 * we're already at zero, discard this hit
> > >  		 */
> > 
> > Wheres the patch that creates the per arch definition of this function?  I
> > see the other posts in this series went to lkml, but the archives don't
> > have the first in the series anywhere, which ostensibly adds the
> > definition. Neil
> 
> Most architectures don't use a per-architecture definition anymore, but a 
> cross-architecture definition in include/linux/atomic.h.
> 
> Patch 01/11 can be found in various archives under the message id 
> 1311760070-21532-1-git-send-email-sven@narfation.org ... for example on gmane: 
> http://article.gmane.org/gmane.linux.ports.arm.kernel/126704
> 
> Kind regards,
> 	Sven
Ok, thank you, I just didn't see the patch that implemented it in the lkml
archive at MARC, and wanted to be sure this wasn't proposed for a different tree
than the patch with the definition.

Acked-by: Neil Horman <nhorman@tuxdriver.com>




^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCHv4 04/11] PM: Use *_dec_not_zero instead of *_add_unless
  2011-07-27  9:47 ` Sven Eckelmann
  2011-07-27 19:50   ` Rafael J. Wysocki
@ 2011-07-27 19:50   ` Rafael J. Wysocki
  2011-07-27 20:36     ` Pavel Machek
  2011-07-27 20:36     ` Pavel Machek
  1 sibling, 2 replies; 38+ messages in thread
From: Rafael J. Wysocki @ 2011-07-27 19:50 UTC (permalink / raw)
  To: Sven Eckelmann
  Cc: linux-arch, linux-kernel, Len Brown, Pavel Machek, linux-pm

On Wednesday, July 27, 2011, Sven Eckelmann wrote:
> atomic_dec_not_zero is defined for each architecture through
> <linux/atomic.h> to provide the functionality of
> atomic_add_unless(x, -1, 0).
> 
> Signed-off-by: Sven Eckelmann <sven@narfation.org>
> Cc: Len Brown <len.brown@intel.com>
> Cc: Pavel Machek <pavel@ucw.cz>
> Cc: Rafael J. Wysocki <rjw@sisk.pl>
> Cc: linux-pm@lists.linux-foundation.org

Acked-by: Rafael J. Wysocki <rjw@sisk.pl>

> ---
>  drivers/base/power/runtime.c |    4 ++--
>  include/linux/pm_runtime.h   |    2 +-
>  kernel/power/hibernate.c     |    4 ++--
>  kernel/power/user.c          |    2 +-
>  4 files changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
> index 8dc247c..bda10d9 100644
> --- a/drivers/base/power/runtime.c
> +++ b/drivers/base/power/runtime.c
> @@ -401,7 +401,7 @@ static int rpm_suspend(struct device *dev, int rpmflags)
>  
>  		if (dev->parent) {
>  			parent = dev->parent;
> -			atomic_add_unless(&parent->power.child_count, -1, 0);
> +			atomic_dec_not_zero(&parent->power.child_count);
>  		}
>  	}
>  	wake_up_all(&dev->power.wait_queue);
> @@ -841,7 +841,7 @@ int __pm_runtime_set_status(struct device *dev, unsigned int status)
>  	if (status == RPM_SUSPENDED) {
>  		/* It always is possible to set the status to 'suspended'. */
>  		if (parent) {
> -			atomic_add_unless(&parent->power.child_count, -1, 0);
> +			atomic_dec_not_zero(&parent->power.child_count);
>  			notify_parent = !parent->power.ignore_children;
>  		}
>  		goto out_set;
> diff --git a/include/linux/pm_runtime.h b/include/linux/pm_runtime.h
> index daac05d..3b4931c 100644
> --- a/include/linux/pm_runtime.h
> +++ b/include/linux/pm_runtime.h
> @@ -63,7 +63,7 @@ static inline void pm_runtime_get_noresume(struct device *dev)
>  
>  static inline void pm_runtime_put_noidle(struct device *dev)
>  {
> -	atomic_add_unless(&dev->power.usage_count, -1, 0);
> +	atomic_dec_not_zero(&dev->power.usage_count);
>  }
>  
>  static inline bool device_run_wake(struct device *dev)
> diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
> index 8f7b1db..0ba8d87 100644
> --- a/kernel/power/hibernate.c
> +++ b/kernel/power/hibernate.c
> @@ -606,7 +606,7 @@ int hibernate(void)
>  
>  	mutex_lock(&pm_mutex);
>  	/* The snapshot device should not be opened while we're running */
> -	if (!atomic_add_unless(&snapshot_device_available, -1, 0)) {
> +	if (!atomic_dec_not_zero(&snapshot_device_available)) {
>  		error = -EBUSY;
>  		goto Unlock;
>  	}
> @@ -756,7 +756,7 @@ static int software_resume(void)
>  		goto Unlock;
>  
>  	/* The snapshot device should not be opened while we're running */
> -	if (!atomic_add_unless(&snapshot_device_available, -1, 0)) {
> +	if (!atomic_dec_not_zero(&snapshot_device_available)) {
>  		error = -EBUSY;
>  		swsusp_close(FMODE_READ);
>  		goto Unlock;
> diff --git a/kernel/power/user.c b/kernel/power/user.c
> index 42ddbc6..1c1cc01 100644
> --- a/kernel/power/user.c
> +++ b/kernel/power/user.c
> @@ -72,7 +72,7 @@ static int snapshot_open(struct inode *inode, struct file *filp)
>  
>  	mutex_lock(&pm_mutex);
>  
> -	if (!atomic_add_unless(&snapshot_device_available, -1, 0)) {
> +	if (!atomic_dec_not_zero(&snapshot_device_available)) {
>  		error = -EBUSY;
>  		goto Unlock;
>  	}
> 


^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCHv4 04/11] PM: Use *_dec_not_zero instead of *_add_unless
  2011-07-27 19:50   ` Rafael J. Wysocki
  2011-07-27 20:36     ` Pavel Machek
@ 2011-07-27 20:36     ` Pavel Machek
  2011-07-28 21:43         ` Rafael J. Wysocki
  1 sibling, 1 reply; 38+ messages in thread
From: Pavel Machek @ 2011-07-27 20:36 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Sven Eckelmann, linux-arch, linux-kernel, Len Brown, linux-pm

Hi!

> > atomic_dec_not_zero is defined for each architecture through
> > <linux/atomic.h> to provide the functionality of
> > atomic_add_unless(x, -1, 0).
> > 
> > Signed-off-by: Sven Eckelmann <sven@narfation.org>
> > Cc: Len Brown <len.brown@intel.com>
> > Cc: Pavel Machek <pavel@ucw.cz>
> > Cc: Rafael J. Wysocki <rjw@sisk.pl>
> > Cc: linux-pm@lists.linux-foundation.org
> 
> Acked-by: Rafael J. Wysocki <rjw@sisk.pl>
> 
> > ---
> >  drivers/base/power/runtime.c |    4 ++--
> >  include/linux/pm_runtime.h   |    2 +-
> >  kernel/power/hibernate.c     |    4 ++--
> >  kernel/power/user.c          |    2 +-
> >  4 files changed, 6 insertions(+), 6 deletions(-)
> > 
> > diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
> > index 8dc247c..bda10d9 100644
> > --- a/drivers/base/power/runtime.c
> > +++ b/drivers/base/power/runtime.c
> > @@ -401,7 +401,7 @@ static int rpm_suspend(struct device *dev, int rpmflags)
> >  
> >  		if (dev->parent) {
> >  			parent = dev->parent;
> > -			atomic_add_unless(&parent->power.child_count, -1, 0);
> > +			atomic_dec_not_zero(&parent->power.child_count);

I'd like to understand... Why not atomic_dec in the first place? Count
should be exact, anyway, or we run into problems, right?

-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCHv4 04/11] PM: Use *_dec_not_zero instead of *_add_unless
  2011-07-27 20:36     ` Pavel Machek
@ 2011-07-28 21:43         ` Rafael J. Wysocki
  0 siblings, 0 replies; 38+ messages in thread
From: Rafael J. Wysocki @ 2011-07-28 21:43 UTC (permalink / raw)
  To: Pavel Machek
  Cc: Sven Eckelmann, linux-arch, linux-kernel, Len Brown, linux-pm

On Wednesday, July 27, 2011, Pavel Machek wrote:
> Hi!
> 
> > > atomic_dec_not_zero is defined for each architecture through
> > > <linux/atomic.h> to provide the functionality of
> > > atomic_add_unless(x, -1, 0).
> > > 
> > > Signed-off-by: Sven Eckelmann <sven@narfation.org>
> > > Cc: Len Brown <len.brown@intel.com>
> > > Cc: Pavel Machek <pavel@ucw.cz>
> > > Cc: Rafael J. Wysocki <rjw@sisk.pl>
> > > Cc: linux-pm@lists.linux-foundation.org
> > 
> > Acked-by: Rafael J. Wysocki <rjw@sisk.pl>
> > 
> > > ---
> > >  drivers/base/power/runtime.c |    4 ++--
> > >  include/linux/pm_runtime.h   |    2 +-
> > >  kernel/power/hibernate.c     |    4 ++--
> > >  kernel/power/user.c          |    2 +-
> > >  4 files changed, 6 insertions(+), 6 deletions(-)
> > > 
> > > diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
> > > index 8dc247c..bda10d9 100644
> > > --- a/drivers/base/power/runtime.c
> > > +++ b/drivers/base/power/runtime.c
> > > @@ -401,7 +401,7 @@ static int rpm_suspend(struct device *dev, int rpmflags)
> > >  
> > >  		if (dev->parent) {
> > >  			parent = dev->parent;
> > > -			atomic_add_unless(&parent->power.child_count, -1, 0);
> > > +			atomic_dec_not_zero(&parent->power.child_count);
> 
> I'd like to understand... Why not atomic_dec in the first place? Count
> should be exact, anyway, or we run into problems, right?

Well, we'll also run into trouble if the count becomes negative.  We might
throw a WARN_ON() there if the old value weren't as expected, but that
would be a separate patch.
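
To make the difference concrete, a rough illustration (not a proposed
change):

	atomic_t count = ATOMIC_INIT(1);

	atomic_dec_not_zero(&count);	/* 1 -> 0, returns non-zero      */
	atomic_dec_not_zero(&count);	/* already 0, stays 0, returns 0 */
	atomic_dec(&count);		/* 0 -> -1, no protection at all */

	/* the separate patch would essentially be, at the call site: */
	WARN_ON(!atomic_dec_not_zero(&count));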

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCHv4 05/11] omap3isp: Use *_dec_not_zero instead of *_add_unless
  2011-07-27  9:47 ` [PATCHv4 05/11] omap3isp: " Sven Eckelmann
@ 2011-07-31 15:00   ` Laurent Pinchart
  2011-08-01 10:07     ` Sven Eckelmann
  0 siblings, 1 reply; 38+ messages in thread
From: Laurent Pinchart @ 2011-07-31 15:00 UTC (permalink / raw)
  To: Sven Eckelmann; +Cc: linux-arch, linux-kernel, linux-media

Hi Sven,

Thanks for the patch.

On Wednesday 27 July 2011 11:47:44 Sven Eckelmann wrote:
> atomic_dec_not_zero is defined for each architecture through
> <linux/atomic.h> to provide the functionality of
> atomic_add_unless(x, -1, 0).
> 
> Signed-off-by: Sven Eckelmann <sven@narfation.org>
> Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>

Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>

I'll queue this to my tree for v3.2. Please let me know if you would rather 
push the patch through another tree.

> Cc: linux-media@vger.kernel.org
> ---
>  drivers/media/video/omap3isp/ispstat.c |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
> 
> diff --git a/drivers/media/video/omap3isp/ispstat.c
> b/drivers/media/video/omap3isp/ispstat.c index b44cb68..81b1ec9 100644
> --- a/drivers/media/video/omap3isp/ispstat.c
> +++ b/drivers/media/video/omap3isp/ispstat.c
> @@ -652,7 +652,7 @@ static int isp_stat_buf_process(struct ispstat *stat,
> int buf_state) {
>  	int ret = STAT_NO_BUF;
> 
> -	if (!atomic_add_unless(&stat->buf_err, -1, 0) &&
> +	if (!atomic_dec_not_zero(&stat->buf_err) &&
>  	    buf_state == STAT_BUF_DONE && stat->state == ISPSTAT_ENABLED) {
>  		ret = isp_stat_buf_queue(stat);
>  		isp_stat_buf_next(stat);

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: Re: [PATCHv4 05/11] omap3isp: Use *_dec_not_zero instead of *_add_unless
  2011-07-31 15:00   ` Laurent Pinchart
@ 2011-08-01 10:07     ` Sven Eckelmann
  2011-08-01 11:10       ` Laurent Pinchart
  0 siblings, 1 reply; 38+ messages in thread
From: Sven Eckelmann @ 2011-08-01 10:07 UTC (permalink / raw)
  To: Laurent Pinchart; +Cc: linux-arch, linux-kernel, linux-media

[-- Attachment #1: Type: text/plain, Size: 1161 bytes --]

On Sunday 31 July 2011 17:00:43 Laurent Pinchart wrote:
> Hi Sven,
> 
> Thanks for the patch.
> 
> On Wednesday 27 July 2011 11:47:44 Sven Eckelmann wrote:
> > atomic_dec_not_zero is defined for each architecture through
> > <linux/atomic.h> to provide the functionality of
> > atomic_add_unless(x, -1, 0).
> > 
> > Signed-off-by: Sven Eckelmann <sven@narfation.org>
> > Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
> 
> Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
> 
> I'll queue this to my tree for v3.2. Please let me know if you would rather
> push the patch through another tree.

The problem is that until now no one from linux-arch has applied patch 
01/11 to his tree (which is needed before this patch can be applied), and your 
tree would have to be based on that "yet to be chosen" linux-arch tree. 
Otherwise your tree will just break and not be acceptable for a pull request. 

Maybe it is easier if one person applies 01-11 after 02-11 have been Acked-by 
the responsible maintainers.

02 is more or less automatically Acked-by us :)
04, 09 and 10 are also Acked.
... and the rest is waiting for actions.

Kind regards,
	Sven

[-- Attachment #2: This is a digitally signed message part. --]
[-- Type: application/pgp-signature, Size: 836 bytes --]

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCHv4 05/11] omap3isp: Use *_dec_not_zero instead of *_add_unless
  2011-08-01 10:07     ` Sven Eckelmann
@ 2011-08-01 11:10       ` Laurent Pinchart
  0 siblings, 0 replies; 38+ messages in thread
From: Laurent Pinchart @ 2011-08-01 11:10 UTC (permalink / raw)
  To: Sven Eckelmann; +Cc: linux-arch, linux-kernel, linux-media

Hi Sven,

On Monday 01 August 2011 12:07:15 Sven Eckelmann wrote:
> On Sunday 31 July 2011 17:00:43 Laurent Pinchart wrote:
> > On Wednesday 27 July 2011 11:47:44 Sven Eckelmann wrote:
> > > atomic_dec_not_zero is defined for each architecture through
> > > <linux/atomic.h> to provide the functionality of
> > > atomic_add_unless(x, -1, 0).
> > > 
> > > Signed-off-by: Sven Eckelmann <sven@narfation.org>
> > > Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
> > 
> > Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
> > 
> > I'll queue this to my tree for v3.2. Please let me know if you would
> > rather push the patch through another tree.
> 
> The problem is that until now no one from linux-arch has applied patch
> 01/11 to his tree (which is needed before this patch can be applied), and
> your tree would have to be based on that "yet to be chosen" linux-arch
> tree. Otherwise your tree will just break and not be acceptable for a pull
> request.
> 
> Maybe it is easier if one person applies 01-11 after 02-11 have been
> Acked-by the responsible maintainers.
> 
> 02 is more or less automatically Acked-by us :)
> 04, 09 and 10 are also Acked.
> ... and the rest is waiting for actions.

OK. I'm fine with 05/11 being pushed through any tree with my ack. Please let 
me know if/when you want me to apply it to my tree.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCHv4 01/11] atomic: add *_dec_not_zero
  2011-07-27  9:47 ` Sven Eckelmann
  (?)
  (?)
@ 2011-08-02  9:58   ` Ralf Baechle
  -1 siblings, 0 replies; 38+ messages in thread
From: Ralf Baechle @ 2011-08-02  9:58 UTC (permalink / raw)
  To: Sven Eckelmann
  Cc: linux-m32r-ja, linux-mips, linux-ia64, linux-doc,
	Benjamin Herrenschmidt, H. Peter Anvin, Heiko Carstens,
	Randy Dunlap, Paul Mackerras, Helge Deller, sparclinux,
	linux-arch, linux-s390, Russell King, user-mode-linux-devel,
	Richard Weinberger, Hirokazu Takata, x86, James E.J. Bottomley,
	Ingo Molnar, Matt Turner, Fenghua Yu, Arnd Bergmann, Jeff Dike,
	Chris Metcalf, linux-m32r

On Wed, Jul 27, 2011 at 11:47:40AM +0200, Sven Eckelmann wrote:

For MIPS:

Acked-by: Ralf Baechle <ralf@linux-mips.org>

  Ralf

^ permalink raw reply	[flat|nested] 38+ messages in thread

end of thread, other threads:[~2011-08-02 10:14 UTC | newest]

Thread overview: 38+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-07-27  9:47 [PATCHv4 01/11] atomic: add *_dec_not_zero Sven Eckelmann
2011-07-27  9:47 ` Sven Eckelmann
2011-07-27  9:47 ` Sven Eckelmann
2011-07-27  9:47 ` Sven Eckelmann
2011-07-27  9:47 ` [PATCHv4 02/11] batman-adv: Remove private define of atomic_dec_not_zero Sven Eckelmann
2011-07-27  9:47   ` [B.A.T.M.A.N.] " Sven Eckelmann
2011-07-27  9:47   ` Sven Eckelmann
2011-07-27  9:47 ` [PATCHv4 03/11] fault_inject: " Sven Eckelmann
2011-07-27  9:47 ` [PATCHv4 04/11] PM: Use *_dec_not_zero instead of *_add_unless Sven Eckelmann
2011-07-27  9:47 ` Sven Eckelmann
2011-07-27 19:50   ` Rafael J. Wysocki
2011-07-27 19:50   ` Rafael J. Wysocki
2011-07-27 20:36     ` Pavel Machek
2011-07-27 20:36     ` Pavel Machek
2011-07-28 21:43       ` Rafael J. Wysocki
2011-07-28 21:43         ` Rafael J. Wysocki
2011-07-27  9:47 ` [PATCHv4 05/11] omap3isp: " Sven Eckelmann
2011-07-31 15:00   ` Laurent Pinchart
2011-08-01 10:07     ` Sven Eckelmann
2011-08-01 11:10       ` Laurent Pinchart
2011-07-27  9:47 ` [PATCHv4 06/11] qeth: " Sven Eckelmann
2011-07-27  9:47 ` [PATCHv4 07/11] ext4: " Sven Eckelmann
2011-07-27  9:47 ` [PATCHv4 08/11] xfs: " Sven Eckelmann
2011-07-27  9:47   ` Sven Eckelmann
2011-07-27  9:47 ` [PATCHv4 09/11] memcg: " Sven Eckelmann
2011-07-27  9:47   ` Sven Eckelmann
2011-07-27 11:58   ` Michal Hocko
2011-07-27 11:58     ` Michal Hocko
2011-07-27  9:47 ` [PATCHv4 10/11] drop_monitor: " Sven Eckelmann
2011-07-27 10:59   ` Neil Horman
2011-07-27 11:52     ` Sven Eckelmann
2011-07-27 14:25       ` Neil Horman
2011-07-27  9:47 ` [PATCHv4 11/11] Phonet: " Sven Eckelmann
2011-07-27 10:29   ` Rémi Denis-Courmont
2011-08-02  9:58 ` [PATCHv4 01/11] atomic: add *_dec_not_zero Ralf Baechle
2011-08-02  9:58   ` Ralf Baechle
2011-08-02  9:58   ` Ralf Baechle
2011-08-02  9:58   ` Ralf Baechle
