From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	stable@vger.kernel.org, Will Deacon <will@kernel.org>,
	Ard Biesheuvel <ardb@kernel.org>,
	Kees Cook <keescook@chromium.org>,
	Hanjun Guo <guohanjun@huawei.com>,
	Ard Biesheuvel <ard.biesheuvel@linaro.org>,
	Elena Reshetova <elena.reshetova@intel.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@kernel.org>, Sasha Levin <sashal@kernel.org>
Subject: [PATCH 5.4 63/87] locking/refcount: Move the bulk of the REFCOUNT_FULL implementation into the <linux/refcount.h> header
Date: Wed, 27 Jul 2022 18:10:56 +0200
Message-ID: <20220727161011.601518877@linuxfoundation.org>
In-Reply-To: <20220727161008.993711844@linuxfoundation.org>

From: Will Deacon <will@kernel.org>

[ Upstream commit 77e9971c79c29542ab7dd4140f9343bf2ff36158 ]

In an effort to improve performance of the REFCOUNT_FULL implementation,
move the bulk of its functions into linux/refcount.h. This allows them
to be inlined in the same way as if they had been provided via
CONFIG_ARCH_HAS_REFCOUNT.
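
The practical effect is that, with CONFIG_REFCOUNT_FULL, the saturating
compare-and-swap loops below can now be inlined at each call site instead
of always costing an out-of-line call into lib/refcount.c. As a rough
user-space analogue of one of those loops (a minimal sketch built on C11
atomics rather than the kernel's atomic_t API; the names
sketch_add_not_zero and SKETCH_SATURATED are invented for illustration):

#include <limits.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for REFCOUNT_SATURATED in this sketch. */
#define SKETCH_SATURATED	UINT_MAX

/* Saturating "add unless zero", mirroring the shape of the cmpxchg loop. */
static inline bool sketch_add_not_zero(unsigned int i, atomic_uint *r)
{
	unsigned int new, val = atomic_load_explicit(r, memory_order_relaxed);

	do {
		if (!val)			/* refcount already hit zero */
			return false;
		if (val == SKETCH_SATURATED)	/* saturated: never move again */
			return true;
		new = val + i;
		if (new < val)			/* overflow: clamp, do not wrap */
			new = SKETCH_SATURATED;
	} while (!atomic_compare_exchange_weak_explicit(r, &val, new,
							memory_order_relaxed,
							memory_order_relaxed));
	return true;
}

int main(void)
{
	atomic_uint refs = 1;
	bool ok = sketch_add_not_zero(1, &refs);

	printf("took ref: %d, count now %u\n", ok,
	       (unsigned int)atomic_load(&refs));
	return 0;
}

The kernel versions in the diff below differ mainly in using
atomic_try_cmpxchg_relaxed() and in WARNing once when the counter
saturates.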

Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Tested-by: Hanjun Guo <guohanjun@huawei.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20191121115902.2551-5-will@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 include/linux/refcount.h | 237 ++++++++++++++++++++++++++++++++++++--
 lib/refcount.c           | 238 +--------------------------------------
 2 files changed, 229 insertions(+), 246 deletions(-)

diff --git a/include/linux/refcount.h b/include/linux/refcount.h
index edd505d1a23b..e719b5b1220e 100644
--- a/include/linux/refcount.h
+++ b/include/linux/refcount.h
@@ -45,22 +45,241 @@ static inline unsigned int refcount_read(const refcount_t *r)
 }
 
 #ifdef CONFIG_REFCOUNT_FULL
+#include <linux/bug.h>
 
 #define REFCOUNT_MAX		(UINT_MAX - 1)
 #define REFCOUNT_SATURATED	UINT_MAX
 
-extern __must_check bool refcount_add_not_zero(int i, refcount_t *r);
-extern void refcount_add(int i, refcount_t *r);
+/*
+ * Variant of atomic_t specialized for reference counts.
+ *
+ * The interface matches the atomic_t interface (to aid in porting) but only
+ * provides the few functions one should use for reference counting.
+ *
+ * It differs in that the counter saturates at REFCOUNT_SATURATED and will not
+ * move once there. This avoids wrapping the counter and causing 'spurious'
+ * use-after-free issues.
+ *
+ * Memory ordering rules are slightly relaxed wrt regular atomic_t functions
+ * and provide only what is strictly required for refcounts.
+ *
+ * The increments are fully relaxed; these will not provide ordering. The
+ * rationale is that whatever is used to obtain the object we're increasing the
+ * reference count on will provide the ordering. For locked data structures,
+ * its the lock acquire, for RCU/lockless data structures its the dependent
+ * load.
+ *
+ * Do note that inc_not_zero() provides a control dependency which will order
+ * future stores against the inc, this ensures we'll never modify the object
+ * if we did not in fact acquire a reference.
+ *
+ * The decrements will provide release order, such that all the prior loads and
+ * stores will be issued before, it also provides a control dependency, which
+ * will order us against the subsequent free().
+ *
+ * The control dependency is against the load of the cmpxchg (ll/sc) that
+ * succeeded. This means the stores aren't fully ordered, but this is fine
+ * because the 1->0 transition indicates no concurrency.
+ *
+ * Note that the allocator is responsible for ordering things between free()
+ * and alloc().
+ *
+ * The decrements dec_and_test() and sub_and_test() also provide acquire
+ * ordering on success.
+ *
+ */
+
+/**
+ * refcount_add_not_zero - add a value to a refcount unless it is 0
+ * @i: the value to add to the refcount
+ * @r: the refcount
+ *
+ * Will saturate at REFCOUNT_SATURATED and WARN.
+ *
+ * Provides no memory ordering, it is assumed the caller has guaranteed the
+ * object memory to be stable (RCU, etc.). It does provide a control dependency
+ * and thereby orders future stores. See the comment on top.
+ *
+ * Use of this function is not recommended for the normal reference counting
+ * use case in which references are taken and released one at a time.  In these
+ * cases, refcount_inc(), or one of its variants, should instead be used to
+ * increment a reference count.
+ *
+ * Return: false if the passed refcount is 0, true otherwise
+ */
+static inline __must_check bool refcount_add_not_zero(int i, refcount_t *r)
+{
+	unsigned int new, val = atomic_read(&r->refs);
+
+	do {
+		if (!val)
+			return false;
+
+		if (unlikely(val == REFCOUNT_SATURATED))
+			return true;
+
+		new = val + i;
+		if (new < val)
+			new = REFCOUNT_SATURATED;
+
+	} while (!atomic_try_cmpxchg_relaxed(&r->refs, &val, new));
+
+	WARN_ONCE(new == REFCOUNT_SATURATED,
+		  "refcount_t: saturated; leaking memory.\n");
+
+	return true;
+}
+
+/**
+ * refcount_add - add a value to a refcount
+ * @i: the value to add to the refcount
+ * @r: the refcount
+ *
+ * Similar to atomic_add(), but will saturate at REFCOUNT_SATURATED and WARN.
+ *
+ * Provides no memory ordering, it is assumed the caller has guaranteed the
+ * object memory to be stable (RCU, etc.). It does provide a control dependency
+ * and thereby orders future stores. See the comment on top.
+ *
+ * Use of this function is not recommended for the normal reference counting
+ * use case in which references are taken and released one at a time.  In these
+ * cases, refcount_inc(), or one of its variants, should instead be used to
+ * increment a reference count.
+ */
+static inline void refcount_add(int i, refcount_t *r)
+{
+	WARN_ONCE(!refcount_add_not_zero(i, r), "refcount_t: addition on 0; use-after-free.\n");
+}
+
+/**
+ * refcount_inc_not_zero - increment a refcount unless it is 0
+ * @r: the refcount to increment
+ *
+ * Similar to atomic_inc_not_zero(), but will saturate at REFCOUNT_SATURATED
+ * and WARN.
+ *
+ * Provides no memory ordering, it is assumed the caller has guaranteed the
+ * object memory to be stable (RCU, etc.). It does provide a control dependency
+ * and thereby orders future stores. See the comment on top.
+ *
+ * Return: true if the increment was successful, false otherwise
+ */
+static inline __must_check bool refcount_inc_not_zero(refcount_t *r)
+{
+	unsigned int new, val = atomic_read(&r->refs);
+
+	do {
+		new = val + 1;
 
-extern __must_check bool refcount_inc_not_zero(refcount_t *r);
-extern void refcount_inc(refcount_t *r);
+		if (!val)
+			return false;
 
-extern __must_check bool refcount_sub_and_test(int i, refcount_t *r);
+		if (unlikely(!new))
+			return true;
 
-extern __must_check bool refcount_dec_and_test(refcount_t *r);
-extern void refcount_dec(refcount_t *r);
+	} while (!atomic_try_cmpxchg_relaxed(&r->refs, &val, new));
+
+	WARN_ONCE(new == REFCOUNT_SATURATED,
+		  "refcount_t: saturated; leaking memory.\n");
+
+	return true;
+}
+
+/**
+ * refcount_inc - increment a refcount
+ * @r: the refcount to increment
+ *
+ * Similar to atomic_inc(), but will saturate at REFCOUNT_SATURATED and WARN.
+ *
+ * Provides no memory ordering, it is assumed the caller already has a
+ * reference on the object.
+ *
+ * Will WARN if the refcount is 0, as this represents a possible use-after-free
+ * condition.
+ */
+static inline void refcount_inc(refcount_t *r)
+{
+	WARN_ONCE(!refcount_inc_not_zero(r), "refcount_t: increment on 0; use-after-free.\n");
+}
+
+/**
+ * refcount_sub_and_test - subtract from a refcount and test if it is 0
+ * @i: amount to subtract from the refcount
+ * @r: the refcount
+ *
+ * Similar to atomic_dec_and_test(), but it will WARN, return false and
+ * ultimately leak on underflow and will fail to decrement when saturated
+ * at REFCOUNT_SATURATED.
+ *
+ * Provides release memory ordering, such that prior loads and stores are done
+ * before, and provides an acquire ordering on success such that free()
+ * must come after.
+ *
+ * Use of this function is not recommended for the normal reference counting
+ * use case in which references are taken and released one at a time.  In these
+ * cases, refcount_dec(), or one of its variants, should instead be used to
+ * decrement a reference count.
+ *
+ * Return: true if the resulting refcount is 0, false otherwise
+ */
+static inline __must_check bool refcount_sub_and_test(int i, refcount_t *r)
+{
+	unsigned int new, val = atomic_read(&r->refs);
+
+	do {
+		if (unlikely(val == REFCOUNT_SATURATED))
+			return false;
+
+		new = val - i;
+		if (new > val) {
+			WARN_ONCE(new > val, "refcount_t: underflow; use-after-free.\n");
+			return false;
+		}
+
+	} while (!atomic_try_cmpxchg_release(&r->refs, &val, new));
+
+	if (!new) {
+		smp_acquire__after_ctrl_dep();
+		return true;
+	}
+	return false;
+
+}
+
+/**
+ * refcount_dec_and_test - decrement a refcount and test if it is 0
+ * @r: the refcount
+ *
+ * Similar to atomic_dec_and_test(), it will WARN on underflow and fail to
+ * decrement when saturated at REFCOUNT_SATURATED.
+ *
+ * Provides release memory ordering, such that prior loads and stores are done
+ * before, and provides an acquire ordering on success such that free()
+ * must come after.
+ *
+ * Return: true if the resulting refcount is 0, false otherwise
+ */
+static inline __must_check bool refcount_dec_and_test(refcount_t *r)
+{
+	return refcount_sub_and_test(1, r);
+}
+
+/**
+ * refcount_dec - decrement a refcount
+ * @r: the refcount
+ *
+ * Similar to atomic_dec(), it will WARN on underflow and fail to decrement
+ * when saturated at REFCOUNT_SATURATED.
+ *
+ * Provides release memory ordering, such that prior loads and stores are done
+ * before.
+ */
+static inline void refcount_dec(refcount_t *r)
+{
+	WARN_ONCE(refcount_dec_and_test(r), "refcount_t: decrement hit 0; leaking memory.\n");
+}
 
-#else
+#else /* CONFIG_REFCOUNT_FULL */
 
 #define REFCOUNT_MAX		INT_MAX
 #define REFCOUNT_SATURATED	(INT_MIN / 2)
@@ -103,7 +322,7 @@ static inline void refcount_dec(refcount_t *r)
 	atomic_dec(&r->refs);
 }
 # endif /* !CONFIG_ARCH_HAS_REFCOUNT */
-#endif /* CONFIG_REFCOUNT_FULL */
+#endif /* !CONFIG_REFCOUNT_FULL */
 
 extern __must_check bool refcount_dec_if_one(refcount_t *r);
 extern __must_check bool refcount_dec_not_one(refcount_t *r);
diff --git a/lib/refcount.c b/lib/refcount.c
index a2f670998cee..3a534fbebdcc 100644
--- a/lib/refcount.c
+++ b/lib/refcount.c
@@ -1,41 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0
 /*
- * Variant of atomic_t specialized for reference counts.
- *
- * The interface matches the atomic_t interface (to aid in porting) but only
- * provides the few functions one should use for reference counting.
- *
- * It differs in that the counter saturates at REFCOUNT_SATURATED and will not
- * move once there. This avoids wrapping the counter and causing 'spurious'
- * use-after-free issues.
- *
- * Memory ordering rules are slightly relaxed wrt regular atomic_t functions
- * and provide only what is strictly required for refcounts.
- *
- * The increments are fully relaxed; these will not provide ordering. The
- * rationale is that whatever is used to obtain the object we're increasing the
- * reference count on will provide the ordering. For locked data structures,
- * its the lock acquire, for RCU/lockless data structures its the dependent
- * load.
- *
- * Do note that inc_not_zero() provides a control dependency which will order
- * future stores against the inc, this ensures we'll never modify the object
- * if we did not in fact acquire a reference.
- *
- * The decrements will provide release order, such that all the prior loads and
- * stores will be issued before, it also provides a control dependency, which
- * will order us against the subsequent free().
- *
- * The control dependency is against the load of the cmpxchg (ll/sc) that
- * succeeded. This means the stores aren't fully ordered, but this is fine
- * because the 1->0 transition indicates no concurrency.
- *
- * Note that the allocator is responsible for ordering things between free()
- * and alloc().
- *
- * The decrements dec_and_test() and sub_and_test() also provide acquire
- * ordering on success.
- *
+ * Out-of-line refcount functions common to all refcount implementations.
  */
 
 #include <linux/mutex.h>
@@ -43,207 +8,6 @@
 #include <linux/spinlock.h>
 #include <linux/bug.h>
 
-#ifdef CONFIG_REFCOUNT_FULL
-
-/**
- * refcount_add_not_zero - add a value to a refcount unless it is 0
- * @i: the value to add to the refcount
- * @r: the refcount
- *
- * Will saturate at REFCOUNT_SATURATED and WARN.
- *
- * Provides no memory ordering, it is assumed the caller has guaranteed the
- * object memory to be stable (RCU, etc.). It does provide a control dependency
- * and thereby orders future stores. See the comment on top.
- *
- * Use of this function is not recommended for the normal reference counting
- * use case in which references are taken and released one at a time.  In these
- * cases, refcount_inc(), or one of its variants, should instead be used to
- * increment a reference count.
- *
- * Return: false if the passed refcount is 0, true otherwise
- */
-bool refcount_add_not_zero(int i, refcount_t *r)
-{
-	unsigned int new, val = atomic_read(&r->refs);
-
-	do {
-		if (!val)
-			return false;
-
-		if (unlikely(val == REFCOUNT_SATURATED))
-			return true;
-
-		new = val + i;
-		if (new < val)
-			new = REFCOUNT_SATURATED;
-
-	} while (!atomic_try_cmpxchg_relaxed(&r->refs, &val, new));
-
-	WARN_ONCE(new == REFCOUNT_SATURATED,
-		  "refcount_t: saturated; leaking memory.\n");
-
-	return true;
-}
-EXPORT_SYMBOL(refcount_add_not_zero);
-
-/**
- * refcount_add - add a value to a refcount
- * @i: the value to add to the refcount
- * @r: the refcount
- *
- * Similar to atomic_add(), but will saturate at REFCOUNT_SATURATED and WARN.
- *
- * Provides no memory ordering, it is assumed the caller has guaranteed the
- * object memory to be stable (RCU, etc.). It does provide a control dependency
- * and thereby orders future stores. See the comment on top.
- *
- * Use of this function is not recommended for the normal reference counting
- * use case in which references are taken and released one at a time.  In these
- * cases, refcount_inc(), or one of its variants, should instead be used to
- * increment a reference count.
- */
-void refcount_add(int i, refcount_t *r)
-{
-	WARN_ONCE(!refcount_add_not_zero(i, r), "refcount_t: addition on 0; use-after-free.\n");
-}
-EXPORT_SYMBOL(refcount_add);
-
-/**
- * refcount_inc_not_zero - increment a refcount unless it is 0
- * @r: the refcount to increment
- *
- * Similar to atomic_inc_not_zero(), but will saturate at REFCOUNT_SATURATED
- * and WARN.
- *
- * Provides no memory ordering, it is assumed the caller has guaranteed the
- * object memory to be stable (RCU, etc.). It does provide a control dependency
- * and thereby orders future stores. See the comment on top.
- *
- * Return: true if the increment was successful, false otherwise
- */
-bool refcount_inc_not_zero(refcount_t *r)
-{
-	unsigned int new, val = atomic_read(&r->refs);
-
-	do {
-		new = val + 1;
-
-		if (!val)
-			return false;
-
-		if (unlikely(!new))
-			return true;
-
-	} while (!atomic_try_cmpxchg_relaxed(&r->refs, &val, new));
-
-	WARN_ONCE(new == REFCOUNT_SATURATED,
-		  "refcount_t: saturated; leaking memory.\n");
-
-	return true;
-}
-EXPORT_SYMBOL(refcount_inc_not_zero);
-
-/**
- * refcount_inc - increment a refcount
- * @r: the refcount to increment
- *
- * Similar to atomic_inc(), but will saturate at REFCOUNT_SATURATED and WARN.
- *
- * Provides no memory ordering, it is assumed the caller already has a
- * reference on the object.
- *
- * Will WARN if the refcount is 0, as this represents a possible use-after-free
- * condition.
- */
-void refcount_inc(refcount_t *r)
-{
-	WARN_ONCE(!refcount_inc_not_zero(r), "refcount_t: increment on 0; use-after-free.\n");
-}
-EXPORT_SYMBOL(refcount_inc);
-
-/**
- * refcount_sub_and_test - subtract from a refcount and test if it is 0
- * @i: amount to subtract from the refcount
- * @r: the refcount
- *
- * Similar to atomic_dec_and_test(), but it will WARN, return false and
- * ultimately leak on underflow and will fail to decrement when saturated
- * at REFCOUNT_SATURATED.
- *
- * Provides release memory ordering, such that prior loads and stores are done
- * before, and provides an acquire ordering on success such that free()
- * must come after.
- *
- * Use of this function is not recommended for the normal reference counting
- * use case in which references are taken and released one at a time.  In these
- * cases, refcount_dec(), or one of its variants, should instead be used to
- * decrement a reference count.
- *
- * Return: true if the resulting refcount is 0, false otherwise
- */
-bool refcount_sub_and_test(int i, refcount_t *r)
-{
-	unsigned int new, val = atomic_read(&r->refs);
-
-	do {
-		if (unlikely(val == REFCOUNT_SATURATED))
-			return false;
-
-		new = val - i;
-		if (new > val) {
-			WARN_ONCE(new > val, "refcount_t: underflow; use-after-free.\n");
-			return false;
-		}
-
-	} while (!atomic_try_cmpxchg_release(&r->refs, &val, new));
-
-	if (!new) {
-		smp_acquire__after_ctrl_dep();
-		return true;
-	}
-	return false;
-
-}
-EXPORT_SYMBOL(refcount_sub_and_test);
-
-/**
- * refcount_dec_and_test - decrement a refcount and test if it is 0
- * @r: the refcount
- *
- * Similar to atomic_dec_and_test(), it will WARN on underflow and fail to
- * decrement when saturated at REFCOUNT_SATURATED.
- *
- * Provides release memory ordering, such that prior loads and stores are done
- * before, and provides an acquire ordering on success such that free()
- * must come after.
- *
- * Return: true if the resulting refcount is 0, false otherwise
- */
-bool refcount_dec_and_test(refcount_t *r)
-{
-	return refcount_sub_and_test(1, r);
-}
-EXPORT_SYMBOL(refcount_dec_and_test);
-
-/**
- * refcount_dec - decrement a refcount
- * @r: the refcount
- *
- * Similar to atomic_dec(), it will WARN on underflow and fail to decrement
- * when saturated at REFCOUNT_SATURATED.
- *
- * Provides release memory ordering, such that prior loads and stores are done
- * before.
- */
-void refcount_dec(refcount_t *r)
-{
-	WARN_ONCE(refcount_dec_and_test(r), "refcount_t: decrement hit 0; leaking memory.\n");
-}
-EXPORT_SYMBOL(refcount_dec);
-
-#endif /* CONFIG_REFCOUNT_FULL */
-
 /**
  * refcount_dec_if_one - decrement a refcount if it is 1
  * @r: the refcount
-- 
2.35.1



