* [PATCH net-next] page_pool: add a lockdep check for recycling in hardirq
@ 2023-07-20 17:37 Jakub Kicinski
  2023-07-21 11:53 ` Yunsheng Lin
  2023-07-21 15:48 ` Alexander Lobakin
  0 siblings, 2 replies; 7+ messages in thread
From: Jakub Kicinski @ 2023-07-20 17:37 UTC (permalink / raw)
  To: davem
  Cc: netdev, edumazet, pabeni, Jakub Kicinski, peterz, mingo, will,
	longman, boqun.feng, hawk, ilias.apalodimas

Page pool use in hardirq is prohibited; add debug checks
to catch misuses. IIRC we previously discussed using
DEBUG_NET_WARN_ON_ONCE() for this, but there were concerns
that people may have DEBUG_NET enabled during perf testing.
I don't think anyone enables lockdep in perf testing,
so use lockdep to avoid pushback and arguing :)

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
CC: peterz@infradead.org
CC: mingo@redhat.com
CC: will@kernel.org
CC: longman@redhat.com
CC: boqun.feng@gmail.com
CC: hawk@kernel.org
CC: ilias.apalodimas@linaro.org
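
For illustration, a minimal hypothetical sketch (driver and struct
names are made up, usual driver includes assumed) of the kind of
misuse the new assertion is meant to catch: returning a page pool
page straight from a hard IRQ handler instead of deferring the work
to NAPI (softirq) context. With lockdep enabled, the recycle path
now warns once via lockdep_assert_no_hardirq().

static irqreturn_t foo_rx_isr(int irq, void *data)
{
	struct foo_rxq *rxq = data;	/* hypothetical per-queue state */

	/* BAD: we are in hardirq context here. Recycling into the
	 * ptr_ring from this context is exactly what the new
	 * lockdep_assert_no_hardirq() check flags (WARN_ON_ONCE
	 * when lockdep is enabled).
	 */
	page_pool_put_full_page(rxq->page_pool, rxq->spare_page, false);

	/* Correct pattern: defer to NAPI and recycle from poll(). */
	napi_schedule(&rxq->napi);

	return IRQ_HANDLED;
}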
---
 include/linux/lockdep.h | 7 +++++++
 net/core/page_pool.c    | 4 ++++
 2 files changed, 11 insertions(+)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 310f85903c91..dc2844b071c2 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -625,6 +625,12 @@ do {									\
 	WARN_ON_ONCE(__lockdep_enabled && !this_cpu_read(hardirq_context)); \
 } while (0)
 
+#define lockdep_assert_no_hardirq()					\
+do {									\
+	WARN_ON_ONCE(__lockdep_enabled && (this_cpu_read(hardirq_context) || \
+					   !this_cpu_read(hardirqs_enabled))); \
+} while (0)
+
 #define lockdep_assert_preemption_enabled()				\
 do {									\
 	WARN_ON_ONCE(IS_ENABLED(CONFIG_PREEMPT_COUNT)	&&		\
@@ -659,6 +665,7 @@ do {									\
 # define lockdep_assert_irqs_enabled() do { } while (0)
 # define lockdep_assert_irqs_disabled() do { } while (0)
 # define lockdep_assert_in_irq() do { } while (0)
+# define lockdep_assert_no_hardirq() do { } while (0)
 
 # define lockdep_assert_preemption_enabled() do { } while (0)
 # define lockdep_assert_preemption_disabled() do { } while (0)
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index a3e12a61d456..3ac760fcdc22 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -536,6 +536,8 @@ static void page_pool_return_page(struct page_pool *pool, struct page *page)
 static bool page_pool_recycle_in_ring(struct page_pool *pool, struct page *page)
 {
 	int ret;
+
+	lockdep_assert_no_hardirq();
 	/* BH protection not needed if current is softirq */
 	if (in_softirq())
 		ret = ptr_ring_produce(&pool->ring, page);
@@ -642,6 +644,8 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 	int i, bulk_len = 0;
 	bool in_softirq;
 
+	lockdep_assert_no_hardirq();
+
 	for (i = 0; i < count; i++) {
 		struct page *page = virt_to_head_page(data[i]);
 
-- 
2.41.0



Thread overview: 7+ messages
2023-07-20 17:37 [PATCH net-next] page_pool: add a lockdep check for recycling in hardirq Jakub Kicinski
2023-07-21 11:53 ` Yunsheng Lin
2023-07-21 15:02   ` Jakub Kicinski
2023-07-21 15:48 ` Alexander Lobakin
2023-07-21 16:05   ` Jakub Kicinski
2023-07-21 16:33     ` Alexander Lobakin
2023-07-22  1:45       ` Jakub Kicinski
