From: David Howells <dhowells@redhat.com>
To: netdev@vger.kernel.org
Cc: David Howells <dhowells@redhat.com>,
"David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
Willem de Bruijn <willemdebruijn.kernel@gmail.com>,
David Ahern <dsahern@kernel.org>,
Matthew Wilcox <willy@infradead.org>,
Jens Axboe <axboe@kernel.dk>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next 07/12] net: Clean up users of netdev_alloc_cache and napi_frag_cache
Date: Wed, 24 May 2023 16:33:06 +0100
Message-ID: <20230524153311.3625329-8-dhowells@redhat.com>
In-Reply-To: <20230524153311.3625329-1-dhowells@redhat.com>

The users of netdev_alloc_cache and napi_frag_cache no longer need to
disable bottom halves around access to these fragment caches, as the
percpu handling is now done inside page_frag_alloc_align().

Signed-off-by: David Howells <dhowells@redhat.com>
cc: "David S. Miller" <davem@davemloft.net>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: linux-mm@kvack.org
---
include/linux/skbuff.h | 3 ++-
net/core/skbuff.c | 29 +++++++++--------------------
2 files changed, 11 insertions(+), 21 deletions(-)
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 41b63e72c6c3..e11a765fe7fa 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -252,7 +252,8 @@
/* Maximum value in skb->csum_level */
#define SKB_MAX_CSUM_LEVEL 3
-#define SKB_DATA_ALIGN(X) ALIGN(X, SMP_CACHE_BYTES)
+#define SKB_DATA_ALIGNMENT SMP_CACHE_BYTES
+#define SKB_DATA_ALIGN(X) ALIGN(X, SKB_DATA_ALIGNMENT)
#define SKB_WITH_OVERHEAD(X) \
((X) - SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 225a16f3713f..c2840b0dcad9 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -291,27 +291,20 @@ void napi_get_frags_check(struct napi_struct *napi)
void *napi_alloc_frag_align(unsigned int fragsz, unsigned int align)
{
- fragsz = SKB_DATA_ALIGN(fragsz);
-
+ align = min_t(unsigned int, align, SKB_DATA_ALIGNMENT);
return page_frag_alloc_align(&napi_frag_cache, fragsz, GFP_ATOMIC, align);
}
EXPORT_SYMBOL(napi_alloc_frag_align);
void *netdev_alloc_frag_align(unsigned int fragsz, unsigned int align)
{
- void *data;
-
- fragsz = SKB_DATA_ALIGN(fragsz);
- if (in_hardirq() || irqs_disabled()) {
- data = page_frag_alloc_align(&netdev_alloc_cache,
+ align = min_t(unsigned int, align, SKB_DATA_ALIGNMENT);
+ if (in_hardirq() || irqs_disabled())
+ return page_frag_alloc_align(&netdev_alloc_cache,
fragsz, GFP_ATOMIC, align);
- } else {
- local_bh_disable();
- data = page_frag_alloc_align(&napi_frag_cache,
+ else
+ return page_frag_alloc_align(&napi_frag_cache,
fragsz, GFP_ATOMIC, align);
- local_bh_enable();
- }
- return data;
}
EXPORT_SYMBOL(netdev_alloc_frag_align);
@@ -709,15 +702,11 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len,
if (sk_memalloc_socks())
gfp_mask |= __GFP_MEMALLOC;
- if (in_hardirq() || irqs_disabled()) {
+ if (in_hardirq() || irqs_disabled())
data = page_frag_alloc(&netdev_alloc_cache, len, gfp_mask);
- pfmemalloc = folio_is_pfmemalloc(virt_to_folio(data));
- } else {
- local_bh_disable();
+ else
data = page_frag_alloc(&napi_frag_cache, len, gfp_mask);
- pfmemalloc = folio_is_pfmemalloc(virt_to_folio(data));
- local_bh_enable();
- }
+ pfmemalloc = folio_is_pfmemalloc(virt_to_folio(data));
if (unlikely(!data))
return NULL;