* [PATCH][net-next] skbuff: pass the result of data ksize to __build_skb_around
From: Li RongQing @ 2021-09-22  6:17 UTC
  To: netdev

Avoid calling ksize() a second time in __build_skb_around() by passing
the ksize() result for the data buffer down to __build_skb_around().

An nginx stress test shows this change reduces ksize() CPU usage and
gives a small performance boost.

Signed-off-by: Li RongQing <lirongqing@baidu.com>
---
 net/core/skbuff.c |    8 +++++---
 1 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 9240af2..1e0a930 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -397,8 +397,9 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
 {
 	struct kmem_cache *cache;
 	struct sk_buff *skb;
-	u8 *data;
+	unsigned int osize;
 	bool pfmemalloc;
+	u8 *data;
 
 	cache = (flags & SKB_ALLOC_FCLONE)
 		? skbuff_fclone_cache : skbuff_head_cache;
@@ -430,7 +431,8 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
 	 * Put skb_shared_info exactly at the end of allocated zone,
 	 * to allow max possible filling before reallocation.
 	 */
-	size = SKB_WITH_OVERHEAD(ksize(data));
+	osize = ksize(data);
+	size = SKB_WITH_OVERHEAD(osize);
 	prefetchw(data + size);
 
 	/*
@@ -439,7 +441,7 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
 	 * the tail pointer in struct sk_buff!
 	 */
 	memset(skb, 0, offsetof(struct sk_buff, tail));
-	__build_skb_around(skb, data, 0);
+	__build_skb_around(skb, data, osize);
 	skb->pfmemalloc = pfmemalloc;
 
 	if (flags & SKB_ALLOC_FCLONE) {
-- 
1.7.1
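
At the time of this patch, __build_skb_around() treats a frag_size of 0
as "compute the size yourself" and falls back to ksize(data), so the old
call from __alloc_skb() made ksize() run twice on the same buffer.  The
change queries ksize() once, keeps the result in osize, and hands it
down.  Below is a minimal userspace sketch of the same pattern, not the
kernel code: malloc_usable_size() (glibc-specific) stands in for
ksize(), and build_around() is a hypothetical stand-in for
__build_skb_around().

/*
 * Illustrative only -- build_around() is not the kernel API.
 * malloc_usable_size() plays the role of ksize(): it reports how much
 * usable memory the allocator really handed out for a pointer.
 */
#include <malloc.h>   /* malloc_usable_size(), glibc-specific */
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for __build_skb_around(): size == 0 means "look it up yourself". */
static void build_around(void *data, size_t size)
{
	if (size == 0)
		size = malloc_usable_size(data);  /* the redundant second query */
	printf("building around %zu usable bytes\n", size);
}

int main(void)
{
	void *data = malloc(512);
	size_t osize;

	if (!data)
		return 1;

	/* Old pattern: pass 0 and let build_around() query the size again. */
	build_around(data, 0);

	/* Patched pattern: query once, reuse the result, as __alloc_skb()
	 * now does with osize for both SKB_WITH_OVERHEAD() and the call to
	 * __build_skb_around().
	 */
	osize = malloc_usable_size(data);
	build_around(data, osize);

	free(data);
	return 0;
}

The saving per call is tiny, but __alloc_skb() sits on the hot path for
every allocated skb, which is presumably where the nginx stress-test
gain comes from.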



* Re: [PATCH][net-next] skbuff: pass the result of data ksize to __build_skb_around
From: patchwork-bot+netdevbpf @ 2021-09-22 13:30 UTC
  To: Li RongQing; +Cc: netdev

Hello:

This patch was applied to netdev/net-next.git (refs/heads/master):

On Wed, 22 Sep 2021 14:17:19 +0800 you wrote:
> Avoid calling ksize() a second time in __build_skb_around() by passing
> the ksize() result for the data buffer down to __build_skb_around().
> 
> An nginx stress test shows this change reduces ksize() CPU usage and
> gives a small performance boost.
> 
> Signed-off-by: Li RongQing <lirongqing@baidu.com>
> 
> [...]

Here is the summary with links:
  - [net-next] skbuff: pass the result of data ksize to __build_skb_around
    https://git.kernel.org/netdev/net-next/c/a5df6333f1a0

You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html





