netdev.vger.kernel.org archive mirror
* [PATCH net-next] tcp: sk_forced_mem_schedule() optimization
@ 2022-06-11  3:30 Eric Dumazet
From: Eric Dumazet @ 2022-06-11  3:30 UTC (permalink / raw)
  To: David S . Miller, Jakub Kicinski, Paolo Abeni
  Cc: netdev, Soheil Hassas Yeganeh, Wei Wang, Shakeel Butt,
	Neal Cardwell, Eric Dumazet, Eric Dumazet

From: Eric Dumazet <edumazet@google.com>

sk_memory_allocated_add() has three callers, and returns
to them @memory_allocated.

sk_forced_mem_schedule() is one of them, and ignores
the returned value.

Change sk_memory_allocated_add() to return void.

Change sock_reserve_memory() and __sk_mem_raise_allocated()
to call sk_memory_allocated().

This removes one cache line miss [1] for RPC workloads,
as first skbs in TCP write queue and receive queue go through
sk_forced_mem_schedule().

[1] Cache line holding tcp_memory_allocated.

Signed-off-by: Eric Dumazet <edumazet@google.com>
---
 include/net/sock.h | 3 +--
 net/core/sock.c    | 9 ++++++---
 2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/include/net/sock.h b/include/net/sock.h
index 0063e8410a4e3ed91aef9cf34eb1127f7ce33b93..304a5e39d41e27105148058066e8fa23490cf9fa 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1412,7 +1412,7 @@ sk_memory_allocated(const struct sock *sk)
 /* 1 MB per cpu, in page units */
 #define SK_MEMORY_PCPU_RESERVE (1 << (20 - PAGE_SHIFT))
 
-static inline long
+static inline void
 sk_memory_allocated_add(struct sock *sk, int amt)
 {
 	int local_reserve;
@@ -1424,7 +1424,6 @@ sk_memory_allocated_add(struct sock *sk, int amt)
 		atomic_long_add(local_reserve, sk->sk_prot->memory_allocated);
 	}
 	preempt_enable();
-	return sk_memory_allocated(sk);
 }
 
 static inline void
diff --git a/net/core/sock.c b/net/core/sock.c
index 697d5c8e2f0def49351a7d38ec59ab5e875d3b10..92a0296ccb1842f11fb8dd4b2f768885d05daa8f 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -1019,7 +1019,8 @@ static int sock_reserve_memory(struct sock *sk, int bytes)
 		return -ENOMEM;
 
 	/* pre-charge to forward_alloc */
-	allocated = sk_memory_allocated_add(sk, pages);
+	sk_memory_allocated_add(sk, pages);
+	allocated = sk_memory_allocated(sk);
 	/* If the system goes into memory pressure with this
 	 * precharge, give up and return error.
 	 */
@@ -2906,11 +2907,13 @@ EXPORT_SYMBOL(sk_wait_data);
  */
 int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
 {
-	struct proto *prot = sk->sk_prot;
-	long allocated = sk_memory_allocated_add(sk, amt);
 	bool memcg_charge = mem_cgroup_sockets_enabled && sk->sk_memcg;
+	struct proto *prot = sk->sk_prot;
 	bool charged = true;
+	long allocated;
 
+	sk_memory_allocated_add(sk, amt);
+	allocated = sk_memory_allocated(sk);
 	if (memcg_charge &&
 	    !(charged = mem_cgroup_charge_skmem(sk->sk_memcg, amt,
 						gfp_memcg_charge())))
-- 
2.36.1.476.g0c4daa206d-goog



* Re: [PATCH net-next] tcp: sk_forced_mem_schedule() optimization
From: Soheil Hassas Yeganeh @ 2022-06-11  3:44 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: David S . Miller, Jakub Kicinski, Paolo Abeni, netdev, Wei Wang,
	Shakeel Butt, Neal Cardwell, Eric Dumazet

On Fri, Jun 10, 2022 at 11:30 PM Eric Dumazet <eric.dumazet@gmail.com> wrote:
>
> From: Eric Dumazet <edumazet@google.com>
>
> [...]
>
> Signed-off-by: Eric Dumazet <edumazet@google.com>

Acked-by: Soheil Hassas Yeganeh <soheil@google.com>

Nice find!



* Re: [PATCH net-next] tcp: sk_forced_mem_schedule() optimization
From: Shakeel Butt @ 2022-06-11 21:13 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: David S . Miller, Jakub Kicinski, Paolo Abeni, netdev,
	Soheil Hassas Yeganeh, Wei Wang, Neal Cardwell, Eric Dumazet

On Fri, Jun 10, 2022 at 8:30 PM Eric Dumazet <eric.dumazet@gmail.com> wrote:
>
> From: Eric Dumazet <edumazet@google.com>
>
> [...]
>
> Signed-off-by: Eric Dumazet <edumazet@google.com>

Reviewed-by: Shakeel Butt <shakeelb@google.com>


* Re: [PATCH net-next] tcp: sk_forced_mem_schedule() optimization
From: patchwork-bot+netdevbpf @ 2022-06-13 12:40 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: davem, kuba, pabeni, netdev, soheil, weiwan, shakeelb, ncardwell,
	edumazet

Hello:

This patch was applied to netdev/net-next.git (master)
by David S. Miller <davem@davemloft.net>:

On Fri, 10 Jun 2022 20:30:16 -0700 you wrote:
> From: Eric Dumazet <edumazet@google.com>
> 
> sk_memory_allocated_add() has three callers, and returns
> to them @memory_allocated.
> 
> sk_forced_mem_schedule() is one of them, and ignores
> the returned value.
> 
> [...]

Here is the summary with links:
  - [net-next] tcp: sk_forced_mem_schedule() optimization
    https://git.kernel.org/netdev/net-next/c/219160be496f

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html



