* [PATCH net-next] udp: do rmem bulk free even if the rx sk queue is empty
From: Paolo Abeni @ 2017-09-19 10:11 UTC (permalink / raw)
  To: netdev; +Cc: David S. Miller, Eric Dumazet

Commit 6b229cf77d68 ("udp: add batching to udp_rmem_release()")
greatly reduced the cacheline contention between the BH and the
userspace reader by batching the rmem updates in most scenarios.

Such optimization is explicitly avoided when the userspace reader
is faster than the BH processing.

My fault: I initially suggested this behavior out of concern about
possible regressions with small sk_rcvbuf values. Tests showed that
such concerns were misplaced, so this commit relaxes the condition
for rmem bulk updates, obtaining a small but measurable performance
gain in the scenario described above.
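
For illustration, below is a minimal, self-contained sketch of the
deficit-batching idea. The toy_udp_sock and toy_rmem_release() names
are made up for this sketch, and locking and SK_MEM_QUANTUM rounding
are omitted; the real logic is udp_rmem_release() in net/ipv4/udp.c,
touched by the diff below.

/*
 * Sketch only, not kernel code: freed receive memory is accumulated
 * in a per-socket deficit and flushed to the shared accounting field
 * in bulk, so the BH producer and the userspace consumer touch the
 * contended cacheline less often.
 */
struct toy_udp_sock {
	int forward_deficit;	/* bytes freed but not yet returned */
	int sk_rcvbuf;		/* receive buffer limit */
	int sk_rmem_alloc;	/* shared, contended counter */
};

static void toy_rmem_release(struct toy_udp_sock *up, int size, int partial)
{
	if (partial) {
		up->forward_deficit += size;
		size = up->forward_deficit;
		/*
		 * With this patch the threshold alone decides; before
		 * it, batching was also skipped whenever the reader
		 * queue was empty (i.e. the userspace reader was
		 * keeping up with BH processing).
		 */
		if (size < (up->sk_rcvbuf >> 2))
			return;
	} else {
		size += up->forward_deficit;
	}
	up->forward_deficit = 0;
	up->sk_rmem_alloc -= size;	/* one bulk update instead of many */
}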

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
 net/ipv4/udp.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index ef29df8648e4..784ced0b9150 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1212,8 +1212,7 @@ static void udp_rmem_release(struct sock *sk, int size, int partial,
 	if (likely(partial)) {
 		up->forward_deficit += size;
 		size = up->forward_deficit;
-		if (size < (sk->sk_rcvbuf >> 2) &&
-		    !skb_queue_empty(&up->reader_queue))
+		if (size < (sk->sk_rcvbuf >> 2))
 			return;
 	} else {
 		size += up->forward_deficit;
-- 
2.13.5


* Re: [PATCH net-next] udp: do rmem bulk free even if the rx sk queue is empty
From: David Miller @ 2017-09-20 21:29 UTC (permalink / raw)
  To: pabeni; +Cc: netdev, edumazet

From: Paolo Abeni <pabeni@redhat.com>
Date: Tue, 19 Sep 2017 12:11:43 +0200

> Commit 6b229cf77d68 ("udp: add batching to udp_rmem_release()")
> greatly reduced the cacheline contention between the BH and the
> userspace reader by batching the rmem updates in most scenarios.
> 
> Such optimization is explicitly avoided when the userspace reader
> is faster than the BH processing.
> 
> My fault: I initially suggested this behavior out of concern about
> possible regressions with small sk_rcvbuf values. Tests showed that
> such concerns were misplaced, so this commit relaxes the condition
> for rmem bulk updates, obtaining a small but measurable performance
> gain in the scenario described above.
> 
> Signed-off-by: Paolo Abeni <pabeni@redhat.com>

Applied, thanks Paolo.


