* [PATCH net-next] tcp: add tcp_add_backlog()
@ 2016-08-27 14:37 Eric Dumazet
  2016-08-27 16:13 ` Yuchung Cheng
                   ` (4 more replies)
  0 siblings, 5 replies; 17+ messages in thread
From: Eric Dumazet @ 2016-08-27 14:37 UTC (permalink / raw)
  To: David Miller; +Cc: netdev, Neal Cardwell, Yuchung Cheng

From: Eric Dumazet <edumazet@google.com>

When TCP operates in lossy environments (between 1 and 10% packet
loss), many SACK blocks can be exchanged, and I noticed we could
drop them on busy senders if these SACK blocks have to be queued
into the socket backlog.

While the main cause is the poor performance of RACK/SACK processing,
we can try to avoid these drops of valuable information that can lead to
spurious timeouts and retransmits.

The cause of the drops is skb->truesize overestimation, caused by:

- drivers allocating ~2048 (or more) bytes as a fragment to hold an
  Ethernet frame.

- various pskb_may_pull() calls bringing the headers into skb->head
  might have pulled all the frame content, but skb->truesize could
  not be lowered, as the stack has no idea of each fragment truesize.

The backlog drops are also more visible on bidirectional flows, since
their sk_rmem_alloc can be quite big.

Let's add some room for the backlog, as only the socket owner
can selectively take action to lower memory needs, like collapsing
receive queues or partial ofo pruning.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
---
 include/net/tcp.h   |    1 +
 net/ipv4/tcp_ipv4.c |   33 +++++++++++++++++++++++++++++----
 net/ipv6/tcp_ipv6.c |    5 +----
 3 files changed, 31 insertions(+), 8 deletions(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index 25d64f6de69e1f639ed1531bf2d2df3f00fd76a2..5f5f09f6e019682ef29c864d2f43a8f247fcdd9a 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1163,6 +1163,7 @@ static inline void tcp_prequeue_init(struct tcp_sock *tp)
 }
 
 bool tcp_prequeue(struct sock *sk, struct sk_buff *skb);
+bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb);
 
 #undef STATE_TRACE
 
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index ad41e8ecf796bba1bd6d9ed155ca4a57ced96844..53e80cd004b6ce401c3acbb4b243b243c5c3c4a3 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -1532,6 +1532,34 @@ bool tcp_prequeue(struct sock *sk, struct sk_buff *skb)
 }
 EXPORT_SYMBOL(tcp_prequeue);
 
+bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
+{
+	u32 limit = sk->sk_rcvbuf + sk->sk_sndbuf;
+
+	/* Only socket owner can try to collapse/prune rx queues
+	 * to reduce memory overhead, so add a little headroom here.
+	 * Few sockets backlog are possibly concurrently non empty.
+	 */
+	limit += 64*1024;
+
+	/* In case all data was pulled from skb frags (in __pskb_pull_tail()),
+	 * we can fix skb->truesize to its real value to avoid future drops.
+	 * This is valid because skb is not yet charged to the socket.
+	 * It has been noticed pure SACK packets were sometimes dropped
+	 * (if cooked by drivers without copybreak feature).
+	 */
+	if (!skb->data_len)
+		skb->truesize = SKB_TRUESIZE(skb_end_offset(skb));
+
+	if (unlikely(sk_add_backlog(sk, skb, limit))) {
+		bh_unlock_sock(sk);
+		__NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPBACKLOGDROP);
+		return true;
+	}
+	return false;
+}
+EXPORT_SYMBOL(tcp_add_backlog);
+
 /*
  *	From tcp_input.c
  */
@@ -1662,10 +1690,7 @@ process:
 	if (!sock_owned_by_user(sk)) {
 		if (!tcp_prequeue(sk, skb))
 			ret = tcp_v4_do_rcv(sk, skb);
-	} else if (unlikely(sk_add_backlog(sk, skb,
-					   sk->sk_rcvbuf + sk->sk_sndbuf))) {
-		bh_unlock_sock(sk);
-		__NET_INC_STATS(net, LINUX_MIB_TCPBACKLOGDROP);
+	} else if (tcp_add_backlog(sk, skb)) {
 		goto discard_and_relse;
 	}
 	bh_unlock_sock(sk);
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index e4f55683af314438da8a09473927213a140e6e5c..5bf460bd299ff61ae759e8e545ba6298a1d1373c 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -1467,10 +1467,7 @@ process:
 	if (!sock_owned_by_user(sk)) {
 		if (!tcp_prequeue(sk, skb))
 			ret = tcp_v6_do_rcv(sk, skb);
-	} else if (unlikely(sk_add_backlog(sk, skb,
-					   sk->sk_rcvbuf + sk->sk_sndbuf))) {
-		bh_unlock_sock(sk);
-		__NET_INC_STATS(net, LINUX_MIB_TCPBACKLOGDROP);
+	} else if (tcp_add_backlog(sk, skb)) {
 		goto discard_and_relse;
 	}
 	bh_unlock_sock(sk);
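
For context on the numbers involved: SKB_TRUESIZE() is defined in
include/linux/skbuff.h roughly as follows (exact struct sizes depend on the
kernel configuration):

#define SKB_TRUESIZE(X) ((X) + \
                         SKB_DATA_ALIGN(sizeof(struct sk_buff)) + \
                         SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))

Illustrative numbers only: a pure SACK segment received into a ~2048-byte
driver fragment is typically charged around 2.5 KB of truesize (fragment plus
sk_buff and skb_shared_info metadata). Once pskb_may_pull() has copied the
whole frame into a small skb->head, the fixup in tcp_add_backlog() resets
truesize to SKB_TRUESIZE(skb_end_offset(skb)), usually well under 1 KB, so
roughly two to three times as many such packets fit under the same backlog
limit.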

* Re: [PATCH net-next] tcp: add tcp_add_backlog()
  2016-08-27 14:37 [PATCH net-next] tcp: add tcp_add_backlog() Eric Dumazet
@ 2016-08-27 16:13 ` Yuchung Cheng
  2016-08-27 16:25   ` Eric Dumazet
  2016-08-27 18:24 ` Neal Cardwell
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 17+ messages in thread
From: Yuchung Cheng @ 2016-08-27 16:13 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: David Miller, netdev, Neal Cardwell

On Sat, Aug 27, 2016 at 7:37 AM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
>
> From: Eric Dumazet <edumazet@google.com>
>
> When TCP operates in lossy environments (between 1 and 10 % packet
> losses), many SACK blocks can be exchanged, and I noticed we could
> drop them on busy senders, if these SACK blocks have to be queued
> into the socket backlog.
>
> While the main cause is the poor performance of RACK/SACK processing,
I have a patch in preparation for this ;)

> we can try to avoid these drops of valuable information that can lead to
> spurious timeouts and retransmits.
>
> Cause of the drops is the skb->truesize overestimation caused by :
>
> - drivers allocating ~2048 (or more) bytes as a fragment to hold an
>   Ethernet frame.
>
> - various pskb_may_pull() calls bringing the headers into skb->head
>   might have pulled all the frame content, but skb->truesize could
>   not be lowered, as the stack has no idea of each fragment truesize.
>
> The backlog drops are also more visible on bidirectional flows, since
> their sk_rmem_alloc can be quite big.
>
> Let's add some room for the backlog, as only the socket owner
> can selectively take action to lower memory needs, like collapsing
> receive queues or partial ofo pruning.
>
> Signed-off-by: Eric Dumazet <edumazet@google.com>
> Cc: Yuchung Cheng <ycheng@google.com>
> Cc: Neal Cardwell <ncardwell@google.com>
> ---
>  include/net/tcp.h   |    1 +
>  net/ipv4/tcp_ipv4.c |   33 +++++++++++++++++++++++++++++----
>  net/ipv6/tcp_ipv6.c |    5 +----
>  3 files changed, 31 insertions(+), 8 deletions(-)
>
> diff --git a/include/net/tcp.h b/include/net/tcp.h
> index 25d64f6de69e1f639ed1531bf2d2df3f00fd76a2..5f5f09f6e019682ef29c864d2f43a8f247fcdd9a 100644
> --- a/include/net/tcp.h
> +++ b/include/net/tcp.h
> @@ -1163,6 +1163,7 @@ static inline void tcp_prequeue_init(struct tcp_sock *tp)
>  }
>
>  bool tcp_prequeue(struct sock *sk, struct sk_buff *skb);
> +bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb);
>
>  #undef STATE_TRACE
>
> diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
> index ad41e8ecf796bba1bd6d9ed155ca4a57ced96844..53e80cd004b6ce401c3acbb4b243b243c5c3c4a3 100644
> --- a/net/ipv4/tcp_ipv4.c
> +++ b/net/ipv4/tcp_ipv4.c
> @@ -1532,6 +1532,34 @@ bool tcp_prequeue(struct sock *sk, struct sk_buff *skb)
>  }
>  EXPORT_SYMBOL(tcp_prequeue);
>
> +bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
> +{
> +       u32 limit = sk->sk_rcvbuf + sk->sk_sndbuf;
> +
> +       /* Only socket owner can try to collapse/prune rx queues
> +        * to reduce memory overhead, so add a little headroom here.
> +        * Few sockets backlog are possibly concurrently non empty.
> +        */
> +       limit += 64*1024;
Just a thought: only add the headroom if the ofo queue exists (e.g., signs
of losses or recovery).

btw is the added headroom subject to the memory pressure check?

> +
> +       /* In case all data was pulled from skb frags (in __pskb_pull_tail()),
> +        * we can fix skb->truesize to its real value to avoid future drops.
> +        * This is valid because skb is not yet charged to the socket.
> +        * It has been noticed pure SACK packets were sometimes dropped
> +        * (if cooked by drivers without copybreak feature).
> +        */
> +       if (!skb->data_len)
> +               skb->truesize = SKB_TRUESIZE(skb_end_offset(skb));
nice!

> +
> +       if (unlikely(sk_add_backlog(sk, skb, limit))) {
> +               bh_unlock_sock(sk);
> +               __NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPBACKLOGDROP);
> +               return true;
> +       }
> +       return false;
> +}
> +EXPORT_SYMBOL(tcp_add_backlog);
> +
>  /*
>   *     From tcp_input.c
>   */
> @@ -1662,10 +1690,7 @@ process:
>         if (!sock_owned_by_user(sk)) {
>                 if (!tcp_prequeue(sk, skb))
>                         ret = tcp_v4_do_rcv(sk, skb);
> -       } else if (unlikely(sk_add_backlog(sk, skb,
> -                                          sk->sk_rcvbuf + sk->sk_sndbuf))) {
> -               bh_unlock_sock(sk);
> -               __NET_INC_STATS(net, LINUX_MIB_TCPBACKLOGDROP);
> +       } else if (tcp_add_backlog(sk, skb)) {
>                 goto discard_and_relse;
>         }
>         bh_unlock_sock(sk);
> diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
> index e4f55683af314438da8a09473927213a140e6e5c..5bf460bd299ff61ae759e8e545ba6298a1d1373c 100644
> --- a/net/ipv6/tcp_ipv6.c
> +++ b/net/ipv6/tcp_ipv6.c
> @@ -1467,10 +1467,7 @@ process:
>         if (!sock_owned_by_user(sk)) {
>                 if (!tcp_prequeue(sk, skb))
>                         ret = tcp_v6_do_rcv(sk, skb);
> -       } else if (unlikely(sk_add_backlog(sk, skb,
> -                                          sk->sk_rcvbuf + sk->sk_sndbuf))) {
> -               bh_unlock_sock(sk);
> -               __NET_INC_STATS(net, LINUX_MIB_TCPBACKLOGDROP);
> +       } else if (tcp_add_backlog(sk, skb)) {
>                 goto discard_and_relse;
>         }
>         bh_unlock_sock(sk);
>
>

* Re: [PATCH net-next] tcp: add tcp_add_backlog()
  2016-08-27 16:13 ` Yuchung Cheng
@ 2016-08-27 16:25   ` Eric Dumazet
  2016-08-29 16:53     ` Yuchung Cheng
  0 siblings, 1 reply; 17+ messages in thread
From: Eric Dumazet @ 2016-08-27 16:25 UTC (permalink / raw)
  To: Yuchung Cheng; +Cc: David Miller, netdev, Neal Cardwell

On Sat, 2016-08-27 at 09:13 -0700, Yuchung Cheng wrote:
> On Sat, Aug 27, 2016 at 7:37 AM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> >

> > +       /* Only socket owner can try to collapse/prune rx queues
> > +        * to reduce memory overhead, so add a little headroom here.
> > +        * Few sockets backlog are possibly concurrently non empty.
> > +        */
> > +       limit += 64*1024;
> Just a thought: only add the headroom if ofo queue exists (e.g., signs
> of losses ore recovery).

Testing the ofo queue would add a cache line miss, and likely slow down the
other cpu processing the other packets for this flow.

Also, even if the ofo queue does not exist, the sk_rcvbuf budget can be
consumed by the regular receive queue.

We still need to be able to process incoming ACKs, if both send and
receive queues are 'full'.

> 
> btw is the added headroom subject to the memory pressure check?

Keep in mind that the backlog check here is mostly to avoid the kind of DoS
attacks that we had in the past.

While we should definitely prevent DoS attacks, we should also not drop
legitimate traffic.

Here, the number of backlogged sockets is limited by the number of cpus in
the host (if CONFIG_PREEMPT is disabled), or the number of threads blocked
during a sendmsg()/recvmsg() (if CONFIG_PREEMPT is enabled).

So we do not need to be ultra precise, just have a safeguard.

The pressure check will be done at the time skbs are added into a
receive/ofo queue, in the very near future.
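
For reference, the check that decides these drops lives in sk_add_backlog();
a paraphrased sketch of the include/net/sock.h code (pfmemalloc handling
omitted, details vary by kernel version):

static inline bool sk_rcvqueues_full(const struct sock *sk, unsigned int limit)
{
        unsigned int qsize = sk->sk_backlog.len + atomic_read(&sk->sk_rmem_alloc);

        return qsize > limit;
}

static inline __must_check int sk_add_backlog(struct sock *sk,
                                              struct sk_buff *skb,
                                              unsigned int limit)
{
        if (sk_rcvqueues_full(sk, limit))
                return -ENOBUFS;        /* caller bumps TCPBacklogDrop */

        __sk_add_backlog(sk, skb);
        sk->sk_backlog.len += skb->truesize;
        return 0;
}

Both the charge and the limit are expressed in truesize units, which is why
lowering skb->truesize in tcp_add_backlog() directly translates into fewer
backlog drops.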

Thanks !

* Re: [PATCH net-next] tcp: add tcp_add_backlog()
  2016-08-27 14:37 [PATCH net-next] tcp: add tcp_add_backlog() Eric Dumazet
  2016-08-27 16:13 ` Yuchung Cheng
@ 2016-08-27 18:24 ` Neal Cardwell
  2016-08-29  4:20 ` David Miller
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 17+ messages in thread
From: Neal Cardwell @ 2016-08-27 18:24 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: David Miller, netdev, Yuchung Cheng

On Sat, Aug 27, 2016 at 10:37 AM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> From: Eric Dumazet <edumazet@google.com>
>
> When TCP operates in lossy environments (between 1 and 10 % packet
> losses), many SACK blocks can be exchanged, and I noticed we could
> drop them on busy senders, if these SACK blocks have to be queued
> into the socket backlog.
>
> While the main cause is the poor performance of RACK/SACK processing,
> we can try to avoid these drops of valuable information that can lead to
> spurious timeouts and retransmits.
>
> Cause of the drops is the skb->truesize overestimation caused by :
>
> - drivers allocating ~2048 (or more) bytes as a fragment to hold an
>   Ethernet frame.
>
> - various pskb_may_pull() calls bringing the headers into skb->head
>   might have pulled all the frame content, but skb->truesize could
>   not be lowered, as the stack has no idea of each fragment truesize.
>
> The backlog drops are also more visible on bidirectional flows, since
> their sk_rmem_alloc can be quite big.
>
> Let's add some room for the backlog, as only the socket owner
> can selectively take action to lower memory needs, like collapsing
> receive queues or partial ofo pruning.
>
> Signed-off-by: Eric Dumazet <edumazet@google.com>
> Cc: Yuchung Cheng <ycheng@google.com>
> Cc: Neal Cardwell <ncardwell@google.com>
> ---
>  include/net/tcp.h   |    1 +
>  net/ipv4/tcp_ipv4.c |   33 +++++++++++++++++++++++++++++----
>  net/ipv6/tcp_ipv6.c |    5 +----
>  3 files changed, 31 insertions(+), 8 deletions(-)
>
> diff --git a/include/net/tcp.h b/include/net/tcp.h
> index 25d64f6de69e1f639ed1531bf2d2df3f00fd76a2..5f5f09f6e019682ef29c864d2f43a8f247fcdd9a 100644

Thanks for doing this, and thanks for the detailed answers to Yuchung's e-mail!

Acked-by: Neal Cardwell <ncardwell@google.com>

neal

* Re: [PATCH net-next] tcp: add tcp_add_backlog()
  2016-08-27 14:37 [PATCH net-next] tcp: add tcp_add_backlog() Eric Dumazet
  2016-08-27 16:13 ` Yuchung Cheng
  2016-08-27 18:24 ` Neal Cardwell
@ 2016-08-29  4:20 ` David Miller
  2016-08-29 18:51 ` Marcelo Ricardo Leitner
  2016-09-22 22:34 ` Marcelo Ricardo Leitner
  4 siblings, 0 replies; 17+ messages in thread
From: David Miller @ 2016-08-29  4:20 UTC (permalink / raw)
  To: eric.dumazet; +Cc: netdev, ncardwell, ycheng

From: Eric Dumazet <eric.dumazet@gmail.com>
Date: Sat, 27 Aug 2016 07:37:54 -0700

> From: Eric Dumazet <edumazet@google.com>
> 
> When TCP operates in lossy environments (between 1 and 10 % packet
> losses), many SACK blocks can be exchanged, and I noticed we could
> drop them on busy senders, if these SACK blocks have to be queued
> into the socket backlog.
> 
> While the main cause is the poor performance of RACK/SACK processing,
> we can try to avoid these drops of valuable information that can lead to
> spurious timeouts and retransmits.
> 
> Cause of the drops is the skb->truesize overestimation caused by :
> 
> - drivers allocating ~2048 (or more) bytes as a fragment to hold an
>   Ethernet frame.
> 
> - various pskb_may_pull() calls bringing the headers into skb->head
>   might have pulled all the frame content, but skb->truesize could
>   not be lowered, as the stack has no idea of each fragment truesize.
> 
> The backlog drops are also more visible on bidirectional flows, since
> their sk_rmem_alloc can be quite big.
> 
> Let's add some room for the backlog, as only the socket owner
> can selectively take action to lower memory needs, like collapsing
> receive queues or partial ofo pruning.
> 
> Signed-off-by: Eric Dumazet <edumazet@google.com>

Really nice change, thanks Eric.

* Re: [PATCH net-next] tcp: add tcp_add_backlog()
  2016-08-27 16:25   ` Eric Dumazet
@ 2016-08-29 16:53     ` Yuchung Cheng
  0 siblings, 0 replies; 17+ messages in thread
From: Yuchung Cheng @ 2016-08-29 16:53 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: David Miller, netdev, Neal Cardwell

On Sat, Aug 27, 2016 at 9:25 AM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
>
> On Sat, 2016-08-27 at 09:13 -0700, Yuchung Cheng wrote:
> > On Sat, Aug 27, 2016 at 7:37 AM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> > >
>
> > > +       /* Only socket owner can try to collapse/prune rx queues
> > > +        * to reduce memory overhead, so add a little headroom here.
> > > +        * Few sockets backlog are possibly concurrently non empty.
> > > +        */
> > > +       limit += 64*1024;
> > Just a thought: only add the headroom if ofo queue exists (e.g., signs
> > of losses ore recovery).
>
> Testing the ofo would add a cache line miss, and likely slow down the
> other cpu processing the other packets for this flow.
>
> Also, even if the ofo does not exist, the sk_rcvbuf budget can be
> consumed by regular receive queue.
>
> We still need to be able to process incoming ACK, if both send and
> receive queues are 'full'.
>
> >
> > btw is the added headroom subject to the memory pressure check?
>
> Remind that the backlog check here is mostly to avoid some kind of DOS
> attacks that we had in the past.
>
> While we should definitely prevents DOS attacks, we should also not drop
> legitimate traffic.
>
> Here, number of backlogged sockets is limited by the number of cpus in
> the host (if CONFIG_PREEMPT is disabled), or number of threads blocked
> during a sendmsg()/recvmsg() (if CONFIG_PREEMPT is enabled)
>
> So we do not need to be ultra precise, just have a safe guard.
>
> The pressure check will be done at the time skbs will be added into a
> receive/ofo queue in the very near future.
Good to know. Thanks.

>
>
> Thanks !
>
>
>

* Re: [PATCH net-next] tcp: add tcp_add_backlog()
  2016-08-27 14:37 [PATCH net-next] tcp: add tcp_add_backlog() Eric Dumazet
                   ` (2 preceding siblings ...)
  2016-08-29  4:20 ` David Miller
@ 2016-08-29 18:51 ` Marcelo Ricardo Leitner
  2016-08-29 19:22   ` Eric Dumazet
  2016-09-22 22:34 ` Marcelo Ricardo Leitner
  4 siblings, 1 reply; 17+ messages in thread
From: Marcelo Ricardo Leitner @ 2016-08-29 18:51 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: David Miller, netdev, Neal Cardwell, Yuchung Cheng

On Sat, Aug 27, 2016 at 07:37:54AM -0700, Eric Dumazet wrote:
> From: Eric Dumazet <edumazet@google.com>
> 
> When TCP operates in lossy environments (between 1 and 10 % packet
> losses), many SACK blocks can be exchanged, and I noticed we could
> drop them on busy senders, if these SACK blocks have to be queued
> into the socket backlog.
> 
> While the main cause is the poor performance of RACK/SACK processing,
> we can try to avoid these drops of valuable information that can lead to
> spurious timeouts and retransmits.
> 
> Cause of the drops is the skb->truesize overestimation caused by :
> 
> - drivers allocating ~2048 (or more) bytes as a fragment to hold an
>   Ethernet frame.
> 
> - various pskb_may_pull() calls bringing the headers into skb->head
>   might have pulled all the frame content, but skb->truesize could
>   not be lowered, as the stack has no idea of each fragment truesize.
> 
> The backlog drops are also more visible on bidirectional flows, since
> their sk_rmem_alloc can be quite big.
> 
> Let's add some room for the backlog, as only the socket owner
> can selectively take action to lower memory needs, like collapsing
> receive queues or partial ofo pruning.
> 
> Signed-off-by: Eric Dumazet <edumazet@google.com>
> Cc: Yuchung Cheng <ycheng@google.com>
> Cc: Neal Cardwell <ncardwell@google.com>
> ---
>  include/net/tcp.h   |    1 +
>  net/ipv4/tcp_ipv4.c |   33 +++++++++++++++++++++++++++++----
>  net/ipv6/tcp_ipv6.c |    5 +----
>  3 files changed, 31 insertions(+), 8 deletions(-)
> 
> diff --git a/include/net/tcp.h b/include/net/tcp.h
> index 25d64f6de69e1f639ed1531bf2d2df3f00fd76a2..5f5f09f6e019682ef29c864d2f43a8f247fcdd9a 100644
> --- a/include/net/tcp.h
> +++ b/include/net/tcp.h
> @@ -1163,6 +1163,7 @@ static inline void tcp_prequeue_init(struct tcp_sock *tp)
>  }
>  
>  bool tcp_prequeue(struct sock *sk, struct sk_buff *skb);
> +bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb);
>  
>  #undef STATE_TRACE
>  
> diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
> index ad41e8ecf796bba1bd6d9ed155ca4a57ced96844..53e80cd004b6ce401c3acbb4b243b243c5c3c4a3 100644
> --- a/net/ipv4/tcp_ipv4.c
> +++ b/net/ipv4/tcp_ipv4.c
> @@ -1532,6 +1532,34 @@ bool tcp_prequeue(struct sock *sk, struct sk_buff *skb)
>  }
>  EXPORT_SYMBOL(tcp_prequeue);
>  
> +bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
> +{
> +	u32 limit = sk->sk_rcvbuf + sk->sk_sndbuf;
> +
> +	/* Only socket owner can try to collapse/prune rx queues
> +	 * to reduce memory overhead, so add a little headroom here.
> +	 * Few sockets backlog are possibly concurrently non empty.
> +	 */
> +	limit += 64*1024;
> +
> +	/* In case all data was pulled from skb frags (in __pskb_pull_tail()),
> +	 * we can fix skb->truesize to its real value to avoid future drops.
> +	 * This is valid because skb is not yet charged to the socket.
> +	 * It has been noticed pure SACK packets were sometimes dropped
> +	 * (if cooked by drivers without copybreak feature).
> +	 */
> +	if (!skb->data_len)
> +		skb->truesize = SKB_TRUESIZE(skb_end_offset(skb));

Shouldn't __pskb_pull_tail() already fix this? It seems like the expected
behavior, and it would have a more global effect that way. For drivers not
using copybreak, that's needed here anyway, but maybe this would help other
protocols/situations too.

Thanks,
  Marcelo

* Re: [PATCH net-next] tcp: add tcp_add_backlog()
  2016-08-29 18:51 ` Marcelo Ricardo Leitner
@ 2016-08-29 19:22   ` Eric Dumazet
  2016-08-29 19:33     ` Marcelo Ricardo Leitner
  0 siblings, 1 reply; 17+ messages in thread
From: Eric Dumazet @ 2016-08-29 19:22 UTC (permalink / raw)
  To: Marcelo Ricardo Leitner
  Cc: David Miller, netdev, Neal Cardwell, Yuchung Cheng

On Mon, 2016-08-29 at 15:51 -0300, Marcelo Ricardo Leitner wrote:
> 	skb->truesize = SKB_TRUESIZE(skb_end_offset(skb));
> 
> Shouldn't __pskb_pull_tail() already fix this? As it seems the expected
> behavior and it would have a more global effect then. For drivers not
> using copybreak, that's needed here anyway, but maybe this help other
> protocols/situations too.

That would be difficult, because some callers do their own truesize
tracking (the skb might be attached/charged to a socket, so changing
skb->truesize would require adjusting the amount that was charged).

This is why pskb_expand_head() is not allowed to mess with skb->truesize
(though in the opposite direction, since there we would probably be
increasing skb->truesize).

Not sure it is worth the pain in the fast path, where packets are consumed
so fast that their skb->truesize being slightly overestimated is not an
issue.
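
For reference, the charging mentioned above happens when an skb becomes owned
by a socket; roughly, paraphrasing include/net/sock.h and net/core/sock.c:

static inline void skb_set_owner_r(struct sk_buff *skb, struct sock *sk)
{
        skb_orphan(skb);
        skb->sk = sk;
        skb->destructor = sock_rfree;
        atomic_add(skb->truesize, &sk->sk_rmem_alloc);
        sk_mem_charge(sk, skb->truesize);
}

void sock_rfree(struct sk_buff *skb)
{
        struct sock *sk = skb->sk;

        atomic_sub(skb->truesize, &sk->sk_rmem_alloc);
        sk_mem_uncharge(sk, skb->truesize);
}

The charge and the later uncharge both read skb->truesize, so a generic fixup
inside __pskb_pull_tail() could unbalance sk_rmem_alloc for an skb that is
already owned by a socket; tcp_add_backlog() avoids this by adjusting
truesize before the skb is charged.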

* Re: [PATCH net-next] tcp: add tcp_add_backlog()
  2016-08-29 19:22   ` Eric Dumazet
@ 2016-08-29 19:33     ` Marcelo Ricardo Leitner
  0 siblings, 0 replies; 17+ messages in thread
From: Marcelo Ricardo Leitner @ 2016-08-29 19:33 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: David Miller, netdev, Neal Cardwell, Yuchung Cheng

On Mon, Aug 29, 2016 at 12:22:37PM -0700, Eric Dumazet wrote:
> On Mon, 2016-08-29 at 15:51 -0300, Marcelo Ricardo Leitner wrote:
> > 	skb->truesize = SKB_TRUESIZE(skb_end_offset(skb));
> > 
> > Shouldn't __pskb_pull_tail() already fix this? As it seems the expected
> > behavior and it would have a more global effect then. For drivers not
> > using copybreak, that's needed here anyway, but maybe this help other
> > protocols/situations too.
> 
> That would be difficult, because some callers do their own truesize
> tacking (skb might be attached/charged to a socket, so changing
> skb->truesize would need to adjust the amount that was charged)
> 
> This is why pskb_expand_head() is not allowed to mess with skb->truesize
> (but in the opposite way, since we probably are increasing
> skb->truesize)
> 

Ok, makes sense.

> Not sure it is worth the pain in fast path, where packets are consumed
> so fast that their skb->truesize being slightly over estimated is not an
> issue.

Good point, thanks.

* Re: [PATCH net-next] tcp: add tcp_add_backlog()
  2016-08-27 14:37 [PATCH net-next] tcp: add tcp_add_backlog() Eric Dumazet
                   ` (3 preceding siblings ...)
  2016-08-29 18:51 ` Marcelo Ricardo Leitner
@ 2016-09-22 22:34 ` Marcelo Ricardo Leitner
  2016-09-22 23:21   ` Eric Dumazet
  4 siblings, 1 reply; 17+ messages in thread
From: Marcelo Ricardo Leitner @ 2016-09-22 22:34 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: David Miller, netdev, Neal Cardwell, Yuchung Cheng

On Sat, Aug 27, 2016 at 07:37:54AM -0700, Eric Dumazet wrote:
> +bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
> +{
> +	u32 limit = sk->sk_rcvbuf + sk->sk_sndbuf;
                                 ^^^
...
> +	if (!skb->data_len)
> +		skb->truesize = SKB_TRUESIZE(skb_end_offset(skb));
> +
> +	if (unlikely(sk_add_backlog(sk, skb, limit))) {
...
> -	} else if (unlikely(sk_add_backlog(sk, skb,
> -					   sk->sk_rcvbuf + sk->sk_sndbuf))) {
	                                                 ^---- [1]
> -		bh_unlock_sock(sk);
> -		__NET_INC_STATS(net, LINUX_MIB_TCPBACKLOGDROP);
> +	} else if (tcp_add_backlog(sk, skb)) {

Hi Eric, after this patch, do you think we still need to add sk_sndbuf
as a stretching factor to the backlog here?

It was added by [1], with the justification that the (S)ACK packets were
just too big for the rx buf size. Maybe this new patch alone is already
enough, as such packets will have a very small truesize then.

  Marcelo

[1] da882c1f2eca ("tcp: sk_add_backlog() is too agressive for TCP")

* Re: [PATCH net-next] tcp: add tcp_add_backlog()
  2016-09-22 22:34 ` Marcelo Ricardo Leitner
@ 2016-09-22 23:21   ` Eric Dumazet
  2016-09-23 12:45     ` Marcelo Ricardo Leitner
  0 siblings, 1 reply; 17+ messages in thread
From: Eric Dumazet @ 2016-09-22 23:21 UTC (permalink / raw)
  To: Marcelo Ricardo Leitner
  Cc: David Miller, netdev, Neal Cardwell, Yuchung Cheng

On Thu, 2016-09-22 at 19:34 -0300, Marcelo Ricardo Leitner wrote:
> On Sat, Aug 27, 2016 at 07:37:54AM -0700, Eric Dumazet wrote:
> > +bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
> > +{
> > +	u32 limit = sk->sk_rcvbuf + sk->sk_sndbuf;
>                                  ^^^
> ...
> > +	if (!skb->data_len)
> > +		skb->truesize = SKB_TRUESIZE(skb_end_offset(skb));
> > +
> > +	if (unlikely(sk_add_backlog(sk, skb, limit))) {
> ...
> > -	} else if (unlikely(sk_add_backlog(sk, skb,
> > -					   sk->sk_rcvbuf + sk->sk_sndbuf))) {
> 	                                                 ^---- [1]
> > -		bh_unlock_sock(sk);
> > -		__NET_INC_STATS(net, LINUX_MIB_TCPBACKLOGDROP);
> > +	} else if (tcp_add_backlog(sk, skb)) {
> 
> Hi Eric, after this patch, do you think we still need to add sk_sndbuf
> as a stretching factor to the backlog here?
> 
> It was added by [1] and it was justified that the (s)ack packets were
> just too big for the rx buf size. Maybe this new patch alone is enough
> already, as such packets will have a very small truesize then.
> 
>   Marcelo
> 
> [1] da882c1f2eca ("tcp: sk_add_backlog() is too agressive for TCP")
> 

Hi Marcelo

Yes, it is still needed: some drivers provide linear skbs, so the
skb->truesize of ACK packets will likely be the same (skb->head points
to a full-size frame allocated by the driver).
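
To put rough numbers on the linear-skb case (illustrative only): if the driver
hands the stack a linear skb whose skb->head was carved from a 2048-byte
buffer, skb_end_offset() is already close to 2048, so
SKB_TRUESIZE(skb_end_offset(skb)) stays around 2.5 KB and the fixup reclaims
nothing; the sk_sndbuf term in the limit is then what keeps pure ACKs flowing
on such drivers.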

* Re: [PATCH net-next] tcp: add tcp_add_backlog()
  2016-09-22 23:21   ` Eric Dumazet
@ 2016-09-23 12:45     ` Marcelo Ricardo Leitner
  2016-09-23 13:42       ` Eric Dumazet
  0 siblings, 1 reply; 17+ messages in thread
From: Marcelo Ricardo Leitner @ 2016-09-23 12:45 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: David Miller, netdev, Neal Cardwell, Yuchung Cheng

On Thu, Sep 22, 2016 at 04:21:30PM -0700, Eric Dumazet wrote:
> On Thu, 2016-09-22 at 19:34 -0300, Marcelo Ricardo Leitner wrote:
> > On Sat, Aug 27, 2016 at 07:37:54AM -0700, Eric Dumazet wrote:
> > > +bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
> > > +{
> > > +	u32 limit = sk->sk_rcvbuf + sk->sk_sndbuf;
> >                                  ^^^
> > ...
> > > +	if (!skb->data_len)
> > > +		skb->truesize = SKB_TRUESIZE(skb_end_offset(skb));
> > > +
> > > +	if (unlikely(sk_add_backlog(sk, skb, limit))) {
> > ...
> > > -	} else if (unlikely(sk_add_backlog(sk, skb,
> > > -					   sk->sk_rcvbuf + sk->sk_sndbuf))) {
> > 	                                                 ^---- [1]
> > > -		bh_unlock_sock(sk);
> > > -		__NET_INC_STATS(net, LINUX_MIB_TCPBACKLOGDROP);
> > > +	} else if (tcp_add_backlog(sk, skb)) {
> > 
> > Hi Eric, after this patch, do you think we still need to add sk_sndbuf
> > as a stretching factor to the backlog here?
> > 
> > It was added by [1] and it was justified that the (s)ack packets were
> > just too big for the rx buf size. Maybe this new patch alone is enough
> > already, as such packets will have a very small truesize then.
> > 
> >   Marcelo
> > 
> > [1] da882c1f2eca ("tcp: sk_add_backlog() is too agressive for TCP")
> > 
> 
> Hi Marcelo
> 
> Yes, it is still needed, some drivers provide linear skbs, so the
> skb->truesize of ack packets will likely be the same (skb->head points
> to a full size frame allocated by the driver)

Aye. In that case, what about using tail instead of end? Because
accounting for something that we have to tweak the limits to accept is
like adding a constant to both sides of the equation.
But perhaps that would cut out too much of the fat which could be used
later by the stack.

* Re: [PATCH net-next] tcp: add tcp_add_backlog()
  2016-09-23 12:45     ` Marcelo Ricardo Leitner
@ 2016-09-23 13:42       ` Eric Dumazet
  2016-09-23 14:09         ` Marcelo Ricardo Leitner
  0 siblings, 1 reply; 17+ messages in thread
From: Eric Dumazet @ 2016-09-23 13:42 UTC (permalink / raw)
  To: Marcelo Ricardo Leitner
  Cc: David Miller, netdev, Neal Cardwell, Yuchung Cheng

On Fri, 2016-09-23 at 09:45 -0300, Marcelo Ricardo Leitner wrote:

> Aye. In that case, what about using tail instead of end?


What do you mean exactly ?

>  Because
> accounting for something that we have to tweak the limits to accept is
> like adding a constant to both sides of the equation.
> But perhaps that would cut out too much of the fat which could be used
> later by the stack.

Are you facing a particular problem with current code ?

I am working to reduce the SACK at their source (the receiver), instead
of trying to filter them when they had to travel all the way back to TCP
sender.

* Re: [PATCH net-next] tcp: add tcp_add_backlog()
  2016-09-23 13:42       ` Eric Dumazet
@ 2016-09-23 14:09         ` Marcelo Ricardo Leitner
  2016-09-23 14:36           ` Eric Dumazet
  0 siblings, 1 reply; 17+ messages in thread
From: Marcelo Ricardo Leitner @ 2016-09-23 14:09 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: David Miller, netdev, Neal Cardwell, Yuchung Cheng

On Fri, Sep 23, 2016 at 06:42:51AM -0700, Eric Dumazet wrote:
> On Fri, 2016-09-23 at 09:45 -0300, Marcelo Ricardo Leitner wrote:
> 
> > Aye. In that case, what about using tail instead of end?
> 
> 
> What do you mean exactly ?

Something like:
-skb->truesize = SKB_TRUESIZE(skb_end_offset(skb));
+skb->truesize = SKB_TRUESIZE(skb_tail_offset(skb));

And define skb_tail_offset() as something similar to skb_end_offset(), so
that it would account only for the data and not the unused space in the
buffer.

> 
> >  Because
> > accounting for something that we have to tweak the limits to accept is
> > like adding a constant to both sides of the equation.
> > But perhaps that would cut out too much of the fat which could be used
> > later by the stack.
> 
> Are you facing a particular problem with current code ?
> 

For TCP, no, just wondering. :-)

I'm having similar issues with SCTP: if the socket gets backlogged, the
buffer accounting gets pretty messy. SCTP calculates the rwnd to be just
rcvbuf/2, but this ratio is often different in the backlog, and that causes
the advertised window to be too big, resulting in packet drops in the
backlog.

SCTP has some way to identify and compensate for this "extra" rwnd, via
rwnd_press, and will shrink it if it detects that the window is bigger
than the buffer available. But as the socket is backlogged, it doesn't
kick in soon enough to prevent such drops.

It's not just a matter of adjusting the overhead ratio (rcvbuf/2),
because with SCTP the packets may have different sizes, so a packet with
a chunk of 100 bytes will have one ratio and another with 1000 bytes will
have a different one, within the same association.

So I'm leaning towards updating truesize before adding to the
backlog, but accounting just for the actual packet, regardless of the
buffer that was used for it. It still has the overhead ratio issue with
different packet sizes, though a smaller one.

Note that SCTP doesn't have buffer auto-tuning yet.
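
To make the ratio problem concrete (illustrative numbers, not from a real
trace): with sk_rcvbuf = 2 MB, SCTP advertises an rwnd of about 1 MB
(rcvbuf/2). If the peer fills that window with 100-byte DATA chunks that each
arrive in an skb of roughly 2 KB truesize, honouring the advertised 1 MB
corresponds to about 20 MB of truesize charged while the socket owner is busy
(an overhead ratio of ~20x instead of the assumed 2x), whereas 1000-byte
chunks in the same association would stay close to the assumed ratio. The
drops then happen in the backlog before rwnd_press has a chance to shrink the
window.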

> I am working to reduce the SACK at their source (the receiver), instead
> of trying to filter them when they had to travel all the way back to TCP
> sender.
>

Cool.

* Re: [PATCH net-next] tcp: add tcp_add_backlog()
  2016-09-23 14:09         ` Marcelo Ricardo Leitner
@ 2016-09-23 14:36           ` Eric Dumazet
  2016-09-23 14:43             ` David Laight
  2016-09-23 15:12             ` Marcelo Ricardo Leitner
  0 siblings, 2 replies; 17+ messages in thread
From: Eric Dumazet @ 2016-09-23 14:36 UTC (permalink / raw)
  To: Marcelo Ricardo Leitner
  Cc: David Miller, netdev, Neal Cardwell, Yuchung Cheng

On Fri, 2016-09-23 at 11:09 -0300, Marcelo Ricardo Leitner wrote:
> On Fri, Sep 23, 2016 at 06:42:51AM -0700, Eric Dumazet wrote:
> > On Fri, 2016-09-23 at 09:45 -0300, Marcelo Ricardo Leitner wrote:
> > 
> > > Aye. In that case, what about using tail instead of end?
> > 
> > 
> > What do you mean exactly ?
> 
> Something like:
> -skb->truesize = SKB_TRUESIZE(skb_end_offset(skb));
> +skb->truesize = SKB_TRUESIZE(skb_tail_offset(skb));

Certainly not ;)

This would be lying.
We really want a precise memory accounting to avoid OOM.

Some USB drivers use 8KB for their skb->head, you do not want to pretend
it's 66+NET_SKB_PAD bytes just because there is no payload in the
packet.
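
To make the difference concrete (illustrative numbers, assuming an 8 KB
skb->head holding a 66-byte pure ACK):

        skb_end_offset(skb)  ~ 8192 bytes -> SKB_TRUESIZE() ~ 8.7 KB,
                               matching the memory really pinned
        tail-based offset    ~   66 bytes -> SKB_TRUESIZE() ~ 0.6 KB,
                               under-accounting by ~8 KB per packet

A few hundred such packets sitting in a backlog would pin a couple of
megabytes while the accounting claimed only a small fraction of that.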

> 
> And define skb_tail_offset() to something similar skb_end_offset(), so
> that it would account only for the data and not unused space in the
> buffer.
> 
> > 
> > >  Because
> > > accounting for something that we have to tweak the limits to accept is
> > > like adding a constant to both sides of the equation.
> > > But perhaps that would cut out too much of the fat which could be used
> > > later by the stack.
> > 
> > Are you facing a particular problem with current code ?
> > 
> 
> For TCP, no, just wondering. :-)
> 
> I'm having similar issues with SCTP: if the socket gets backlogged, the
> buffer accounting gets pretty messy. SCTP calculates the rwnd to be just
> rcvbuf/2, but this ratio is often different in backlog and it causes the
> advertized window to be too big, resulting in packet drops in the
> backlog.
> 
> SCTP has some way to identify and compensate this "extra" rwnd, via
> rwnd_press, and will shrink it if it detects that the window is bigger
> than the buffer available. But as the socket is backlogged, it's doesn't
> kick in soon enough to prevent such drops.
> 
> It's not just a matter of adjusting the overhead ratio (rcvbuf/2)
> because with SCTP the packets may have different sizes, so a packet with
> a chunk of 100 bytes will have a ratio and another with 1000 bytes will
> have another, within the same association.
> 
> So I'm leaning towards on updating truesize before adding to the
> backlog, but to account just for the actual packet, regardless of the
> buffer that was used for it. It still has the overhead ratio issue with
> different packet sizes, though, but smaller.
> 
> Note that SCTP doesn't have buffer auto-tuning yet.

Also for TCP, we might need to use sk->sk_wmem_queued instead of
sk->sk_sndbuf

This is because SACK processing can suddenly split skbs in 1-MSS pieces.

The problem is that for very large BDP, we can end up with thousands of skbs
in the backlog. So I am also considering trying to coalesce the stupid ACKs
sent by non-GRO receivers, or simply the verbose SACK blocks...

e.g. if the backlog is under pressure and its tail is:

ACK 1   <sack 4000:5000>

and the incoming packet is :

ACK 1  <sack 4000:6000>

Then we could replace the tail by the incoming packet with minimal
impact.

Since we might receive hundreds of 'sequential' SACK blocks, this would
help reduce the time taken by the application to process the (now
smaller) backlog.
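
A minimal sketch of that idea (hypothetical code, not an actual patch;
tcp_sack_supersedes() is an invented helper, and the unlink/accounting details
are elided):

static bool tcp_backlog_coalesce_tail(struct sock *sk, struct sk_buff *skb)
{
        struct sk_buff *tail = sk->sk_backlog.tail;
        const struct tcphdr *th, *th2;

        if (!tail)
                return false;

        th = tcp_hdr(skb);
        th2 = tcp_hdr(tail);

        /* Both packets must be pure ACKs (no payload, no SYN/FIN/RST)
         * acknowledging the same sequence, so dropping the older one only
         * discards SACK information that the newer one covers.
         */
        if (TCP_SKB_CB(skb)->seq != TCP_SKB_CB(skb)->end_seq ||
            TCP_SKB_CB(tail)->seq != TCP_SKB_CB(tail)->end_seq ||
            TCP_SKB_CB(skb)->ack_seq != TCP_SKB_CB(tail)->ack_seq ||
            th->syn || th->fin || th->rst || th2->syn || th2->fin || th2->rst)
                return false;

        /* Invented helper: parse the SACK options of both packets and check
         * that the new blocks are a superset of the old ones.
         */
        if (!tcp_sack_supersedes(skb, tail))
                return false;

        /* Replace the tail in place: unlink it, queue the new skb, and adjust
         * sk->sk_backlog.len by the truesize difference (elided here).
         */
        return true;
}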

* RE: [PATCH net-next] tcp: add tcp_add_backlog()
  2016-09-23 14:36           ` Eric Dumazet
@ 2016-09-23 14:43             ` David Laight
  2016-09-23 15:12             ` Marcelo Ricardo Leitner
  1 sibling, 0 replies; 17+ messages in thread
From: David Laight @ 2016-09-23 14:43 UTC (permalink / raw)
  To: 'Eric Dumazet', Marcelo Ricardo Leitner
  Cc: David Miller, netdev, Neal Cardwell, Yuchung Cheng

From: Eric Dumazet
> Sent: 23 September 2016 15:37
> On Fri, 2016-09-23 at 11:09 -0300, Marcelo Ricardo Leitner wrote:
> > On Fri, Sep 23, 2016 at 06:42:51AM -0700, Eric Dumazet wrote:
> > > On Fri, 2016-09-23 at 09:45 -0300, Marcelo Ricardo Leitner wrote:
> > >
> > > > Aye. In that case, what about using tail instead of end?
> > >
> > >
> > > What do you mean exactly ?
> >
> > Something like:
> > -skb->truesize = SKB_TRUESIZE(skb_end_offset(skb));
> > +skb->truesize = SKB_TRUESIZE(skb_tail_offset(skb));
> 
> Certainly not ;)
> 
> This would be lying.
> We really want a precise memory accounting to avoid OOM.
> 
> Some USB drivers use 8KB for their skb->head, you do not want to pretend
> it's 66+NET_SKB_PAD bytes just because there is no payload in the
> packet.

Last I looked, some of the USBnet drivers used 64k+ skbs (and then
lied about truesize).

That whole usbnet stuff needs a rework for usb3 (xhci) so that, once
the rings (etc.) are all set up, packet transfer looks much more like
a normal ethernet device.
I think it needs to stop trying to receive into skbs, and instead just
receive into USB-sized buffers - and then generate skbs that contain
the ethernet frame data.
Not that I'm volunteering :-)

	David


* Re: [PATCH net-next] tcp: add tcp_add_backlog()
  2016-09-23 14:36           ` Eric Dumazet
  2016-09-23 14:43             ` David Laight
@ 2016-09-23 15:12             ` Marcelo Ricardo Leitner
  1 sibling, 0 replies; 17+ messages in thread
From: Marcelo Ricardo Leitner @ 2016-09-23 15:12 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: David Miller, netdev, Neal Cardwell, Yuchung Cheng

On Fri, Sep 23, 2016 at 07:36:32AM -0700, Eric Dumazet wrote:
> On Fri, 2016-09-23 at 11:09 -0300, Marcelo Ricardo Leitner wrote:
> > On Fri, Sep 23, 2016 at 06:42:51AM -0700, Eric Dumazet wrote:
> > > On Fri, 2016-09-23 at 09:45 -0300, Marcelo Ricardo Leitner wrote:
> > > 
> > > > Aye. In that case, what about using tail instead of end?
> > > 
> > > 
> > > What do you mean exactly ?
> > 
> > Something like:
> > -skb->truesize = SKB_TRUESIZE(skb_end_offset(skb));
> > +skb->truesize = SKB_TRUESIZE(skb_tail_offset(skb));
> 
> Certainly not ;)
> 
> This would be lying.

Yep, but so is adding the txbuf to the equation to account for that,
right? :-) Unless you're considering that ACKs should/can be sort of
accounted against the txbuf instead, in which case it makes sense to sum
the buffer sizes in there.

> We really want a precise memory accounting to avoid OOM.

Indeed

> 
> Some USB drivers use 8KB for their skb->head, you do not want to pretend
> it's 66+NET_SKB_PAD bytes just because there is no payload in the
> packet.

Oh.

> 
> > 
> > And define skb_tail_offset() to something similar skb_end_offset(), so
> > that it would account only for the data and not unused space in the
> > buffer.
> > 
> > > 
> > > >  Because
> > > > accounting for something that we have to tweak the limits to accept is
> > > > like adding a constant to both sides of the equation.
> > > > But perhaps that would cut out too much of the fat which could be used
> > > > later by the stack.
> > > 
> > > Are you facing a particular problem with current code ?
> > > 
> > 
> > For TCP, no, just wondering. :-)
> > 
> > I'm having similar issues with SCTP: if the socket gets backlogged, the
> > buffer accounting gets pretty messy. SCTP calculates the rwnd to be just
> > rcvbuf/2, but this ratio is often different in backlog and it causes the
> > advertized window to be too big, resulting in packet drops in the
> > backlog.
> > 
> > SCTP has some way to identify and compensate this "extra" rwnd, via
> > rwnd_press, and will shrink it if it detects that the window is bigger
> > than the buffer available. But as the socket is backlogged, it's doesn't
> > kick in soon enough to prevent such drops.
> > 
> > It's not just a matter of adjusting the overhead ratio (rcvbuf/2)
> > because with SCTP the packets may have different sizes, so a packet with
> > a chunk of 100 bytes will have a ratio and another with 1000 bytes will
> > have another, within the same association.
> > 
> > So I'm leaning towards on updating truesize before adding to the
> > backlog, but to account just for the actual packet, regardless of the
> > buffer that was used for it. It still has the overhead ratio issue with
> > different packet sizes, though, but smaller.
> > 
> > Note that SCTP doesn't have buffer auto-tuning yet.
> 
> Also for TCP, we might need to use sk->sk_wmem_queued instead of
> sk->sk_sndbuf

This is interesting. Then it would only stretch the backlog limit if
there is a heavy tx going on.

> 
> This is because SACK processing can suddenly split skbs in 1-MSS pieces.
> 
> Problem is that for very large BDP, we can end up with thousands of skb
> in backlog. So I am also considering to try to coalesce stupid ACK sent
> by non GRO receivers or simply the verbose SACK blocks...
> 
> eg if backlog is under pressure and its tail is : 
> 
> ACK 1   <sack 4000:5000>
> 
> and the incoming packet is :
> 
> ACK 1  <sack 4000:6000>
> 
> Then we could replace the tail by the incoming packet with minimal
> impact.
> 
> Since we might receive hundred of 'sequential' SACK blocks, this would
> help to reduce time taken by the application to process the (now
> smaller) backlog
> 

That would be cool.

Thanks,
Marcelo
