From mboxrd@z Thu Jan 1 00:00:00 1970
From: Yuchung Cheng
Subject: [PATCH net-next 4/8] tcp: account lost retransmit after timeout
Date: Wed, 16 May 2018 16:40:13 -0700
Message-ID: <20180516234017.172775-5-ycheng@google.com>
References: <20180516234017.172775-1-ycheng@google.com>
In-Reply-To: <20180516234017.172775-1-ycheng@google.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, edumazet@google.com, ncardwell@google.com, soheil@google.com, priyarjha@google.com, Yuchung Cheng

The previous approach for the lost and retransmit bits was to wipe
the slate clean: zero all the lost and retransmit bits,
correspondingly zero the lost_out and retrans_out counters, and then
add back the lost bits (and correspondingly increment lost_out).

The new approach is to treat this very much like marking packets lost
in fast recovery. We don't wipe the slate clean. We just say that for
all packets that were not yet marked sacked or lost, we now mark them
as lost in exactly the same way we do for fast recovery.

This fixes the lost retransmit accounting at RTO time and greatly
simplifies the RTO code by sharing much of the logic with Fast
Recovery.
Signed-off-by: Yuchung Cheng
Signed-off-by: Neal Cardwell
Reviewed-by: Eric Dumazet
Reviewed-by: Soheil Hassas Yeganeh
Reviewed-by: Priyaranjan Jha
---
 include/net/tcp.h       |  1 +
 net/ipv4/tcp_input.c    | 18 +++---------------
 net/ipv4/tcp_recovery.c |  4 ++--
 3 files changed, 6 insertions(+), 17 deletions(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index d7f81325bee5..402484ed9b57 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1878,6 +1878,7 @@ void tcp_v4_init(void);
 void tcp_init(void);
 
 /* tcp_recovery.c */
+void tcp_mark_skb_lost(struct sock *sk, struct sk_buff *skb);
 void tcp_newreno_mark_lost(struct sock *sk, bool snd_una_advanced);
 extern void tcp_rack_mark_lost(struct sock *sk);
 extern void tcp_rack_advance(struct tcp_sock *tp, u8 sacked, u32 end_seq,
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 076206873e3e..6fb0a28977a0 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -1929,7 +1929,6 @@ void tcp_enter_loss(struct sock *sk)
 	struct sk_buff *skb;
 	bool new_recovery = icsk->icsk_ca_state < TCP_CA_Recovery;
 	bool is_reneg;			/* is receiver reneging on SACKs? */
-	bool mark_lost;
 
 	/* Reduce ssthresh if it has not yet been made inside this window. */
 	if (icsk->icsk_ca_state <= TCP_CA_Disorder ||
@@ -1945,9 +1944,6 @@ void tcp_enter_loss(struct sock *sk)
 	tp->snd_cwnd_cnt   = 0;
 	tp->snd_cwnd_stamp = tcp_jiffies32;
 
-	tp->retrans_out = 0;
-	tp->lost_out = 0;
-
 	if (tcp_is_reno(tp))
 		tcp_reset_reno_sack(tp);
 
@@ -1959,21 +1955,13 @@ void tcp_enter_loss(struct sock *sk)
 		/* Mark SACK reneging until we recover from this loss event.
 		 */
 		tp->is_sack_reneg = 1;
 	}
-	tcp_clear_all_retrans_hints(tp);
-
 	skb_rbtree_walk_from(skb) {
-		mark_lost = (!(TCP_SKB_CB(skb)->sacked & TCPCB_SACKED_ACKED) ||
-			     is_reneg);
-		if (mark_lost)
-			tcp_sum_lost(tp, skb);
-		TCP_SKB_CB(skb)->sacked &= (~TCPCB_TAGBITS)|TCPCB_SACKED_ACKED;
-		if (mark_lost) {
+		if (is_reneg)
 			TCP_SKB_CB(skb)->sacked &= ~TCPCB_SACKED_ACKED;
-			TCP_SKB_CB(skb)->sacked |= TCPCB_LOST;
-			tp->lost_out += tcp_skb_pcount(skb);
-		}
+		tcp_mark_skb_lost(sk, skb);
 	}
 	tcp_verify_left_out(tp);
+	tcp_clear_all_retrans_hints(tp);
 
 	/* Timeout in disordered state after receiving substantial DUPACKs
 	 * suggests that the degree of reordering is over-estimated.
diff --git a/net/ipv4/tcp_recovery.c b/net/ipv4/tcp_recovery.c
index 299b0e38aa9a..b2f9be388bf3 100644
--- a/net/ipv4/tcp_recovery.c
+++ b/net/ipv4/tcp_recovery.c
@@ -2,7 +2,7 @@
 #include <linux/tcp.h>
 #include <net/tcp.h>
 
-static void tcp_rack_mark_skb_lost(struct sock *sk, struct sk_buff *skb)
+void tcp_mark_skb_lost(struct sock *sk, struct sk_buff *skb)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 
@@ -95,7 +95,7 @@ static void tcp_rack_detect_loss(struct sock *sk, u32 *reo_timeout)
 		remaining = tp->rack.rtt_us + reo_wnd -
 			    tcp_stamp_us_delta(tp->tcp_mstamp, skb->skb_mstamp);
 		if (remaining <= 0) {
-			tcp_rack_mark_skb_lost(sk, skb);
+			tcp_mark_skb_lost(sk, skb);
 			list_del_init(&skb->tcp_tsorted_anchor);
 		} else {
 			/* Record maximum wait time */
-- 
2.17.0.441.gb46fe60e1d-goog