* retrans_stamp not cleared while testing NewReno implementation.
@ 2022-09-02 10:29 Arankal, Nagaraj
2022-09-02 13:55 ` Neal Cardwell
0 siblings, 1 reply; 4+ messages in thread
From: Arankal, Nagaraj @ 2022-09-02 10:29 UTC (permalink / raw)
To: netdev
While testing the NewReno implementation (SACK disabled) on a 4.19.197-based Debian kernel with connections that carry very low traffic, we may time out the connection too early if a second loss occurs after the first one was successfully ACKed but no data was transferred in between. Below is a description of the issue:
When SACK is disabled, and a socket suffers multiple separate TCP retransmissions, that socket's ETIMEDOUT value is calculated from the time of the *first* retransmission instead of the *latest* retransmission.
This happens because the tcp_sock's retrans_stamp is set once then never cleared.
Take the following connection:
(*1) One data packet sent.
(*2) Because no ACK packet is received, the packet is retransmitted.
(*3) The ACK packet is received. The transmitted packet is acknowledged.
At this point the first "retransmission event" has passed and been recovered from. Any future retransmission is a completely new "event".
(*4) After 16 minutes (to correspond with tcp_retries2=15), a new data packet is sent. Note: No data is transmitted between (*3) and (*4) and we disabled keep alives.
The socket's timeout SHOULD be calculated from this point in time, but instead it's calculated from the prior "event" 16 minutes ago.
(*5) Because no ACK packet is received, the packet is retransmitted.
(*6) At the time of the 2nd retransmission, the socket returns ETIMEDOUT.
From the git history I see that a fix was included earlier that should resolve the above issue. The relevant hunk is below:
static bool tcp_try_undo_recovery(struct sock *sk)
* is ACKed. For Reno it is MUST to prevent false
* fast retransmits (RFC2582). SACK TCP is safe. */
tcp_moderate_cwnd(tp);
+ if (!tcp_any_retrans_done(sk))
+ tp->retrans_stamp = 0;
return true;
}
However, after the following fix was introduced,
[net,1/2] tcp: only undo on partial ACKs in CA_Loss
retrans_stamp is no longer reset to zero.
Inside tcp_process_loss(), we return early via the code path below:
if ((flag & FLAG_SND_UNA_ADVANCED) &&
tcp_try_undo_loss(sk, false))
return;
so tp->retrans_stamp is never cleared, because tcp_try_undo_recovery() is never invoked.
Is this a known bug in the kernel code, or is it expected behavior?
- Thanks in advance,
Nagaraj
* Re: retrans_stamp not cleared while testing NewReno implementation.
2022-09-02 10:29 retrans_stamp not cleared while testing NewReno implementation Arankal, Nagaraj
@ 2022-09-02 13:55 ` Neal Cardwell
2022-09-02 14:19 ` Arankal, Nagaraj
0 siblings, 1 reply; 4+ messages in thread
From: Neal Cardwell @ 2022-09-02 13:55 UTC (permalink / raw)
To: Arankal, Nagaraj; +Cc: netdev, Eric Dumazet, Yuchung Cheng
On Fri, Sep 2, 2022 at 6:29 AM Arankal, Nagaraj
<nagaraj.p.arankal@hpe.com> wrote:
>
> While testing the NewReno implementation (SACK disabled) on a 4.19.197-based Debian kernel with connections that carry very low traffic, we may time out the connection too early if a second loss occurs after the first one was successfully ACKed but no data was transferred in between. Below is a description of the issue:
>
> When SACK is disabled, and a socket suffers multiple separate TCP retransmissions, that socket's ETIMEDOUT value is calculated from the time of the *first* retransmission instead of the *latest* retransmission.
>
> This happens because the tcp_sock's retrans_stamp is set once then never cleared.
>
> Take the following connection:
>
>
> (*1) One data packet sent.
> (*2) Because no ACK packet is received, the packet is retransmitted.
> (*3) The ACK packet is received. The transmitted packet is acknowledged.
>
> At this point the first "retransmission event" has passed and been recovered from. Any future retransmission is a completely new "event".
>
> (*4) After 16 minutes (to correspond with tcp_retries2=15), a new data packet is sent. Note: No data is transmitted between (*3) and (*4) and we disabled keep alives.
>
> The socket's timeout SHOULD be calculated from this point in time, but instead it's calculated from the prior "event" 16 minutes ago.
>
> (*5) Because no ACK packet is received, the packet is retransmitted.
> (*6) At the time of the 2nd retransmission, the socket returns ETIMEDOUT.
>
> From the history I came to know that there was a fix included, which would resolve above issue. Please find below patch.
>
> static bool tcp_try_undo_recovery(struct sock *sk)
> * is ACKed. For Reno it is MUST to prevent false
> * fast retransmits (RFC2582). SACK TCP is safe. */
> tcp_moderate_cwnd(tp);
> + if (!tcp_any_retrans_done(sk))
> + tp->retrans_stamp = 0;
> return true;
> }
>
> However, after introducing following fix,
>
> [net,1/2] tcp: only undo on partial ACKs in CA_Loss
>
> I am not able to see retrans_stamp reset to zero.
> Inside tcp_process_loss , we are returning from below code path.
>
> if ((flag & FLAG_SND_UNA_ADVANCED) &&
> tcp_try_undo_loss(sk, false))
> return;
> because of which tp->retrans_stamp is never cleared as we failed to invoke tcp_try_undo_recovery.
>
> Is this a known bug in kernel code or is it an expected behavior.
>
>
> - Thanks in advance,
> Nagaraj
Thanks for the detailed bug report and analysis! I agree that
"tcp: only undo on partial ACKs in CA_Loss" introduced the
bug that you are analyzing.
I suspect we need a fix along the lines below. I will try to create
a packetdrill test to reproduce this and verify the fix below,
and will run this fix through our existing packetdrill tests.
Thanks!
commit d2f706c1be7e9822a99477edd69bc13ddd00557f
Author: Neal Cardwell <ncardwell@google.com>
Date: Fri Sep 2 09:36:23 2022 -0400
tcp: fix early ETIMEDOUT after spurious non-SACK RTO
Fix a bug reported and analyzed by Nagaraj Arankal, where the handling
of a spurious non-SACK RTO could cause a connection to fail to clear
retrans_stamp, causing a later RTO to very prematurely time out the
connection with ETIMEDOUT.
Here is the buggy scenario, expanding upon Nagaraj Arankal's excellent
report:
(*1) Send one data packet on a non-SACK connection
(*2) Because no ACK packet is received, the packet is retransmitted
and we enter CA_Loss; but this retransmission is spurious.
(*3) The ACK for the original data is received. The transmitted packet
is acknowledged. The TCP timestamp is before the retrans_stamp,
so tcp_may_undo() returns true, and tcp_try_undo_loss() returns
true without changing state to Open (because tcp_is_sack() is
false), and tcp_process_loss() returns without calling
tcp_try_undo_recovery(). Normally after undoing a CA_Loss
episode, tcp_fastretrans_alert() would see that the connection
has returned to CA_Open and fall through and call
tcp_try_to_open(), which would set retrans_stamp to 0. However,
for non-SACK connections we hold the connection in CA_Loss, so do
not fall through to call tcp_try_to_open() and do not set
retrans_stamp to 0. So retrans_stamp is (erroneously) still
non-zero.
At this point the first "retransmission event" has passed and
been recovered from. Any future retransmission is a completely
new "event". However, retrans_stamp is erroneously still
set. (And we are still in CA_Loss, which is correct.)
(*4) After 16 minutes (to correspond with tcp_retries2=15), a new data
packet is sent. Note: No data is transmitted between (*3) and
(*4) and we disabled keep alives.
The socket's timeout SHOULD be calculated from this point in
time, but instead it's calculated from the prior "event" 16
minutes ago (step (*2)).
(*5) Because no ACK packet is received, the packet is retransmitted.
(*6) At the time of the 2nd retransmission, the socket returns
ETIMEDOUT, prematurely, because retrans_stamp is (erroneously)
too far in the past (set at the time of (*2)).
This commit fixes this bug by ensuring that we reuse in
tcp_try_undo_loss() the same careful logic for non-SACK connections
that we have in tcp_try_undo_recovery(). To avoid duplicating logic,
we factor out that logic into a new
tcp_is_non_sack_preventing_reopen() helper and call that helper from
both undo functions.
Fixes: da34ac7626b5 ("tcp: only undo on partial ACKs in CA_Loss")
Reported-by: Nagaraj Arankal <nagaraj.p.arankal@hpe.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Change-Id: Ie58ea40bdbfe0643111a17a41eda0674f62ce76d
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index b85a9f755da41..bc2ea12221f95 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -2513,6 +2513,21 @@ static inline bool tcp_may_undo(const struct tcp_sock *tp)
return tp->undo_marker && (!tp->undo_retrans || tcp_packet_delayed(tp));
}
+static bool tcp_is_non_sack_preventing_reopen(struct sock *sk)
+{
+ struct tcp_sock *tp = tcp_sk(sk);
+
+ if (tp->snd_una == tp->high_seq && tcp_is_reno(tp)) {
+ /* Hold old state until something *above* high_seq
+ * is ACKed. For Reno it is MUST to prevent false
+ * fast retransmits (RFC2582). SACK TCP is safe. */
+ if (!tcp_any_retrans_done(sk))
+ tp->retrans_stamp = 0;
+ return true;
+ }
+ return false;
+}
+
/* People celebrate: "We love our President!" */
static bool tcp_try_undo_recovery(struct sock *sk)
{
@@ -2535,14 +2550,8 @@ static bool tcp_try_undo_recovery(struct sock *sk)
} else if (tp->rack.reo_wnd_persist) {
tp->rack.reo_wnd_persist--;
}
- if (tp->snd_una == tp->high_seq && tcp_is_reno(tp)) {
- /* Hold old state until something *above* high_seq
- * is ACKed. For Reno it is MUST to prevent false
- * fast retransmits (RFC2582). SACK TCP is safe. */
- if (!tcp_any_retrans_done(sk))
- tp->retrans_stamp = 0;
+ if (tcp_is_non_sack_preventing_reopen(sk))
return true;
- }
tcp_set_ca_state(sk, TCP_CA_Open);
tp->is_sack_reneg = 0;
return false;
@@ -2578,6 +2587,8 @@ static bool tcp_try_undo_loss(struct sock *sk, bool frto_undo)
NET_INC_STATS(sock_net(sk),
LINUX_MIB_TCPSPURIOUSRTOS);
inet_csk(sk)->icsk_retransmits = 0;
+ if (tcp_is_non_sack_preventing_reopen(sk))
+ return true;
if (frto_undo || tcp_is_sack(tp)) {
tcp_set_ca_state(sk, TCP_CA_Open);
tp->is_sack_reneg = 0;
neal
* RE: retrans_stamp not cleared while testing NewReno implementation.
2022-09-02 13:55 ` Neal Cardwell
@ 2022-09-02 14:19 ` Arankal, Nagaraj
2022-09-02 19:17 ` Neal Cardwell
0 siblings, 1 reply; 4+ messages in thread
From: Arankal, Nagaraj @ 2022-09-02 14:19 UTC (permalink / raw)
To: Neal Cardwell; +Cc: netdev, Eric Dumazet, Yuchung Cheng
Hi Neal,
Thanks a lot.
I shall update my kernel with the proposed fix and run the tests again.
Regards,
Nagaraj P Arankal
-----Original Message-----
From: Neal Cardwell <ncardwell@google.com>
Sent: Friday, September 2, 2022 7:25 PM
To: Arankal, Nagaraj <nagaraj.p.arankal@hpe.com>
Cc: netdev@vger.kernel.org; Eric Dumazet <edumazet@google.com>; Yuchung Cheng <ycheng@google.com>
Subject: Re: retrans_stamp not cleared while testing NewReno implementation.
On Fri, Sep 2, 2022 at 6:29 AM Arankal, Nagaraj <nagaraj.p.arankal@hpe.com> wrote:
>
> While testing the NewReno implementation (SACK disabled) on a 4.19.197-based Debian kernel with connections that carry very low traffic, we may time out the connection too early if a second loss occurs after the first one was successfully ACKed but no data was transferred in between. Below is a description of the issue:
>
> When SACK is disabled, and a socket suffers multiple separate TCP retransmissions, that socket's ETIMEDOUT value is calculated from the time of the *first* retransmission instead of the *latest* retransmission.
>
> This happens because the tcp_sock's retrans_stamp is set once then never cleared.
>
> Take the following connection:
>
>
> (*1) One data packet sent.
> (*2) Because no ACK packet is received, the packet is retransmitted.
> (*3) The ACK packet is received. The transmitted packet is acknowledged.
>
> At this point the first "retransmission event" has passed and been recovered from. Any future retransmission is a completely new "event".
>
> (*4) After 16 minutes (to correspond with tcp_retries2=15), a new data packet is sent. Note: No data is transmitted between (*3) and (*4) and we disabled keep alives.
>
> The socket's timeout SHOULD be calculated from this point in time, but instead it's calculated from the prior "event" 16 minutes ago.
>
> (*5) Because no ACK packet is received, the packet is retransmitted.
> (*6) At the time of the 2nd retransmission, the socket returns ETIMEDOUT.
>
> From the history I came to know that there was a fix included, which would resolve above issue. Please find below patch.
>
> static bool tcp_try_undo_recovery(struct sock *sk)
> * is ACKed. For Reno it is MUST to prevent false
> * fast retransmits (RFC2582). SACK TCP is safe. */
> tcp_moderate_cwnd(tp);
> + if (!tcp_any_retrans_done(sk))
> + tp->retrans_stamp = 0;
> return true;
> }
>
> However, after introducing following fix,
>
> [net,1/2] tcp: only undo on partial ACKs in CA_Loss
>
> I am not able to see retrans_stamp reset to zero.
> Inside tcp_process_loss , we are returning from below code path.
>
> if ((flag & FLAG_SND_UNA_ADVANCED) &&
> tcp_try_undo_loss(sk, false))
> return;
> because of which tp->retrans_stamp is never cleared as we failed to invoke tcp_try_undo_recovery.
>
> Is this a known bug in kernel code or is it an expected behavior.
>
>
> - Thanks in advance,
> Nagaraj
Thanks for the detailed bug report and analysis! I agree that
"tcp: only undo on partial ACKs in CA_Loss" introduced the bug that you are analyzing.
I suspect we need a fix along the lines below. I will try to create a packetdrill test to reproduce this and verify the fix below, and will run this fix through our existing packetdrill tests.
Thanks!
commit d2f706c1be7e9822a99477edd69bc13ddd00557f
Author: Neal Cardwell <ncardwell@google.com>
Date: Fri Sep 2 09:36:23 2022 -0400
tcp: fix early ETIMEDOUT after spurious non-SACK RTO
Fix a bug reported and analyzed by Nagaraj Arankal, where the handling
of a spurious non-SACK RTO could cause a connection to fail to clear
retrans_stamp, causing a later RTO to very prematurely time out the
connection with ETIMEDOUT.
Here is the buggy scenario, expanding upon Nagaraj Arankal's excellent
report:
(*1) Send one data packet on a non-SACK connection
(*2) Because no ACK packet is received, the packet is retransmitted
and we enter CA_Loss; but this retransmission is spurious.
(*3) The ACK for the original data is received. The transmitted packet
is acknowledged. The TCP timestamp is before the retrans_stamp,
so tcp_may_undo() returns true, and tcp_try_undo_loss() returns
true without changing state to Open (because tcp_is_sack() is
false), and tcp_process_loss() returns without calling
tcp_try_undo_recovery(). Normally after undoing a CA_Loss
episode, tcp_fastretrans_alert() would see that the connection
has returned to CA_Open and fall through and call
tcp_try_to_open(), which would set retrans_stamp to 0. However,
for non-SACK connections we hold the connection in CA_Loss, so do
not fall through to call tcp_try_to_open() and do not set
retrans_stamp to 0. So retrans_stamp is (erroneously) still
non-zero.
At this point the first "retransmission event" has passed and
been recovered from. Any future retransmission is a completely
new "event". However, retrans_stamp is erroneously still
set. (And we are still in CA_Loss, which is correct.)
(*4) After 16 minutes (to correspond with tcp_retries2=15), a new data
packet is sent. Note: No data is transmitted between (*3) and
(*4) and we disabled keep alives.
The socket's timeout SHOULD be calculated from this point in
time, but instead it's calculated from the prior "event" 16
minutes ago (step (*2)).
(*5) Because no ACK packet is received, the packet is retransmitted.
(*6) At the time of the 2nd retransmission, the socket returns
ETIMEDOUT, prematurely, because retrans_stamp is (erroneously)
too far in the past (set at the time of (*2)).
This commit fixes this bug by ensuring that we reuse in
tcp_try_undo_loss() the same careful logic for non-SACK connections
that we have in tcp_try_undo_recovery(). To avoid duplicating logic,
we factor out that logic into a new
tcp_is_non_sack_preventing_reopen() helper and call that helper from
both undo functions.
Fixes: da34ac7626b5 ("tcp: only undo on partial ACKs in CA_Loss")
Reported-by: Nagaraj Arankal <nagaraj.p.arankal@hpe.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Change-Id: Ie58ea40bdbfe0643111a17a41eda0674f62ce76d
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index b85a9f755da41..bc2ea12221f95 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -2513,6 +2513,21 @@ static inline bool tcp_may_undo(const struct tcp_sock *tp)
return tp->undo_marker && (!tp->undo_retrans || tcp_packet_delayed(tp));
}
+static bool tcp_is_non_sack_preventing_reopen(struct sock *sk)
+{
+ struct tcp_sock *tp = tcp_sk(sk);
+
+ if (tp->snd_una == tp->high_seq && tcp_is_reno(tp)) {
+ /* Hold old state until something *above* high_seq
+ * is ACKed. For Reno it is MUST to prevent false
+ * fast retransmits (RFC2582). SACK TCP is safe. */
+ if (!tcp_any_retrans_done(sk))
+ tp->retrans_stamp = 0;
+ return true;
+ }
+ return false;
+}
+
/* People celebrate: "We love our President!" */
static bool tcp_try_undo_recovery(struct sock *sk)
{
@@ -2535,14 +2550,8 @@ static bool tcp_try_undo_recovery(struct sock *sk)
} else if (tp->rack.reo_wnd_persist) {
tp->rack.reo_wnd_persist--;
}
- if (tp->snd_una == tp->high_seq && tcp_is_reno(tp)) {
- /* Hold old state until something *above* high_seq
- * is ACKed. For Reno it is MUST to prevent false
- * fast retransmits (RFC2582). SACK TCP is safe. */
- if (!tcp_any_retrans_done(sk))
- tp->retrans_stamp = 0;
+ if (tcp_is_non_sack_preventing_reopen(sk))
return true;
- }
tcp_set_ca_state(sk, TCP_CA_Open);
tp->is_sack_reneg = 0;
return false;
@@ -2578,6 +2587,8 @@ static bool tcp_try_undo_loss(struct sock *sk, bool frto_undo)
NET_INC_STATS(sock_net(sk),
LINUX_MIB_TCPSPURIOUSRTOS);
inet_csk(sk)->icsk_retransmits = 0;
+ if (tcp_is_non_sack_preventing_reopen(sk))
+ return true;
if (frto_undo || tcp_is_sack(tp)) {
tcp_set_ca_state(sk, TCP_CA_Open);
tp->is_sack_reneg = 0;
neal
* Re: retrans_stamp not cleared while testing NewReno implementation.
2022-09-02 14:19 ` Arankal, Nagaraj
@ 2022-09-02 19:17 ` Neal Cardwell
0 siblings, 0 replies; 4+ messages in thread
From: Neal Cardwell @ 2022-09-02 19:17 UTC (permalink / raw)
To: Arankal, Nagaraj; +Cc: netdev, Eric Dumazet, Yuchung Cheng
On Fri, Sep 2, 2022 at 10:19 AM Arankal, Nagaraj
<nagaraj.p.arankal@hpe.com> wrote:
>
> Hi Neal,
> Thanks a lot.
> I shall update my Kernel with proposed fix and run the tests again.
Great. Thanks for testing the patch!
I cooked a packetdrill test case, based on your scenario, to reproduce
the problem, and was able to reproduce the problem. I have pasted that
below. And then below that I have pasted a packetdrill test showing
the fixed behavior after the proposed fix patch is applied.
We will post an official patch on the list for review/discussion.
### Here is a version showing the buggy behavior on an unpatched
kernel. The TCP sender only sends 2 RTO retransmissions before timing
out the connection, even though we requested 5 retries
(net.ipv4.tcp_retries2=5):
// Reproduce a scenario reported in the netdev thread:
// "retrans_stamp not cleared while testing NewReno implementation."
// Ensure that retrans_stamp is cleared during TS undo of an RTO episode.
--tcp_ts_tick_usecs=1000
// Set tcp_retries2 to a low value so that we time out sockets quickly.
`../common/defaults.sh
sysctl -q net.ipv4.tcp_retries2=5`
0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3
+0 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
+0 bind(3, ..., ...) = 0
+0 listen(3, 1) = 0
+0 < S 0:0(0) win 32792 <mss 1012,nop,nop,TS val 100 ecr 0,nop,wscale 7>
+0 > S. 0:0(0) ack 1 <mss 1460,nop,nop,TS val 0 ecr 100,nop,wscale 8>
+.020 < . 1:1(0) ack 1 win 257 <nop,nop,TS val 120 ecr 0>
+0 accept(3, ..., ...) = 4
+0 write(4, ..., 1000) = 1000
+0 > P. 1:1001(1000) ack 1 <nop,nop,TS val 20 ecr 100>
+0 %{ assert tcpi_snd_cwnd == 10, tcpi_snd_cwnd }%
// RTO and retransmit head spuriously.
+.220 > P. 1:1001(1000) ack 1 <nop,nop,TS val 220 ecr 100>
+0 %{ assert tcpi_snd_cwnd == 1, tcpi_snd_cwnd }%
+0 %{ assert tcpi_ca_state == TCP_CA_Loss }%
// ACK arrives with an ECR indicating it's ACKing the original skb,
// so we undo the loss recovery. However, since this is a non-SACK
// connection we remain in CA_Loss.
+.005 < . 1:1(0) ack 1001 win 257 <nop,nop,TS val 140 ecr 20>
+0 %{ assert tcpi_snd_cwnd == 10, tcpi_snd_cwnd }%
+0 %{ assert tcpi_ca_state == TCP_CA_Loss }%
// Much later we send something.
+11 write(4, ..., 1000) = 1000
+0 > P. 1001:2001(1000) ack 1 <nop,nop,TS val 11253 ecr 140>
+0 %{ assert tcpi_snd_cwnd == 10, tcpi_snd_cwnd }%
// RTO and retransmit head.
+.290 > P. 1001:2001(1000) ack 1 <nop,nop,TS val 11540 ecr 140>
// RTO and retransmit head.
+.618 > P. 1001:2001(1000) ack 1 <nop,nop,TS val 12148 ecr 140>
// The buggy kernel has already timed out the connection here,
// prematurely, after only 2 of the 5 requested retransmissions:
+1.30 write(4, ..., 1000) = -1 ETIMEDOUT (Connection Timed Out)
### Here is a version showing the fixed behavior on a patched kernel.
The TCP sender correctly sends 5 RTO retransmissions before timing out
the connection, since we requested 5 retries
(net.ipv4.tcp_retries2=5):
// Reproduce a scenario reported in the netdev thread:
// "retrans_stamp not cleared while testing NewReno implementation."
// Ensure that retrans_stamp is cleared during TS undo of an RTO episode.
--tcp_ts_tick_usecs=1000
// Set tcp_retries2 to 5 so that we should get exactly 5
// RTO retransmissions before the connection times out
// and returns ETIMEDOUT.
`../common/defaults.sh
sysctl -q net.ipv4.tcp_retries2=5`
0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3
+0 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
+0 bind(3, ..., ...) = 0
+0 listen(3, 1) = 0
+0 < S 0:0(0) win 32792 <mss 1012,nop,nop,TS val 100 ecr 0,nop,wscale 7>
+0 > S. 0:0(0) ack 1 <mss 1460,nop,nop,TS val 0 ecr 100,nop,wscale 8>
+.020 < . 1:1(0) ack 1 win 257 <nop,nop,TS val 120 ecr 0>
+0 accept(3, ..., ...) = 4
+0 write(4, ..., 1000) = 1000
+0 > P. 1:1001(1000) ack 1 <nop,nop,TS val 20 ecr 100>
+0 %{ assert tcpi_snd_cwnd == 10, tcpi_snd_cwnd }%
// RTO and retransmit head spuriously.
+.220 > P. 1:1001(1000) ack 1 <nop,nop,TS val 220 ecr 100>
+0 %{ assert tcpi_snd_cwnd == 1, tcpi_snd_cwnd }%
+0 %{ assert tcpi_ca_state == TCP_CA_Loss }%
// ACK arrives with an ECR indicating it's ACKing the original skb,
// so we undo the loss recovery. However, since this is a non-SACK
// connection we remain in CA_Loss.
+.005 < . 1:1(0) ack 1001 win 257 <nop,nop,TS val 140 ecr 20>
+0 %{ assert tcpi_snd_cwnd == 10, tcpi_snd_cwnd }%
+0 %{ assert tcpi_ca_state == TCP_CA_Loss }%
// Much later we send something.
+11 write(4, ..., 1000) = 1000
+0 > P. 1001:2001(1000) ack 1 <nop,nop,TS val 11253 ecr 140>
+0 %{ assert tcpi_snd_cwnd == 10, tcpi_snd_cwnd }%
// RTO and retransmit head.
+.290 > P. 1001:2001(1000) ack 1 <nop,nop,TS val 11540 ecr 140>
// RTO and retransmit head.
+.618 > P. 1001:2001(1000) ack 1 <nop,nop,TS val 12148 ecr 140>
// RTO and retransmit head.
+1.216 > P. 1001:2001(1000) ack 1 <nop,nop,TS val 12148 ecr 140>
// RTO and retransmit head.
+2.432 > P. 1001:2001(1000) ack 1 <nop,nop,TS val 15768 ecr 140>
// RTO and retransmit head.
+4.864 > P. 1001:2001(1000) ack 1 <nop,nop,TS val 20642 ecr 140>
// After all 5 RTO retransmissions, the connection times out, as expected:
+9.8 write(4, ..., 1000) = -1 ETIMEDOUT (Connection Timed Out)
end of thread, other threads:[~2022-09-02 19:17 UTC | newest]
2022-09-02 10:29 retrans_stamp not cleared while testing NewReno implementation Arankal, Nagaraj
2022-09-02 13:55 ` Neal Cardwell
2022-09-02 14:19 ` Arankal, Nagaraj
2022-09-02 19:17 ` Neal Cardwell