* Crash when receiving FIN-ACK in TCP_FIN_WAIT1 state
@ 2019-10-20 20:25 Subash Abhinov Kasiviswanathan
  2019-10-20 22:16 ` Neal Cardwell
  2019-10-21 14:17 ` Eric Dumazet
  0 siblings, 2 replies; 23+ messages in thread
From: Subash Abhinov Kasiviswanathan @ 2019-10-20 20:25 UTC (permalink / raw)
  To: netdev, ycheng, eric.dumazet, ncardwell

We are often seeing a crash in the TCP ACK codepath in our regression
racks with an ARM64 device running a 4.19-based kernel.

It appears that tp->highest_sack is invalid when it is accessed after a
FIN-ACK is received. In all instances of the crash, the TCP socket is
in TCP_FIN_WAIT1 state.

[include/net/tcp.h]
static inline u32 tcp_highest_sack_seq(struct tcp_sock *tp)
{
	if (!tp->sacked_out)
		return tp->snd_una;

	if (tp->highest_sack == NULL)
		return tp->snd_nxt;

	return TCP_SKB_CB(tp->highest_sack)->seq;
}

[net/ipv4/tcp_input.c]
static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
{
...
	prior_fack = tcp_is_sack(tp) ? tcp_highest_sack_seq(tp) : tp->snd_una;
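
For reference, TCP_SKB_CB() resolves to a cast into the control block
embedded in the skb itself (include/net/tcp.h), so the
tcp_highest_sack_seq() read above dereferences whatever skb pointer is
stored in tp->highest_sack. If that skb has been freed, the read
touches freed memory, which matches the fault below:

[include/net/tcp.h]
#define TCP_SKB_CB(__skb)	((struct tcp_skb_cb *)&((__skb)->cb[0]))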


Crash call stack below:

  16496.596106:   <6> Unable to handle kernel paging request at virtual address fffffff2cd81a368
  16496.730771:   <2> pc : tcp_ack+0x174/0x11e8
  16496.734536:   <2> lr : tcp_rcv_state_process+0x318/0x1300
  16497.183109:   <2> Call trace:
  16497.183114:   <2>  tcp_ack+0x174/0x11e8
  16497.183115:   <2>  tcp_rcv_state_process+0x318/0x1300
  16497.183117:   <2>  tcp_v4_do_rcv+0x1a8/0x1f0
  16497.183118:   <2>  tcp_v4_rcv+0xe90/0xec8
  16497.183120:   <2>  ip_protocol_deliver_rcu+0x150/0x298
  16497.183121:   <2>  ip_local_deliver+0x21c/0x2a8
  16497.183122:   <2>  ip_rcv+0x1c4/0x210
  16497.183124:   <2>  __netif_receive_skb_core+0xab0/0xd90
  16497.183125:   <2>  netif_receive_skb_internal+0x12c/0x368
  16497.183126:   <2>  napi_gro_receive+0x1e0/0x290

Is tp->highest_sack expected to be accessed in this state?

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


* Re: Crash when receiving FIN-ACK in TCP_FIN_WAIT1 state
  2019-10-20 20:25 Crash when receiving FIN-ACK in TCP_FIN_WAIT1 state Subash Abhinov Kasiviswanathan
@ 2019-10-20 22:16 ` Neal Cardwell
  2019-10-20 23:15   ` Subash Abhinov Kasiviswanathan
  2019-10-21 14:17 ` Eric Dumazet
  1 sibling, 1 reply; 23+ messages in thread
From: Neal Cardwell @ 2019-10-20 22:16 UTC (permalink / raw)
  To: Subash Abhinov Kasiviswanathan; +Cc: Netdev, Yuchung Cheng, Eric Dumazet

On Sun, Oct 20, 2019 at 4:25 PM Subash Abhinov
Kasiviswanathan <subashab@codeaurora.org> wrote:
>
> We are often seeing a crash in the TCP ACK codepath in our regression
> racks with an ARM64 device running a 4.19-based kernel.
>
> It appears that tp->highest_sack is invalid when it is accessed after a
> FIN-ACK is received. In all instances of the crash, the TCP socket is
> in TCP_FIN_WAIT1 state.

Hmm. Random related thought while searching for a possible cause: I
wonder if tcp_write_queue_purge() should clear tp->highest_sack (and
possibly tp->sacked_out)? The tcp_write_queue_purge() code is careful
to call  tcp_clear_all_retrans_hints(tcp_sk(sk)) and I would imagine
that similar considerations would imply that we should clear at least
tp->highest_sack?

neal
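
For reference, tcp_clear_all_retrans_hints() in v4.19 only resets the
two skb hint pointers; it leaves tp->highest_sack and tp->sacked_out
untouched, which is what prompts the question above:

[include/net/tcp.h]
static inline void tcp_clear_retrans_hints_partial(struct tcp_sock *tp)
{
	tp->lost_skb_hint = NULL;
}

static inline void tcp_clear_all_retrans_hints(struct tcp_sock *tp)
{
	tcp_clear_retrans_hints_partial(tp);
	tp->retransmit_skb_hint = NULL;
}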


* Re: Crash when receiving FIN-ACK in TCP_FIN_WAIT1 state
  2019-10-20 22:16 ` Neal Cardwell
@ 2019-10-20 23:15   ` Subash Abhinov Kasiviswanathan
  2019-10-21  1:20     ` Neal Cardwell
  0 siblings, 1 reply; 23+ messages in thread
From: Subash Abhinov Kasiviswanathan @ 2019-10-20 23:15 UTC (permalink / raw)
  To: Neal Cardwell; +Cc: Netdev, Yuchung Cheng, Eric Dumazet

> Hmm. Random related thought while searching for a possible cause: I
> wonder if tcp_write_queue_purge() should clear tp->highest_sack (and
> possibly tp->sacked_out)? The tcp_write_queue_purge() code is careful
> to call  tcp_clear_all_retrans_hints(tcp_sk(sk)) and I would imagine
> that similar considerations would imply that we should clear at least
> tp->highest_sack?
> 
> neal

Hi Neal

If the socket is in FIN-WAIT1, does that mean that all the segments
corresponding to SACK blocks are sent and ACKed already?

tp->sacked_out is non-zero in all these crashes (is the SACK information
possibly invalid or stale here?).

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


* Re: Crash when receiving FIN-ACK in TCP_FIN_WAIT1 state
  2019-10-20 23:15   ` Subash Abhinov Kasiviswanathan
@ 2019-10-21  1:20     ` Neal Cardwell
  2019-10-21  2:45       ` Subash Abhinov Kasiviswanathan
  0 siblings, 1 reply; 23+ messages in thread
From: Neal Cardwell @ 2019-10-21  1:20 UTC (permalink / raw)
  To: Subash Abhinov Kasiviswanathan; +Cc: Netdev, Yuchung Cheng, Eric Dumazet

On Sun, Oct 20, 2019 at 7:15 PM Subash Abhinov Kasiviswanathan
<subashab@codeaurora.org> wrote:
>
> > Hmm. Random related thought while searching for a possible cause: I
> > wonder if tcp_write_queue_purge() should clear tp->highest_sack (and
> > possibly tp->sacked_out)? The tcp_write_queue_purge() code is careful
> > to call  tcp_clear_all_retrans_hints(tcp_sk(sk)) and I would imagine
> > that similar considerations would imply that we should clear at least
> > tp->highest_sack?
> >
> > neal
>
> Hi Neal
>
> If the socket is in FIN-WAIT1, does that mean that all the segments
> corresponding to SACK blocks are sent and ACKed already?

FIN-WAIT1 just means the local application has called close() or
shutdown() to shut down the sending direction of the socket, and the
local TCP stack has sent a FIN, and is waiting to receive a FIN and an
ACK from the other side (in either order, or simultaneously). The
ASCII art state transition diagram on page 22 of RFC 793 (e.g.
https://tools.ietf.org/html/rfc793#section-3.2 ) is one source for
this, though the W. Richard Stevens books have a much more readable
diagram.
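
The transitions out of FIN-WAIT-1 in that diagram, paraphrased:

  ESTABLISHED --- close(): snd FIN ---------------> FIN-WAIT-1
  FIN-WAIT-1  --- rcv ACK of FIN -----------------> FIN-WAIT-2
  FIN-WAIT-1  --- rcv FIN: snd ACK ----------------> CLOSING
  FIN-WAIT-1  --- rcv FIN + ACK of FIN: snd ACK ---> TIME-WAIT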

There may still be unacked and SACKed data in the retransmit queue at
this point.

> tp->sacked_out is non-zero in all these crashes

Thanks, that is a useful data point. Do you know what particular value
 tp->sacked_out has? Would you be able to capture/log the value of
tp->packets_out, tp->lost_out, and tp->retrans_out as well?

> (is the SACK information possibly invalid or stale here?).

Yes, one guess would be that somehow the skbs in the retransmit queue
have been freed, but tp->sacked_out is still non-zero and
tp->highest_sack is still a dangling pointer into one of those freed
skbs. The tcp_write_queue_purge() function is one function that frees
the skbs in the retransmit queue and leaves tp->sacked_out as non-zero
and  tp->highest_sack as a dangling pointer to a freed skb, AFAICT, so
that's why I'm wondering about that function. I can't think of a
specific sequence of events that would involve tcp_write_queue_purge()
and then a socket that's still in FIN-WAIT1. Maybe I'm not being
creative enough, or maybe that guess is on the wrong track. Would you
be able to set a new bit in the tcp_sock in tcp_write_queue_purge()
and log it in your instrumentation point, to see if
tcp_write_queue_purge()  was called for these connections that cause
this crash?

thanks,
neal


* Re: Crash when receiving FIN-ACK in TCP_FIN_WAIT1 state
  2019-10-21  1:20     ` Neal Cardwell
@ 2019-10-21  2:45       ` Subash Abhinov Kasiviswanathan
  2019-10-21 11:47         ` Neal Cardwell
  0 siblings, 1 reply; 23+ messages in thread
From: Subash Abhinov Kasiviswanathan @ 2019-10-21  2:45 UTC (permalink / raw)
  To: Neal Cardwell; +Cc: Netdev, Yuchung Cheng, Eric Dumazet

> FIN-WAIT1 just means the local application has called close() or
> shutdown() to shut down the sending direction of the socket, and the
> local TCP stack has sent a FIN, and is waiting to receive a FIN and an
> ACK from the other side (in either order, or simultaneously). The
> ASCII art state transition diagram on page 22 of RFC 793 (e.g.
> https://tools.ietf.org/html/rfc793#section-3.2 ) is one source for
> this, though the W. Richard Stevens books have a much more readable
> diagram.
> 
> There may still be unacked and SACKed data in the retransmit queue at
> this point.
> 

Thanks for the clarification.

> Thanks, that is a useful data point. Do you know what particular value
>  tp->sacked_out has? Would you be able to capture/log the value of
> tp->packets_out, tp->lost_out, and tp->retrans_out as well?
> 

tp->sacked_out varies per crash instance - 55, 180, etc.
However the other values are always the same - tp->packets_out is 0,
tp->lost_out is 1 and tp->retrans_out is 1.

> Yes, one guess would be that somehow the skbs in the retransmit queue
> have been freed, but tp->sacked_out is still non-zero and
> tp->highest_sack is still a dangling pointer into one of those freed
> skbs. The tcp_write_queue_purge() function is one function that frees
> the skbs in the retransmit queue and leaves tp->sacked_out as non-zero
> and  tp->highest_sack as a dangling pointer to a freed skb, AFAICT, so
> that's why I'm wondering about that function. I can't think of a
> specific sequence of events that would involve tcp_write_queue_purge()
> and then a socket that's still in FIN-WAIT1. Maybe I'm not being
> creative enough, or maybe that guess is on the wrong track. Would you
> be able to set a new bit in the tcp_sock in tcp_write_queue_purge()
> and log it in your instrumentation point, to see if
> tcp_write_queue_purge()  was called for these connections that cause
> this crash?

Sure, I can try this out.

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


* Re: Crash when receiving FIN-ACK in TCP_FIN_WAIT1 state
  2019-10-21  2:45       ` Subash Abhinov Kasiviswanathan
@ 2019-10-21 11:47         ` Neal Cardwell
  2019-10-22  0:04           ` Subash Abhinov Kasiviswanathan
  0 siblings, 1 reply; 23+ messages in thread
From: Neal Cardwell @ 2019-10-21 11:47 UTC (permalink / raw)
  To: Subash Abhinov Kasiviswanathan; +Cc: Netdev, Yuchung Cheng, Eric Dumazet

On Sun, Oct 20, 2019 at 10:45 PM Subash Abhinov Kasiviswanathan
<subashab@codeaurora.org> wrote:
>
> > FIN-WAIT1 just means the local application has called close() or
> > shutdown() to shut down the sending direction of the socket, and the
> > local TCP stack has sent a FIN, and is waiting to receive a FIN and an
> > ACK from the other side (in either order, or simultaneously). The
> > ASCII art state transition diagram on page 22 of RFC 793 (e.g.
> > https://tools.ietf.org/html/rfc793#section-3.2 ) is one source for
> > this, though the W. Richard Stevens books have a much more readable
> > diagram.
> >
> > There may still be unacked and SACKed data in the retransmit queue at
> > this point.
> >
>
> Thanks for the clarification.
>
> > Thanks, that is a useful data point. Do you know what particular value
> >  tp->sacked_out has? Would you be able to capture/log the value of
> > tp->packets_out, tp->lost_out, and tp->retrans_out as well?
> >
>
> tp->sacked_out varies per crash instance - 55, 180, etc.
> However the other values are always the same - tp->packets_out is 0,
> tp->lost_out is 1 and tp->retrans_out is 1.

Interesting! As tcp_input.c summarizes, "packets_out is
SND.NXT-SND.UNA counted in packets". In the normal operation of a
socket, tp->packets_out should not be 0 if any of those other fields
are non-zero.
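
A minimal sketch of that invariant as a debug check (a hypothetical
helper, not something in the tree); WARN_ONCE fires the first time the
inconsistent state is observed:

static inline void tcp_warn_inconsistent_counters(const struct tcp_sock *tp)
{
	/* In normal operation, a non-zero SACK/loss/retrans counter
	 * implies packets_out is non-zero as well.
	 */
	WARN_ONCE(!tp->packets_out &&
		  (tp->sacked_out || tp->lost_out || tp->retrans_out),
		  "tcp: packets_out=0 sacked_out=%u lost_out=%u retrans_out=%u\n",
		  tp->sacked_out, tp->lost_out, tp->retrans_out);
}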

The tcp_write_queue_purge() function sets packets_out to 0:

  https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git/tree/net/ipv4/tcp.c?h=v4.19#n2526

So the execution of tcp_write_queue_purge() before this point is one
way for the socket to end up in this weird state.
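
Quoting the v4.19 function for reference: it frees every skb in both
the write and retransmit queues and zeroes packets_out, but leaves
sacked_out, lost_out, retrans_out, and highest_sack alone:

[net/ipv4/tcp.c]
void tcp_write_queue_purge(struct sock *sk)
{
	struct sk_buff *skb;

	tcp_chrono_stop(sk, TCP_CHRONO_BUSY);
	while ((skb = __skb_dequeue(&sk->sk_write_queue)) != NULL) {
		tcp_skb_tsorted_anchor_cleanup(skb);
		sk_wmem_free_skb(sk, skb);
	}
	tcp_rtx_queue_purge(sk);
	INIT_LIST_HEAD(&tcp_sk(sk)->tsorted_sent_queue);
	sk_mem_reclaim(sk);
	tcp_clear_all_retrans_hints(tcp_sk(sk));
	tcp_sk(sk)->packets_out = 0;
	inet_csk(sk)->icsk_backoff = 0;
}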

> > Yes, one guess would be that somehow the skbs in the retransmit queue
> > have been freed, but tp->sacked_out is still non-zero and
> > tp->highest_sack is still a dangling pointer into one of those freed
> > skbs. The tcp_write_queue_purge() function is one function that frees
> > the skbs in the retransmit queue and leaves tp->sacked_out as non-zero
> > and  tp->highest_sack as a dangling pointer to a freed skb, AFAICT, so
> > that's why I'm wondering about that function. I can't think of a
> > specific sequence of events that would involve tcp_write_queue_purge()
> > and then a socket that's still in FIN-WAIT1. Maybe I'm not being
> > creative enough, or maybe that guess is on the wrong track. Would you
> > be able to set a new bit in the tcp_sock in tcp_write_queue_purge()
> > and log it in your instrumentation point, to see if
> > tcp_write_queue_purge()  was called for these connections that cause
> > this crash?
>
> Sure, I can try this out.

Great! Thanks!

neal


* Re: Crash when receiving FIN-ACK in TCP_FIN_WAIT1 state
  2019-10-20 20:25 Crash when receiving FIN-ACK in TCP_FIN_WAIT1 state Subash Abhinov Kasiviswanathan
  2019-10-20 22:16 ` Neal Cardwell
@ 2019-10-21 14:17 ` Eric Dumazet
  2019-10-21 17:40   ` Subash Abhinov Kasiviswanathan
  1 sibling, 1 reply; 23+ messages in thread
From: Eric Dumazet @ 2019-10-21 14:17 UTC (permalink / raw)
  To: Subash Abhinov Kasiviswanathan, netdev, ycheng, eric.dumazet, ncardwell



On 10/20/19 1:25 PM, Subash Abhinov Kasiviswanathan wrote:
> We are often seeing a crash in the TCP ACK codepath in our regression
> racks with an ARM64 device running a 4.19-based kernel.
> 

Please give us a pointer to the exact git tree and sha1.

I do not analyze TCP stack problems without an exact starting point,
or at least a crystal ball, which I do not have.


* Re: Crash when receiving FIN-ACK in TCP_FIN_WAIT1 state
  2019-10-21 14:17 ` Eric Dumazet
@ 2019-10-21 17:40   ` Subash Abhinov Kasiviswanathan
  2019-10-21 18:10     ` Josh Hunt
  0 siblings, 1 reply; 23+ messages in thread
From: Subash Abhinov Kasiviswanathan @ 2019-10-21 17:40 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: netdev, ycheng, ncardwell

> Please give us a pointer to the exact git tree and sha1.
> 
> I do not analyze TCP stack problems without an exact starting point,
> or at least a crystal ball, which I do not have.

Hi Eric

We are at this commit - Merge 4.19.75 into android-4.19-q.

https://android.googlesource.com/kernel/common/+/cfe571e421ab873403a8f75413e04ecf5bf7e393

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


* Re: Crash when receiving FIN-ACK in TCP_FIN_WAIT1 state
  2019-10-21 17:40   ` Subash Abhinov Kasiviswanathan
@ 2019-10-21 18:10     ` Josh Hunt
  0 siblings, 0 replies; 23+ messages in thread
From: Josh Hunt @ 2019-10-21 18:10 UTC (permalink / raw)
  To: Subash Abhinov Kasiviswanathan
  Cc: Eric Dumazet, netdev, Yuchung Cheng, Neal Cardwell, hcaldwel

On Mon, Oct 21, 2019 at 10:42 AM Subash Abhinov Kasiviswanathan
<subashab@codeaurora.org> wrote:
>
> > Please give us a pointer to the exact git tree and sha1.
> >
> > I do not analyze TCP stack problems without an exact starting point,
> > or at least a crystal ball, which I do not have.
>
> Hi Eric
>
> We are at this commit - Merge 4.19.75 into android-4.19-q.
>
> https://android.googlesource.com/kernel/common/+/cfe571e421ab873403a8f75413e04ecf5bf7e393
>
> --
> Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
> a Linux Foundation Collaborative Project

We were investigating a problem in this area a few months back (it
turned out to be something else), but were wondering if retrans_out,
lost_out, and sacked_out should be cleared in tcp_rtx_queue_purge()? I
meant to get back to it, but it got buried and this thread just jogged
my memory...

-- 
Josh


* Re: Crash when receiving FIN-ACK in TCP_FIN_WAIT1 state
  2019-10-21 11:47         ` Neal Cardwell
@ 2019-10-22  0:04           ` Subash Abhinov Kasiviswanathan
  2019-10-22  1:28             ` Neal Cardwell
  0 siblings, 1 reply; 23+ messages in thread
From: Subash Abhinov Kasiviswanathan @ 2019-10-22  0:04 UTC (permalink / raw)
  To: Neal Cardwell; +Cc: Netdev, Yuchung Cheng, Eric Dumazet

> Interesting! As tcp_input.c summarizes, "packets_out is
> SND.NXT-SND.UNA counted in packets". In the normal operation of a
> socket, tp->packets_out should not be 0 if any of those other fields
> are non-zero.
> 
> The tcp_write_queue_purge() function sets packets_out to 0:
> 
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git/tree/net/ipv4/tcp.c?h=v4.19#n2526
> 
> So the execution of tcp_write_queue_purge()  before this point is one
> way for the socket to end up in this weird state.
> 

In one of the instances, the values are tp->snd_nxt = 1016118098 and
tp->snd_una = 1016047820.

tp->mss_cache = 1378

I assume the number of outstanding segments should be
(tp->snd_nxt - tp->snd_una)/tp->mss_cache = 51
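(1016118098 - 1016047820 = 70278 bytes, and 70278 / 1378 = 51, so the
estimate is exact for full-sized segments.)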

tp->packets_out = 0 and tp->sacked_out = 158 in this case.

>> > Yes, one guess would be that somehow the skbs in the retransmit queue
>> > have been freed, but tp->sacked_out is still non-zero and
>> > tp->highest_sack is still a dangling pointer into one of those freed
>> > skbs. The tcp_write_queue_purge() function is one function that frees
>> > the skbs in the retransmit queue and leaves tp->sacked_out as non-zero
>> > and  tp->highest_sack as a dangling pointer to a freed skb, AFAICT, so
>> > that's why I'm wondering about that function. I can't think of a
>> > specific sequence of events that would involve tcp_write_queue_purge()
>> > and then a socket that's still in FIN-WAIT1. Maybe I'm not being
>> > creative enough, or maybe that guess is on the wrong track. Would you
>> > be able to set a new bit in the tcp_sock in tcp_write_queue_purge()
>> > and log it in your instrumentation point, to see if
>> > tcp_write_queue_purge()  was called for these connections that cause
>> > this crash?

I've queued up a build which logs calls to tcp_write_queue_purge and
clears tp->highest_sack and tp->sacked_out. I will let you know how
it fares by end of week.

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


* Re: Crash when receiving FIN-ACK in TCP_FIN_WAIT1 state
  2019-10-22  0:04           ` Subash Abhinov Kasiviswanathan
@ 2019-10-22  1:28             ` Neal Cardwell
  2019-10-29  1:36               ` Subash Abhinov Kasiviswanathan
  0 siblings, 1 reply; 23+ messages in thread
From: Neal Cardwell @ 2019-10-22  1:28 UTC (permalink / raw)
  To: Subash Abhinov Kasiviswanathan; +Cc: Netdev, Yuchung Cheng, Eric Dumazet

On Mon, Oct 21, 2019 at 8:04 PM Subash Abhinov Kasiviswanathan
<subashab@codeaurora.org> wrote:
>
> > Interesting! As tcp_input.c summarizes, "packets_out is
> > SND.NXT-SND.UNA counted in packets". In the normal operation of a
> > socket, tp->packets_out should not be 0 if any of those other fields
> > are non-zero.
> >
> > The tcp_write_queue_purge() function sets packets_out to 0:
> >
> >
> > https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git/tree/net/ipv4/tcp.c?h=v4.19#n2526
> >
> > So the execution of tcp_write_queue_purge()  before this point is one
> > way for the socket to end up in this weird state.
> >
>
> In one of the instances, the values are tp->snd_nxt = 1016118098,
> tp->snd_una = 1016047820
>
> tp->mss_cache = 1378
>
> I assume the number of outstanding segments should be
> (tp->snd_nxt - tp->snd_una)/tp->mss_cache = 51

That would be a good expectation if all the packets were full-sized.

> tp->packets_out = 0 and tp->sacked_out = 158 in this case.

OK, thanks. It could be that sacked_out is reasonable and some of the
packets were not full-sized. But, as discussed above, typically
packets_out should not be 0 if sacked_out is non-zero (with at least
the exception of the tcp_write_queue_purge() case).

> >> > Yes, one guess would be that somehow the skbs in the retransmit queue
> >> > have been freed, but tp->sacked_out is still non-zero and
> >> > tp->highest_sack is still a dangling pointer into one of those freed
> >> > skbs. The tcp_write_queue_purge() function is one function that frees
> >> > the skbs in the retransmit queue and leaves tp->sacked_out as non-zero
> >> > and  tp->highest_sack as a dangling pointer to a freed skb, AFAICT, so
> >> > that's why I'm wondering about that function. I can't think of a
> >> > specific sequence of events that would involve tcp_write_queue_purge()
> >> > and then a socket that's still in FIN-WAIT1. Maybe I'm not being
> >> > creative enough, or maybe that guess is on the wrong track. Would you
> >> > be able to set a new bit in the tcp_sock in tcp_write_queue_purge()
> >> > and log it in your instrumentation point, to see if
> >> > tcp_write_queue_purge()  was called for these connections that cause
> >> > this crash?
>
> I've queued up a build which logs calls to tcp_write_queue_purge and
> clears tp->highest_sack and tp->sacked_out. I will let you know how
> it fares by end of week.

OK, thanks. That should be a useful data point.

cheers,
neal


* Re: Crash when receiving FIN-ACK in TCP_FIN_WAIT1 state
  2019-10-22  1:28             ` Neal Cardwell
@ 2019-10-29  1:36               ` Subash Abhinov Kasiviswanathan
  2019-10-30 17:13                 ` Neal Cardwell
  0 siblings, 1 reply; 23+ messages in thread
From: Subash Abhinov Kasiviswanathan @ 2019-10-29  1:36 UTC (permalink / raw)
  To: Neal Cardwell; +Cc: Netdev, Yuchung Cheng, Eric Dumazet

>> I've queued up a build which logs calls to tcp_write_queue_purge and
>> clears tp->highest_sack and tp->sacked_out. I will let you know how
>> it fares by end of week.
> 
> OK, thanks. That should be a useful data point.

I haven't seen the crash in testing so far.
Clearing these values seems to help.

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


* Re: Crash when receiving FIN-ACK in TCP_FIN_WAIT1 state
  2019-10-29  1:36               ` Subash Abhinov Kasiviswanathan
@ 2019-10-30 17:13                 ` Neal Cardwell
  2019-10-30 18:27                   ` Subash Abhinov Kasiviswanathan
  0 siblings, 1 reply; 23+ messages in thread
From: Neal Cardwell @ 2019-10-30 17:13 UTC (permalink / raw)
  To: Subash Abhinov Kasiviswanathan; +Cc: Netdev, Yuchung Cheng, Eric Dumazet

On Mon, Oct 28, 2019 at 9:36 PM Subash Abhinov Kasiviswanathan
<subashab@codeaurora.org> wrote:
>
> >> I've queued up a build which logs calls to tcp_write_queue_purge and
> >> clears tp->highest_sack and tp->sacked_out. I will let you know how
> >> it fares by end of week.
> >
> > OK, thanks. That should be a useful data point.
>
> I haven't seen the crash in the testing so far.
> Looks like clearing these values seems to help.

Thanks. Do you mind sharing what your patch looked like, so we can
understand precisely what was changed?

Also, are you able to share what the workload looked like that tickled
this issue? (web client? file server?...)

thanks,
neal


* Re: Crash when receiving FIN-ACK in TCP_FIN_WAIT1 state
  2019-10-30 17:13                 ` Neal Cardwell
@ 2019-10-30 18:27                   ` Subash Abhinov Kasiviswanathan
  2019-10-30 21:48                     ` Josh Hunt
  2019-10-31  0:38                     ` Eric Dumazet
  0 siblings, 2 replies; 23+ messages in thread
From: Subash Abhinov Kasiviswanathan @ 2019-10-30 18:27 UTC (permalink / raw)
  To: Neal Cardwell; +Cc: Netdev, Yuchung Cheng, Eric Dumazet

> Thanks. Do you mind sharing what your patch looked like, so we can
> understand precisely what was changed?
> 
> Also, are you able to share what the workload looked like that tickled
> this issue? (web client? file server?...)

Sure. This was seen only on our regression racks and the workload there
is a combination of FTP, browsing and other apps.

diff --git a/include/linux/tcp.h b/include/linux/tcp.h
index 4374196..9af7497 100644
--- a/include/linux/tcp.h
+++ b/include/linux/tcp.h
@@ -232,7 +232,8 @@ struct tcp_sock {
                 fastopen_connect:1, /* FASTOPEN_CONNECT sockopt */
                 fastopen_no_cookie:1, /* Allow send/recv SYN+data without a cookie */
                 is_sack_reneg:1,    /* in recovery from loss with SACK reneg? */
-               unused:2;
+               unused:1,
+               wqp_called:1;
         u8      nonagle     : 4,/* Disable Nagle algorithm?             */
                 thin_lto    : 1,/* Use linear timeouts for thin streams */
                 recvmsg_inq : 1,/* Indicate # of bytes in queue upon recvmsg */
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 1a1fcb3..0c29bdd 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -2534,6 +2534,9 @@ void tcp_write_queue_purge(struct sock *sk)
         INIT_LIST_HEAD(&tcp_sk(sk)->tsorted_sent_queue);
         sk_mem_reclaim(sk);
         tcp_clear_all_retrans_hints(tcp_sk(sk));
+       tcp_sk(sk)->highest_sack = NULL;
+       tcp_sk(sk)->sacked_out = 0;
+       tcp_sk(sk)->wqp_called = 1;
         tcp_sk(sk)->packets_out = 0;
         inet_csk(sk)->icsk_backoff = 0;
  }
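
A sketch of how the new bit might be consumed at the crash site
(hypothetical instrumentation, not part of the patch above); placed
near the top of tcp_ack(), it would make a later crash dump answer
directly whether the purge had run on this socket:

	/* Hypothetical: log the debug bit whenever the counters are in
	 * the inconsistent state seen in the crashes.
	 */
	if (unlikely(tp->sacked_out && !tp->packets_out))
		pr_warn("tcp_ack: state=%d wqp_called=%u sacked_out=%u\n",
			sk->sk_state, tp->wqp_called, tp->sacked_out);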


-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


* Re: Crash when receiving FIN-ACK in TCP_FIN_WAIT1 state
  2019-10-30 18:27                   ` Subash Abhinov Kasiviswanathan
@ 2019-10-30 21:48                     ` Josh Hunt
  2019-10-31  1:27                       ` Eric Dumazet
  2019-10-31  0:38                     ` Eric Dumazet
  1 sibling, 1 reply; 23+ messages in thread
From: Josh Hunt @ 2019-10-30 21:48 UTC (permalink / raw)
  To: Subash Abhinov Kasiviswanathan, Neal Cardwell
  Cc: Netdev, Yuchung Cheng, Eric Dumazet

On 10/30/19 11:27 AM, Subash Abhinov Kasiviswanathan wrote:
>> Thanks. Do you mind sharing what your patch looked like, so we can
>> understand precisely what was changed?
>>
>> Also, are you able to share what the workload looked like that tickled
>> this issue? (web client? file server?...)
> 
> Sure. This was seen only on our regression racks and the workload there
> is a combination of FTP, browsing and other apps.
> 
> diff --git a/include/linux/tcp.h b/include/linux/tcp.h
> index 4374196..9af7497 100644
> --- a/include/linux/tcp.h
> +++ b/include/linux/tcp.h
> @@ -232,7 +232,8 @@ struct tcp_sock {
>                  fastopen_connect:1, /* FASTOPEN_CONNECT sockopt */
>                  fastopen_no_cookie:1, /* Allow send/recv SYN+data without a cookie */
>                  is_sack_reneg:1,    /* in recovery from loss with SACK reneg? */
> -               unused:2;
> +               unused:1,
> +               wqp_called:1;
>          u8      nonagle     : 4,/* Disable Nagle algorithm? */
>                  thin_lto    : 1,/* Use linear timeouts for thin streams */
>                  recvmsg_inq : 1,/* Indicate # of bytes in queue upon recvmsg */
> diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
> index 1a1fcb3..0c29bdd 100644
> --- a/net/ipv4/tcp.c
> +++ b/net/ipv4/tcp.c
> @@ -2534,6 +2534,9 @@ void tcp_write_queue_purge(struct sock *sk)
>          INIT_LIST_HEAD(&tcp_sk(sk)->tsorted_sent_queue);
>          sk_mem_reclaim(sk);
>          tcp_clear_all_retrans_hints(tcp_sk(sk));
> +       tcp_sk(sk)->highest_sack = NULL;
> +       tcp_sk(sk)->sacked_out = 0;
> +       tcp_sk(sk)->wqp_called = 1;
>          tcp_sk(sk)->packets_out = 0;
>          inet_csk(sk)->icsk_backoff = 0;
>   }
> 
> 

Neal

Since tcp_write_queue_purge() calls tcp_rtx_queue_purge() and we're 
deleting everything in the retrans queue there, doesn't it make sense to 
zero out all of those associated counters? Obviously clearing sacked_out 
is helping here, but is there a reason to keep track of lost_out, 
retrans_out, etc if retrans queue is now empty? Maybe calling 
tcp_clear_retrans() from tcp_rtx_queue_purge() ?

Josh
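
A sketch of what that suggestion might look like against v4.19
(illustrative hunk, not a posted patch). tcp_clear_retrans() already
exists in net/ipv4/tcp_input.c and zeroes retrans_out, lost_out,
undo_marker, undo_retrans, and sacked_out (though not highest_sack);
with sacked_out cleared, tcp_highest_sack_seq() returns early and never
dereferences the stale pointer:

--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ static void tcp_rtx_queue_purge(struct sock *sk)
 		tcp_rtx_queue_unlink(skb, sk);
 		sk_wmem_free_skb(sk, skb);
 	}
+	/* proposed: reset SACK/loss/retrans bookkeeping with the queue */
+	tcp_clear_retrans(tcp_sk(sk));
 }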


* Re: Crash when receiving FIN-ACK in TCP_FIN_WAIT1 state
  2019-10-30 18:27                   ` Subash Abhinov Kasiviswanathan
  2019-10-30 21:48                     ` Josh Hunt
@ 2019-10-31  0:38                     ` Eric Dumazet
  2019-10-31  1:17                       ` Subash Abhinov Kasiviswanathan
  1 sibling, 1 reply; 23+ messages in thread
From: Eric Dumazet @ 2019-10-31  0:38 UTC (permalink / raw)
  To: Subash Abhinov Kasiviswanathan, Neal Cardwell
  Cc: Netdev, Yuchung Cheng, Eric Dumazet



On 10/30/19 11:27 AM, Subash Abhinov Kasiviswanathan wrote:
>> Thanks. Do you mind sharing what your patch looked like, so we can
>> understand precisely what was changed?
>>
>> Also, are you able to share what the workload looked like that tickled
>> this issue? (web client? file server?...)
> 
> Sure. This was seen only on our regression racks and the workload there
> is a combination of FTP, browsing and other apps.
> 
> diff --git a/include/linux/tcp.h b/include/linux/tcp.h
> index 4374196..9af7497 100644
> --- a/include/linux/tcp.h
> +++ b/include/linux/tcp.h
> @@ -232,7 +232,8 @@ struct tcp_sock {
>                 fastopen_connect:1, /* FASTOPEN_CONNECT sockopt */
>                 fastopen_no_cookie:1, /* Allow send/recv SYN+data without a cookie */
>                 is_sack_reneg:1,    /* in recovery from loss with SACK reneg? */
> -               unused:2;
> +               unused:1,
> +               wqp_called:1;
>         u8      nonagle     : 4,/* Disable Nagle algorithm?             */
>                 thin_lto    : 1,/* Use linear timeouts for thin streams */
>                 recvmsg_inq : 1,/* Indicate # of bytes in queue upon recvmsg */
> diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
> index 1a1fcb3..0c29bdd 100644
> --- a/net/ipv4/tcp.c
> +++ b/net/ipv4/tcp.c
> @@ -2534,6 +2534,9 @@ void tcp_write_queue_purge(struct sock *sk)
>         INIT_LIST_HEAD(&tcp_sk(sk)->tsorted_sent_queue);
>         sk_mem_reclaim(sk);
>         tcp_clear_all_retrans_hints(tcp_sk(sk));
> +       tcp_sk(sk)->highest_sack = NULL;
> +       tcp_sk(sk)->sacked_out = 0;
> +       tcp_sk(sk)->wqp_called = 1;

What is this wqp_called exactly?



* Re: Crash when receiving FIN-ACK in TCP_FIN_WAIT1 state
  2019-10-31  0:38                     ` Eric Dumazet
@ 2019-10-31  1:17                       ` Subash Abhinov Kasiviswanathan
  0 siblings, 0 replies; 23+ messages in thread
From: Subash Abhinov Kasiviswanathan @ 2019-10-31  1:17 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: Neal Cardwell, Netdev, Yuchung Cheng

> What is this wqp_called exactly?

This was just a debug bit to ensure that tcp_write_queue_purge()
was called. In case the same crash was seen with the patch, this bit
would at least tell whether this function was invoked or not.

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


* Re: Crash when receiving FIN-ACK in TCP_FIN_WAIT1 state
  2019-10-30 21:48                     ` Josh Hunt
@ 2019-10-31  1:27                       ` Eric Dumazet
  2019-11-27  5:30                         ` Eric Dumazet
  0 siblings, 1 reply; 23+ messages in thread
From: Eric Dumazet @ 2019-10-31  1:27 UTC (permalink / raw)
  To: Josh Hunt, Subash Abhinov Kasiviswanathan, Neal Cardwell
  Cc: Netdev, Yuchung Cheng, Eric Dumazet



On 10/30/19 2:48 PM, Josh Hunt wrote:
> On 10/30/19 11:27 AM, Subash Abhinov Kasiviswanathan wrote:
>>> Thanks. Do you mind sharing what your patch looked like, so we can
>>> understand precisely what was changed?
>>>
>>> Also, are you able to share what the workload looked like that tickled
>>> this issue? (web client? file server?...)
>>
>> Sure. This was seen only on our regression racks and the workload there
>> is a combination of FTP, browsing and other apps.
>>
>> diff --git a/include/linux/tcp.h b/include/linux/tcp.h
>> index 4374196..9af7497 100644
>> --- a/include/linux/tcp.h
>> +++ b/include/linux/tcp.h
>> @@ -232,7 +232,8 @@ struct tcp_sock {
>>                  fastopen_connect:1, /* FASTOPEN_CONNECT sockopt */
>>                  fastopen_no_cookie:1, /* Allow send/recv SYN+data without a cookie */
>>                  is_sack_reneg:1,    /* in recovery from loss with SACK reneg? */
>> -               unused:2;
>> +               unused:1,
>> +               wqp_called:1;
>>          u8      nonagle     : 4,/* Disable Nagle algorithm? */
>>                  thin_lto    : 1,/* Use linear timeouts for thin streams */
>>                  recvmsg_inq : 1,/* Indicate # of bytes in queue upon recvmsg */
>> diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
>> index 1a1fcb3..0c29bdd 100644
>> --- a/net/ipv4/tcp.c
>> +++ b/net/ipv4/tcp.c
>> @@ -2534,6 +2534,9 @@ void tcp_write_queue_purge(struct sock *sk)
>>          INIT_LIST_HEAD(&tcp_sk(sk)->tsorted_sent_queue);
>>          sk_mem_reclaim(sk);
>>          tcp_clear_all_retrans_hints(tcp_sk(sk));
>> +       tcp_sk(sk)->highest_sack = NULL;
>> +       tcp_sk(sk)->sacked_out = 0;
>> +       tcp_sk(sk)->wqp_called = 1;
>>          tcp_sk(sk)->packets_out = 0;
>>          inet_csk(sk)->icsk_backoff = 0;
>>   }
>>
>>
> 
> Neal
> 
> Since tcp_write_queue_purge() calls tcp_rtx_queue_purge() and we're deleting everything in the retrans queue there, doesn't it make sense to zero out all of those associated counters? Obviously clearing sacked_out is helping here, but is there a reason to keep track of lost_out, retrans_out, etc if retrans queue is now empty? Maybe calling tcp_clear_retrans() from tcp_rtx_queue_purge() ?

First, I would like to understand if we hit this problem on current upstream kernels.

Maybe a backport forgot a dependency.

tcp_write_queue_purge() calls tcp_clear_all_retrans_hints(), not tcp_clear_retrans(),
this is probably for a reason.

Brute force clearing these fields might hide a serious bug.


* Re: Crash when receiving FIN-ACK in TCP_FIN_WAIT1 state
  2019-10-31  1:27                       ` Eric Dumazet
@ 2019-11-27  5:30                         ` Eric Dumazet
  2019-11-30  2:51                           ` subashab
       [not found]                           ` <0101016eba38455f-e79cd85a-a807-4309-bf3b-8a788135f3f2-000000@us-west-2.amazonses.com>
  0 siblings, 2 replies; 23+ messages in thread
From: Eric Dumazet @ 2019-11-27  5:30 UTC (permalink / raw)
  To: Eric Dumazet, Josh Hunt, Subash Abhinov Kasiviswanathan, Neal Cardwell
  Cc: Netdev, Yuchung Cheng



On 10/30/19 6:27 PM, Eric Dumazet wrote:
> 
> 
> On 10/30/19 2:48 PM, Josh Hunt wrote:
>> On 10/30/19 11:27 AM, Subash Abhinov Kasiviswanathan wrote:
>>>> Thanks. Do you mind sharing what your patch looked like, so we can
>>>> understand precisely what was changed?
>>>>
>>>> Also, are you able to share what the workload looked like that tickled
>>>> this issue? (web client? file server?...)
>>>
>>> Sure. This was seen only on our regression racks and the workload there
>>> is a combination of FTP, browsing and other apps.
>>>
>>> diff --git a/include/linux/tcp.h b/include/linux/tcp.h
>>> index 4374196..9af7497 100644
>>> --- a/include/linux/tcp.h
>>> +++ b/include/linux/tcp.h
>>> @@ -232,7 +232,8 @@ struct tcp_sock {
>>>                  fastopen_connect:1, /* FASTOPEN_CONNECT sockopt */
>>>                  fastopen_no_cookie:1, /* Allow send/recv SYN+data without a cookie */
>>>                  is_sack_reneg:1,    /* in recovery from loss with SACK reneg? */
>>> -               unused:2;
>>> +               unused:1,
>>> +               wqp_called:1;
>>>          u8      nonagle     : 4,/* Disable Nagle algorithm? */
>>>                  thin_lto    : 1,/* Use linear timeouts for thin streams */
>>>                  recvmsg_inq : 1,/* Indicate # of bytes in queue upon recvmsg */
>>> diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
>>> index 1a1fcb3..0c29bdd 100644
>>> --- a/net/ipv4/tcp.c
>>> +++ b/net/ipv4/tcp.c
>>> @@ -2534,6 +2534,9 @@ void tcp_write_queue_purge(struct sock *sk)
>>>          INIT_LIST_HEAD(&tcp_sk(sk)->tsorted_sent_queue);
>>>          sk_mem_reclaim(sk);
>>>          tcp_clear_all_retrans_hints(tcp_sk(sk));
>>> +       tcp_sk(sk)->highest_sack = NULL;
>>> +       tcp_sk(sk)->sacked_out = 0;
>>> +       tcp_sk(sk)->wqp_called = 1;
>>>          tcp_sk(sk)->packets_out = 0;
>>>          inet_csk(sk)->icsk_backoff = 0;
>>>   }
>>>
>>>
>>
>> Neal
>>
>> Since tcp_write_queue_purge() calls tcp_rtx_queue_purge() and we're deleting everything in the retrans queue there, doesn't it make sense to zero out all of those associated counters? Obviously clearing sacked_out is helping here, but is there a reason to keep track of lost_out, retrans_out, etc if retrans queue is now empty? Maybe calling tcp_clear_retrans() from tcp_rtx_queue_purge() ?
> 
> First, I would like to understand if we hit this problem on current upstream kernels.
> 
> Maybe a backport forgot a dependency.
> 
> tcp_write_queue_purge() calls tcp_clear_all_retrans_hints(), not tcp_clear_retrans(),
> this is probably for a reason.
> 
> Brute force clearing these fields might hide a serious bug.
> 

I guess we are all too busy to get more understanding on this :/


* Re: Crash when receiving FIN-ACK in TCP_FIN_WAIT1 state
  2019-11-27  5:30                         ` Eric Dumazet
@ 2019-11-30  2:51                           ` subashab
  2019-11-30  5:39                             ` Avinash Patil
       [not found]                           ` <0101016eba38455f-e79cd85a-a807-4309-bf3b-8a788135f3f2-000000@us-west-2.amazonses.com>
  1 sibling, 1 reply; 23+ messages in thread
From: subashab @ 2019-11-30  2:51 UTC (permalink / raw)
  To: Eric Dumazet, Josh Hunt; +Cc: Neal Cardwell, Netdev, Yuchung Cheng

>>> Since tcp_write_queue_purge() calls tcp_rtx_queue_purge() and we're 
>>> deleting everything in the retrans queue there, doesn't it make sense 
>>> to zero out all of those associated counters? Obviously clearing 
>>> sacked_out is helping here, but is there a reason to keep track of 
>>> lost_out, retrans_out, etc if retrans queue is now empty? Maybe 
>>> calling tcp_clear_retrans() from tcp_rtx_queue_purge() ?
>> 
>> First, I would like to understand if we hit this problem on current 
>> upstream kernels.
>> 
>> Maybe a backport forgot a dependency.
>> 
>> tcp_write_queue_purge() calls tcp_clear_all_retrans_hints(), not 
>> tcp_clear_retrans(),
>> this is probably for a reason.
>> 
>> Brute force clearing these fields might hide a serious bug.
>> 
> 
> I guess we are all too busy to get more understanding on this :/

Our test devices are on 4.19.x and it is not possible to switch to a
newer version. Perhaps Josh has seen this on a newer kernel.


* Re: Crash when receiving FIN-ACK in TCP_FIN_WAIT1 state
  2019-11-30  2:51                           ` subashab
@ 2019-11-30  5:39                             ` Avinash Patil
  2019-12-02  2:23                               ` Eric Dumazet
  0 siblings, 1 reply; 23+ messages in thread
From: Avinash Patil @ 2019-11-30  5:39 UTC (permalink / raw)
  To: subashab; +Cc: Eric Dumazet, Josh Hunt, Neal Cardwell, Netdev, Yuchung Cheng

Hi Eric,

This crash looks quite similar to the one I am experiencing [1] and
reported already.

[1] https://www.spinics.net/lists/netdev/msg611694.html

Thanks,
Avinash

On Fri, Nov 29, 2019 at 6:52 PM <subashab@codeaurora.org> wrote:
>
> >>> Since tcp_write_queue_purge() calls tcp_rtx_queue_purge() and we're
> >>> deleting everything in the retrans queue there, doesn't it make sense
> >>> to zero out all of those associated counters? Obviously clearing
> >>> sacked_out is helping here, but is there a reason to keep track of
> >>> lost_out, retrans_out, etc if retrans queue is now empty? Maybe
> >>> calling tcp_clear_retrans() from tcp_rtx_queue_purge() ?
> >>
> >> First, I would like to understand if we hit this problem on current
> >> upstream kernels.
> >>
> >> Maybe a backport forgot a dependency.
> >>
> >> tcp_write_queue_purge() calls tcp_clear_all_retrans_hints(), not
> >> tcp_clear_retrans(),
> >> this is probably for a reason.
> >>
> >> Brute force clearing these fields might hide a serious bug.
> >>
> >
> > I guess we are all too busy to get more understanding on this :/
>
> Our test devices are on 4.19.x and it is not possible to switch to a
> newer
> version. Perhaps Josh has seen this on a newer kernel.


* Re: Crash when receiving FIN-ACK in TCP_FIN_WAIT1 state
  2019-11-30  5:39                             ` Avinash Patil
@ 2019-12-02  2:23                               ` Eric Dumazet
  0 siblings, 0 replies; 23+ messages in thread
From: Eric Dumazet @ 2019-12-02  2:23 UTC (permalink / raw)
  To: Avinash Patil, subashab
  Cc: Eric Dumazet, Josh Hunt, Neal Cardwell, Netdev, Yuchung Cheng



On 11/29/19 9:39 PM, Avinash Patil wrote:
> Hi Eric,
> 
> This crash looks quite similar to the one I am experiencing [1] and
> reported already.
> 
> [1] https://www.spinics.net/lists/netdev/msg611694.html
>

Please start a bisection.

Thanks you.



* Re: Crash when receiving FIN-ACK in TCP_FIN_WAIT1 state
       [not found]                           ` <0101016eba38455f-e79cd85a-a807-4309-bf3b-8a788135f3f2-000000@us-west-2.amazonses.com>
@ 2019-12-03 17:24                             ` Josh Hunt
  0 siblings, 0 replies; 23+ messages in thread
From: Josh Hunt @ 2019-12-03 17:24 UTC (permalink / raw)
  To: subashab, Eric Dumazet; +Cc: Neal Cardwell, Netdev, Yuchung Cheng

On 11/29/19 6:51 PM, subashab@codeaurora.org wrote:
>>>> Since tcp_write_queue_purge() calls tcp_rtx_queue_purge() and we're 
>>>> deleting everything in the retrans queue there, doesn't it make 
>>>> sense to zero out all of those associated counters? Obviously 
>>>> clearing sacked_out is helping here, but is there a reason to keep 
>>>> track of lost_out, retrans_out, etc if retrans queue is now empty? 
>>>> Maybe calling tcp_clear_retrans() from tcp_rtx_queue_purge() ?
>>>
>>> First, I would like to understand if we hit this problem on current 
>>> upstream kernels.
>>>
>>> Maybe a backport forgot a dependency.
>>>
>>> tcp_write_queue_purge() calls tcp_clear_all_retrans_hints(), not 
>>> tcp_clear_retrans(),
>>> this is probably for a reason.
>>>
>>> Brute force clearing these fields might hide a serious bug.
>>>
>>
>> I guess we are all too busy to get more understanding on this :/
> 
> Our test devices are on 4.19.x and it is not possible to switch to a newer
> version. Perhaps Josh has seen this on a newer kernel.

Sorry, I've been out of town without email access. To be clear, I've
never seen this crash. I've only noticed that we do not clear some
counters when we clear out the retransmit queue, and this caught my eye
when debugging another unrelated issue. I will try to get some cycles
this week to instrument a kernel and reproduce the behavior I was
seeing. My concern IIRC was more around tcp_left_out() being greater
than packets_out + retrans_out, causing tcp_packets_in_flight() to
wrap. Anyway, I'll report my findings on this thread if they seem
relevant; otherwise maybe I'll start another discussion thread. I don't
want to pollute this one with my ramblings...

Josh
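
For reference, the helpers mentioned above (include/net/tcp.h); the
fields are unsigned, so packets_out = 0 combined with a non-zero
sacked_out makes tcp_packets_in_flight() underflow and wrap to a huge
value:

static inline unsigned int tcp_left_out(const struct tcp_sock *tp)
{
	return tp->sacked_out + tp->lost_out;
}

/* Underflows if tcp_left_out() exceeds packets_out + retrans_out. */
static inline unsigned int tcp_packets_in_flight(const struct tcp_sock *tp)
{
	return tp->packets_out - tcp_left_out(tp) + tp->retrans_out;
}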

