Netdev Archive on lore.kernel.org
* TCP stall issue
@ 2021-02-23 10:09 Gil Pedersen
  2021-02-23 15:41 ` Neal Cardwell
  0 siblings, 1 reply; 9+ messages in thread
From: Gil Pedersen @ 2021-02-23 10:09 UTC (permalink / raw)
  To: davem, yoshfuji, dsahern; +Cc: netdev

Hi,

I am investigating a TCP stall that can occur when sending to an Android device (kernel 4.9.148) from an Ubuntu server running kernel 5.11.0.

The issue seems to be that RACK is not applied when a D-SACK (with SACK) is received on the server after an RTO re-transmission (CA_Loss state). Here the re-transmitted segment is considered to be already delivered, and the loss undo logic is applied. Then nothing is re-transmitted until the next RTO, when the next segment is sent and the same thing happens again. This causes the retransmitted segments to be delivered at a rate of ~1 per second, so a burst loss of e.g. 20 segments causes a 20+ second stall. I would expect RACK to kick in long before this happens.
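The arithmetic above can be sketched as a back-of-envelope calculation (an illustration only; the function name and the ~1 s RTO are assumptions drawn from the observed behavior):

```python
def estimated_stall_seconds(lost_segments: int, rto_seconds: float = 1.0) -> float:
    """If each RTO recovers only the single segment at the head of the
    window, draining a burst loss takes roughly one RTO per segment."""
    return lost_segments * rto_seconds

# A 20-segment burst loss at ~1 retransmission per second:
print(estimated_stall_seconds(20))  # 20.0 -> the observed 20+ second stall
```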

Note that the re-transmission should not be considered spurious, as the TSecr value of the D-SACK matches the re-transmission TSval.
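The timestamp reasoning here follows the Eifel-style detection idea (RFC 3522): an ACK whose TSecr predates the retransmission's TSval was triggered by the original transmission, not the retransmission. A minimal sketch of that comparison (the function name is mine, not kernel code, and 32-bit timestamp wraparound is ignored):

```python
def ack_is_for_retransmission(ack_tsecr: int, retrans_tsval: int) -> bool:
    """Eifel-style check: if the echoed timestamp (TSecr) is at least the
    TSval stamped on the retransmission, the ACK was triggered by the
    retransmission itself -- so the D-SACK should not be read as proof
    that the retransmission was spurious."""
    return ack_tsecr >= retrans_tsval

# TSecr matching the retransmission's TSval, as observed here:
print(ack_is_for_retransmission(1514703244, 1514703244))  # True
```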

Also, the Android receiver is definitely sending strange D-SACKs that do not properly advance the ACK number to include received segments. However, I can't control it, and need to fix it on the server by quickly re-transmitting the segments. The connection itself is functional: if the client makes a request to the server in this state, the server can respond, and the client will receive any segments sent in reply.

I can see from counters that TcpExtTCPLossUndo and TcpExtTCPSackFailures are incremented on the server when this happens.
The issue appears both with F-RTO enabled and disabled, and with both BBR and Reno.
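For anyone reproducing this, the counters can be read from /proc/net/netstat (or via `nstat`); note that /proc/net/netstat omits the "TcpExt" prefix that nstat prints (TCPLossUndo vs. TcpExtTCPLossUndo). A small helper to pull named counters out of that two-line format (the helper name is mine):

```python
def tcpext_counters(netstat_text: str, names: tuple) -> dict:
    """Parse /proc/net/netstat, whose 'TcpExt:' section is a header line
    of counter names followed by a matching line of values."""
    lines = [l.split() for l in netstat_text.splitlines() if l.startswith("TcpExt:")]
    header, values = lines[0][1:], lines[1][1:]
    counters = dict(zip(header, map(int, values)))
    return {n: counters[n] for n in names if n in counters}

# Usage on a live system:
# with open("/proc/net/netstat") as f:
#     print(tcpext_counters(f.read(), ("TCPLossUndo", "TCPSackFailures")))
```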

Any idea why this happens, or suggestions on how to debug the issue further?

/Gil

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: TCP stall issue
  2021-02-23 10:09 TCP stall issue Gil Pedersen
@ 2021-02-23 15:41 ` Neal Cardwell
  2021-02-24 10:03   ` Gil Pedersen
  0 siblings, 1 reply; 9+ messages in thread
From: Neal Cardwell @ 2021-02-23 15:41 UTC (permalink / raw)
  To: Gil Pedersen
  Cc: David Miller, Hideaki YOSHIFUJI, dsahern, Netdev, Yuchung Cheng,
	Eric Dumazet

On Tue, Feb 23, 2021 at 5:13 AM Gil Pedersen <kanongil@gmail.com> wrote:
>
> Hi,
>
> I am investigating a TCP stall that can occur when sending to an Android device (kernel 4.9.148) from an Ubuntu server running kernel 5.11.0.
>
> The issue seems to be that RACK is not applied when a D-SACK (with SACK) is received on the server after an RTO re-transmission (CA_Loss state). Here the re-transmitted segment is considered to be already delivered, and the loss undo logic is applied. Then nothing is re-transmitted until the next RTO, when the next segment is sent and the same thing happens again. This causes the retransmitted segments to be delivered at a rate of ~1 per second, so a burst loss of e.g. 20 segments causes a 20+ second stall. I would expect RACK to kick in long before this happens.
>
> Note that the re-transmission should not be considered spurious, as the TSecr value of the D-SACK matches the re-transmission TSval.
>
> Also, the Android receiver is definitely sending strange D-SACKs that do not properly advance the ACK number to include received segments. However, I can't control it, and need to fix it on the server by quickly re-transmitting the segments. The connection itself is functional: if the client makes a request to the server in this state, the server can respond, and the client will receive any segments sent in reply.
>
> I can see from counters that TcpExtTCPLossUndo and TcpExtTCPSackFailures are incremented on the server when this happens.
> The issue appears both with F-RTO enabled and disabled, and with both BBR and Reno.
>
> Any idea why this happens, or suggestions on how to debug the issue further?
>
> /Gil

Thanks for the detailed report! It sounds like you have a trace. Can
you please attach (or post the URL of) a binary tcpdump .pcap trace
that illustrates the problem, to make sure we can understand and
reproduce the issue?

thanks,
neal


* Re: TCP stall issue
  2021-02-23 15:41 ` Neal Cardwell
@ 2021-02-24 10:03   ` Gil Pedersen
  2021-02-24 14:55     ` Neal Cardwell
  0 siblings, 1 reply; 9+ messages in thread
From: Gil Pedersen @ 2021-02-24 10:03 UTC (permalink / raw)
  To: Neal Cardwell
  Cc: David Miller, Hideaki YOSHIFUJI, dsahern, Netdev, Yuchung Cheng,
	Eric Dumazet


[-- Attachment #1: Type: text/plain, Size: 2422 bytes --]



> On 23 Feb 2021, at 16.41, Neal Cardwell <ncardwell@google.com> wrote:
> 
> On Tue, Feb 23, 2021 at 5:13 AM Gil Pedersen <kanongil@gmail.com> wrote:
>> 
>> Hi,
>> 
>> I am investigating a TCP stall that can occur when sending to an Android device (kernel 4.9.148) from an Ubuntu server running kernel 5.11.0.
>> 
>> The issue seems to be that RACK is not applied when a D-SACK (with SACK) is received on the server after an RTO re-transmission (CA_Loss state). Here the re-transmitted segment is considered to be already delivered, and the loss undo logic is applied. Then nothing is re-transmitted until the next RTO, when the next segment is sent and the same thing happens again. This causes the retransmitted segments to be delivered at a rate of ~1 per second, so a burst loss of e.g. 20 segments causes a 20+ second stall. I would expect RACK to kick in long before this happens.
>> 
>> Note that the re-transmission should not be considered spurious, as the TSecr value of the D-SACK matches the re-transmission TSval.
>> 
>> Also, the Android receiver is definitely sending strange D-SACKs that do not properly advance the ACK number to include received segments. However, I can't control it, and need to fix it on the server by quickly re-transmitting the segments. The connection itself is functional: if the client makes a request to the server in this state, the server can respond, and the client will receive any segments sent in reply.
>> 
>> I can see from counters that TcpExtTCPLossUndo and TcpExtTCPSackFailures are incremented on the server when this happens.
>> The issue appears both with F-RTO enabled and disabled, and with both BBR and Reno.
>> 
>> Any idea why this happens, or suggestions on how to debug the issue further?
>> 
>> /Gil
> 
> Thanks for the detailed report! It sounds like you have a trace. Can
> you please attach (or post the URL of) a binary tcpdump .pcap trace
> that illustrates the problem, to make sure we can understand and
> reproduce the issue?
> 
> thanks,
> neal

Sure, I attached a trace from the server that should illustrate the issue.

The trace is cut from a longer flow with the server at 188.120.85.11 and a client window scaling factor of 256.

Packet 78 is a TLP, followed by a delayed DUPACK with a SACK from the client.
The SACK triggers a single-segment fast re-transmit, with an apparently ignored D-SACK in packet 81.
The first RTO happens at packet 82.


[-- Attachment #2: rack-rto-stall.pcap --]
[-- Type: application/octet-stream, Size: 439417 bytes --]


* Re: TCP stall issue
  2021-02-24 10:03   ` Gil Pedersen
@ 2021-02-24 14:55     ` Neal Cardwell
  2021-02-24 15:36       ` Gil Pedersen
  0 siblings, 1 reply; 9+ messages in thread
From: Neal Cardwell @ 2021-02-24 14:55 UTC (permalink / raw)
  To: Gil Pedersen
  Cc: David Miller, Hideaki YOSHIFUJI, dsahern, Netdev, Yuchung Cheng,
	Eric Dumazet


[-- Attachment #1: Type: text/plain, Size: 1469 bytes --]

On Wed, Feb 24, 2021 at 5:03 AM Gil Pedersen <kanongil@gmail.com> wrote:
> Sure, I attached a trace from the server that should illustrate the issue.
>
> The trace is cut from a longer flow with the server at 188.120.85.11 and a client window scaling factor of 256.
>
> Packet 78 is a TLP, followed by a delayed DUPACK with a SACK from the client.
> The SACK triggers a single segment fast re-transmit with an ignored?? D-SACK in packet 81.
> The first RTO happens at packet 82.

Thanks for the trace! That is very helpful. I have attached a plot and
my notes on the trace, for discussion.

AFAICT the client appears to be badly misbehaving, and misrepresenting
what has happened.  At each point where the client sends a DSACK,
there is an apparent contradiction. Either the client has received
that data before, or it hasn't. If the client *has* already received
that data, then it should have already cumulatively ACKed it. If the
client has *not* already received that data, then it shouldn't send a
DSACK for it.
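The contradiction can be stated mechanically: a D-SACK claims a range was received before, yet a well-behaved receiver would already have cumulatively ACKed any such range. A sketch of that sanity check (function and parameter names are mine; sequence-number wraparound is ignored):

```python
def dsack_is_contradictory(prev_cum_ack: int, dsack_start: int, dsack_end: int) -> bool:
    """A D-SACK asserts [dsack_start, dsack_end) is duplicate data. If that
    range lies at or above the receiver's previous cumulative ACK point, the
    receiver is simultaneously claiming it already had the data (the D-SACK)
    and that it did not (it never cumulatively ACKed it)."""
    return dsack_start >= prev_cum_ack and dsack_end > dsack_start

# From the trace: prior ack at 189161, then a cumulative ACK to 190609
# carrying a D-SACK for 189161:190609 -- contradictory.
print(dsack_is_contradictory(189161, 189161, 190609))  # True
```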

Given that, from the server's perspective, the client is
misbehaving/lying, it's not clear what inferences the server can
safely make. Though I agree it's probably possible to do much better
than the current server behavior.

A few questions.

(a) is there a middlebox (firewall, NAT, etc) in the path?

(b) is it possible to capture a client-side trace, to help
disambiguate whether there is a client-side Linux bug or a middlebox
bug?

thanks,
neal

[-- Attachment #2: slow-recovery.txt --]
[-- Type: text/plain, Size: 3821 bytes --]

# server finishes sending a flight of data:
04:23:49.383419 IP 51.91.154.158.443 > 188.120.85.11.55038: . 358577:365817(7240) ack 395 win 182 <nop,nop,TS val 1514702387 ecr 72958509>
04:23:49.384558 IP 51.91.154.158.443 > 188.120.85.11.55038: . 365817:373057(7240) ack 395 win 182 <nop,nop,TS val 1514702388 ecr 72958509>
04:23:49.385693 IP 51.91.154.158.443 > 188.120.85.11.55038: . 373057:380297(7240) ack 395 win 182 <nop,nop,TS val 1514702389 ecr 72958509>
04:23:49.386798 IP 51.91.154.158.443 > 188.120.85.11.55038: . 380297:387537(7240) ack 395 win 182 <nop,nop,TS val 1514702390 ecr 72958509>
04:23:49.387903 IP 51.91.154.158.443 > 188.120.85.11.55038: . 387537:394777(7240) ack 395 win 182 <nop,nop,TS val 1514702391 ecr 72958509>
04:23:49.389012 IP 51.91.154.158.443 > 188.120.85.11.55038: . 394777:402017(7240) ack 395 win 182 <nop,nop,TS val 1514702392 ecr 72958509>
04:23:49.390117 IP 51.91.154.158.443 > 188.120.85.11.55038: P. 402017:406316(4299) ack 395 win 182 <nop,nop,TS val 1514702393 ecr 72958509>

# client ACKs the first portion of the flight:
04:23:49.492714 IP 188.120.85.11.55038 > 51.91.154.158.443: . 395:395(0) ack 3111 win 2087 <nop,nop,TS val 72958562 ecr 1514702331>
04:23:49.508450 IP 188.120.85.11.55038 > 51.91.154.158.443: . 395:395(0) ack 189161 win 1540 <nop,nop,TS val 72958563 ecr 1514702332>

# client sends a request:
04:23:49.543869 IP 188.120.85.11.55038 > 51.91.154.158.443: P. 395:498(103) ack 189161 win 1770 <nop,nop,TS val 72958567 ecr 1514702332>
# server ACKs request:
04:23:49.543928 IP 51.91.154.158.443 > 188.120.85.11.55038: . 406316:406316(0) ack 498 win 182 <nop,nop,TS val 1514702547 ecr 72958567>

# client sends another request:
04:23:49.565850 IP 188.120.85.11.55038 > 51.91.154.158.443: P. 498:705(207) ack 189161 win 2087 <nop,nop,TS val 72958569 ecr 1514702332>
# server ACKs request:
04:23:49.565888 IP 51.91.154.158.443 > 188.120.85.11.55038: . 406316:406316(0) ack 705 win 182 <nop,nop,TS val 1514702569 ecr 72958569>

# server sends TLP retransmit:
04:23:49.824734 IP 51.91.154.158.443 > 188.120.85.11.55038: P. 404913:406316(1403) ack 705 win 182 <nop,nop,TS val 1514702828 ecr 72958569>
# client SACKs TLP retransmit:
04:23:49.846128 IP 188.120.85.11.55038 > 51.91.154.158.443: . 705:705(0) ack 189161 win 2087 <nop,nop,TS val 72958597 ecr 1514702332,nop,nop,sack 1 {404913:406316}>

# server fast-retransmits head packet at snd_una:
04:23:49.846141 IP 51.91.154.158.443 > 188.120.85.11.55038: . 189161:190609(1448) ack 705 win 182 <nop,nop,TS val 1514702849 ecr 72958597>
# 21ms later, client cumulatively ACKs, DSACKs, and echoes timestamp from the fast retransmit:
04:23:49.867017 IP 188.120.85.11.55038 > 51.91.154.158.443: . 705:705(0) ack 190609 win 2082 <nop,nop,TS val 72958599 ecr 1514702849,nop,nop,sack 2 {189161:190609}{404913:406316}>

# server RTOs and retransmits head packet at snd_una:
04:23:50.240699 IP 51.91.154.158.443 > 188.120.85.11.55038: . 190609:192057(1448) ack 705 win 182 <nop,nop,TS val 1514703244 ecr 72958599>
# 478ms later, client cumulatively ACKs, DSACKs, and echoes timestamp from the RTO retransmit:
04:23:50.718823 IP 188.120.85.11.55038 > 51.91.154.158.443: . 705:705(0) ack 192057 win 2080 <nop,nop,TS val 72958684 ecr 1514703244,nop,nop,sack 2 {190609:192057}{404913:406316}>

# again, server RTOs and retransmits head packet at snd_una:
04:23:51.328726 IP 51.91.154.158.443 > 188.120.85.11.55038: . 192057:193505(1448) ack 705 win 182 <nop,nop,TS val 1514704332 ecr 72958684>
# 620ms later, client again sends cumulative-ACK/DSACK/ecr...
04:23:51.949296 IP 188.120.85.11.55038 > 51.91.154.158.443: . 705:705(0) ack 193505 win 2080 <nop,nop,TS val 72958807 ecr 1514704332,nop,nop,sack 2 {192057:193505}{404913:406316}>

# repeat RTO/DSACK process, with ACKs arriving mostly just over 200ms after data is sent...

[-- Attachment #3: slow-recovery-time-seq-plot.png --]
[-- Type: image/png, Size: 38360 bytes --]


* Re: TCP stall issue
  2021-02-24 14:55     ` Neal Cardwell
@ 2021-02-24 15:36       ` Gil Pedersen
  2021-02-25 15:05         ` Neal Cardwell
  0 siblings, 1 reply; 9+ messages in thread
From: Gil Pedersen @ 2021-02-24 15:36 UTC (permalink / raw)
  To: Neal Cardwell
  Cc: David Miller, Hideaki YOSHIFUJI, dsahern, Netdev, Yuchung Cheng,
	Eric Dumazet


> On 24 Feb 2021, at 15.55, Neal Cardwell <ncardwell@google.com> wrote:
> 
> On Wed, Feb 24, 2021 at 5:03 AM Gil Pedersen <kanongil@gmail.com> wrote:
>> Sure, I attached a trace from the server that should illustrate the issue.
>> 
>> The trace is cut from a longer flow with the server at 188.120.85.11 and a client window scaling factor of 256.
>> 
>> Packet 78 is a TLP, followed by a delayed DUPACK with a SACK from the client.
>> The SACK triggers a single segment fast re-transmit with an ignored?? D-SACK in packet 81.
>> The first RTO happens at packet 82.
> 
> Thanks for the trace! That is very helpful. I have attached a plot and
> my notes on the trace, for discussion.
> 
> AFAICT the client appears to be badly misbehaving, and misrepresenting
> what has happened.  At each point where the client sends a DSACK,
> there is an apparent contradiction. Either the client has received
> that data before, or it hasn't. If the client *has* already received
> that data, then it should have already cumulatively ACKed it. If the
> client has *not* already received that data, then it shouldn't send a
> DSACK for it.
> 
> Given that, from the server's perspective, the client is
> misbehaving/lying, it's not clear what inferences the server can
> safely make. Though I agree it's probably possible to do much better
> than the current server behavior.
> 
> A few questions.
> 
> (a) is there a middlebox (firewall, NAT, etc) in the path?
> 
> (b) is it possible to capture a client-side trace, to help
> disambiguate whether there is a client-side Linux bug or a middlebox
> bug?

Yes, that is a sound analysis, and it matches my observations. The client is confused about whether it has the data or not.

Unfortunately I only have that (un-rooted) device available, so I can't do traces on it. The connection path is Client -> Wi-Fi -> NAT -> NAT -> Internet -> Server (which has a basic UFW firewall).
I will try to do a trace on the first NAT router.

My first priority is to make the server behave better in this case, but I understand that you would like to investigate the client / connection issue as well? From the server POV, this is clearly an edge case, but a fast re-transmit does seem more appropriate.

Btw. the "client SACKs TLP retransmit" note is not correct. This is an old ACK, which can be seen from the ecr value.

/Gil


* Re: TCP stall issue
  2021-02-24 15:36       ` Gil Pedersen
@ 2021-02-25 15:05         ` Neal Cardwell
  2021-02-26 14:39           ` David Laight
  0 siblings, 1 reply; 9+ messages in thread
From: Neal Cardwell @ 2021-02-25 15:05 UTC (permalink / raw)
  To: Gil Pedersen
  Cc: David Miller, Hideaki YOSHIFUJI, dsahern, Netdev, Yuchung Cheng,
	Eric Dumazet

On Wed, Feb 24, 2021 at 10:36 AM Gil Pedersen <kanongil@gmail.com> wrote:
>
>
> > On 24 Feb 2021, at 15.55, Neal Cardwell <ncardwell@google.com> wrote:
> >
> > On Wed, Feb 24, 2021 at 5:03 AM Gil Pedersen <kanongil@gmail.com> wrote:
> >> Sure, I attached a trace from the server that should illustrate the issue.
> >>
> >> The trace is cut from a longer flow with the server at 188.120.85.11 and a client window scaling factor of 256.
> >>
> >> Packet 78 is a TLP, followed by a delayed DUPACK with a SACK from the client.
> >> The SACK triggers a single segment fast re-transmit with an ignored?? D-SACK in packet 81.
> >> The first RTO happens at packet 82.
> >
> > Thanks for the trace! That is very helpful. I have attached a plot and
> > my notes on the trace, for discussion.
> >
> > AFAICT the client appears to be badly misbehaving, and misrepresenting
> > what has happened.  At each point where the client sends a DSACK,
> > there is an apparent contradiction. Either the client has received
> > that data before, or it hasn't. If the client *has* already received
> > that data, then it should have already cumulatively ACKed it. If the
> > client has *not* already received that data, then it shouldn't send a
> > DSACK for it.
> >
> > Given that, from the server's perspective, the client is
> > misbehaving/lying, it's not clear what inferences the server can
> > safely make. Though I agree it's probably possible to do much better
> > than the current server behavior.
> >
> > A few questions.
> >
> > (a) is there a middlebox (firewall, NAT, etc) in the path?
> >
> > (b) is it possible to capture a client-side trace, to help
> > disambiguate whether there is a client-side Linux bug or a middlebox
> > bug?
>
> Yes, this sounds like a sound analysis, and matches my observation. The client is confused about whether it has the data or not.
>
> Unfortunately I only have that (un-rooted) device available, so I can't do traces on it. The connection path is Client -> Wi-Fi -> NAT -> NAT -> Internet -> Server (which has a basic UFW firewall).
> I will try to do a trace on the first NAT router.
>
> My first priority is to make the server behave better in this case, but I understand that you would like to investigate the client / connection issue as well? From the server POV, this is clearly an edge case, but a fast re-transmit does seem more appropriate.

Regarding improving the server's retransmit behavior and having it use
a fast retransmit here:

I don't think this is a bug in RACK, because the DSACK clearly
indicates that the retransmission was spurious, so all the packets
already marked lost by RACK are thus unmarked.
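In rough terms, the sequence is (a loose sketch of the undo behavior described above, not the kernel's actual data structures or code):

```python
def on_dsack_undo(scoreboard: list) -> list:
    """Once the D-SACK is taken as evidence that the RTO retransmission was
    spurious, the loss marks RACK had placed are cleared; with nothing
    marked lost, nothing is retransmitted until the next RTO fires."""
    for seg in scoreboard:
        seg["lost"] = False
    return scoreboard

segs = [{"seq": 190609, "lost": True}, {"seq": 192057, "lost": True}]
print(any(s["lost"] for s in on_dsack_undo(segs)))  # False -> stall until next RTO
```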

I guess the questions are:

(a) How would we craft a general heuristic that would cause a fast
retransmit here in the misbehaving receiver case, without causing lots
of spurious retransmits for well-behaved receivers? Do you have a
suggestion?

(b) Do we want to add the new complexity for this heuristic, given
that this is a misbehaving receiver and we don't yet have an
indication that it's a widespread bug?

> Btw. the "client SACKs TLP retransmit" note is not correct. This is an old ACK, which can be seen from the ecr value.

I believe your analysis of the ECR value here is incorrect. The TS ecr
value in ACKs with SACK blocks will generally not match the TS val of
the SACKed segment due to the rules of RFC 7323, specifically rule (2)
on page 17 in section 4.3 (
https://tools.ietf.org/html/rfc7323#section-4.3 ), which says:

   (2)  If:

            SEG.TSval >= TS.Recent and SEG.SEQ <= Last.ACK.sent

        then SEG.TSval is copied to TS.Recent; otherwise, it is ignored.

Because the sequence number on the SACKed TLP retransmit is >
Last.ACK.sent, SEG.TSval is *not* copied to TS.Recent, and so the TS
ecr value does not reflect the TS val of the TLP retransmit.

So AFAICT this is not an old ACK, but is indeed a SACK of the TLP retransmit.
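Rule (2) can be written out directly (a literal transcription of the RFC text into a sketch; timestamp wraparound is ignored and the function name is mine):

```python
def update_ts_recent(ts_recent: int, last_ack_sent: int,
                     seg_tsval: int, seg_seq: int) -> int:
    """RFC 7323, section 4.3, rule (2): copy SEG.TSval into TS.Recent only
    when the timestamp is fresh AND the segment starts at or before the
    last ACK sent (i.e. at the left edge of the receive window)."""
    if seg_tsval >= ts_recent and seg_seq <= last_ack_sent:
        return seg_tsval
    return ts_recent

# The TLP retransmit (seq 404913) is above Last.ACK.sent (189161), so its
# TSval (1514702828) is ignored and the client keeps echoing the old value:
print(update_ts_recent(1514702332, 189161, 1514702828, 404913))  # 1514702332
```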

best,
neal


* RE: TCP stall issue
  2021-02-25 15:05         ` Neal Cardwell
@ 2021-02-26 14:39           ` David Laight
  2021-02-26 16:26             ` Gil Pedersen
  0 siblings, 1 reply; 9+ messages in thread
From: David Laight @ 2021-02-26 14:39 UTC (permalink / raw)
  To: 'Neal Cardwell', Gil Pedersen
  Cc: David Miller, Hideaki YOSHIFUJI, dsahern, Netdev, Yuchung Cheng,
	Eric Dumazet

Some thoughts...

Does a non-android linux system behave correctly through the same NAT gateways?
Particularly with a similar kernel version.

If you have a USB OTG cable and USB ethernet dongle you may be able to get
android to use a wired ethernet connection - excluding any WiFi issues.
(OTG usually works for keyboard and mouse, dunno if ethernet support is there.)

Does your Android device work on any other networks?

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)


* Re: TCP stall issue
  2021-02-26 14:39           ` David Laight
@ 2021-02-26 16:26             ` Gil Pedersen
  2021-02-26 17:50               ` Neal Cardwell
  0 siblings, 1 reply; 9+ messages in thread
From: Gil Pedersen @ 2021-02-26 16:26 UTC (permalink / raw)
  To: David Laight
  Cc: Neal Cardwell, David Miller, Hideaki YOSHIFUJI, dsahern, Netdev,
	Yuchung Cheng, Eric Dumazet


> On 26 Feb 2021, at 15.39, David Laight <David.Laight@ACULAB.COM> wrote:
> 
> Some thoughts...
> 
> Does a non-android linux system behave correctly through the same NAT gateways?
> Particularly with a similar kernel version.
> 
> If you have a USB OTG cable and USB ethernet dongle you may be able to get
> android to use a wired ethernet connection - excluding any WiFi issues.
> (OTG usually works for keyboard and mouse, dunno if ethernet support is there.)
> 
> Does your Android device work on any other networks?

I have done some further tests. I managed to find another Android device (kernel 4.9.113), which thankfully does _not_ send the weird D-SACKs and quickly recovers, so the problem appears to be on the original device.

Additionally, I have managed to do a trace on the WLAN AP, where I can confirm that all packets seem to be transferred without unnecessary modifications or re-ordering. I.e., all segments sent from the server make it to the device, so any loss will be device-local. As such, this points to a driver-level issue?

I don't have an ethernet dongle ready. I tried to connect using cellular and was unable to replicate the issue, so this further points at a driver-level issue.

Given that it now seems relevant: the device is an Android P20 Lite, running a variant of Android 9.1 with an update from this year (kernel built Jan 5, 2021).

/Gil


* Re: TCP stall issue
  2021-02-26 16:26             ` Gil Pedersen
@ 2021-02-26 17:50               ` Neal Cardwell
  0 siblings, 0 replies; 9+ messages in thread
From: Neal Cardwell @ 2021-02-26 17:50 UTC (permalink / raw)
  To: Gil Pedersen
  Cc: David Laight, David Miller, Hideaki YOSHIFUJI, dsahern, Netdev,
	Yuchung Cheng, Eric Dumazet, Maciej Żenczykowski

On Fri, Feb 26, 2021 at 11:26 AM Gil Pedersen <kanongil@gmail.com> wrote:
>
>
> > On 26 Feb 2021, at 15.39, David Laight <David.Laight@ACULAB.COM> wrote:
> >
> > Some thoughts...
> >
> > Does a non-android linux system behave correctly through the same NAT gateways?
> > Particularly with a similar kernel version.
> >
> > If you have a USB OTG cable and USB ethernet dongle you may be able to get
> > android to use a wired ethernet connection - excluding any WiFi issues.
> > (OTG usually works for keyboard and mouse, dunno if ethernet support is there.)
> >
> > Does your Android device work on any other networks?
>
> I have done some further tests. I managed to find another Android device (kernel 4.9.113), which thankfully does _not_ send the weird D-SACKs and quickly recovers, so the problem appears to be on the original device.
>
> Additionally, I have managed to do a trace on the WLAN AP, where I can confirm that all packets seem to be transferred without unnecessary modifications or re-ordering. Ie. all segments sent from the server make it to the device and any loss will be device local. As such this points to a driver-level issue?
>
> I don't have an ethernet dongle ready. I tried to connect using cellular and was unable to replicate the issue, so this further points at a driver-level issue.
>
> Given that it now seems relevant, the device is an Android P20 Lite, running a variant of Android 9.1 with an update from this year (kernel was built jan. 05 2021).

Thanks for the details. Agreed, it does sound as if a wifi
hardware/firmware/driver issue on that particular Android device is the
most likely cause of those symptoms.

The only sequence I can think of that would cause these symptoms
would be if the wifi hardware/firmware/driver on that device is somehow
both:

(1) duplicating each of the retransmit packets that it passes up the
network stack, and
(2) dropping the first ACK packet generated by the first of the two
copies of the retransmit

Though that sounds so unlikely that perhaps there is a different explanation...

neal

