From: Yuchung Cheng
Subject: Re: [PATCH net-next 3/4] tcp: add SACK compression
Date: Thu, 17 May 2018 09:59:06 -0700
References: <20180517121213.43559-1-edumazet@google.com> <20180517121213.43559-4-edumazet@google.com> <58bcf9c0-e4f0-691d-8d6a-40ff3629f500@gmail.com>
To: Neal Cardwell
Cc: Eric Dumazet, David Miller, Netdev, Toke Høiland-Jørgensen, Soheil Hassas Yeganeh, Christoph Paasch

On Thu, May 17, 2018 at 9:41 AM, Neal Cardwell wrote:
>
> On Thu, May 17, 2018 at 11:40 AM Eric Dumazet wrote:
> >
> > On 05/17/2018 08:14 AM, Neal Cardwell wrote:
> > > Is there a particular motivation for the cap of 127? IMHO 127 ACKs is quite
> > > a few to compress. Experience seems to show that it works well to have one
> > > GRO ACK for ~64KBytes that triggers a single TSO skb of ~64KBytes. It might
> > > be nice to try to match those dynamics in this SACK compression case, so it
> > > might be nice to cap the number of compressed ACKs at something like 44?
> > > (0xffff / 1448 - 1). That way for high-speed paths we could try to keep
> > > the ACK clock going with ACKs for ~64KBytes that trigger a single TSO skb
> > > of ~64KBytes, no matter whether we are sending SACKs or cumulative ACKs.
> >
> > 127 was chosen because the field is u8, and since skb allocation for the ACK
> > can fail, we could have cases where the field goes above 127.
> >
> > Ultimately, I believe a followup patch would add a sysctl, so that we can
> > fine-tune this, and eventually disable ACK compression if this sysctl is
> > set to 0.
>
> OK, a sysctl sounds good. I would still vote for a default of 44. :-)
>
> > >> +	if (hrtimer_is_queued(&tp->compressed_ack_timer))
> > >> +		return;
> > >> +
> > >> +	/* compress ack timer : 5 % of srtt, but no more than 2.5 ms */
> > >> +
> > >> +	delay = min_t(unsigned long, 2500 * NSEC_PER_USEC,
> > >> +		      tp->rcv_rtt_est.rtt_us * (NSEC_PER_USEC >> 3)/20);
> > >
> > > Any particular motivation for the 2.5ms here? It might be nice to match the
> > > existing TSO autosizing dynamics and use 1ms here instead of having a
> > > separate new constant of 2.5ms. Smaller time scales here should lead to
> > > less burstiness and queue pressure from data packets in the network, and we
> > > know from experience that the CPU overhead of 1ms chunks is acceptable.
> >
> > This came from my tests on wifi really :)
> >
> > I also had the idea to make this threshold adjustable for wifi, like we
> > did for sk_pacing_shift.
> >
> > (On wifi, we might want to increase the max delay between ACKs.)
> >
> > So maybe use 1ms delay, when sk_pacing_shift == 10, but increase it if
> > sk_pacing_shift has been lowered.
>
> Sounds good to me.
>
> Thanks for implementing this! Overall this patch seems nice to me.
>
> Acked-by: Neal Cardwell
>
> BTW, I guess we should spread the word to maintainers of other major TCP
> stacks that they need to be prepared for what may be a much higher degree
> of compression/aggregation in the SACK stream. Linux stacks going back many
> years should be fine with this, but I'm not sure about the other major OSes
> (they may only allow sending one MSS per ACK-with-SACKs received).

Patch looks really good, but Neal's comment just reminds me of a potential
legacy issue. I recall at least Apple and Windows TCP stacks still need 3+
DUPACKs (!= a SACK covering 3+ packets) to trigger fast recovery.
Will we have an issue there interacting with these stacks?

>
> neal