From mboxrd@z Thu Jan 1 00:00:00 1970
From: Eric Dumazet
Date: Tue, 2 Jun 2020 22:44:02 -0700
Subject: Re: [PATCH] tcp: fix TCP socks unreleased in BBR mode
To: Jason Xing
Cc: David Miller, Alexey Kuznetsov, Hideaki YOSHIFUJI, netdev, LKML,
 liweishi@kuaishou.com, Shujin Li
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Jun 2, 2020 at 10:05 PM Jason Xing wrote:
>
> Hi Eric,
>
> I'm still trying to understand what you said before. Would this
> be better, as follows:
> 1) discard the tcp_internal_pacing() function.
> 2) remove where tcp_internal_pacing() is called in the
> __tcp_transmit_skb() function.
>
> If we do so, we could avoid 'too late to give up pacing'.
> Meanwhile, should we introduce the tcp_wstamp_ns socket field as
> commit 864e5c090749 does?
>

Please do not top-post on the netdev mailing list.

I basically suggested double-checking which point in TCP could end up
calling tcp_internal_pacing() while the timer was already armed.

I guess this is mtu probing.

Please try the following patch. If we still have another bug, a
WARNING should give us a stack trace.

diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index cc4ba42052c21b206850594db6751810d8fc72b4..8f4081b228486305222767d4d118b9b6ed0ffda3 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -977,12 +977,26 @@ static void tcp_internal_pacing(struct sock *sk, const struct sk_buff *skb)
 	len_ns = (u64)skb->len * NSEC_PER_SEC;
 	do_div(len_ns, rate);
+
+	/* If hrtimer is already armed, then our caller has not properly
+	 * used tcp_pacing_check().
+	 */
+	if (unlikely(hrtimer_is_queued(&tcp_sk(sk)->pacing_timer))) {
+		WARN_ON_ONCE(1);
+		return;
+	}
 	hrtimer_start(&tcp_sk(sk)->pacing_timer,
 		      ktime_add_ns(ktime_get(), len_ns),
 		      HRTIMER_MODE_ABS_PINNED_SOFT);
 	sock_hold(sk);
 }
 
+static bool tcp_pacing_check(const struct sock *sk)
+{
+	return tcp_needs_internal_pacing(sk) &&
+	       hrtimer_is_queued(&tcp_sk(sk)->pacing_timer);
+}
+
 static void tcp_update_skb_after_send(struct tcp_sock *tp, struct sk_buff *skb)
 {
 	skb->skb_mstamp = tp->tcp_mstamp;
@@ -2117,6 +2131,9 @@ static int tcp_mtu_probe(struct sock *sk)
 	if (!tcp_can_coalesce_send_queue_head(sk, probe_size))
 		return -1;
 
+	if (tcp_pacing_check(sk))
+		return -1;
+
 	/* We're allowed to probe.  Build it now. */
 	nskb = sk_stream_alloc_skb(sk, probe_size, GFP_ATOMIC, false);
 	if (!nskb)
@@ -2190,11 +2207,6 @@ static int tcp_mtu_probe(struct sock *sk)
 	return -1;
 }
 
-static bool tcp_pacing_check(const struct sock *sk)
-{
-	return tcp_needs_internal_pacing(sk) &&
-	       hrtimer_is_queued(&tcp_sk(sk)->pacing_timer);
-}
 
 /* TCP Small Queues :
  * Control number of packets in qdisc/devices to two packets / or ~1 ms.
> Thanks,
> Jason
>
> On Wed, Jun 3, 2020 at 10:44 AM Eric Dumazet wrote:
> >
> > On Tue, Jun 2, 2020 at 7:42 PM Jason Xing wrote:
> > >
> > > I agree with you. Upstream has already dropped and optimized this
> > > part (commit 864e5c090749), so it would not happen there. However,
> > > old kernels like LTS still have the problem, which causes
> > > large-scale crashes on our thousands of machines after running for
> > > a long while. I will send the fix to the correct tree soon :)
> >
> > If you run BBR at scale (thousands of machines), you probably should
> > use sch_fq instead of internal pacing, just saying ;)
> >
> > > Thanks again,
> > > Jason
> > >
> > > On Wed, Jun 3, 2020 at 10:29 AM Eric Dumazet wrote:
> > > >
> > > > On Tue, Jun 2, 2020 at 6:53 PM Jason Xing wrote:
> > > > >
> > > > > Hi Eric,
> > > > >
> > > > > I'm sorry that I didn't write clearly enough. We're running the
> > > > > pristine 4.19.125 Linux kernel (the latest LTS version) and have
> > > > > been haunted by this issue. This patch is highly important, I
> > > > > think, so I'm going to resend this email with [PATCH 4.19] in
> > > > > the subject and cc Greg.
> > > >
> > > > Yes, please always say which tree a patch is meant for.
> > > >
> > > > The problem is that your patch is not correct.
> > > > In these old kernels, tcp_internal_pacing() is called _after_ the
> > > > packet has been sent.
> > > > It is too late to 'give up pacing'.
> > > >
> > > > The packet should not have been sent if the pacing timer is queued
> > > > (otherwise it means we do not respect pacing).
> > > >
> > > > So the bug should be caught earlier. Check where tcp_pacing_check()
> > > > calls are missing.
> > > > >
> > > > > Thanks,
> > > > > Jason
> > > > >
> > > > > On Tue, Jun 2, 2020 at 9:05 PM Eric Dumazet wrote:
> > > > > >
> > > > > > On Tue, Jun 2, 2020 at 1:05 AM wrote:
> > > > > > >
> > > > > > > From: Jason Xing
> > > > > > >
> > > > > > > TCP socks cannot be released because sock_hold() increases
> > > > > > > sk_refcnt in tcp_internal_pacing() when an RTO happens.
> > > > > > > This situation can therefore grow the slab memory and then
> > > > > > > trigger an OOM if the machine has been running for a long
> > > > > > > time. This issue, however, can also happen on a machine
> > > > > > > running for only a few days.
> > > > > > >
> > > > > > > We add one exception case to avoid the unneeded sock_hold()
> > > > > > > if the pacing_timer is enqueued.
> > > > > > >
> > > > > > > Reproduce procedure:
> > > > > > > 0) cat /proc/slabinfo | grep TCP
> > > > > > > 1) switch net.ipv4.tcp_congestion_control to bbr
> > > > > > > 2) use a tool such as wrk to send packets
> > > > > > > 3) use tc to increase the delay on the dev to simulate the busy case
> > > > > > > 4) cat /proc/slabinfo | grep TCP
> > > > > > > 5) kill the wrk command and observe the number of objects and slabs in TCP
> > > > > > > 6) finally, you will notice that the numbers do not decrease
> > > > > > >
> > > > > > > Signed-off-by: Jason Xing
> > > > > > > Signed-off-by: liweishi
> > > > > > > Signed-off-by: Shujin Li
> > > > > > > ---
> > > > > > >  net/ipv4/tcp_output.c | 3 ++-
> > > > > > >  1 file changed, 2 insertions(+), 1 deletion(-)
> > > > > > >
> > > > > > > diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
> > > > > > > index cc4ba42..5cf63d9 100644
> > > > > > > --- a/net/ipv4/tcp_output.c
> > > > > > > +++ b/net/ipv4/tcp_output.c
> > > > > > > @@ -969,7 +969,8 @@ static void tcp_internal_pacing(struct sock *sk, const struct sk_buff *skb)
> > > > > > >  	u64 len_ns;
> > > > > > >  	u32 rate;
> > > > > > >
> > > > > > > -	if (!tcp_needs_internal_pacing(sk))
> > > > > > > +	if (!tcp_needs_internal_pacing(sk) ||
> > > > > > > +	    hrtimer_is_queued(&tcp_sk(sk)->pacing_timer))
> > > > > > >  		return;
> > > > > > >  	rate = sk->sk_pacing_rate;
> > > > > > >  	if (!rate || rate == ~0U)
> > > > > > > --
> > > > > > > 1.8.3.1
> > > > > > >
> > > > > >
> > > > > > Hi Jason.
> > > > > >
> > > > > > Please do not send patches that do not apply to current upstream trees.
> > > > > >
> > > > > > Instead, backport to your kernels the needed fixes.
> > > > > >
> > > > > > I suspect that you are not using a pristine Linux kernel, but some
> > > > > > heavily modified one, and something went wrong in your backports.
> > > > > > Do not ask us to spend time finding what went wrong.
> > > > > >
> > > > > > Thank you.