From: Cong Wang
Subject: Re: [PATCH net-next] net: preserve sock reference when scrubbing the skb.
Date: Wed, 27 Jun 2018 12:06:16 -0700
To: Flavio Leitner
Cc: Eric Dumazet, Linux Kernel Network Developers, Paolo Abeni,
 David Miller, Florian Westphal, NetFilter
References: <20180625155610.30802-1-fbl@redhat.com>
 <48e15faf-f935-0166-e1db-18f7286e7264@gmail.com>
 <20180626220300.GT19565@plex.lan>
 <20180626233302.GU19565@plex.lan>
 <20180627003925.GV19565@plex.lan>
 <20180627123155.GW19565@plex.lan>
In-Reply-To: <20180627123155.GW19565@plex.lan>

On Wed, Jun 27, 2018 at 5:32 AM Flavio Leitner wrote:
>
> On Tue, Jun 26, 2018 at 06:28:27PM -0700, Cong Wang wrote:
> > On Tue, Jun 26, 2018 at 5:39 PM Flavio Leitner wrote:
> > >
> > > On Tue, Jun 26, 2018 at 05:29:51PM -0700, Cong Wang wrote:
> > > > On Tue, Jun 26, 2018 at 4:33 PM Flavio Leitner wrote:
> > > > >
> > > > > It is still isolated, the sk carries the netns info and it is
> > > > > orphaned when it re-enters the stack.
> > > >
> > > > Then what difference does your patch make?
> > >
> > > Don't forget it is fixing two issues.
> >
> > Sure. I have only been talking about TSQ from the very beginning.
> > Let me rephrase my question above:
> > What difference does your patch make to TSQ?
>
> It avoids burstiness.

That is never even mentioned in the changelog or in your patch. :-/


> > > > Before your patch:
> > > > veth orphans skb in its xmit
> > > >
> > > > After your patch:
> > > > RX orphans it when re-entering the stack (as you claimed, I don't know)
> > >
> > > ip_rcv, and equivalents.
> >
> > ip_rcv() is L3; we enter the stack at L1. So your above claim is
> > incorrect. :)
>
> Maybe you found a problem, could you please point me to where in
> between L1 and L3 the socket is relevant?

Of course: the ingress qdisc is in L2. Do I need to say more?

This is where we can re-route packets, for example by redirecting them
to yet another netns. This is in fact what we use in production, not
anything that exists only in my imagination.

You really have to think about why you allow one netns to influence
another by holding on to the skb to throttle the source TCP socket.


> > > > And for veth pair:
> > > > xmit from one side is RX for the other side
> > > > So, where is the queueing? Where is the buffer bloat? GRO list??
> > >
> > > CPU backlog.
> >
> > Yeah, but this is never targeted by TSQ:
> >
> >   tcp_limit_output_bytes limits the number of bytes on qdisc
> >   or device to reduce artificial RTT/cwnd and reduce bufferbloat.
> >
> > which means you have to update Documentation/networking/ip-sysctl.txt
> > too.
>
> How is it never targeted? The whole point is to avoid queueing traffic.

Which queues? You really need to define them, seriously.


> Would you be okay if I included this chunk?

No, there is still no explanation of why it should work across netns in
the first place.


> diff --git a/Documentation/networking/ip-sysctl.txt b/Documentation/networking/ip-sysctl.txt
> index ce8fbf5aa63c..f4c042be0216 100644
> --- a/Documentation/networking/ip-sysctl.txt
> +++ b/Documentation/networking/ip-sysctl.txt
> @@ -733,11 +733,11 @@ tcp_limit_output_bytes - INTEGER
> 	Controls TCP Small Queue limit per tcp socket.
> 	TCP bulk sender tends to increase packets in flight until it
> 	gets losses notifications. With SNDBUF autotuning, this can
> -	result in a large amount of packets queued in qdisc/device
> -	on the local machine, hurting latency of other flows, for
> -	typical pfifo_fast qdiscs.
> -	tcp_limit_output_bytes limits the number of bytes on qdisc
> -	or device to reduce artificial RTT/cwnd and reduce bufferbloat.
> +	result in a large amount of packets queued on the local machine
> +	(e.g.: qdiscs, CPU backlog, or device) hurting latency of other

Apparently CPU backlog never happens when leaving the host.
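
For readers following along, here are the two code paths being argued
about, abbreviated from roughly the 4.17/net-next source of the time
(surrounding code elided; details may have shifted since):

	/* net/core/skbuff.c -- abbreviated. The patch under discussion
	 * removes the skb_orphan() call below, so skb->sk survives the
	 * scrub and the sending socket stays charged for the skb (TSQ
	 * accounting included).
	 */
	void skb_scrub_packet(struct sk_buff *skb, bool xnet)
	{
		/* ... pkt_type/iif/dst/secpath/conntrack resets elided ... */

		if (!xnet)
			return;

		ipvs_reset(skb);
		skb_orphan(skb);	/* the call the patch removes */
		skb->mark = 0;
	}

	/* net/ipv4/ip_input.c -- abbreviated. The "ip_rcv, and
	 * equivalents" point: with the patch, the skb is orphaned only
	 * here, once it reaches L3 of the receiving stack.
	 */
	int ip_rcv(struct sk_buff *skb, struct net_device *dev,
		   struct packet_type *pt, struct net_device *orig_dev)
	{
		struct net *net = dev_net(dev);

		/* ... length and header validation elided ... */

		/* Must drop socket now because of tproxy. */
		skb_orphan(skb);

		return NF_HOOK(NFPROTO_IPV4, NF_INET_PRE_ROUTING,
			       net, NULL, skb, dev, NULL,
			       ip_rcv_finish);
	}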
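
Cong's L2 point can be made concrete with a stock tc(8) setup (device
names here are hypothetical): an ingress qdisc plus an act_mirred
redirect steers frames into another device, whose veth peer may sit in
yet another netns, before ip_rcv() ever runs:

	# hypothetical devices: veth0 in this netns, veth1's peer elsewhere
	tc qdisc add dev veth0 handle ffff: ingress
	tc filter add dev veth0 parent ffff: matchall \
		action mirred egress redirect dev veth1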
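
And the TSQ mechanism itself, similarly abbreviated (names as of
roughly 4.17; a sketch, not the exact source): tcp_limit_output_bytes
feeds the check below, which throttles on sk->sk_wmem_alloc, a counter
that only drops when skbs charged to the socket are freed or orphaned.
That is why *where* skb_orphan() runs decides whether a veth hop,
possibly into another netns, still counts against the sender's budget.

	/* net/ipv4/tcp_output.c -- abbreviated sketch of the TSQ check. */
	static bool tcp_small_queue_check(struct sock *sk,
					  const struct sk_buff *skb,
					  unsigned int factor)
	{
		unsigned int limit;

		limit = max(2 * skb->truesize,
			    sk->sk_pacing_rate >> sk->sk_pacing_shift);
		limit = min_t(u32, limit,
			      sock_net(sk)->ipv4.sysctl_tcp_limit_output_bytes);
		limit <<= factor;

		if (refcount_read(&sk->sk_wmem_alloc) > limit) {
			/* ... set TSQ_THROTTLED and defer transmission;
			 * the tasklet resumes once enough charged skbs
			 * have been freed -- or orphaned -- to fall back
			 * under the limit ...
			 */
			return true;
		}
		return false;
	}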