From: "Eelco Chaudron"
To: "Toke Høiland-Jørgensen"
Cc: "Jesper Dangaard Brouer", "Machulsky, Zorik", "Jubran, Samih",
    davem@davemloft.net, netdev@vger.kernel.org, "Woodhouse, David",
    "Matushevsky, Alexander", "Bshara, Saeed", "Wilson, Matt",
    "Liguori, Anthony", "Bshara, Nafea", "Tzalik, Guy",
    "Belgazal, Netanel", "Saidi, Ali", "Herrenschmidt, Benjamin",
    "Kiyanovski, Arthur", "Daniel Borkmann", "Ilias Apalodimas",
    "Alexei Starovoitov", "Jakub Kicinski", xdp-newbies@vger.kernel.org
Subject: Re: XDP multi-buffer incl. jumbo-frames (Was: [RFC V1 net-next 1/1] net: ena: implement XDP drop support)
Date: Fri, 28 Jun 2019 13:49:43 +0200
Message-ID: <5127F708-7CDD-4471-9767-D8C87DC23888@redhat.com>
In-Reply-To: <87y31m884a.fsf@toke.dk>
References: <20190623070649.18447-1-sameehj@amazon.com> <20190623070649.18447-2-sameehj@amazon.com> <20190623162133.6b7f24e1@carbon> <20190626103829.5360ef2d@carbon> <87y31m884a.fsf@toke.dk>

On 28 Jun 2019, at 9:46, Toke Høiland-Jørgensen wrote:

> "Eelco Chaudron" writes:
>
>> On 26 Jun 2019, at 10:38, Jesper Dangaard Brouer wrote:
>>
>>> On Tue, 25 Jun 2019 03:19:22 +0000, "Machulsky, Zorik" wrote:
>>>
>>>> On 6/23/19, 7:21 AM, "Jesper Dangaard Brouer" wrote:
>>>>
>>>>    On Sun, 23 Jun 2019 10:06:49 +0300 wrote:
>>>>
>>>>    > This commit implements the basic functionality of drop/pass
>>>>    > logic in the ena driver.
>>>>
>>>>    Usually we require a driver to implement all the XDP return
>>>>    codes before we accept it. But as Daniel and I discussed with
>>>>    Zorik during NetConf[1], we are going to make an exception and
>>>>    accept the driver if you also implement XDP_TX.
>>>>
>>>>    We trust that Zorik/Amazon will follow up and implement
>>>>    XDP_REDIRECT later, given that you want AF_XDP support, which
>>>>    requires XDP_REDIRECT.
>>>>
>>>> Jesper, thanks for your comments and the very helpful discussion
>>>> during NetConf! That's the plan, as we agreed. From our side I
>>>> would like to reiterate the importance of multi-buffer support for
>>>> XDP frames. We would really prefer not to see our MTU shrink
>>>> because of XDP support.
>>>
>>> Okay, we really need to make a serious attempt to find a way to
>>> support multi-buffer packets with XDP, with the important criterion
>>> of not hurting the performance of the single-buffer-per-packet
>>> design.
>>>
>>> I've created a design document[2] that I will update based on our
>>> discussions:
>>> [2] https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org
>>>
>>> The use case that really convinced me was Eric's packet
>>> header-split.
>>>
>>> Let's refresh: why XDP doesn't have multi-buffer support.
>>>
>>> XDP is designed for maximum performance, which is why certain
>>> driver-level use cases, such as multi-buffer packets (e.g.
>>> jumbo-frames), were not supported, as they complicate the driver
>>> RX-loop and the memory-model handling.
>>>
>>> The single-buffer-per-packet design is also tied into eBPF
>>> Direct-Access (DA) to packet data, which can only be allowed if the
>>> packet data is in contiguous memory. This DA feature is essential
>>> for XDP performance.
>>>
>>> One way forward is to define that XDP only gets access to the first
>>> packet buffer and cannot see subsequent buffers. For XDP_TX and
>>> XDP_REDIRECT to work, XDP still needs to carry pointers (plus
>>> len+offset) to the other buffers, which is 16 bytes per extra
>>> buffer.
>>
>> I've seen various network-processor HW designs, and they normally get
>> the first x bytes (128-512), which they can manipulate
>> (append/prepend/insert/modify/delete).
>>
>> There are designs where they can "page in" the additional fragments,
>> but it's expensive as it requires additional memory transfers. The
>> majority, however, do not care about (and cannot change) the
>> remaining fragments. I also cannot think of a reason why you would
>> want to remove something at the end of the frame (thinking about
>> routing/forwarding needs here).
>>
>> If we do want XDP to access the other fragments, we could do this
>> through a helper which swaps the packet context?
>
> Yeah, I was also going to suggest a helper for that. It doesn't
> necessarily need to swap the packet context; it could just return a
> new pointer?

Yes, that will work; my head was still thinking of ASICs, where SRAM
space is limited… To make the ideas in this thread a bit more concrete,
I've put some rough sketches below.
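
First, on the drop/pass support at the top of the thread: here is a
minimal sketch (mine, not the ena patch) of the kind of XDP program
such a driver would run using only the XDP_DROP/XDP_PASS verdicts. It
assumes libbpf-style headers and simply drops UDP while passing
everything else.

/* xdp_drop_udp.c: drop UDP, pass everything else.
 * Build: clang -O2 -g -target bpf -c xdp_drop_udp.c -o xdp_drop_udp.o
 */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/in.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("xdp")
int xdp_drop_udp(struct xdp_md *ctx)
{
	void *data     = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;
	struct ethhdr *eth = data;
	struct iphdr *iph;

	/* Direct-Access means explicit bounds checks against data_end. */
	if ((void *)(eth + 1) > data_end)
		return XDP_PASS;
	if (eth->h_proto != bpf_htons(ETH_P_IP))
		return XDP_PASS;

	iph = (struct iphdr *)(eth + 1);
	if ((void *)(iph + 1) > data_end)
		return XDP_PASS;

	return iph->protocol == IPPROTO_UDP ? XDP_DROP : XDP_PASS;
}

char _license[] SEC("license") = "GPL";

Attaching it in native mode would be the usual iproute2 invocation,
e.g. "ip link set dev <nic> xdpdrv obj xdp_drop_udp.o sec xdp".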
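
On Jesper's point that XDP_TX/XDP_REDIRECT would need a pointer plus
len+offset for every extra buffer (16 bytes each), a purely
hypothetical layout to make the sizing concrete could be the following;
none of these names exist in the kernel, they are only here for
illustration.

#include <linux/types.h>

#define XDP_MB_MAX_FRAGS 17	/* hypothetical cap, e.g. 64K of data in 4K pages */

/* One extra buffer: pointer + len + offset = 8 + 4 + 4 = 16 bytes on 64-bit. */
struct xdp_mb_frag {
	void  *addr;	/* start of the fragment's memory (e.g. the page)   */
	__u32  len;	/* bytes of packet data in this fragment            */
	__u32  offset;	/* where the packet data starts inside the fragment */
};

/* Tail metadata the first buffer could point to, so the single-buffer
 * fast path stays untouched.
 */
struct xdp_mb_info {
	__u16 nr_frags;
	struct xdp_mb_frag frags[XDP_MB_MAX_FRAGS];
};

That is roughly the same information an skb already carries per
fragment today, which matches the 16-bytes-per-extra-buffer estimate.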
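
And for the helper idea, the "return a new pointer" flavour Toke
suggested could look something like this from the program's side. To be
clear, bpf_xdp_get_frag(), its helper number and the slice struct are
all invented for this sketch; the verifier would also have to learn to
treat the returned range as a valid direct-access window.

#include <linux/bpf.h>
#include <linux/types.h>
#include <bpf/bpf_helpers.h>

/* Bounds of one fragment, mirroring ctx->data/ctx->data_end. */
struct xdp_frag_slice {
	void *data;
	void *data_end;
};

/* Hypothetical helper (does not exist): fill @slice with the bounds of
 * fragment @idx of the current packet; 0 on success, <0 if there is no
 * such fragment.  The helper number 200 is made up for this sketch.
 */
static int (*bpf_xdp_get_frag)(struct xdp_md *ctx, __u32 idx,
			       struct xdp_frag_slice *slice) = (void *)200;

SEC("xdp")
int peek_second_buffer(struct xdp_md *ctx)
{
	struct xdp_frag_slice slice;

	if (bpf_xdp_get_frag(ctx, 1, &slice) == 0) {
		/* slice.data..slice.data_end would now be usable for
		 * direct access, with the usual bounds checks.
		 */
	}
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";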