From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jakub Kicinski
Subject: Re: [PATCH net-next 00/13] nfp: abm: add basic support for advanced buffering NIC
Date: Tue, 22 May 2018 00:56:59 -0700
Message-ID:
References: <20180522051255.9438-1-jakub.kicinski@netronome.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Cc: David Miller, Linux Netdev List, oss-drivers@netronome.com,
	Andy Gospodarek, linux-internal@mellanox.com
To: Or Gerlitz
Return-path:
Received: from mail-qt0-f178.google.com ([209.85.216.178]:46796 "EHLO
	mail-qt0-f178.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1751842AbeEVH5B (ORCPT);
	Tue, 22 May 2018 03:57:01 -0400
Received: by mail-qt0-f178.google.com with SMTP id m16-v6so22229900qtg.13
	for ; Tue, 22 May 2018 00:57:01 -0700 (PDT)
In-Reply-To:
Sender: netdev-owner@vger.kernel.org
List-ID:

On Mon, May 21, 2018 at 11:32 PM, Or Gerlitz wrote:
> On Tue, May 22, 2018 at 8:12 AM, Jakub Kicinski wrote:
>> Hi!
>>
>> This series lays the groundwork for an advanced buffer management NIC
>> feature.  It makes the necessary NFP core changes, spawns representors
>> and adds devlink glue.  A following series will add the actual
>> buffering configuration (patch series size limit).
>>
>> The first three patches add support for configuring NFP buffer pools
>> via a mailbox.  The existing devlink APIs are used for the purpose.
>>
>> The third patch allows us to perform small reads from the NFP memory.
>>
>> The rest of the patch set adds eswitch mode change support and makes
>> the driver spawn the appropriate representors.
>
> Hi Jakub,
>
> Could you provide a higher-level description of the ABM use case and
> the nature of these representors?  I understand that under ABM you are
> modeling the NIC as a switch with vNIC ports.  Do a vNIC port and a
> vNIC port rep have the same characteristics as a VF and a VF rep (xmit
> on one side <--> receive on the other), and is traffic to be offloaded
> using TC, etc.?  What would one be doing with a vNIC instance, hand it
> to a container a la the Intel VMDq concept?  Can this be seen as a
> veth HW offload?  etc.

Yes, the reprs can be used like VF reprs, but that's not the main use
case.  We are targeting the container world with ABM, so no VFs and no
SR-IOV.  There is only one vNIC per port and no veth offload etc.  In
the most basic scenario, with 1 PF corresponding to 1 port, there is no
real use for switching.

The main purpose here is that we want to set up the buffering and QoS
inside the NIC (both for TX and RX) and then use eBPF to perform
filtering, queue assignment and per-application RSS.  That's pretty
much it at this point.  Switching, if any, will be a basic bridge
offload.

QoS configuration will all be done using TC qdisc offload, RED etc.,
exactly like mlxsw :)
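
To make that a bit more concrete: the buffer pool part is just the
existing devlink shared buffer interface, so the user-visible flow
would look roughly like this (the device address and sizes below are
illustrative, not taken from the patches):

  # list the pools the FW exposes over the vNIC mailbox
  $ devlink sb pool show pci/0000:04:00.0

  # resize a pool with a static threshold type (example values)
  $ devlink sb pool set pci/0000:04:00.0 sb 0 pool 0 \
        size 8388608 thtype static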
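
The representors come from the standard devlink eswitch flow; once the
mode is flipped, the PF and vNIC port reprs show up as ordinary
netdevs (again, the device address is just an example):

  $ devlink dev eswitch set pci/0000:04:00.0 mode switchdev
  $ devlink dev eswitch show pci/0000:04:00.0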
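
The eBPF piece rides on the NFP's existing BPF offload path, e.g.
loading an offloaded XDP program on a port (the netdev name and object
file here are hypothetical):

  # offload a filtering / queue-assignment program to the NIC
  $ ip -force link set dev nfp_p0 xdpoffload obj filter.o sec xdp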
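
And the QoS side would be plain RED offload on the port qdisc, driven
the same way mlxsw does it today (all numbers made up):

  $ tc qdisc replace dev nfp_p0 root handle 1: \
        red limit 400000 min 30000 max 90000 avpkt 1000 \
        burst 50 probability 0.1 ecn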