From: "Wu, Jingjing"
To: Ori Kam, Thomas Monjalon, "Yigit, Ferruh", arybchenko@solarflare.com, Shahaf Shuler, Slava Ovsiienko, Alex Rosenbaum
Cc: dev@dpdk.org
Date: Fri, 6 Sep 2019 03:08:02 +0000
Message-ID: <9BB6961774997848B5B42BEC655768F81150CDEF@SHSMSX103.ccr.corp.intel.com>
References: <1565703468-55617-1-git-send-email-orika@mellanox.com> <9BB6961774997848B5B42BEC655768F81150C0CA@SHSMSX103.ccr.corp.intel.com>
Subject: Re: [dpdk-dev] [RFC] ethdev: support hairpin queue

Hi, Ori

Thanks for the explanation. I have more questions below.

Thanks
Jingjing

> -----Original Message-----
> From: Ori Kam [mailto:orika@mellanox.com]
> Sent: Thursday, September 5, 2019 1:45 PM
> To: Wu, Jingjing; Thomas Monjalon; Yigit, Ferruh; arybchenko@solarflare.com;
> Shahaf Shuler; Slava Ovsiienko; Alex Rosenbaum
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] [RFC] ethdev: support hairpin queue
>
> Hi Wu,
> Thanks for your comments, PSB.
>
> Ori
>
> > -----Original Message-----
> > From: Wu, Jingjing
> > Sent: Thursday, September 5, 2019 7:01 AM
> > To: Ori Kam; Thomas Monjalon; Yigit, Ferruh; arybchenko@solarflare.com;
> > Shahaf Shuler; Slava Ovsiienko; Alex Rosenbaum
> > Cc: dev@dpdk.org
> > Subject: RE: [dpdk-dev] [RFC] ethdev: support hairpin queue
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ori Kam
> > > Sent: Tuesday, August 13, 2019 9:38 PM
> > > To: thomas@monjalon.net; Yigit, Ferruh; arybchenko@solarflare.com;
> > > shahafs@mellanox.com; viacheslavo@mellanox.com; alexr@mellanox.com
> > > Cc: dev@dpdk.org; orika@mellanox.com
> > > Subject: [dpdk-dev] [RFC] ethdev: support hairpin queue
> > >
> > > This RFC replaces RFC [1].
> > >
> > > The hairpin feature (a different name could be "forward") acts as a
> > > "bump on the wire", meaning that a packet received from the wire can be
> > > modified using offloaded actions and then sent back to the wire without
> > > application intervention, which saves CPU cycles.
> > >
> > > Hairpin is the inverse of loopback, in which the application sends a
> > > packet and then receives it again without it ever going out on the wire.
> > >
> > > Hairpin can be used by a number of different VNFs, for example a load
> > > balancer, a gateway and so on.
> > >
> > > As can be seen from the description above, hairpin is basically an Rx
> > > queue connected to a Tx queue.
> > >
> > > During the design phase I was considering two ways to implement this
> > > feature: the first one is adding a new rte_flow action, and the second
> > > one is creating a special kind of queue.
> > >
> > > The advantages of the queue approach:
> > > 1. More control for the application: queue depth (the amount of memory
> > > that should be used).
> > > 2. Enables QoS. QoS is normally a parameter of a queue, so with this
> > > approach it will be easy to integrate with such a system.
> >
> > Which kind of QoS?
>
> For example latency and packet rate; those make sense at the queue level.
> I know we don't have any support for this today, but I think we will during
> the next year.
>

Where would that QoS API land? The TM API? Or do you propose something new?

> > > 3. Native integration with the rte_flow API. Just setting the target
> > > queue/RSS to a hairpin queue will result in the traffic being routed
> > > to the hairpin queue.
> > > 4. Enables queue offloading.
> >
> > Looks like the hairpin queue is just a hardware queue; it has no
> > relationship with host memory. It makes the queue concept a little bit
> > confusing. And why do we need to set up queues? Maybe some info in
> > eth_conf is enough?
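(For point 3 in the RFC text above, the intended application-side usage seems to
be roughly the sketch below. The hairpin queue index is a placeholder assumed to
have been configured as a hairpin queue elsewhere; only existing rte_flow calls
are used, nothing here is hairpin-specific.)

#include <stdint.h>
#include <rte_flow.h>

/*
 * Sketch: steer all ingress IPv4 traffic to queue `hairpin_qid`, which the
 * application is assumed to have set up as a hairpin Rx queue beforehand.
 */
static struct rte_flow *
steer_ipv4_to_hairpin(uint16_t port_id, uint16_t hairpin_qid,
                      struct rte_flow_error *error)
{
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_queue queue = { .index = hairpin_qid };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    /* Matched packets are delivered by the NIC to the hairpin queue and
     * bounced back to the wire without any software involvement. */
    return rte_flow_create(port_id, &attr, pattern, actions, error);
}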
>
> Like I stated above, it makes sense to have queue-related parameters.
> For example, I can think of an application where most packets go through the
> hairpin queue, but some control packets come from the application. So the
> application can configure the QoS between those two queues. In addition, this
> will enable the application to use the queue like a normal queue from
> rte_flow (see comment below) and in every other aspect.
>

Yes, that is a typical use case. And rte_flow is used to classify traffic to
the different queues?

If I understand correctly, your hairpin queue uses host memory or on-card
memory for buffering, but the CPU cannot touch it; all the packet processing
is done by the NIC.

A queue is created, but where is the queue ID used? The Tx queue ID may be
used as an action of rte_flow? I still don't understand where the hairpin Rx
queue ID would be used.

In my opinion, if it has no rx/tx function, it is not a true queue from the
host point of view.

> >
> > Not sure how your hardware makes the hairpin work? Use rte_flow for packet
> > modification offload? Then how does the HW distribute packets to those
> > hardware queues? Classification? If so, why not just extend rte_flow with
> > the hairpin action?
> >
>
> You are correct, the application uses rte_flow and just points the traffic
> to the requested hairpin queue/RSS.
> We could have added a new rte_flow command. The reasons we didn't:
> 1. Like stated above, some of the hairpin parameters make sense at the queue
> level.
> 2. In the near future we will also want to support hairpin between different
> ports. This makes much more sense using queues.
>
> > > Each hairpin Rxq can be connected to a Txq or a number of Txqs, which
> > > can belong to different ports assuming the PMD supports it. The same
> > > goes the other way: each hairpin Txq can be connected to one or more
> > > Rxqs. This is the reason that both the Txq setup and the Rxq setup take
> > > the hairpin configuration structure.
> > >
> > > From the PMD perspective, the number of Rxqs/Txqs is the total of
> > > standard queues + hairpin queues.
> > >
> > > To configure a hairpin queue the user should call
> > > rte_eth_rx_hairpin_queue_setup / rte_eth_tx_hairpin_queue_setup instead
> > > of the normal queue setup functions.
> >
> > If the new API is introduced to avoid an ABI change, would one API,
> > rte_eth_rx_hairpin_setup, be enough?
>
> I'm not sure I understand your comment.
> The rx_hairpin_setup was created for two main reasons:
> 1. Avoid an ABI change.
> 2. I think it is more correct to use a different API since the parameters
> are different.
>

I meant not using the queue setup concept, but configuring the hairpin feature
through one hairpin configuration API.

> The reason we have both rx and tx setup functions is that we want the user
> to have control over binding the two queues.
> It is most important when we advance to hairpin between ports.

Hairpin between ports? That looks like a switch, not hairpin, right?

>
> >
> > Thanks
> > Jingjing
>
> Thanks,
> Ori
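(To make the setup flow under discussion concrete, here is a minimal sketch of
binding a hairpin Rx queue to a hairpin Tx queue on the same port. The
rte_eth_rx_hairpin_queue_setup / rte_eth_tx_hairpin_queue_setup signatures and
the rte_eth_hairpin_conf fields shown are assumptions based on the RFC text in
this thread, not a finalized API.)

#include <stdint.h>
#include <rte_ethdev.h>

/*
 * Sketch only: setup signatures and the rte_eth_hairpin_conf layout are
 * assumed from the RFC discussion above and may differ from what is merged.
 */
static int
bind_hairpin_pair(uint16_t port_id, uint16_t rxq, uint16_t txq,
                  uint16_t nb_desc)
{
    struct rte_eth_hairpin_conf conf = {
        .peer_count = 1,                       /* assumed field */
        .peers[0] = { .port = port_id,         /* assumed: same-port hairpin */
                      .queue = txq },
    };
    int ret;

    /* Rx side: packets received on `rxq` are handed to its peer Tx queue. */
    ret = rte_eth_rx_hairpin_queue_setup(port_id, rxq, nb_desc, &conf);
    if (ret != 0)
        return ret;

    /* Tx side: `txq` transmits whatever its peer Rx queue delivers. */
    conf.peers[0].queue = rxq;
    return rte_eth_tx_hairpin_queue_setup(port_id, txq, nb_desc, &conf);
}

Both queues count toward the totals passed to rte_eth_dev_configure(), per the
"standard queues + hairpin queues" note in the RFC.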