From: "Wu, Jingjing"
To: Ori Kam, "thomas@monjalon.net", "Yigit, Ferruh", "arybchenko@solarflare.com", "shahafs@mellanox.com", "viacheslavo@mellanox.com", "alexr@mellanox.com"
Cc: "dev@dpdk.org"
Subject: Re: [dpdk-dev] [RFC] ethdev: support hairpin queue
Date: Thu, 5 Sep 2019 04:00:52 +0000
Message-ID: <9BB6961774997848B5B42BEC655768F81150C0CA@SHSMSX103.ccr.corp.intel.com>
References: <1565703468-55617-1-git-send-email-orika@mellanox.com>
In-Reply-To: <1565703468-55617-1-git-send-email-orika@mellanox.com>

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ori Kam
> Sent: Tuesday, August 13, 2019 9:38 PM
> To: thomas@monjalon.net; Yigit, Ferruh; arybchenko@solarflare.com;
> shahafs@mellanox.com; viacheslavo@mellanox.com; alexr@mellanox.com
> Cc: dev@dpdk.org; orika@mellanox.com
> Subject: [dpdk-dev] [RFC] ethdev: support hairpin queue
>
> This RFC replaces RFC[1].
>
> The hairpin feature (a different name could be "forward") acts as a
> "bump on the wire": a packet received from the wire can be modified
> using offloaded actions and then sent back to the wire without
> application intervention, which saves CPU cycles.
>
> The hairpin is the inverse of loopback, where the application sends a
> packet and then receives it again without it ever being sent to the
> wire.
>
> The hairpin can be used by a number of different VNFs, for example a
> load balancer, a gateway and so on.
>
> As can be seen from this description, a hairpin is basically an Rx
> queue connected to a Tx queue.
>
> During the design phase I was thinking of two ways to implement this
> feature: the first one is adding a new rte_flow action, and the
> second one is creating a special kind of queue.
>
> The advantages of using the queue approach:
> 1. More control for the application over queue depth (the memory size
>    that should be used).
> 2. Enables QoS. QoS is normally a parameter of a queue, so with this
>    approach it will be easy to integrate with such a system.

Which kind of QoS?

> 3. Native integration with the rte_flow API. Just setting the target
>    queue/RSS to a hairpin queue will result in the traffic being
>    routed to the hairpin queue.
> 4. Enables queue offloading.
>

Looks like the hairpin queue is just a hardware queue; it has no
relationship with host memory. That makes the queue concept a little
bit confusing. And why do we need to set up queues at all? Maybe some
info in eth_conf would be enough?

I am not sure how your hardware makes the hairpin work. Do you use
rte_flow for packet modification offload? Then how does the HW
distribute packets to those hardware queues, by classification? If so,
why not just extend rte_flow with a hairpin action? (A rough sketch of
the queue-based steering is included below, after the next quoted
paragraph.)

> Each hairpin Rxq can be connected to one Txq or to a number of Txqs,
> which can belong to different ports, assuming the PMD supports it.
> The same goes the other way: each hairpin Txq can be connected to one
> or more Rxqs.
> This is the reason that both the Txq setup and the Rxq setup take the
> hairpin configuration structure.
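
To make the rte_flow side of the discussion concrete, here is a minimal
sketch of steering matched traffic to a hairpin Rx queue using the
existing RTE_FLOW_ACTION_TYPE_QUEUE action. The route_to_hairpin()
helper and the hairpin_queue_index parameter are illustrative only, and
the queue is assumed to have already been set up as a hairpin queue:

#include <rte_flow.h>

/*
 * Illustrative only: create an ingress rule that sends all IPv4
 * traffic to a queue that is assumed to be a hairpin Rx queue, so
 * the HW forwards it back to the wire without application work.
 */
static struct rte_flow *
route_to_hairpin(uint16_t port_id, uint16_t hairpin_queue_index)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = {
		.index = hairpin_queue_index,
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;

	return rte_flow_create(port_id, &attr, pattern, actions, &error);
}

In this model the classification is simply whatever the flow rule
matches; the only hairpin-specific part is that the target queue index
refers to a hairpin queue instead of a normal Rx queue.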
>
> From the PMD perspective the number of Rxqs/Txqs is the total of
> standard queues + hairpin queues.
>
> To configure a hairpin queue the user should call
> rte_eth_rx_hairpin_queue_setup / rte_eth_tx_hairpin_queue_setup
> instead of the normal queue setup functions.

If the new API is being introduced to avoid an ABI change, would one
API such as rte_eth_rx_hairpin_setup be enough? (A usage sketch of the
two proposed setup calls follows below.)

Thanks
Jingjing
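
For reference, a minimal usage sketch of the two setup calls proposed
in the RFC. The layout of the hairpin configuration structure
(peer_count / peers[]) and the setup_hairpin_pair() helper are
assumptions made for illustration, not a confirmed part of the RFC:

#include <rte_ethdev.h>

/*
 * Hypothetical sketch: bind one Rx queue and one Tx queue of the same
 * port into a hairpin pair. Per the RFC, rxq and txq are assumed to be
 * queue indexes beyond the standard queues, since the total queue
 * count is standard queues + hairpin queues.
 */
static int
setup_hairpin_pair(uint16_t port_id, uint16_t rxq, uint16_t txq,
		   uint16_t nb_desc)
{
	struct rte_eth_hairpin_conf conf = {
		.peer_count = 1,
		/* Peer of the hairpin Rx queue is the hairpin Tx queue. */
		.peers[0] = { .port = port_id, .queue = txq },
	};
	int ret;

	ret = rte_eth_rx_hairpin_queue_setup(port_id, rxq, nb_desc, &conf);
	if (ret != 0)
		return ret;

	/* Peer of the hairpin Tx queue is the hairpin Rx queue. */
	conf.peers[0].queue = rxq;
	return rte_eth_tx_hairpin_queue_setup(port_id, txq, nb_desc, &conf);
}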