From: Ferruh Yigit
To: "Ananyev, Konstantin", Morten Brørup, Thomas Monjalon, "Richardson, Bruce"
Cc: dev@dpdk.org, olivier.matz@6wind.com, andrew.rybchenko@oktetlabs.ru, honnappa.nagarahalli@arm.com, jerinj@marvell.com, gakhil@marvell.com
Subject: Re: [dpdk-dev] [PATCH] parray: introduce internal API for dynamic arrays
Date: Mon, 21 Jun 2021 16:56:31 +0100
Message-ID: <42bd6871-c5da-4630-31ee-1916eb823a60@intel.com>

On 6/21/2021 3:38 PM, Ananyev, Konstantin wrote:
>
>>
>> On 6/21/2021 1:30 PM, Ananyev, Konstantin wrote:
>>>
>>>>
>>>>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ananyev, Konstantin
>>>>>
>>>>>>> How can we hide the callbacks since they are used by inline burst
>>>>>>> functions?
>>>>>>
>>>>>> I probably owe a better explanation of what I meant in the first mail.
>>>>>> Otherwise it sounds confusing.
>>>>>> I'll try to write a more detailed one in the next few days.
>>>>>
>>>>> Actually I gave it another thought over the weekend, and maybe we can
>>>>> hide rte_eth_dev_cb in an even simpler way. I'll use eth_rx_burst() as
>>>>> an example, but the same principle applies to the other 'fast' functions.
>>>>>
>>>>> 1. Needed changes for PMDs' rx_pkt_burst():
>>>>>    a) Change the function prototype to accept 'uint16_t port_id' and
>>>>>       'uint16_t queue_id' instead of the current 'void *'.
>>>>>    b) Each PMD rx_pkt_burst() will have to call the rte_eth_rx_epilog()
>>>>>       function on return. This inline function will do all CB calls for
>>>>>       that queue.
>>>>>
>>>>> To be more specific, let's say we have some PMD xyz with the RX function:
>>>>>
>>>>> uint16_t
>>>>> xyz_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>>>>> {
>>>>>         struct xyz_rx_queue *rxq = rx_queue;
>>>>>         uint16_t nb_rx = 0;
>>>>>
>>>>>         /* do actual stuff here */
>>>>>         ....
>>>>>         return nb_rx;
>>>>> }
>>>>>
>>>>> It will be transformed to:
>>>>>
>>>>> uint16_t
>>>>> xyz_recv_pkts(uint16_t port_id, uint16_t queue_id,
>>>>>                 struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>>>>> {
>>>>>         struct xyz_rx_queue *rxq;
>>>>>         uint16_t nb_rx;
>>>>>
>>>>>         rxq = _rte_eth_rx_prolog(port_id, queue_id);
>>>>>         if (rxq == NULL)
>>>>>                 return 0;
>>>>>         /* the original function body moves into this internal helper */
>>>>>         nb_rx = _xyz_real_recv_pkts(rxq, rx_pkts, nb_pkts);
>>>>>         /* pass nb_rx through, so the epilog can feed it to the CBs */
>>>>>         return _rte_eth_rx_epilog(port_id, queue_id, rx_pkts,
>>>>>                         nb_rx, nb_pkts);
>>>>> }
>>>>>
>>>>> And somewhere in ethdev_private.h:
>>>>>
>>>>> static inline void *
>>>>> _rte_eth_rx_prolog(uint16_t port_id, uint16_t queue_id)
>>>>> {
>>>>>         struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>>>>>
>>>>> #ifdef RTE_ETHDEV_DEBUG_RX
>>>>>         RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, NULL);
>>>>>         RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, NULL);
>>>>>
>>>>>         if (queue_id >= dev->data->nb_rx_queues) {
>>>>>                 RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", queue_id);
>>>>>                 return NULL;
>>>>>         }
>>>>> #endif
>>>>>         return dev->data->rx_queues[queue_id];
>>>>> }
>>>>>
>>>>> static inline uint16_t
>>>>> _rte_eth_rx_epilog(uint16_t port_id, uint16_t queue_id,
>>>>>                 struct rte_mbuf **rx_pkts, uint16_t nb_rx,
>>>>>                 const uint16_t nb_pkts)
>>>>> {
>>>>> #ifdef RTE_ETHDEV_RXTX_CALLBACKS
>>>>>         struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>>>>>         struct rte_eth_rxtx_callback *cb;
>>>>>
>>>>>         /* __ATOMIC_RELEASE memory order was used when the
>>>>>          * callback was inserted into the list.
>>>>>          * Since there is a clear dependency between loading
>>>>>          * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
>>>>>          * not required.
>>>>>          */
>>>>>         cb = __atomic_load_n(&dev->post_rx_burst_cbs[queue_id],
>>>>>                         __ATOMIC_RELAXED);
>>>>>
>>>>>         if (unlikely(cb != NULL)) {
>>>>>                 do {
>>>>>                         nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts,
>>>>>                                         nb_rx, nb_pkts, cb->param);
>>>>>                         cb = cb->next;
>>>>>                 } while (cb != NULL);
>>>>>         }
>>>>> #endif
>>>>>
>>>>>         rte_ethdev_trace_rx_burst(port_id, queue_id, (void **)rx_pkts,
>>>>>                         nb_rx);
>>>>>         return nb_rx;
>>>>> }
>>>>
>>>> That would make the compiler inline _rte_eth_rx_epilog() into the driver
>>>> when compiling the DPDK library. But RTE_ETHDEV_RXTX_CALLBACKS is a
>>>> definition for the application developer to use when compiling the DPDK
>>>> application.
>>>
>>> I believe it is for both - the user app and the DPDK drivers.
>>> AFAIK, they both have to use the same rte_config.h, otherwise things will
>>> be broken.
>>> If, let's say, RTE_ETHDEV_RXTX_CALLBACKS is not enabled in ethdev, then
>>> the user wouldn't be able to add a callback in the first place.
>>> BTW, such a change would allow us to make RTE_ETHDEV_RXTX_CALLBACKS
>>> internal to the ethdev/PMD layer, which is a good thing from my perspective.
>>>
>>
>> It is possible to use binary drivers (.so) as plugins. Currently the
>> application can decide whether or not to use Rx/Tx callbacks even with
>> binary drivers, but this change adds complexity to that use case.
>
> Not sure I understand you here...
> Can you explain a bit more what you mean?
>

Right now, if I have a .so driver, I can decide whether or not to use the
Rx/Tx callbacks by compiling the application with the relevant config, and
the same .so works in both cases without change.

With the proposed change, if the .so was built without Rx/Tx callback
support, the application won't be able to use callbacks at all.

The application and driver configs have to stay compatible, and adding more
compile-time config to drivers that is also used in libraries adds more
points to keep in sync, hence, I believe, more complexity for the binary
drivers use case.

>>
>>>>
>>>>>
>>>>> Now, as you said above, in rte_ethdev.h we will keep only a flat array
>>>>> with pointers to 'fast' functions:
>>>>>
>>>>> struct {
>>>>>         eth_rx_burst_t  rx_pkt_burst;
>>>>>         eth_tx_burst_t  tx_pkt_burst;
>>>>>         eth_tx_prep_t   tx_pkt_prepare;
>>>>>         .....
>>>>> } rte_eth_dev_burst[];
>>>>>
>>>>> And rte_eth_rx_burst() will look like:
>>>>>
>>>>> static inline uint16_t
>>>>> rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
>>>>>                 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
>>>>> {
>>>>>         if (port_id >= RTE_MAX_ETHPORTS)
>>>>>                 return 0;
>>>>>         /* dispatch through the per-port function pointer */
>>>>>         return rte_eth_dev_burst[port_id].rx_pkt_burst(port_id, queue_id,
>>>>>                         rx_pkts, nb_pkts);
>>>>> }
>>>>>
>>>>> Yes, it will require changes in *all* PMDs, but as I said before the
>>>>> changes will be mechanical ones.
>
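Coming back to the binary drivers point above: to make the trade-off concrete,
here is a minimal sketch of the application side, assuming the current
rte_eth_add_rx_callback() semantics. Under the proposed scheme this callback
would run from the driver-side _rte_eth_rx_epilog() instead of the inlined
rte_eth_rx_burst(), which is why the RTE_ETHDEV_RXTX_CALLBACKS setting would
have to match between the application and a binary driver. The names
count_rx_cb, setup_rx_counting and rx_count are made up for illustration only:

    #include <stdint.h>
    #include <stdio.h>

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    static uint64_t rx_count;

    /* Matches rte_rx_callback_fn: invoked after each Rx burst on the queue
     * it is registered for; here it only counts the received packets.
     */
    static uint16_t
    count_rx_cb(uint16_t port_id, uint16_t queue_id, struct rte_mbuf *pkts[],
                uint16_t nb_pkts, uint16_t max_pkts, void *user_param)
    {
            uint64_t *counter = user_param;

            (void)port_id;
            (void)queue_id;
            (void)pkts;
            (void)max_pkts;

            *counter += nb_pkts;
            /* a callback may also drop or rewrite packets; returning fewer
             * packets shrinks the burst seen by the application
             */
            return nb_pkts;
    }

    static int
    setup_rx_counting(uint16_t port_id, uint16_t queue_id)
    {
            /* returns NULL when ethdev was built without
             * RTE_ETHDEV_RXTX_CALLBACKS (rte_errno set to ENOTSUP)
             */
            if (rte_eth_add_rx_callback(port_id, queue_id,
                            count_rx_cb, &rx_count) == NULL) {
                    printf("cannot add Rx callback on port %u queue %u\n",
                                    port_id, queue_id);
                    return -1;
            }
            return 0;
    }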