From mboxrd@z Thu Jan 1 00:00:00 1970
To: Konstantin Ananyev, dev@dpdk.org
Cc: xiaoyun.li@intel.com, anoobj@marvell.com, jerinj@marvell.com, ndabilpuram@marvell.com, adwivedi@marvell.com, shepard.siegel@atomicrules.com,
 ed.czeck@atomicrules.com, john.miller@atomicrules.com, irusskikh@marvell.com, ajit.khaparde@broadcom.com, somnath.kotur@broadcom.com, rahul.lakkireddy@chelsio.com, hemant.agrawal@nxp.com, sachin.saxena@oss.nxp.com, haiyue.wang@intel.com, johndale@cisco.com, hyonkim@cisco.com, qi.z.zhang@intel.com, xiao.w.wang@intel.com, humin29@huawei.com, yisen.zhuang@huawei.com, oulijun@huawei.com, beilei.xing@intel.com, jingjing.wu@intel.com, qiming.yang@intel.com, matan@nvidia.com, viacheslavo@nvidia.com, sthemmin@microsoft.com, longli@microsoft.com, heinrich.kuhn@corigine.com, kirankumark@marvell.com, mczekaj@marvell.com, jiawenwu@trustnetic.com, jianwang@trustnetic.com, maxime.coquelin@redhat.com, chenbo.xia@intel.com, thomas@monjalon.net, ferruh.yigit@intel.com, mdr@ashroe.eu, jay.jayatheerthan@intel.com
From: Andrew Rybchenko
Organization: OKTET Labs
Date: Mon, 11 Oct 2021 12:20:30 +0300
Subject: Re: [dpdk-dev] [PATCH v5 2/7] ethdev: allocate max space for internal queue array
References: <20211004135603.20593-1-konstantin.ananyev@intel.com> <20211007112750.25526-1-konstantin.ananyev@intel.com> <20211007112750.25526-3-konstantin.ananyev@intel.com>
In-Reply-To: <20211007112750.25526-3-konstantin.ananyev@intel.com>

On 10/7/21 2:27 PM, Konstantin Ananyev wrote:
> At queue configure stage always allocate space for maximum possible
> number (RTE_MAX_QUEUES_PER_PORT) of queue pointers.
> That will allow 'fast' inline functions (eth_rx_burst, etc.) to refer
> pointer to internal queue data without extra checking of current number
> of configured queues.
> That would help in future to hide rte_eth_dev and related structures.
> It means that from now on, each ethdev port will always consume:
> ((2*sizeof(uintptr_t))* RTE_MAX_QUEUES_PER_PORT)
> bytes of memory for its queue pointers.
> With RTE_MAX_QUEUES_PER_PORT==1024 (default value) it is 16KB per port.
>
> Signed-off-by: Konstantin Ananyev
> ---
>  lib/ethdev/rte_ethdev.c | 36 +++++++++---------------------------
>  1 file changed, 9 insertions(+), 27 deletions(-)
>
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index ed37f8871b..c8abda6dd7 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -897,7 +897,8 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
>
>  	if (dev->data->rx_queues == NULL && nb_queues != 0) { /* first time configuration */
>  		dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
> -				sizeof(dev->data->rx_queues[0]) * nb_queues,
> +				sizeof(dev->data->rx_queues[0]) *
> +				RTE_MAX_QUEUES_PER_PORT,
>  				RTE_CACHE_LINE_SIZE);

Looking at it I have a few questions:

1. Why is the nb_queues == 0 case kept as an exception? Strictly
speaking this is not a problem introduced by the patch: DPDK will still
segfault (in a non-debug build) if I allocate only Tx queues but call
rte_eth_rx_burst(). After reading the patch description I thought we
were trying to address exactly that.

2. Why do we need to allocate the memory dynamically? Can we just make
rx_queues an array of the appropriate size? Maybe wasting 512K
unconditionally is too much.

3. If wasting 512K is too much, I'd consider moving the allocation to
eth_dev_get().
>  		if (dev->data->rx_queues == NULL) {
>  			dev->data->nb_rx_queues = 0;
> @@ -908,21 +909,11 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
>
>  	rxq = dev->data->rx_queues;
>
> -	for (i = nb_queues; i < old_nb_queues; i++)
> +	for (i = nb_queues; i < old_nb_queues; i++) {
>  		(*dev->dev_ops->rx_queue_release)(rxq[i]);
> -	rxq = rte_realloc(rxq, sizeof(rxq[0]) * nb_queues,
> -			RTE_CACHE_LINE_SIZE);
> -	if (rxq == NULL)
> -		return -(ENOMEM);
> -	if (nb_queues > old_nb_queues) {
> -		uint16_t new_qs = nb_queues - old_nb_queues;
> -
> -		memset(rxq + old_nb_queues, 0,
> -				sizeof(rxq[0]) * new_qs);
> +		rxq[i] = NULL;

It looks like the patch should be rebased on top of next-net main
because of the queue release patches.

[snip]
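For readers skimming the thread, the shape of the change in the hunk above can be sketched outside of DPDK. This is a simplified illustration, not the real ethdev code: the struct, function, and constant names are placeholders, the Tx side, locking, and the driver release callback are omitted. The point is that the pointer array is allocated once at its maximum size, and shrinking the configured count only NULLs out the trailing slots instead of calling rte_realloc():

```c
#include <errno.h>
#include <stdlib.h>

#define MAX_QUEUES_PER_PORT 1024  /* stand-in for RTE_MAX_QUEUES_PER_PORT */

struct port_data {
	void **rx_queues;          /* always MAX_QUEUES_PER_PORT slots once set */
	unsigned int nb_rx_queues; /* currently configured queue count */
};

/* Sketch of the post-patch behaviour: allocate the maximum once, then
 * only clear slots when the configured count shrinks - no realloc. */
static int rx_queue_config(struct port_data *p, unsigned int nb_queues)
{
	unsigned int i;

	if (p->rx_queues == NULL && nb_queues != 0) {
		/* First-time configuration: zeroed, maximum-size array. */
		p->rx_queues = calloc(MAX_QUEUES_PER_PORT,
				      sizeof(p->rx_queues[0]));
		if (p->rx_queues == NULL)
			return -ENOMEM;
	} else if (p->rx_queues != NULL) {
		/* Shrinking: release (elided) and NULL the trailing slots. */
		for (i = nb_queues; i < p->nb_rx_queues; i++)
			p->rx_queues[i] = NULL;
	}
	p->nb_rx_queues = nb_queues;
	return 0;
}
```

With this layout a burst function can index rx_queues[queue_id] without re-checking the current queue count, which is what the commit message means by avoiding "extra checking" in the fast path; as point 1 above notes, an unconfigured slot is still a NULL pointer, so a caller using a wrong queue id is not protected.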