From mboxrd@z Thu Jan 1 00:00:00 1970
From: Xueming Li
To: dev@dpdk.org
Cc: Lior Margalit, Ferruh Yigit, Andrew Rybchenko, Singh Aman Deep,
 Thomas Monjalon, "John W. Linville", Ciara Loftus, Qi Zhang,
 Hemant Agrawal, Sachin Saxena, Rosen Xu, Gagandeep Singh,
 Bruce Richardson, Maxime Coquelin, Chenbo Xia
Date: Thu, 30 Sep 2021 23:17:10 +0800
Message-ID: <20210930151711.779493-2-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20210930151711.779493-1-xuemingl@nvidia.com>
References: <20210727034134.20556-1-xuemingl@nvidia.com>
 <20210930151711.779493-1-xuemingl@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
Subject: [dpdk-dev] [PATCH v6 1/2] ethdev: make queue release callback optional
X-BeenThere: dev@dpdk.org
List-Id: DPDK patches and discussions

Some drivers don't need Rx and Tx queue release callbacks, so make
them optional. Clean up the empty queue release callbacks in those
drivers.

Signed-off-by: Xueming Li
Reviewed-by: Andrew Rybchenko
Acked-by: Ferruh Yigit
---
 app/test/virtual_pmd.c                    | 12 ----
 drivers/net/af_packet/rte_eth_af_packet.c |  7 --
 drivers/net/af_xdp/rte_eth_af_xdp.c       |  7 --
 drivers/net/dpaa/dpaa_ethdev.c            | 13 ----
 drivers/net/dpaa2/dpaa2_ethdev.c          |  7 --
 drivers/net/ipn3ke/ipn3ke_representor.c   | 12 ----
 drivers/net/kni/rte_eth_kni.c             |  7 --
 drivers/net/pcap/pcap_ethdev.c            |  7 --
 drivers/net/pfe/pfe_ethdev.c              | 14 ----
 drivers/net/ring/rte_eth_ring.c           |  4 --
 drivers/net/virtio/virtio_ethdev.c        |  8 ---
 lib/ethdev/rte_ethdev.c                   | 86 ++++++++++-------------
 12 files changed, 36 insertions(+), 148 deletions(-)

diff --git a/app/test/virtual_pmd.c b/app/test/virtual_pmd.c
index 7036f401ed9..7e15b47eb0f 100644
--- a/app/test/virtual_pmd.c
+++ b/app/test/virtual_pmd.c
@@ -163,16 +163,6 @@ virtual_ethdev_tx_queue_setup_fail(struct rte_eth_dev *dev __rte_unused,
 	return -1;
 }
 
-static void
-virtual_ethdev_rx_queue_release(void *q __rte_unused)
-{
-}
-
-static void
-virtual_ethdev_tx_queue_release(void *q __rte_unused)
-{
-}
-
 static int
 virtual_ethdev_link_update_success(struct rte_eth_dev *bonded_eth_dev,
 		int wait_to_complete __rte_unused)
@@ -243,8 +233,6 @@ static const struct eth_dev_ops virtual_ethdev_default_dev_ops = {
 	.dev_infos_get = virtual_ethdev_info_get,
 	.rx_queue_setup = virtual_ethdev_rx_queue_setup_success,
 	.tx_queue_setup = virtual_ethdev_tx_queue_setup_success,
-	.rx_queue_release = virtual_ethdev_rx_queue_release,
-	.tx_queue_release = virtual_ethdev_tx_queue_release,
 	.link_update = virtual_ethdev_link_update_success,
 	.mac_addr_set = virtual_ethdev_mac_address_set,
 	.stats_get = virtual_ethdev_stats_get,
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index fcd80903995..c73d2ec5c86 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -427,11 +427,6 @@ eth_dev_close(struct rte_eth_dev *dev)
 	return 0;
 }
 
-static void
-eth_queue_release(void *q __rte_unused)
-{
-}
-
 static int
 eth_link_update(struct rte_eth_dev *dev __rte_unused,
 		int wait_to_complete __rte_unused)
@@ -594,8 +589,6 @@ static const struct eth_dev_ops ops = {
 	.promiscuous_disable = eth_dev_promiscuous_disable,
 	.rx_queue_setup = eth_rx_queue_setup,
 	.tx_queue_setup = eth_tx_queue_setup,
-	.rx_queue_release = eth_queue_release,
-	.tx_queue_release = eth_queue_release,
 	.link_update = eth_link_update,
 	.stats_get = eth_stats_get,
 	.stats_reset = eth_stats_reset,
diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index 9bea0a895a3..a619dd218d0 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -989,11 +989,6 @@ eth_dev_close(struct rte_eth_dev *dev)
 	return 0;
 }
 
-static void
-eth_queue_release(void *q __rte_unused)
-{
-}
-
 static int
 eth_link_update(struct rte_eth_dev *dev __rte_unused,
 		int wait_to_complete __rte_unused)
@@ -1474,8 +1469,6 @@ static const struct eth_dev_ops ops = {
 	.promiscuous_disable = eth_dev_promiscuous_disable,
 	.rx_queue_setup = eth_rx_queue_setup,
 	.tx_queue_setup = eth_tx_queue_setup,
-	.rx_queue_release = eth_queue_release,
-	.tx_queue_release = eth_queue_release,
 	.link_update = eth_link_update,
 	.stats_get = eth_stats_get,
 	.stats_reset = eth_stats_reset,
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 36d8f9249df..2c12956ff6b 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1233,12 +1233,6 @@ dpaa_eth_eventq_detach(const struct rte_eth_dev *dev,
 	return 0;
 }
 
-static
-void dpaa_eth_rx_queue_release(void *rxq __rte_unused)
-{
-	PMD_INIT_FUNC_TRACE();
-}
-
 static int dpaa_eth_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			    uint16_t nb_desc __rte_unused,
@@ -1272,11 +1266,6 @@ int dpaa_eth_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	return 0;
 }
 
-static void dpaa_eth_tx_queue_release(void *txq __rte_unused)
-{
-	PMD_INIT_FUNC_TRACE();
-}
-
 static uint32_t
 dpaa_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
@@ -1571,8 +1560,6 @@ static struct eth_dev_ops dpaa_devops = {
 	.rx_queue_setup		  = dpaa_eth_rx_queue_setup,
 	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
-	.rx_queue_release	  = dpaa_eth_rx_queue_release,
-	.tx_queue_release	  = dpaa_eth_tx_queue_release,
 	.rx_burst_mode_get	  = dpaa_dev_rx_burst_mode_get,
 	.tx_burst_mode_get	  = dpaa_dev_tx_burst_mode_get,
 	.rxq_info_get		  = dpaa_rxq_info_get,
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index c12169578e2..48ffbf6c214 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1004,12 +1004,6 @@ dpaa2_dev_rx_queue_release(void *q __rte_unused)
 	}
 }
 
-static void
-dpaa2_dev_tx_queue_release(void *q __rte_unused)
-{
-	PMD_INIT_FUNC_TRACE();
-}
-
 static uint32_t
 dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
@@ -2427,7 +2421,6 @@ static struct eth_dev_ops dpaa2_ethdev_ops = {
 	.rx_queue_setup	   = dpaa2_dev_rx_queue_setup,
 	.rx_queue_release  = dpaa2_dev_rx_queue_release,
 	.tx_queue_setup	   = dpaa2_dev_tx_queue_setup,
-	.tx_queue_release  = dpaa2_dev_tx_queue_release,
 	.rx_burst_mode_get = dpaa2_dev_rx_burst_mode_get,
 	.tx_burst_mode_get = dpaa2_dev_tx_burst_mode_get,
 	.flow_ctrl_get	   = dpaa2_flow_ctrl_get,
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 589d9fa5877..694435a4ae2 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -288,11 +288,6 @@ ipn3ke_rpst_rx_queue_setup(__rte_unused struct rte_eth_dev *dev,
 	return 0;
 }
 
-static void
-ipn3ke_rpst_rx_queue_release(__rte_unused void *rxq)
-{
-}
-
 static int
 ipn3ke_rpst_tx_queue_setup(__rte_unused struct rte_eth_dev *dev,
 	__rte_unused uint16_t queue_idx, __rte_unused uint16_t nb_desc,
@@ -302,11 +297,6 @@ ipn3ke_rpst_tx_queue_setup(__rte_unused struct rte_eth_dev *dev,
 	return 0;
 }
 
-static void
-ipn3ke_rpst_tx_queue_release(__rte_unused void *txq)
-{
-}
-
 /* Statistics collected by each port, VSI, VEB, and S-channel */
 struct ipn3ke_rpst_eth_stats {
 	uint64_t tx_bytes;               /* gotc */
@@ -2865,9 +2855,7 @@ static const struct eth_dev_ops ipn3ke_rpst_dev_ops = {
 	.tx_queue_start       = ipn3ke_rpst_tx_queue_start,
 	.tx_queue_stop        = ipn3ke_rpst_tx_queue_stop,
 	.rx_queue_setup       = ipn3ke_rpst_rx_queue_setup,
-	.rx_queue_release     = ipn3ke_rpst_rx_queue_release,
 	.tx_queue_setup       = ipn3ke_rpst_tx_queue_setup,
-	.tx_queue_release     = ipn3ke_rpst_tx_queue_release,
 
 	.dev_set_link_up      = ipn3ke_rpst_dev_set_link_up,
 	.dev_set_link_down    = ipn3ke_rpst_dev_set_link_down,
diff --git a/drivers/net/kni/rte_eth_kni.c b/drivers/net/kni/rte_eth_kni.c
index 871d11c4133..cb9f7c8e820 100644
--- a/drivers/net/kni/rte_eth_kni.c
+++ b/drivers/net/kni/rte_eth_kni.c
@@ -284,11 +284,6 @@ eth_kni_tx_queue_setup(struct rte_eth_dev *dev,
 	return 0;
 }
 
-static void
-eth_kni_queue_release(void *q __rte_unused)
-{
-}
-
 static int
 eth_kni_link_update(struct rte_eth_dev *dev __rte_unused,
 		int wait_to_complete __rte_unused)
@@ -362,8 +357,6 @@ static const struct eth_dev_ops eth_kni_ops = {
 	.dev_infos_get = eth_kni_dev_info,
 	.rx_queue_setup = eth_kni_rx_queue_setup,
 	.tx_queue_setup = eth_kni_tx_queue_setup,
-	.rx_queue_release = eth_kni_queue_release,
-	.tx_queue_release = eth_kni_queue_release,
 	.link_update = eth_kni_link_update,
 	.stats_get = eth_kni_stats_get,
 	.stats_reset = eth_kni_stats_reset,
diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
index 3566aea0105..d695c5eef7b 100644
--- a/drivers/net/pcap/pcap_ethdev.c
+++ b/drivers/net/pcap/pcap_ethdev.c
@@ -857,11 +857,6 @@ eth_dev_close(struct rte_eth_dev *dev)
 	return 0;
 }
 
-static void
-eth_queue_release(void *q __rte_unused)
-{
-}
-
 static int
 eth_link_update(struct rte_eth_dev *dev __rte_unused,
 		int wait_to_complete __rte_unused)
@@ -1006,8 +1001,6 @@ static const struct eth_dev_ops ops = {
 	.tx_queue_start = eth_tx_queue_start,
 	.rx_queue_stop = eth_rx_queue_stop,
 	.tx_queue_stop = eth_tx_queue_stop,
-	.rx_queue_release = eth_queue_release,
-	.tx_queue_release = eth_queue_release,
 	.link_update = eth_link_update,
 	.stats_get = eth_stats_get,
 	.stats_reset = eth_stats_reset,
diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
index feec4d10a26..4c7f568bf42 100644
--- a/drivers/net/pfe/pfe_ethdev.c
+++ b/drivers/net/pfe/pfe_ethdev.c
@@ -494,18 +494,6 @@ pfe_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	return 0;
 }
 
-static void
-pfe_rx_queue_release(void *q __rte_unused)
-{
-	PMD_INIT_FUNC_TRACE();
-}
-
-static void
-pfe_tx_queue_release(void *q __rte_unused)
-{
-	PMD_INIT_FUNC_TRACE();
-}
-
 static int
 pfe_tx_queue_setup(struct rte_eth_dev *dev,
 		   uint16_t queue_idx,
@@ -759,9 +747,7 @@ static const struct eth_dev_ops ops = {
 	.dev_configure = pfe_eth_configure,
 	.dev_infos_get = pfe_eth_info,
 	.rx_queue_setup = pfe_rx_queue_setup,
-	.rx_queue_release = pfe_rx_queue_release,
 	.tx_queue_setup = pfe_tx_queue_setup,
-	.tx_queue_release = pfe_tx_queue_release,
 	.dev_supported_ptypes_get = pfe_supported_ptypes_get,
 	.link_update  = pfe_eth_link_update,
 	.promiscuous_enable   = pfe_promiscuous_enable,
diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c
index 1faf38a714c..0440019e07e 100644
--- a/drivers/net/ring/rte_eth_ring.c
+++ b/drivers/net/ring/rte_eth_ring.c
@@ -225,8 +225,6 @@ eth_mac_addr_add(struct rte_eth_dev *dev __rte_unused,
 	return 0;
 }
 
-static void
-eth_queue_release(void *q __rte_unused) { ; }
 static int
 eth_link_update(struct rte_eth_dev *dev __rte_unused,
 		int wait_to_complete __rte_unused) { return 0; }
@@ -272,8 +270,6 @@ static const struct eth_dev_ops ops = {
 	.dev_infos_get = eth_dev_info,
 	.rx_queue_setup = eth_rx_queue_setup,
 	.tx_queue_setup = eth_tx_queue_setup,
-	.rx_queue_release = eth_queue_release,
-	.tx_queue_release = eth_queue_release,
 	.link_update = eth_link_update,
 	.stats_get = eth_stats_get,
 	.stats_reset = eth_stats_reset,
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index da1633d77e2..62c175a5d35 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -370,12 +370,6 @@ virtio_set_multiple_queues(struct rte_eth_dev *dev, uint16_t nb_queues)
 	return 0;
 }
 
-static void
-virtio_dev_queue_release(void *queue __rte_unused)
-{
-	/* do nothing */
-}
-
 static uint16_t
 virtio_get_nr_vq(struct virtio_hw *hw)
 {
@@ -968,9 +962,7 @@ static const struct eth_dev_ops virtio_eth_dev_ops = {
 	.rx_queue_setup          = virtio_dev_rx_queue_setup,
 	.rx_queue_intr_enable    = virtio_dev_rx_queue_intr_enable,
 	.rx_queue_intr_disable   = virtio_dev_rx_queue_intr_disable,
-	.rx_queue_release        = virtio_dev_queue_release,
 	.tx_queue_setup          = virtio_dev_tx_queue_setup,
-	.tx_queue_release        = virtio_dev_queue_release,
 	/* collect stats per queue */
 	.queue_stats_mapping_set = virtio_dev_queue_stats_mapping_set,
 	.vlan_filter_set         = virtio_vlan_filter_set,
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index daf5ca92422..4439ad336e2 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -889,6 +889,32 @@ eth_err(uint16_t port_id, int ret)
 	return ret;
 }
 
+static void
+eth_dev_rxq_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+	void **rxq = dev->data->rx_queues;
+
+	if (rxq[qid] == NULL)
+		return;
+
+	if (dev->dev_ops->rx_queue_release != NULL)
+		(*dev->dev_ops->rx_queue_release)(rxq[qid]);
+	rxq[qid] = NULL;
+}
+
+static void
+eth_dev_txq_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+	void **txq = dev->data->tx_queues;
+
+	if (txq[qid] == NULL)
+		return;
+
+	if (dev->dev_ops->tx_queue_release != NULL)
+		(*dev->dev_ops->tx_queue_release)(txq[qid]);
+	txq[qid] = NULL;
+}
+
 static int
 eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 {
@@ -905,12 +931,10 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 			return -(ENOMEM);
 		}
 	} else if (dev->data->rx_queues != NULL && nb_queues != 0) { /* re-configure */
-		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release, -ENOTSUP);
+		for (i = nb_queues; i < old_nb_queues; i++)
+			eth_dev_rxq_release(dev, i);
 
 		rxq = dev->data->rx_queues;
-
-		for (i = nb_queues; i < old_nb_queues; i++)
-			(*dev->dev_ops->rx_queue_release)(rxq[i]);
 		rxq = rte_realloc(rxq, sizeof(rxq[0]) * nb_queues,
 				RTE_CACHE_LINE_SIZE);
 		if (rxq == NULL)
@@ -925,12 +949,8 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 
 		dev->data->rx_queues = rxq;
 	} else if (dev->data->rx_queues != NULL && nb_queues == 0) {
-		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release, -ENOTSUP);
-
-		rxq = dev->data->rx_queues;
-
 		for (i = nb_queues; i < old_nb_queues; i++)
-			(*dev->dev_ops->rx_queue_release)(rxq[i]);
+			eth_dev_rxq_release(dev, i);
 
 		rte_free(dev->data->rx_queues);
 		dev->data->rx_queues = NULL;
@@ -1145,12 +1165,10 @@ eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 			return -(ENOMEM);
 		}
 	} else if (dev->data->tx_queues != NULL && nb_queues != 0) { /* re-configure */
-		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release, -ENOTSUP);
+		for (i = nb_queues; i < old_nb_queues; i++)
+			eth_dev_txq_release(dev, i);
 
 		txq = dev->data->tx_queues;
-
-		for (i = nb_queues; i < old_nb_queues; i++)
-			(*dev->dev_ops->tx_queue_release)(txq[i]);
 		txq = rte_realloc(txq, sizeof(txq[0]) * nb_queues,
 				  RTE_CACHE_LINE_SIZE);
 		if (txq == NULL)
@@ -1165,12 +1183,8 @@ eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 
 		dev->data->tx_queues = txq;
 	} else if (dev->data->tx_queues != NULL && nb_queues == 0) {
-		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release, -ENOTSUP);
-
-		txq = dev->data->tx_queues;
-
 		for (i = nb_queues; i < old_nb_queues; i++)
-			(*dev->dev_ops->tx_queue_release)(txq[i]);
+			eth_dev_txq_release(dev, i);
 
 		rte_free(dev->data->tx_queues);
 		dev->data->tx_queues = NULL;
@@ -2006,7 +2020,6 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
 	struct rte_eth_rxconf local_conf;
-	void **rxq;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -2110,13 +2123,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 			RTE_ETH_QUEUE_STATE_STOPPED))
 		return -EBUSY;
 
-	rxq = dev->data->rx_queues;
-	if (rxq[rx_queue_id]) {
-		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release,
-					-ENOTSUP);
-		(*dev->dev_ops->rx_queue_release)(rxq[rx_queue_id]);
-		rxq[rx_queue_id] = NULL;
-	}
+	eth_dev_rxq_release(dev, rx_queue_id);
 
 	if (rx_conf == NULL)
 		rx_conf = &dev_info.default_rxconf;
@@ -2189,7 +2196,6 @@ rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	int ret;
 	struct rte_eth_dev *dev;
 	struct rte_eth_hairpin_cap cap;
-	void **rxq;
 	int i;
 	int count;
 
@@ -2246,13 +2252,7 @@ rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	}
 	if (dev->data->dev_started)
 		return -EBUSY;
-	rxq = dev->data->rx_queues;
-	if (rxq[rx_queue_id] != NULL) {
-		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release,
-					-ENOTSUP);
-		(*dev->dev_ops->rx_queue_release)(rxq[rx_queue_id]);
-		rxq[rx_queue_id] = NULL;
-	}
+	eth_dev_rxq_release(dev, rx_queue_id);
 	ret = (*dev->dev_ops->rx_hairpin_queue_setup)(dev, rx_queue_id,
 						      nb_rx_desc, conf);
 	if (ret == 0)
@@ -2269,7 +2269,6 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
 	struct rte_eth_txconf local_conf;
-	void **txq;
 	int ret;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
@@ -2314,13 +2313,7 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
 			RTE_ETH_QUEUE_STATE_STOPPED))
 		return -EBUSY;
 
-	txq = dev->data->tx_queues;
-	if (txq[tx_queue_id]) {
-		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release,
-					-ENOTSUP);
-		(*dev->dev_ops->tx_queue_release)(txq[tx_queue_id]);
-		txq[tx_queue_id] = NULL;
-	}
+	eth_dev_txq_release(dev, tx_queue_id);
 
 	if (tx_conf == NULL)
 		tx_conf = &dev_info.default_txconf;
@@ -2368,7 +2361,6 @@ rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
 {
 	struct rte_eth_dev *dev;
 	struct rte_eth_hairpin_cap cap;
-	void **txq;
 	int i;
 	int count;
 	int ret;
@@ -2426,13 +2418,7 @@ rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
 	}
 	if (dev->data->dev_started)
 		return -EBUSY;
-	txq = dev->data->tx_queues;
-	if (txq[tx_queue_id] != NULL) {
-		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release,
-					-ENOTSUP);
-		(*dev->dev_ops->tx_queue_release)(txq[tx_queue_id]);
-		txq[tx_queue_id] = NULL;
-	}
+	eth_dev_txq_release(dev, tx_queue_id);
 	ret = (*dev->dev_ops->tx_hairpin_queue_setup)
 		(dev, tx_queue_id, nb_tx_desc, conf);
 	if (ret == 0)
-- 
2.33.0