From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 19 May 2020 09:56:53 +0800
From: Ye Xiaolong
To: Qiming Yang
Cc:
dev@dpdk.org, beilei.xing@intel.com
Message-ID: <20200519015653.GB38911@intel.com>
References: <20200513075630.98139-1-qiming.yang@intel.com> <20200518054553.19306-1-qiming.yang@intel.com>
In-Reply-To: <20200518054553.19306-1-qiming.yang@intel.com>
Subject: Re: [dpdk-dev] [PATCH v2] net/i40e: fix queue related exception handling

On 05/18, Qiming Yang wrote:
>Queue start failure and queue stop failure need different handling.
>When a queue fails to start, all subsequent actions should be aborted
>and the already-started queues should be cleared. But in the queue stop
>stage, one queue failing to stop should not prevent the remaining
>queues from being stopped. This patch fixes that issue in both PF and
>VF.
>
>Fixes: b6583ee40265 ("i40e: full VMDQ pools support")
>Fixes: 3f6a696f1054 ("i40evf: queue start and stop")

Need to cc stable.
>
>Signed-off-by: Qiming Yang
>---
> drivers/net/i40e/i40e_ethdev.c    | 116 ++++++++------------------------------
> drivers/net/i40e/i40e_ethdev_vf.c |   2 -
> drivers/net/i40e/i40e_rxtx.c      |  28 +++++++++
> 3 files changed, 53 insertions(+), 93 deletions(-)
>
>diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
>index 9fbda1c..024178d 100644
>--- a/drivers/net/i40e/i40e_ethdev.c
>+++ b/drivers/net/i40e/i40e_ethdev.c
>@@ -2276,6 +2276,7 @@ i40e_dev_start(struct rte_eth_dev *dev)
> 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
> 	uint32_t intr_vector = 0;
> 	struct i40e_vsi *vsi;
>+	uint16_t nb_rxq, nb_txq;
> 
> 	hw->adapter_stopped = 0;
> 
>@@ -2307,7 +2308,7 @@ i40e_dev_start(struct rte_eth_dev *dev)
> 	ret = i40e_dev_rxtx_init(pf);
> 	if (ret != I40E_SUCCESS) {
> 		PMD_DRV_LOG(ERR, "Failed to init rx/tx queues");
>-		goto err_up;
>+		return ret;
> 	}
> 
> 	/* Map queues with MSIX interrupt */
>@@ -2332,10 +2333,16 @@ i40e_dev_start(struct rte_eth_dev *dev)
> 	}
> 
> 	/* Enable all queues which have been configured */
>-	ret = i40e_dev_switch_queues(pf, TRUE);
>-	if (ret != I40E_SUCCESS) {
>-		PMD_DRV_LOG(ERR, "Failed to enable VSI");
>-		goto err_up;
>+	for (nb_rxq = 0; nb_rxq < dev->data->nb_rx_queues; nb_rxq++) {
>+		ret = i40e_dev_rx_queue_start(dev, nb_rxq);
>+		if (ret)
>+			goto rx_err;
>+	}
>+
>+	for (nb_txq = 0; nb_txq < dev->data->nb_tx_queues; nb_txq++) {
>+		ret = i40e_dev_tx_queue_start(dev, nb_txq);
>+		if (ret)
>+			goto tx_err;
> 	}
> 
> 	/* Enable receiving broadcast packets */
>@@ -2365,7 +2372,7 @@ i40e_dev_start(struct rte_eth_dev *dev)
> 		ret = i40e_aq_set_lb_modes(hw, dev->data->dev_conf.lpbk_mode, NULL);
> 		if (ret != I40E_SUCCESS) {
> 			PMD_DRV_LOG(ERR, "fail to set loopback link");
>-			goto err_up;
>+			goto tx_err;
> 		}
> 	}
> 
>@@ -2373,7 +2380,7 @@ i40e_dev_start(struct rte_eth_dev *dev)
> 	ret = i40e_apply_link_speed(dev);
> 	if (I40E_SUCCESS != ret) {
> 		PMD_DRV_LOG(ERR, "Fail to apply link setting");
>-		goto err_up;
>+		goto tx_err;
> 	}
> 
> 	if (!rte_intr_allow_others(intr_handle)) {
>@@ -2416,9 +2423,12 @@ i40e_dev_start(struct rte_eth_dev *dev)
> 
> 	return I40E_SUCCESS;
> 
>-err_up:
>-	i40e_dev_switch_queues(pf, FALSE);
>-	i40e_dev_clear_queues(dev);
>+tx_err:
>+	for (i = 0; i < nb_txq; i++)
>+		i40e_dev_tx_queue_stop(dev, i);
>+rx_err:
>+	for (i = 0; i < nb_rxq; i++)
>+		i40e_dev_rx_queue_stop(dev, i);
> 
> 	return ret;
> }
>@@ -2442,7 +2452,11 @@ i40e_dev_stop(struct rte_eth_dev *dev)
> 	}
> 
> 	/* Disable all queues */
>-	i40e_dev_switch_queues(pf, FALSE);
>+	for (i = 0; i < dev->data->nb_tx_queues; i++)
>+		i40e_dev_tx_queue_stop(dev, i);
>+
>+	for (i = 0; i < dev->data->nb_rx_queues; i++)
>+		i40e_dev_rx_queue_stop(dev, i);
> 
> 	/* un-map queues with interrupt registers */
> 	i40e_vsi_disable_queues_intr(main_vsi);
>@@ -6278,33 +6292,6 @@ i40e_switch_tx_queue(struct i40e_hw *hw, uint16_t q_idx, bool on)
> 	return I40E_SUCCESS;
> }
> 
>-/* Swith on or off the tx queues */
>-static int
>-i40e_dev_switch_tx_queues(struct i40e_pf *pf, bool on)
>-{
>-	struct rte_eth_dev_data *dev_data = pf->dev_data;
>-	struct i40e_tx_queue *txq;
>-	struct rte_eth_dev *dev = pf->adapter->eth_dev;
>-	uint16_t i;
>-	int ret;
>-
>-	for (i = 0; i < dev_data->nb_tx_queues; i++) {
>-		txq = dev_data->tx_queues[i];
>-		/* Don't operate the queue if not configured or
>-		 * if starting only per queue */
>-		if (!txq || !txq->q_set || (on && txq->tx_deferred_start))
>-			continue;
>-		if (on)
>-			ret = i40e_dev_tx_queue_start(dev, i);
>-		else
>-			ret = i40e_dev_tx_queue_stop(dev, i);
>-		if ( ret != I40E_SUCCESS)
>-			return ret;
>-	}
>-
>-	return I40E_SUCCESS;
>-}
>-
> int
> i40e_switch_rx_queue(struct i40e_hw *hw, uint16_t q_idx, bool on)
> {
>@@ -6356,59 +6343,6 @@ i40e_switch_rx_queue(struct i40e_hw *hw, uint16_t q_idx, bool on)
> 
> 	return I40E_SUCCESS;
> }
>-/* Switch on or off the rx queues */
>-static int
>-i40e_dev_switch_rx_queues(struct i40e_pf *pf, bool on)
>-{
>-	struct rte_eth_dev_data *dev_data = pf->dev_data;
>-	struct i40e_rx_queue *rxq;
>-	struct rte_eth_dev *dev = pf->adapter->eth_dev;
>-	uint16_t i;
>-	int ret;
>-
>-	for (i = 0; i < dev_data->nb_rx_queues; i++) {
>-		rxq = dev_data->rx_queues[i];
>-		/* Don't operate the queue if not configured or
>-		 * if starting only per queue */
>-		if (!rxq || !rxq->q_set || (on && rxq->rx_deferred_start))
>-			continue;
>-		if (on)
>-			ret = i40e_dev_rx_queue_start(dev, i);
>-		else
>-			ret = i40e_dev_rx_queue_stop(dev, i);
>-		if (ret != I40E_SUCCESS)
>-			return ret;
>-	}
>-
>-	return I40E_SUCCESS;
>-}
>-
>-/* Switch on or off all the rx/tx queues */
>-int
>-i40e_dev_switch_queues(struct i40e_pf *pf, bool on)
>-{
>-	int ret;
>-
>-	if (on) {
>-		/* enable rx queues before enabling tx queues */
>-		ret = i40e_dev_switch_rx_queues(pf, on);
>-		if (ret) {
>-			PMD_DRV_LOG(ERR, "Failed to switch rx queues");
>-			return ret;
>-		}
>-		ret = i40e_dev_switch_tx_queues(pf, on);
>-	} else {
>-		/* Stop tx queues before stopping rx queues */
>-		ret = i40e_dev_switch_tx_queues(pf, on);
>-		if (ret) {
>-			PMD_DRV_LOG(ERR, "Failed to switch tx queues");
>-			return ret;
>-		}
>-		ret = i40e_dev_switch_rx_queues(pf, on);
>-	}
>-
>-	return ret;
>-}
> 
> /* Initialize VSI for TX */
> static int
>diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
>index c34f520..2472610 100644
>--- a/drivers/net/i40e/i40e_ethdev_vf.c
>+++ b/drivers/net/i40e/i40e_ethdev_vf.c
>@@ -789,7 +789,6 @@ i40evf_stop_queues(struct rte_eth_dev *dev)
> 	for (i = 0; i < dev->data->nb_tx_queues; i++) {
> 		if (i40evf_dev_tx_queue_stop(dev, i) != 0) {
> 			PMD_DRV_LOG(ERR, "Fail to stop queue %u", i);
>-			return -1;
> 		}
> 	}
> 
>@@ -797,7 +796,6 @@ i40evf_stop_queues(struct rte_eth_dev *dev)
> 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> 		if (i40evf_dev_rx_queue_stop(dev, i) != 0) {
> 			PMD_DRV_LOG(ERR, "Fail to stop queue %u", i);
>-			return -1;
> 		}
> 	}
> 
>diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
>index f6d23c9..035eb35 100644
>--- a/drivers/net/i40e/i40e_rxtx.c
>+++ b/drivers/net/i40e/i40e_rxtx.c
>@@ -1570,6 +1570,15 @@ i40e_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
> 	PMD_INIT_FUNC_TRACE();
> 
> 	rxq = dev->data->rx_queues[rx_queue_id];
>+	if (!rxq || !rxq->q_set) {
>+		PMD_DRV_LOG(ERR, "RX queue %u not available or setup",
>+			    rx_queue_id);
>+		return -EINVAL;
>+	}
>+
>+	if (rxq->rx_deferred_start)
>+		PMD_DRV_LOG(WARNING, "RX queue %u is deferrd start",
>+			    rx_queue_id);
> 
> 	err = i40e_alloc_rx_queue_mbufs(rxq);
> 	if (err) {
>@@ -1602,6 +1611,11 @@ i40e_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
> 	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> 
> 	rxq = dev->data->rx_queues[rx_queue_id];
>+	if (!rxq || !rxq->q_set) {
>+		PMD_DRV_LOG(ERR, "RX queue %u not available or setup",
>+			    rx_queue_id);
>+		return -EINVAL;
>+	}
> 
> 	/*
> 	 * rx_queue_id is queue id application refers to, while
>@@ -1630,6 +1644,15 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
> 	PMD_INIT_FUNC_TRACE();
> 
> 	txq = dev->data->tx_queues[tx_queue_id];
>+	if (!txq || !txq->q_set) {
>+		PMD_DRV_LOG(ERR, "TX queue %u is not available or setup",
>+			    tx_queue_id);
>+		return -EINVAL;
>+	}
>+
>+	if (txq->tx_deferred_start)
>+		PMD_DRV_LOG(WARNING, "TX queue %u is deferrd start",
>+			    tx_queue_id);
> 
> 	/*
> 	 * tx_queue_id is queue id application refers to, while
>@@ -1654,6 +1677,11 @@ i40e_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
> 	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> 
> 	txq = dev->data->tx_queues[tx_queue_id];
>+	if (!txq || !txq->q_set) {
>+		PMD_DRV_LOG(ERR, "TX queue %u is not available or setup",
>+			    tx_queue_id);
>+		return -EINVAL;
>+	}
> 
> 	/*
> 	 * tx_queue_id is queue id application refers to, while
>-- 
>2.9.5
>

Acked-by: Xiaolong Ye

Applied to dpdk-next-net-intel, Thanks.
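
For reference, the asymmetric error handling the patch introduces (roll back already-started queues on a start failure, but press on through stop failures) can be sketched in isolation. The queue functions below are hypothetical stubs standing in for the per-queue start/stop callbacks, not the i40e API; one queue is rigged to fail on start so the rollback path is exercised:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define NB_QUEUES 4
#define FAIL_START_AT 2	/* queue rigged to fail on start */

static bool started[NB_QUEUES];

static int queue_start(int i)
{
	if (i == FAIL_START_AT)
		return -1;	/* simulated start failure */
	started[i] = true;
	return 0;
}

static int queue_stop(int i)
{
	started[i] = false;	/* stop always succeeds in this sketch */
	return 0;
}

/* Start path: abort on the first failure and stop every queue that was
 * already started, mirroring the tx_err/rx_err labels in the patch. */
static int dev_start(void)
{
	int i, nb_started;

	for (nb_started = 0; nb_started < NB_QUEUES; nb_started++) {
		if (queue_start(nb_started) != 0)
			goto err;
	}
	return 0;
err:
	for (i = 0; i < nb_started; i++)
		queue_stop(i);
	return -1;
}

/* Stop path: log a failure but keep stopping the remaining queues,
 * mirroring the removed early "return -1" in i40evf_stop_queues(). */
static void dev_stop(void)
{
	int i;

	for (i = 0; i < NB_QUEUES; i++) {
		if (queue_stop(i) != 0)
			fprintf(stderr, "Fail to stop queue %d\n", i);
		/* no early return: one failure must not strand the rest */
	}
}
```

With these stubs, dev_start() fails on queue 2 and leaves no queue running, which is exactly the invariant the PF change establishes: a failed start never leaves a partially started device behind.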