Date: Tue, 10 Sep 2019 14:16:47 +0800
From: Tiwei Bie
To: Marvin Liu
Cc: dev@dpdk.org, maxime.coquelin@redhat.com, zhihong.wang@intel.com
Subject: Re: [dpdk-dev] [PATCH 2/2] net/virtio: on demand cleanup when doing in order xmit
Message-ID: <20190910061647.GA13119@___>
References: <20190827102407.65106-1-yong.liu@intel.com> <20190827102407.65106-2-yong.liu@intel.com>
In-Reply-To: <20190827102407.65106-2-yong.liu@intel.com>

On Tue, Aug 27, 2019 at 06:24:07PM +0800, Marvin Liu wrote:
> Check whether freed descriptors are enough before enqueue operation.
> If more space is needed, will try to cleanup used ring on demand. It
> can give more chances to cleanup used ring, thus help RFC2544 perf.
>
> Signed-off-by: Marvin Liu
> ---
>  drivers/net/virtio/virtio_rxtx.c | 73 +++++++++++++++++++++++---------
>  1 file changed, 54 insertions(+), 19 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
> index 5d4ed524e..550b0aa62 100644
> --- a/drivers/net/virtio/virtio_rxtx.c
> +++ b/drivers/net/virtio/virtio_rxtx.c
> @@ -317,7 +317,7 @@ virtio_xmit_cleanup(struct virtqueue *vq, uint16_t num)
>  }
>
>  /* Cleanup from completed inorder transmits. */
> -static void
> +static __rte_always_inline void
>  virtio_xmit_cleanup_inorder(struct virtqueue *vq, uint16_t num)
>  {
>          uint16_t i, idx = vq->vq_used_cons_idx;
> @@ -2152,6 +2152,21 @@ virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
>          return nb_tx;
>  }
>
> +static __rte_always_inline int
> +virtio_xmit_try_cleanup_inorder(struct virtqueue *vq, uint16_t need)
> +{
> +        uint16_t nb_used;
> +        struct virtio_hw *hw = vq->hw;
> +
> +        nb_used = VIRTQUEUE_NUSED(vq);
> +        virtio_rmb(hw->weak_barriers);
> +        need = RTE_MIN(need, (int)nb_used);
> +
> +        virtio_xmit_cleanup_inorder(vq, need);
> +
> +        return (need - vq->vq_free_cnt);

It's possible that `need` has already been changed by

        need = RTE_MIN(need, (int)nb_used);

so it no longer reflects the actual need. Besides, you are passing
(nb_inorder_pkts - vq->vq_free_cnt) as the `need`, so you can't subtract
vq->vq_free_cnt again here to tell whether the need has been met.
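Just to illustrate the idea (an untested sketch, not meant to be the
final code): one way to avoid both issues is to record the total number
of descriptors the caller needs before doing the cleanup, and derive the
remaining shortfall from that instead of reusing the clobbered `need`,
e.g. something along these lines:

static __rte_always_inline int
virtio_xmit_try_cleanup_inorder(struct virtqueue *vq, uint16_t need)
{
        uint16_t nb_used, nb_clean, nb_descs;
        struct virtio_hw *hw = vq->hw;

        /* total descriptors the caller needs, recorded before cleanup */
        nb_descs = vq->vq_free_cnt + need;

        nb_used = VIRTQUEUE_NUSED(vq);
        virtio_rmb(hw->weak_barriers);

        /* clean up at most what the used ring currently holds */
        nb_clean = RTE_MIN(need, nb_used);
        virtio_xmit_cleanup_inorder(vq, nb_clean);

        /* shortfall still left after the cleanup; <= 0 means satisfied */
        return nb_descs - vq->vq_free_cnt;
}

With that, the callers only have to check whether the returned value is
still greater than zero.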
> +}
> +
>  uint16_t
>  virtio_xmit_pkts_inorder(void *tx_queue,
>                          struct rte_mbuf **tx_pkts,
> @@ -2161,8 +2176,9 @@ virtio_xmit_pkts_inorder(void *tx_queue,
>          struct virtqueue *vq = txvq->vq;
>          struct virtio_hw *hw = vq->hw;
>          uint16_t hdr_size = hw->vtnet_hdr_size;
> -        uint16_t nb_used, nb_avail, nb_tx = 0, nb_inorder_pkts = 0;
> +        uint16_t nb_used, nb_tx = 0, nb_inorder_pkts = 0;
>          struct rte_mbuf *inorder_pkts[nb_pkts];
> +        int need, nb_left;
>
>          if (unlikely(hw->started == 0 && tx_pkts != hw->inject_pkts))
>                  return nb_tx;
> @@ -2175,17 +2191,12 @@ virtio_xmit_pkts_inorder(void *tx_queue,
>          nb_used = VIRTQUEUE_NUSED(vq);
>
>          virtio_rmb(hw->weak_barriers);
> -        if (likely(nb_used > vq->vq_nentries - vq->vq_free_thresh))
> -                virtio_xmit_cleanup_inorder(vq, nb_used);
> -
> -        if (unlikely(!vq->vq_free_cnt))
> +        if (likely(nb_used > (vq->vq_nentries - vq->vq_free_thresh)))
>                  virtio_xmit_cleanup_inorder(vq, nb_used);
> -        nb_avail = RTE_MIN(vq->vq_free_cnt, nb_pkts);
> -
> -        for (nb_tx = 0; nb_tx < nb_avail; nb_tx++) {
> +        for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
>                  struct rte_mbuf *txm = tx_pkts[nb_tx];
> -                int slots, need;
> +                int slots;
>
>                  /* optimize ring usage */
>                  if ((vtpci_with_feature(hw, VIRTIO_F_ANY_LAYOUT) ||
> @@ -2203,6 +2214,22 @@ virtio_xmit_pkts_inorder(void *tx_queue,
>                  }
>
>                  if (nb_inorder_pkts) {
> +                        need = nb_inorder_pkts - vq->vq_free_cnt;
> +
> +

There is no need to add blank lines here.

> +                        if (unlikely(need > 0)) {
> +                                nb_left = virtio_xmit_try_cleanup_inorder(vq,
> +                                                                need);
> +
> +                                if (unlikely(nb_left > 0)) {
> +                                        PMD_TX_LOG(ERR,
> +                                                "No free tx descriptors to "
> +                                                "transmit");
> +                                        nb_inorder_pkts = vq->vq_free_cnt;

You need to handle nb_tx as well.
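For illustration only (untested, and assuming nb_left really is the
remaining shortfall, see the comment on the helper above), this break
path could mirror what the tail hunk further below already does, so that
the batched packets which never get enqueued are not counted in nb_tx:

                                if (unlikely(nb_left > 0)) {
                                        PMD_TX_LOG(ERR,
                                                "No free tx descriptors to "
                                                "transmit");
                                        /* only vq->vq_free_cnt of the batched
                                         * packets can be enqueued; drop the
                                         * rest from the tx count as well */
                                        nb_inorder_pkts = vq->vq_free_cnt;
                                        nb_tx -= nb_left;
                                        break;
                                }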
> +                                        break;
> +                                }
> +                        }
> +
>                          virtqueue_enqueue_xmit_inorder(txvq, inorder_pkts,
>                                                  nb_inorder_pkts);
>                          nb_inorder_pkts = 0;
> @@ -2211,15 +2238,9 @@
>                  slots = txm->nb_segs + 1;
>                  need = slots - vq->vq_free_cnt;
>                  if (unlikely(need > 0)) {
> -                        nb_used = VIRTQUEUE_NUSED(vq);
> -                        virtio_rmb(hw->weak_barriers);
> -                        need = RTE_MIN(need, (int)nb_used);
> +                        nb_left = virtio_xmit_try_cleanup_inorder(vq, need);
> -                        virtio_xmit_cleanup_inorder(vq, need);
> -
> -                        need = slots - vq->vq_free_cnt;
> -
> -                        if (unlikely(need > 0)) {
> +                        if (unlikely(nb_left > 0)) {
>                                  PMD_TX_LOG(ERR,
>                                          "No free tx descriptors to transmit");
>                                  break;
> @@ -2232,9 +2253,23 @@
>          }
>
>          /* Transmit all inorder packets */
> -        if (nb_inorder_pkts)
> +        if (nb_inorder_pkts) {
> +                need = nb_inorder_pkts - vq->vq_free_cnt;
> +
> +                if (unlikely(need > 0)) {
> +                        nb_left = virtio_xmit_try_cleanup_inorder(vq, need);
> +
> +                        if (unlikely(nb_left > 0)) {
> +                                PMD_TX_LOG(ERR,
> +                                        "No free tx descriptors to transmit");
> +                                nb_inorder_pkts = vq->vq_free_cnt;
> +                                nb_tx -= nb_left;
> +                        }
> +                }
> +
>                  virtqueue_enqueue_xmit_inorder(txvq, inorder_pkts,
>                                          nb_inorder_pkts);
> +        }
>
>          txvq->stats.packets += nb_tx;
>
> --
> 2.17.1
>