From: "Jiang, Cheng1" <cheng1.jiang@intel.com>
To: "Xia, Chenbo" <chenbo.xia@intel.com>,
	"maxime.coquelin@redhat.com" <maxime.coquelin@redhat.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, "Hu, Jiayu" <jiayu.hu@intel.com>,
	"Yang, YvonneX" <yvonnex.yang@intel.com>
Subject: Re: [dpdk-dev] [PATCH v4 1/5] vhost: fix async vhost ops return type
Date: Fri, 16 Jul 2021 05:58:08 +0000	[thread overview]
Message-ID: <SJ0PR11MB50060279E52DE539B193BB6FDC119@SJ0PR11MB5006.namprd11.prod.outlook.com> (raw)
In-Reply-To: <MN2PR11MB4063EFC973FA417EE902105A9C119@MN2PR11MB4063.namprd11.prod.outlook.com>

Hi Chenbo,

> -----Original Message-----
> From: Xia, Chenbo <chenbo.xia@intel.com>
> Sent: Friday, July 16, 2021 1:37 PM
> To: Jiang, Cheng1 <cheng1.jiang@intel.com>; maxime.coquelin@redhat.com
> Cc: dev@dpdk.org; Hu, Jiayu <jiayu.hu@intel.com>; Yang, YvonneX
> <yvonnex.yang@intel.com>
> Subject: RE: [PATCH v4 1/5] vhost: fix async vhost ops return type
> 
> Hi Cheng,
> 
> > -----Original Message-----
> > From: Jiang, Cheng1 <cheng1.jiang@intel.com>
> > Sent: Friday, July 16, 2021 10:59 AM
> > To: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>
> > Cc: dev@dpdk.org; Hu, Jiayu <jiayu.hu@intel.com>; Yang, YvonneX
> > <yvonnex.yang@intel.com>; Jiang, Cheng1 <cheng1.jiang@intel.com>
> > Subject: [PATCH v4 1/5] vhost: fix async vhost ops return type
> >
> > The async vhost ops callback should return -1 when there is something
> 
> Ops callback -> callback ops
> 
> Since the return value is redefined, let's update the ops description of
> struct rte_vhost_async_channel_ops. And I suggest returning a negative
> value on error, rather than only -1.
> 
Sure, agreed.
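
For the next version I'd update the @return descriptions in struct
rte_vhost_async_channel_ops roughly like below (a sketch only, the exact
wording is still to be finalized):

	/**
	 * @return
	 *  number of descs processed, negative value on error
	 */
	int32_t (*transfer_data)(int vid, uint16_t queue_id,
		struct rte_vhost_async_desc *descs,
		struct rte_vhost_async_status *opaque_data,
		uint16_t count);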

> > wrong in the callback, so the return type should be changed to
> > int32_t. The issue in the vhost example is also fixed in this patch.
> >
> > Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
> > ---
> >  examples/vhost/ioat.c       |  4 +--
> >  examples/vhost/ioat.h       |  4 +--
> >  lib/vhost/rte_vhost_async.h |  4 +--
> >  lib/vhost/virtio_net.c      | 58 ++++++++++++++++++++++++++++++++-----
> >  4 files changed, 56 insertions(+), 14 deletions(-)
> >
> > diff --git a/examples/vhost/ioat.c b/examples/vhost/ioat.c
> > index 2a2c2d7202..457f8171f0 100644
> > --- a/examples/vhost/ioat.c
> > +++ b/examples/vhost/ioat.c
> > @@ -122,7 +122,7 @@ open_ioat(const char *value)
> >  	return ret;
> >  }
> >
> > -uint32_t
> > +int32_t
> >  ioat_transfer_data_cb(int vid, uint16_t queue_id,
> >  		struct rte_vhost_async_desc *descs,
> >  		struct rte_vhost_async_status *opaque_data, uint16_t count)
> > @@ -168,7 +168,7 @@ ioat_transfer_data_cb(int vid, uint16_t queue_id,
> >  	return i_desc;
> >  }
> >
> > -uint32_t
> > +int32_t
> >  ioat_check_completed_copies_cb(int vid, uint16_t queue_id,
> >  		struct rte_vhost_async_status *opaque_data,
> >  		uint16_t max_packets)
> > diff --git a/examples/vhost/ioat.h b/examples/vhost/ioat.h
> > index 1aa28ed6a3..b57b5645b0 100644
> > --- a/examples/vhost/ioat.h
> > +++ b/examples/vhost/ioat.h
> > @@ -27,12 +27,12 @@ struct dma_for_vhost {
> >  #ifdef RTE_RAW_IOAT
> >  int open_ioat(const char *value);
> >
> > -uint32_t
> > +int32_t
> >  ioat_transfer_data_cb(int vid, uint16_t queue_id,
> >  		struct rte_vhost_async_desc *descs,
> >  		struct rte_vhost_async_status *opaque_data, uint16_t count);
> >
> > -uint32_t
> > +int32_t
> >  ioat_check_completed_copies_cb(int vid, uint16_t queue_id,
> >  		struct rte_vhost_async_status *opaque_data,
> >  		uint16_t max_packets);
> > diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
> > index 6faa31f5ad..bc81cd0caa 100644
> > --- a/lib/vhost/rte_vhost_async.h
> > +++ b/lib/vhost/rte_vhost_async.h
> > @@ -61,7 +61,7 @@ struct rte_vhost_async_channel_ops {
> >  	 * @return
> >  	 *  number of descs processed
> >  	 */
> > -	uint32_t (*transfer_data)(int vid, uint16_t queue_id,
> > +	int32_t (*transfer_data)(int vid, uint16_t queue_id,
> >  		struct rte_vhost_async_desc *descs,
> >  		struct rte_vhost_async_status *opaque_data,
> >  		uint16_t count);
> > @@ -78,7 +78,7 @@ struct rte_vhost_async_channel_ops {
> >  	 * @return
> >  	 *  number of async descs completed
> >  	 */
> > -	uint32_t (*check_completed_copies)(int vid, uint16_t queue_id,
> > +	int32_t (*check_completed_copies)(int vid, uint16_t queue_id,
> >  		struct rte_vhost_async_status *opaque_data,
> >  		uint16_t max_packets);
> >  };
> > diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
> > index b93482587c..8156796a46 100644
> > --- a/lib/vhost/virtio_net.c
> > +++ b/lib/vhost/virtio_net.c
> > @@ -1528,6 +1528,7 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev,
> >  	struct async_inflight_info *pkts_info = vq->async_pkts_info;
> >  	uint32_t n_pkts = 0, pkt_err = 0;
> >  	uint32_t num_async_pkts = 0, num_done_pkts = 0;
> > +	int32_t n_enq;
> >  	struct {
> >  		uint16_t pkt_idx;
> >  		uint16_t last_avail_idx;
> > @@ -1608,8 +1609,16 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev,
> >  		if (unlikely(pkt_burst_idx >= VHOST_ASYNC_BATCH_THRESHOLD ||
> >  			((VHOST_MAX_ASYNC_VEC >> 1) - segs_await <
> >  			BUF_VECTOR_MAX))) {
> > -			n_pkts = vq->async_ops.transfer_data(dev->vid,
> > +			n_enq = vq->async_ops.transfer_data(dev->vid,
> >  					queue_id, tdes, 0, pkt_burst_idx);
> > +			if (n_enq >= 0) {
> > +				n_pkts = n_enq;
> > +			} else {
> > +				VHOST_LOG_DATA(ERR, "(%d) %s: wrong opaque data for queue id %d.\n",
> 
> You can't assume the error is caused by wrong opaque data, because
> implementations of the callback may differ.
> 
> It's better to replace 'n_enq' with 'n_xfer', as we use the name 'transfer'
> in the callback definition.
> 
> If you agree with the above, please also change it in the other funcs below.

Sure, agreed. It will be fixed in the next version.
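
Roughly, the fixed block would look like below (a sketch; the final log
wording may still change):

	n_xfer = vq->async_ops.transfer_data(dev->vid,
			queue_id, tdes, 0, pkt_burst_idx);
	if (n_xfer >= 0) {
		n_pkts = n_xfer;
	} else {
		/* tentative message, final wording TBD */
		VHOST_LOG_DATA(ERR, "(%d) %s: failed to transfer data for queue id %d.\n",
			dev->vid, __func__, queue_id);
		n_pkts = 0;
	}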

> 
> > +					dev->vid, __func__, queue_id);
> > +				n_pkts = 0;
> > +			}
> > +
> >  			iovec_idx = 0;
> >  			it_idx = 0;
> >
> > @@ -1632,8 +1641,15 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev,
> >  	}
> >
> >  	if (pkt_burst_idx) {
> > -		n_pkts = vq->async_ops.transfer_data(dev->vid,
> > -				queue_id, tdes, 0, pkt_burst_idx);
> > +		n_enq = vq->async_ops.transfer_data(dev->vid, queue_id, tdes, 0, pkt_burst_idx);
> > +		if (n_enq >= 0) {
> > +			n_pkts = n_enq;
> > +		} else {
> > +			VHOST_LOG_DATA(ERR, "(%d) %s: wrong opaque data for queue id %d.\n",
> > +				dev->vid, __func__, queue_id);
> > +			n_pkts = 0;
> > +		}
> > +
> >  		vq->async_pkts_inflight_n += n_pkts;
> >
> >  		if (unlikely(n_pkts < pkt_burst_idx))
> > @@ -1903,6 +1919,7 @@ virtio_dev_rx_async_submit_packed(struct virtio_net *dev,
> >  	uint16_t async_descs_idx = 0;
> >  	uint16_t num_buffers;
> >  	uint16_t num_descs;
> > +	int32_t n_enq;
> >
> >  	struct rte_vhost_iov_iter *it_pool = vq->it_pool;
> >  	struct iovec *vec_pool = vq->vec_pool;
> > @@ -1983,8 +2000,16 @@ virtio_dev_rx_async_submit_packed(struct virtio_net *dev,
> >  		 */
> >  		if (unlikely(pkt_burst_idx >= VHOST_ASYNC_BATCH_THRESHOLD ||
> >  			((VHOST_MAX_ASYNC_VEC >> 1) - segs_await < BUF_VECTOR_MAX))) {
> > -			n_pkts = vq->async_ops.transfer_data(dev->vid, queue_id,
> > -				tdes, 0, pkt_burst_idx);
> > +			n_enq = vq->async_ops.transfer_data(dev->vid,
> > +				queue_id, tdes, 0, pkt_burst_idx);
> > +			if (n_enq >= 0) {
> > +				n_pkts = n_enq;
> > +			} else {
> > +					VHOST_LOG_DATA(ERR, "(%d) %s: wrong opaque data for queue id %d.\n",
> > +					dev->vid, __func__, queue_id);
> > +				n_pkts = 0;
> > +			}
> > +
> >  			iovec_idx = 0;
> >  			it_idx = 0;
> >  			segs_await = 0;
> > @@ -2006,7 +2031,15 @@ virtio_dev_rx_async_submit_packed(struct virtio_net *dev,
> >  	} while (pkt_idx < count);
> >
> >  	if (pkt_burst_idx) {
> > -		n_pkts = vq->async_ops.transfer_data(dev->vid, queue_id, tdes, 0, pkt_burst_idx);
> > +		n_enq = vq->async_ops.transfer_data(dev->vid, queue_id, tdes, 0, pkt_burst_idx);
> > +		if (n_enq >= 0) {
> > +			n_pkts = n_enq;
> > +		} else {
> > +			VHOST_LOG_DATA(ERR, "(%d) %s: wrong opaque data for queue id %d.\n",
> > +				dev->vid, __func__, queue_id);
> > +			n_pkts = 0;
> > +		}
> > +
> >  		vq->async_pkts_inflight_n += n_pkts;
> >
> >  		if (unlikely(n_pkts < pkt_burst_idx))
> > @@ -2091,6 +2124,7 @@ uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
> >  	uint16_t start_idx, pkts_idx, vq_size;
> >  	struct async_inflight_info *pkts_info;
> >  	uint16_t from, i;
> > +	int32_t n_poll;
> >
> >  	if (!dev)
> >  		return 0;
> > @@ -2118,9 +2152,17 @@ uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
> >  	start_idx = virtio_dev_rx_async_get_info_idx(pkts_idx,
> >  		vq_size, vq->async_pkts_inflight_n);
> >
> > -	if (count > vq->async_last_pkts_n)
> > -		n_pkts_cpl = vq->async_ops.check_completed_copies(vid,
> > +	if (count > vq->async_last_pkts_n) {
> > +		n_poll = vq->async_ops.check_completed_copies(vid,
> >  			queue_id, 0, count - vq->async_last_pkts_n);
> 
> The name 'n_poll' is not related to the callback name. Maybe 'n_cpl'?
> 
> > +		if (n_poll >= 0) {
> > +			n_pkts_cpl = n_poll;
> > +		} else {
> > +			VHOST_LOG_DATA(ERR, "(%d) %s: wrong opaque data for queue id %d.\n",
> 
> I suggest using different log messages for submit and check-complete, so
> that it's easier for users to know what went wrong.

Agreed, it will be fixed in the next version.
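
For example, the check-complete path could become something like this
(a sketch; naming and log wording may still change):

	n_cpl = vq->async_ops.check_completed_copies(vid,
		queue_id, 0, count - vq->async_last_pkts_n);
	if (n_cpl >= 0) {
		n_pkts_cpl = n_cpl;
	} else {
		/* tentative message, distinct from the submit path */
		VHOST_LOG_DATA(ERR, "(%d) %s: failed to check completed copies for queue id %d.\n",
			dev->vid, __func__, queue_id);
		n_pkts_cpl = 0;
	}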

Thanks,
Cheng

> 
> Thanks,
> Chenbo
> 
> > +				dev->vid, __func__, queue_id);
> > +			n_pkts_cpl = 0;
> > +		}
> > +	}
> >  	n_pkts_cpl += vq->async_last_pkts_n;
> >
> >  	n_pkts_put = RTE_MIN(count, n_pkts_cpl);
> > --
> > 2.29.2

