From: "Michael S. Tsirkin" <mst@redhat.com>
To: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Cc: virtualization@lists.linux-foundation.org,
	netdev@vger.kernel.org, Jason Wang <jasowang@redhat.com>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	John Fastabend <john.fastabend@gmail.com>,
	bpf@vger.kernel.org
Subject: Re: [PATCH v2 07/12] virtio: queue_reset: pci: support VIRTIO_F_RING_RESET
Date: Thu, 20 Jan 2022 05:55:14 -0500
Message-ID: <20220120055227-mutt-send-email-mst@kernel.org>
In-Reply-To: <20220120064303.106639-8-xuanzhuo@linux.alibaba.com>

On Thu, Jan 20, 2022 at 02:42:58PM +0800, Xuan Zhuo wrote:
> This patch implements virtio pci support for QUEUE RESET.
> 
> Resetting a queue is divided into two steps:
> 
> 1. reset_vq: reset one vq
> 2. enable_reset_vq: re-enable the queue after the reset
> 
> In the first step, these tasks will be completed:
>    1. notify the hardware queue to reset
>    2. recycle the buffer from vq
>    3. delete the vq
> 
> When deleting a vq, vp_del_vq() is called to release all of the vq's
> memory. This does not affect the later teardown of all vqs
> (vp_del_vqs()), because that path walks the list of vqs, and the
> reset vq has already been removed from that list.
> 
> Deleting the vq also frees its info and the vq itself, so I save
> msix_vec in vp_dev->vqs[queue_index]. When the queue is re-enabled,
> that msix_vec can be reused, and intx_enabled decides which method is
> used to enable the queue.
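For reference, the low-bit tagging described above (and implemented by
the VQ_RESET_* macros below) boils down to roughly the following; this
is a standalone, untested sketch with made-up names, not the patch code:

    #include <stdint.h>

    /*
     * The slot normally holds a struct virtio_pci_vq_info pointer, which
     * is at least 4-byte aligned, so bits 0-1 are free: bit 0 set means
     * "vq deleted", and the saved MSI-X vector sits in the bits above.
     */
    static inline void *vq_slot_mark_deleted(uint16_t msix_vec)
    {
            return (void *)(((unsigned long)msix_vec << 2) | 1UL);
    }

    static inline int vq_slot_is_deleted(const void *slot)
    {
            return (unsigned long)slot & 1UL;
    }

    static inline uint16_t vq_slot_saved_vector(const void *slot)
    {
            return (uint16_t)((unsigned long)slot >> 2);
    }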
> 
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>

There's something I don't understand here. It looks like
you assume that when you reset a queue, you also
reset the mapping from queue to event vector.
The spec does not say it should, and I don't think it's
useful to extend the spec to do that - we already have a simple
way to tweak the mapping.

Avoid doing that, and things will be much easier, with no need
to interact with a transport, won't they?
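FWIW, re-pointing a queue at a different MSI-X vector is already just a
queue_msix_vector write through the common config, roughly along these
lines (untested sketch, assuming the existing vp_modern_queue_vector()
helper):

    static int repoint_vq_vector(struct virtio_pci_modern_device *mdev,
                                 u16 queue_index, u16 msix_vec)
    {
            /*
             * Selects the queue, writes queue_msix_vector and reads it
             * back; the device answers VIRTIO_MSI_NO_VECTOR on failure.
             */
            if (vp_modern_queue_vector(mdev, queue_index, msix_vec) ==
                VIRTIO_MSI_NO_VECTOR)
                    return -EBUSY;

            return 0;
    }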


> ---
>  drivers/virtio/virtio_pci_common.c | 49 ++++++++++++++++++++
>  drivers/virtio/virtio_pci_common.h |  4 ++
>  drivers/virtio/virtio_pci_modern.c | 73 ++++++++++++++++++++++++++++++
>  3 files changed, 126 insertions(+)
> 
> diff --git a/drivers/virtio/virtio_pci_common.c b/drivers/virtio/virtio_pci_common.c
> index 5afe207ce28a..28b5ffde4621 100644
> --- a/drivers/virtio/virtio_pci_common.c
> +++ b/drivers/virtio/virtio_pci_common.c
> @@ -464,6 +464,55 @@ int vp_find_vqs(struct virtio_device *vdev, unsigned nvqs,
>  	return vp_find_vqs_intx(vdev, nvqs, vqs, callbacks, names, ctx);
>  }
>  
> +#define VQ_IS_DELETED(vp_dev, idx) ((unsigned long)vp_dev->vqs[idx] & 1)
> +#define VQ_RESET_MSIX_VEC(vp_dev, idx) ((unsigned long)vp_dev->vqs[idx] >> 2)
> +#define VQ_RESET_MARK(msix_vec) ((void *)(long)((msix_vec << 2) + 1))
> +
> +void vp_del_reset_vq(struct virtio_device *vdev, u16 queue_index)
> +{
> +	struct virtio_pci_device *vp_dev = to_vp_device(vdev);
> +	struct virtio_pci_vq_info *info;
> +	u16 msix_vec;
> +
> +	info = vp_dev->vqs[queue_index];
> +
> +	msix_vec = info->msix_vector;
> +
> +	/* delete vq */
> +	vp_del_vq(info->vq);
> +
> +	/* Mark that the vq has been deleted, and save the msix_vec. */
> +	vp_dev->vqs[queue_index] = VQ_RESET_MARK(msix_vec);
> +}
> +
> +struct virtqueue *vp_enable_reset_vq(struct virtio_device *vdev,
> +				     int queue_index,
> +				     vq_callback_t *callback,
> +				     const char *name,
> +				     const bool ctx)
> +{
> +	struct virtio_pci_device *vp_dev = to_vp_device(vdev);
> +	struct virtqueue *vq;
> +	u16 msix_vec;
> +
> +	if (!VQ_IS_DELETED(vp_dev, queue_index))
> +		return ERR_PTR(-EPERM);
> +
> +	msix_vec = VQ_RESET_MSIX_VEC(vp_dev, queue_index);
> +
> +	if (vp_dev->intx_enabled)
> +		vq = vp_setup_vq(vdev, queue_index, callback, name, ctx,
> +				 VIRTIO_MSI_NO_VECTOR);
> +	else
> +		vq = vp_enable_vq_msix(vdev, queue_index, callback, name, ctx,
> +				       msix_vec);
> +
> +	if (IS_ERR(vq))
> +		vp_dev->vqs[queue_index] = VQ_RESET_MARK(msix_vec);
> +
> +	return vq;
> +}
> +
>  const char *vp_bus_name(struct virtio_device *vdev)
>  {
>  	struct virtio_pci_device *vp_dev = to_vp_device(vdev);
> diff --git a/drivers/virtio/virtio_pci_common.h b/drivers/virtio/virtio_pci_common.h
> index 23f6c5c678d5..96c13b1398f8 100644
> --- a/drivers/virtio/virtio_pci_common.h
> +++ b/drivers/virtio/virtio_pci_common.h
> @@ -115,6 +115,10 @@ int vp_find_vqs(struct virtio_device *vdev, unsigned nvqs,
>  		struct virtqueue *vqs[], vq_callback_t *callbacks[],
>  		const char * const names[], const bool *ctx,
>  		struct irq_affinity *desc);
> +void vp_del_reset_vq(struct virtio_device *vdev, u16 queue_index);
> +struct virtqueue *vp_enable_reset_vq(struct virtio_device *vdev, int queue_index,
> +				     vq_callback_t *callback, const char *name,
> +				     const bool ctx);
>  const char *vp_bus_name(struct virtio_device *vdev);
>  
>  /* Setup the affinity for a virtqueue:
> diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c
> index 5455bc041fb6..fbf87239c920 100644
> --- a/drivers/virtio/virtio_pci_modern.c
> +++ b/drivers/virtio/virtio_pci_modern.c
> @@ -34,6 +34,9 @@ static void vp_transport_features(struct virtio_device *vdev, u64 features)
>  	if ((features & BIT_ULL(VIRTIO_F_SR_IOV)) &&
>  			pci_find_ext_capability(pci_dev, PCI_EXT_CAP_ID_SRIOV))
>  		__virtio_set_bit(vdev, VIRTIO_F_SR_IOV);
> +
> +	if (features & BIT_ULL(VIRTIO_F_RING_RESET))
> +		__virtio_set_bit(vdev, VIRTIO_F_RING_RESET);
>  }
>  
>  /* virtio config->finalize_features() implementation */
> @@ -176,6 +179,72 @@ static void vp_reset(struct virtio_device *vdev)
>  	vp_disable_cbs(vdev);
>  }
>  
> +static int vp_modern_reset_vq(struct virtio_device *vdev, u16 queue_index,
> +			      vq_reset_callback_t *callback, void *data)
> +{
> +	struct virtio_pci_device *vp_dev = to_vp_device(vdev);
> +	struct virtio_pci_modern_device *mdev = &vp_dev->mdev;
> +	struct virtio_pci_vq_info *info;
> +	u16 msix_vec;
> +	void *buf;
> +
> +	if (!virtio_has_feature(vdev, VIRTIO_F_RING_RESET))
> +		return -ENOENT;
> +
> +	vp_modern_set_queue_reset(mdev, queue_index);
> +
> +	/* After writing 1 to queue_reset, the driver MUST wait for a read of
> +	 * queue_reset to return 1.
> +	 */
> +	while (vp_modern_get_queue_reset(mdev, queue_index) != 1)
> +		msleep(1);
> +
> +	info = vp_dev->vqs[queue_index];
> +	msix_vec = info->msix_vector;
> +
> +	/* Disable VQ callback. */
> +	if (vp_dev->per_vq_vectors && msix_vec != VIRTIO_MSI_NO_VECTOR)
> +		disable_irq(pci_irq_vector(vp_dev->pci_dev, msix_vec));
> +
> +	while ((buf = virtqueue_detach_unused_buf(info->vq)) != NULL)
> +		callback(vdev, buf, data);
> +
> +	vp_del_reset_vq(vdev, queue_index);
> +
> +	return 0;
> +}
> +
> +static struct virtqueue *vp_modern_enable_reset_vq(struct virtio_device *vdev,
> +						   u16 queue_index,
> +						   vq_callback_t *callback,
> +						   const char *name,
> +						   const bool *ctx)
> +{
> +	struct virtio_pci_device *vp_dev = to_vp_device(vdev);
> +	struct virtio_pci_modern_device *mdev = &vp_dev->mdev;
> +	struct virtqueue *vq;
> +	u16 msix_vec;
> +
> +	if (!virtio_has_feature(vdev, VIRTIO_F_RING_RESET))
> +		return ERR_PTR(-ENOENT);
> +
> +	/* check queue reset status */
> +	if (vp_modern_get_queue_reset(mdev, queue_index) != 1)
> +		return ERR_PTR(-EBUSY);
> +
> +	vq = vp_enable_reset_vq(vdev, queue_index, callback, name, ctx);
> +	if (IS_ERR(vq))
> +		return vq;
> +
> +	vp_modern_set_queue_enable(&vp_dev->mdev, vq->index, true);
> +
> +	msix_vec = vp_dev->vqs[queue_index]->msix_vector;
> +	if (vp_dev->per_vq_vectors && msix_vec != VIRTIO_MSI_NO_VECTOR)
> +		enable_irq(pci_irq_vector(vp_dev->pci_dev, msix_vec));
> +
> +	return vq;
> +}
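For context, the driver-side flow these two ops are meant to enable
would look roughly like the sketch below. Only the
reset_vq/enable_reset_vq config ops are from this series; every other
name is a placeholder for illustration (the real wrappers are added by
the "add helper" patch later in the series):

    /* Hypothetical recycle callback, matching how reset_vq invokes it. */
    static void recycle_buf(struct virtio_device *vdev, void *buf, void *data)
    {
            /* hand the detached buffer back to the driver's own pool */
    }

    static int reset_and_reenable_vq(struct virtio_device *vdev,
                                     struct virtqueue *vq,
                                     vq_callback_t *cb, const char *name)
    {
            u16 index = vq->index;
            int err;

            /* step 1: reset the vq and recycle its unused buffers */
            err = vdev->config->reset_vq(vdev, index, recycle_buf, NULL);
            if (err)
                    return err;

            /* ... adjust ring size, buffer sizes, etc. here ... */

            /* step 2: re-enable the queue (a real caller would keep the
             * returned vq around, of course)
             */
            vq = vdev->config->enable_reset_vq(vdev, index, cb, name, NULL);

            return PTR_ERR_OR_ZERO(vq);
    }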
> +
>  static u16 vp_config_vector(struct virtio_pci_device *vp_dev, u16 vector)
>  {
>  	return vp_modern_config_vector(&vp_dev->mdev, vector);
> @@ -395,6 +464,8 @@ static const struct virtio_config_ops virtio_pci_config_nodev_ops = {
>  	.set_vq_affinity = vp_set_vq_affinity,
>  	.get_vq_affinity = vp_get_vq_affinity,
>  	.get_shm_region  = vp_get_shm_region,
> +	.reset_vq	 = vp_modern_reset_vq,
> +	.enable_reset_vq = vp_modern_enable_reset_vq,
>  };
>  
>  static const struct virtio_config_ops virtio_pci_config_ops = {
> @@ -413,6 +484,8 @@ static const struct virtio_config_ops virtio_pci_config_ops = {
>  	.set_vq_affinity = vp_set_vq_affinity,
>  	.get_vq_affinity = vp_get_vq_affinity,
>  	.get_shm_region  = vp_get_shm_region,
> +	.reset_vq	 = vp_modern_reset_vq,
> +	.enable_reset_vq = vp_modern_enable_reset_vq,
>  };
>  
>  /* the PCI probing function */
> -- 
> 2.31.0

