From: Jason Wang <jasowang@redhat.com>
To: Eli Cohen <elic@nvidia.com>,
	mst@redhat.com, parav@nvidia.com, si-wei.liu@oracle.com,
	virtualization@lists.linux-foundation.org,
	netdev@vger.kernel.org
Cc: stable@vger.kernel.org
Subject: Re: [PATCH 5/5] vdpa/mlx5: Fix suspend/resume index restoration
Date: Thu, 8 Apr 2021 17:45:54 +0800
Message-ID: <a5356a13-6d7d-8086-bfff-ff869aec5449@redhat.com>
In-Reply-To: <20210408091047.4269-6-elic@nvidia.com>


On 2021/4/8 5:10 PM, Eli Cohen wrote:
> When we suspend the VM, the VDPA interface will be reset. When the VM is
> resumed again, clear_virtqueues() will clear the available and used
> indices, causing the hardware virtqueue objects to go out of sync.
> We can avoid this function altogether since QEMU will clear them if
> required, e.g. when the VM goes through a reboot.
>
> Moreover, since the hardware available and used indices should always be
> identical on query and should be restored to the same value for
> virtqueues that complete in order, we set both to the single value
> provided by set_vq_state(). In get_vq_state() we return the value of the
> hardware used index.
>
> Fixes: b35ccebe3ef7 ("vdpa/mlx5: Restore the hardware used index after change map")
> Fixes: 1a86b377aa21 ("vdpa/mlx5: Add VDPA driver for supported mlx5 devices")
> Signed-off-by: Eli Cohen <elic@nvidia.com>
> ---


Acked-by: Jason Wang <jasowang@redhat.com>
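
As a side note for anyone reading along, the index handling the commit
message describes can be summarized with a minimal self-contained model
(plain C with hypothetical names, not the driver code): for a virtqueue
that completes in order the hardware available and used indices always
match, so a single saved value is enough to restore both, and the used
index can be reported back on query.

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for the per-virtqueue state kept by the driver. */
struct model_vq {
	uint16_t avail_idx;
	uint16_t used_idx;
};

/* set_vq_state()-style restore: one saved index programs both values. */
static void model_set_vq_state(struct model_vq *vq, uint16_t saved_index)
{
	vq->avail_idx = saved_index;
	vq->used_idx = saved_index;
}

/* get_vq_state()-style query: report the used index, which for in-order
 * completion equals the available index.
 */
static uint16_t model_get_vq_state(const struct model_vq *vq)
{
	return vq->used_idx;
}

int main(void)
{
	struct model_vq vq = { 0, 0 };

	model_set_vq_state(&vq, 5);    /* resume: restore the saved index */
	printf("%u\n", (unsigned)model_get_vq_state(&vq)); /* query reports 5 */
	return 0;
}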


>   drivers/vdpa/mlx5/net/mlx5_vnet.c | 21 ++++++++-------------
>   1 file changed, 8 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index 6fe61fc57790..4d2809c7d4e3 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -1169,6 +1169,7 @@ static void suspend_vq(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *m
>   		return;
>   	}
>   	mvq->avail_idx = attr.available_index;
> +	mvq->used_idx = attr.used_index;
>   }
>   
>   static void suspend_vqs(struct mlx5_vdpa_net *ndev)
> @@ -1426,6 +1427,7 @@ static int mlx5_vdpa_set_vq_state(struct vdpa_device *vdev, u16 idx,
>   		return -EINVAL;
>   	}
>   
> +	mvq->used_idx = state->avail_index;
>   	mvq->avail_idx = state->avail_index;
>   	return 0;
>   }
> @@ -1443,7 +1445,11 @@ static int mlx5_vdpa_get_vq_state(struct vdpa_device *vdev, u16 idx, struct vdpa
>   	 * that cares about emulating the index after vq is stopped.
>   	 */
>   	if (!mvq->initialized) {
> -		state->avail_index = mvq->avail_idx;
> +		/* Firmware returns a wrong value for the available index.
> +		 * Since both values should be identical, we take the value of
> +		 * used_idx which is reported correctly.
> +		 */
> +		state->avail_index = mvq->used_idx;
>   		return 0;
>   	}
>   
> @@ -1452,7 +1458,7 @@ static int mlx5_vdpa_get_vq_state(struct vdpa_device *vdev, u16 idx, struct vdpa
>   		mlx5_vdpa_warn(mvdev, "failed to query virtqueue\n");
>   		return err;
>   	}
> -	state->avail_index = attr.available_index;
> +	state->avail_index = attr.used_index;
>   	return 0;
>   }
>   
> @@ -1540,16 +1546,6 @@ static void teardown_virtqueues(struct mlx5_vdpa_net *ndev)
>   	}
>   }
>   
> -static void clear_virtqueues(struct mlx5_vdpa_net *ndev)
> -{
> -	int i;
> -
> -	for (i = ndev->mvdev.max_vqs - 1; i >= 0; i--) {
> -		ndev->vqs[i].avail_idx = 0;
> -		ndev->vqs[i].used_idx = 0;
> -	}
> -}
> -
>   /* TODO: cross-endian support */
>   static inline bool mlx5_vdpa_is_little_endian(struct mlx5_vdpa_dev *mvdev)
>   {
> @@ -1785,7 +1781,6 @@ static void mlx5_vdpa_set_status(struct vdpa_device *vdev, u8 status)
>   	if (!status) {
>   		mlx5_vdpa_info(mvdev, "performing device reset\n");
>   		teardown_driver(ndev);
> -		clear_virtqueues(ndev);
>   		mlx5_vdpa_destroy_mr(&ndev->mvdev);
>   		ndev->mvdev.status = 0;
>   		ndev->mvdev.mlx_features = 0;

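And a second standalone sketch (again hypothetical names, not driver code)
of why dropping clear_virtqueues() is safe: both flows that reset the
device end with a set_vq_state()-style restore issued by the caller, so a
driver-side clear of the indices is redundant, and in the suspend/resume
flow it is actively harmful.

#include <stdint.h>
#include <assert.h>

struct vq_model {
	uint16_t avail_idx;
	uint16_t used_idx;
};

/* set_vq_state()-style restore, as in the sketch above. */
static void restore_index(struct vq_model *vq, uint16_t idx)
{
	vq->avail_idx = idx;
	vq->used_idx = idx;
}

/* Suspend/resume: the caller saves the index before reset and restores it
 * afterwards; the driver must not zero it in between.
 */
static void vm_resume(struct vq_model *vq, uint16_t saved)
{
	restore_index(vq, saved);
}

/* Guest reboot: the caller restarts the rings from 0 itself, so the driver
 * never needed clear_virtqueues() for this case either.
 */
static void vm_reboot(struct vq_model *vq)
{
	restore_index(vq, 0);
}

int main(void)
{
	struct vq_model vq = { 0, 0 };

	vm_resume(&vq, 7);
	assert(vq.used_idx == 7);   /* survives because nothing cleared it */

	vm_reboot(&vq);
	assert(vq.avail_idx == 0);  /* reboot case handled by the caller */
	return 0;
}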