From: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
To: Yishai Hadas <yishaih@nvidia.com>,
	"alex.williamson@redhat.com" <alex.williamson@redhat.com>,
	"bhelgaas@google.com" <bhelgaas@google.com>,
	"jgg@nvidia.com" <jgg@nvidia.com>,
	"saeedm@nvidia.com" <saeedm@nvidia.com>
Cc: "linux-pci@vger.kernel.org" <linux-pci@vger.kernel.org>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"kuba@kernel.org" <kuba@kernel.org>,
	"leonro@nvidia.com" <leonro@nvidia.com>,
	"kwankhede@nvidia.com" <kwankhede@nvidia.com>,
	"mgurtovoy@nvidia.com" <mgurtovoy@nvidia.com>,
	"maorg@nvidia.com" <maorg@nvidia.com>
Subject: RE: [PATCH V1 mlx5-next 11/13] vfio/mlx5: Implement vfio_pci driver for mlx5 devices
Date: Tue, 19 Oct 2021 11:26:29 +0000	[thread overview]
Message-ID: <4096a4cd9b3d4496892b815dd653166e@huawei.com> (raw)
In-Reply-To: <7e7880fe-bb1e-82db-8edb-271832d18827@nvidia.com>



> -----Original Message-----
> From: Yishai Hadas [mailto:yishaih@nvidia.com]
> Sent: 19 October 2021 11:30
> To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>;
> alex.williamson@redhat.com; bhelgaas@google.com; jgg@nvidia.com;
> saeedm@nvidia.com
> Cc: linux-pci@vger.kernel.org; kvm@vger.kernel.org; netdev@vger.kernel.org;
> kuba@kernel.org; leonro@nvidia.com; kwankhede@nvidia.com;
> mgurtovoy@nvidia.com; maorg@nvidia.com
> Subject: Re: [PATCH V1 mlx5-next 11/13] vfio/mlx5: Implement vfio_pci driver for mlx5 devices
> 
> On 10/19/2021 12:59 PM, Shameerali Kolothum Thodi wrote:
> >
> >> -----Original Message-----
> >> From: Yishai Hadas [mailto:yishaih@nvidia.com]
> >> Sent: 13 October 2021 10:47
> >> To: alex.williamson@redhat.com; bhelgaas@google.com; jgg@nvidia.com;
> >> saeedm@nvidia.com
> >> Cc: linux-pci@vger.kernel.org; kvm@vger.kernel.org;
> >> netdev@vger.kernel.org;
> >> kuba@kernel.org; leonro@nvidia.com; kwankhede@nvidia.com;
> >> mgurtovoy@nvidia.com; yishaih@nvidia.com; maorg@nvidia.com
> >> Subject: [PATCH V1 mlx5-next 11/13] vfio/mlx5: Implement vfio_pci driver for mlx5 devices
> >>
> >> This patch adds support for vfio_pci driver for mlx5 devices.
> >>
> >> It uses vfio_pci_core to register to the VFIO subsystem and then
> >> implements the mlx5 specific logic in the migration area.
> >>
> >> The migration implementation follows the definition from uapi/vfio.h and
> >> uses the mlx5 VF->PF command channel to achieve it.
> >>
> >> This patch implements the suspend/resume flows.
> >>
> >> Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
> >> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> >> ---
> >>   MAINTAINERS                    |   6 +
> >>   drivers/vfio/pci/Kconfig       |   3 +
> >>   drivers/vfio/pci/Makefile      |   2 +
> >>   drivers/vfio/pci/mlx5/Kconfig  |  11 +
> >>   drivers/vfio/pci/mlx5/Makefile |   4 +
> >>   drivers/vfio/pci/mlx5/main.c   | 692 +++++++++++++++++++++++++++++++++
> >>   6 files changed, 718 insertions(+)
> >>   create mode 100644 drivers/vfio/pci/mlx5/Kconfig
> >>   create mode 100644 drivers/vfio/pci/mlx5/Makefile
> >>   create mode 100644 drivers/vfio/pci/mlx5/main.c
> >>
> >> diff --git a/MAINTAINERS b/MAINTAINERS
> >> index abdcbcfef73d..e824bfab4a01 100644
> >> --- a/MAINTAINERS
> >> +++ b/MAINTAINERS
> >> @@ -19699,6 +19699,12 @@ L:	kvm@vger.kernel.org
> >>   S:	Maintained
> >>   F:	drivers/vfio/platform/
> >>
> >> +VFIO MLX5 PCI DRIVER
> >> +M:	Yishai Hadas <yishaih@nvidia.com>
> >> +L:	kvm@vger.kernel.org
> >> +S:	Maintained
> >> +F:	drivers/vfio/pci/mlx5/
> >> +
> >>   VGA_SWITCHEROO
> >>   R:	Lukas Wunner <lukas@wunner.de>
> >>   S:	Maintained
> >> diff --git a/drivers/vfio/pci/Kconfig b/drivers/vfio/pci/Kconfig
> >> index 860424ccda1b..187b9c259944 100644
> >> --- a/drivers/vfio/pci/Kconfig
> >> +++ b/drivers/vfio/pci/Kconfig
> >> @@ -43,4 +43,7 @@ config VFIO_PCI_IGD
> >>
> >>   	  To enable Intel IGD assignment through vfio-pci, say Y.
> >>   endif
> >> +
> >> +source "drivers/vfio/pci/mlx5/Kconfig"
> >> +
> >>   endif
> >> diff --git a/drivers/vfio/pci/Makefile b/drivers/vfio/pci/Makefile
> >> index 349d68d242b4..ed9d6f2e0555 100644
> >> --- a/drivers/vfio/pci/Makefile
> >> +++ b/drivers/vfio/pci/Makefile
> >> @@ -7,3 +7,5 @@ obj-$(CONFIG_VFIO_PCI_CORE) += vfio-pci-core.o
> >>   vfio-pci-y := vfio_pci.o
> >>   vfio-pci-$(CONFIG_VFIO_PCI_IGD) += vfio_pci_igd.o
> >>   obj-$(CONFIG_VFIO_PCI) += vfio-pci.o
> >> +
> >> +obj-$(CONFIG_MLX5_VFIO_PCI)           += mlx5/
> >> diff --git a/drivers/vfio/pci/mlx5/Kconfig b/drivers/vfio/pci/mlx5/Kconfig
> >> new file mode 100644
> >> index 000000000000..a3ce00add4fe
> >> --- /dev/null
> >> +++ b/drivers/vfio/pci/mlx5/Kconfig
> >> @@ -0,0 +1,11 @@
> >> +# SPDX-License-Identifier: GPL-2.0-only
> >> +config MLX5_VFIO_PCI
> >> +	tristate "VFIO support for MLX5 PCI devices"
> >> +	depends on MLX5_CORE
> >> +	select VFIO_PCI_CORE
> >> +	help
> >> +	  This provides PCI support for MLX5 devices using the VFIO
> >> +	  framework. The device specific driver supports suspend/resume
> >> +	  of the MLX5 device.
> >> +
> >> +	  If you don't know what to do here, say N.
> >> diff --git a/drivers/vfio/pci/mlx5/Makefile b/drivers/vfio/pci/mlx5/Makefile
> >> new file mode 100644
> >> index 000000000000..689627da7ff5
> >> --- /dev/null
> >> +++ b/drivers/vfio/pci/mlx5/Makefile
> >> @@ -0,0 +1,4 @@
> >> +# SPDX-License-Identifier: GPL-2.0-only
> >> +obj-$(CONFIG_MLX5_VFIO_PCI) += mlx5-vfio-pci.o
> >> +mlx5-vfio-pci-y := main.o cmd.o
> >> +
> >> diff --git a/drivers/vfio/pci/mlx5/main.c b/drivers/vfio/pci/mlx5/main.c
> >> new file mode 100644
> >> index 000000000000..e36302b444a6
> >> --- /dev/null
> >> +++ b/drivers/vfio/pci/mlx5/main.c
> >> @@ -0,0 +1,692 @@
> >> +// SPDX-License-Identifier: GPL-2.0-only
> >> +/*
> >> + * Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved
> >> + */
> >> +
> >> +#include <linux/device.h>
> >> +#include <linux/eventfd.h>
> >> +#include <linux/file.h>
> >> +#include <linux/interrupt.h>
> >> +#include <linux/iommu.h>
> >> +#include <linux/module.h>
> >> +#include <linux/mutex.h>
> >> +#include <linux/notifier.h>
> >> +#include <linux/pci.h>
> >> +#include <linux/pm_runtime.h>
> >> +#include <linux/types.h>
> >> +#include <linux/uaccess.h>
> >> +#include <linux/vfio.h>
> >> +#include <linux/sched/mm.h>
> >> +#include <linux/vfio_pci_core.h>
> >> +
> >> +#include "cmd.h"
> >> +
> >> +enum {
> >> +	MLX5VF_PCI_FREEZED = 1 << 0,
> >> +};
> >> +
> >> +enum {
> >> +	MLX5VF_REGION_PENDING_BYTES = 1 << 0,
> >> +	MLX5VF_REGION_DATA_SIZE = 1 << 1,
> >> +};
> >> +
> >> +#define MLX5VF_MIG_REGION_DATA_SIZE SZ_128K
> >> +/* Data section offset from migration region */
> >> +#define MLX5VF_MIG_REGION_DATA_OFFSET \
> >> +	(sizeof(struct vfio_device_migration_info))
> >> +
> >> +#define VFIO_DEVICE_MIGRATION_OFFSET(x) \
> >> +	(offsetof(struct vfio_device_migration_info, x))
> >> +
> >> +struct mlx5vf_pci_migration_info {
> >> +	u32 vfio_dev_state; /* VFIO_DEVICE_STATE_XXX */
> >> +	u32 dev_state; /* device migration state */
> >> +	u32 region_state; /* Use MLX5VF_REGION_XXX */
> >> +	u16 vhca_id;
> >> +	struct mlx5_vhca_state_data vhca_state_data;
> >> +};
> >> +
> >> +struct mlx5vf_pci_core_device {
> >> +	struct vfio_pci_core_device core_device;
> >> +	u8 migrate_cap:1;
> >> +	/* protect migration state */
> >> +	struct mutex state_mutex;
> >> +	struct mlx5vf_pci_migration_info vmig;
> >> +};
> >> +
> >> +static int mlx5vf_pci_unquiesce_device(struct mlx5vf_pci_core_device *mvdev)
> >> +{
> >> +	return mlx5vf_cmd_resume_vhca(mvdev->core_device.pdev,
> >> +				      mvdev->vmig.vhca_id,
> >> +				      MLX5_RESUME_VHCA_IN_OP_MOD_RESUME_MASTER);
> >> +}
> >> +
> >> +static int mlx5vf_pci_quiesce_device(struct mlx5vf_pci_core_device *mvdev)
> >> +{
> >> +	return mlx5vf_cmd_suspend_vhca(
> >> +		mvdev->core_device.pdev, mvdev->vmig.vhca_id,
> >> +		MLX5_SUSPEND_VHCA_IN_OP_MOD_SUSPEND_MASTER);
> >> +}
> >> +
> >> +static int mlx5vf_pci_unfreeze_device(struct mlx5vf_pci_core_device *mvdev)
> >> +{
> >> +	int ret;
> >> +
> >> +	ret = mlx5vf_cmd_resume_vhca(mvdev->core_device.pdev,
> >> +				     mvdev->vmig.vhca_id,
> >> +				     MLX5_RESUME_VHCA_IN_OP_MOD_RESUME_SLAVE);
> >> +	if (ret)
> >> +		return ret;
> >> +
> >> +	mvdev->vmig.dev_state &= ~MLX5VF_PCI_FREEZED;
> >> +	return 0;
> >> +}
> >> +
> >> +static int mlx5vf_pci_freeze_device(struct mlx5vf_pci_core_device *mvdev)
> >> +{
> >> +	int ret;
> >> +
> >> +	ret = mlx5vf_cmd_suspend_vhca(
> >> +		mvdev->core_device.pdev, mvdev->vmig.vhca_id,
> >> +		MLX5_SUSPEND_VHCA_IN_OP_MOD_SUSPEND_SLAVE);
> >> +	if (ret)
> >> +		return ret;
> >> +
> >> +	mvdev->vmig.dev_state |= MLX5VF_PCI_FREEZED;
> >> +	return 0;
> >> +}
> >> +
> >> +static int mlx5vf_pci_save_device_data(struct mlx5vf_pci_core_device *mvdev)
> >> +{
> >> +	u32 state_size = 0;
> >> +	int ret;
> >> +
> >> +	if (!(mvdev->vmig.dev_state & MLX5VF_PCI_FREEZED))
> >> +		return -EFAULT;
> >> +
> >> +	/* If we already read state no reason to re-read */
> >> +	if (mvdev->vmig.vhca_state_data.state_size)
> >> +		return 0;
> >> +
> >> +	ret = mlx5vf_cmd_query_vhca_migration_state(
> >> +		mvdev->core_device.pdev, mvdev->vmig.vhca_id, &state_size);
> >> +	if (ret)
> >> +		return ret;
> >> +
> >> +	return mlx5vf_cmd_save_vhca_state(mvdev->core_device.pdev,
> >> +					  mvdev->vmig.vhca_id, state_size,
> >> +					  &mvdev->vmig.vhca_state_data);
> >> +}
> >> +
> >> +static int mlx5vf_pci_new_write_window(struct mlx5vf_pci_core_device *mvdev)
> >> +{
> >> +	struct mlx5_vhca_state_data *state_data = &mvdev->vmig.vhca_state_data;
> >> +	u32 num_pages_needed;
> >> +	u64 allocated_ready;
> >> +	u32 bytes_needed;
> >> +
> >> +	/* Check how many bytes are available from previous flows */
> >> +	WARN_ON(state_data->num_pages * PAGE_SIZE <
> >> +		state_data->win_start_offset);
> >> +	allocated_ready = (state_data->num_pages * PAGE_SIZE) -
> >> +			  state_data->win_start_offset;
> >> +	WARN_ON(allocated_ready > MLX5VF_MIG_REGION_DATA_SIZE);
> >> +
> >> +	bytes_needed = MLX5VF_MIG_REGION_DATA_SIZE - allocated_ready;
> >> +	if (!bytes_needed)
> >> +		return 0;
> >> +
> >> +	num_pages_needed = DIV_ROUND_UP_ULL(bytes_needed, PAGE_SIZE);
> >> +	return mlx5vf_add_migration_pages(state_data, num_pages_needed);
> >> +}
> >> +
> >> +static ssize_t
> >> +mlx5vf_pci_handle_migration_data_size(struct mlx5vf_pci_core_device *mvdev,
> >> +				      char __user *buf, bool iswrite)
> >> +{
> >> +	struct mlx5vf_pci_migration_info *vmig = &mvdev->vmig;
> >> +	u64 data_size;
> >> +	int ret;
> >> +
> >> +	if (iswrite) {
> >> +		/* data_size is writable only during resuming state */
> >> +		if (vmig->vfio_dev_state != VFIO_DEVICE_STATE_RESUMING)
> >> +			return -EINVAL;
> >> +
> >> +		ret = copy_from_user(&data_size, buf, sizeof(data_size));
> >> +		if (ret)
> >> +			return -EFAULT;
> >> +
> >> +		vmig->vhca_state_data.state_size += data_size;
> >> +		vmig->vhca_state_data.win_start_offset += data_size;
> >> +		ret = mlx5vf_pci_new_write_window(mvdev);
> >> +		if (ret)
> >> +			return ret;
> >> +
> >> +	} else {
> >> +		if (vmig->vfio_dev_state != VFIO_DEVICE_STATE_SAVING)
> >> +			return -EINVAL;
> >> +
> >> +		data_size = min_t(u64, MLX5VF_MIG_REGION_DATA_SIZE,
> >> +				  vmig->vhca_state_data.state_size -
> >> +				  vmig->vhca_state_data.win_start_offset);
> >> +		ret = copy_to_user(buf, &data_size, sizeof(data_size));
> >> +		if (ret)
> >> +			return -EFAULT;
> >> +	}
> >> +
> >> +	vmig->region_state |= MLX5VF_REGION_DATA_SIZE;
> >> +	return sizeof(data_size);
> >> +}
> >> +
> >> +static ssize_t
> >> +mlx5vf_pci_handle_migration_data_offset(struct mlx5vf_pci_core_device *mvdev,
> >> +					char __user *buf, bool iswrite)
> >> +{
> >> +	static const u64 data_offset = MLX5VF_MIG_REGION_DATA_OFFSET;
> >> +	int ret;
> >> +
> >> +	/* RO field */
> >> +	if (iswrite)
> >> +		return -EFAULT;
> >> +
> >> +	ret = copy_to_user(buf, &data_offset, sizeof(data_offset));
> >> +	if (ret)
> >> +		return -EFAULT;
> >> +
> >> +	return sizeof(data_offset);
> >> +}
> >> +
> >> +static ssize_t
> >> +mlx5vf_pci_handle_migration_pending_bytes(struct mlx5vf_pci_core_device *mvdev,
> >> +					  char __user *buf, bool iswrite)
> >> +{
> >> +	struct mlx5vf_pci_migration_info *vmig = &mvdev->vmig;
> >> +	u64 pending_bytes;
> >> +	int ret;
> >> +
> >> +	/* RO field */
> >> +	if (iswrite)
> >> +		return -EFAULT;
> >> +
> >> +	if (vmig->vfio_dev_state == (VFIO_DEVICE_STATE_SAVING |
> >> +				     VFIO_DEVICE_STATE_RUNNING)) {
> >> +		/* In pre-copy state we have no data to return for now,
> >> +		 * return 0 pending bytes
> >> +		 */
> >> +		pending_bytes = 0;
> >> +	} else {
> >> +		if (!vmig->vhca_state_data.state_size)
> >> +			return 0;
> >> +		pending_bytes = vmig->vhca_state_data.state_size -
> >> +				vmig->vhca_state_data.win_start_offset;
> >> +	}
> >> +
> >> +	ret = copy_to_user(buf, &pending_bytes, sizeof(pending_bytes));
> >> +	if (ret)
> >> +		return -EFAULT;
> >> +
> >> +	/* Window moves forward once data from previous iteration was read */
> >> +	if (vmig->region_state & MLX5VF_REGION_DATA_SIZE)
> >> +		vmig->vhca_state_data.win_start_offset +=
> >> +			min_t(u64, MLX5VF_MIG_REGION_DATA_SIZE, pending_bytes);
> >> +
> >> +	WARN_ON(vmig->vhca_state_data.win_start_offset >
> >> +		vmig->vhca_state_data.state_size);
> >> +
> >> +	/* New iteration started */
> >> +	vmig->region_state = MLX5VF_REGION_PENDING_BYTES;
> >> +	return sizeof(pending_bytes);
> >> +}
> >> +
> >> +static int mlx5vf_load_state(struct mlx5vf_pci_core_device *mvdev)
> >> +{
> >> +	if (!mvdev->vmig.vhca_state_data.state_size)
> >> +		return 0;
> >> +
> >> +	return mlx5vf_cmd_load_vhca_state(mvdev->core_device.pdev,
> >> +					  mvdev->vmig.vhca_id,
> >> +					  &mvdev->vmig.vhca_state_data);
> >> +}
> >> +
> >> +static void mlx5vf_reset_mig_state(struct mlx5vf_pci_core_device *mvdev)
> >> +{
> >> +	struct mlx5vf_pci_migration_info *vmig = &mvdev->vmig;
> >> +
> >> +	vmig->region_state = 0;
> >> +	mlx5vf_reset_vhca_state(&vmig->vhca_state_data);
> >> +}
> >> +
> >> +static int mlx5vf_pci_set_device_state(struct mlx5vf_pci_core_device *mvdev,
> >> +				       u32 state)
> >> +{
> >> +	struct mlx5vf_pci_migration_info *vmig = &mvdev->vmig;
> >> +	u32 old_state = vmig->vfio_dev_state;
> >> +	int ret = 0;
> >> +
> >> +	if (vfio_is_state_invalid(state) || vfio_is_state_invalid(old_state))
> >> +		return -EINVAL;
> >> +
> >> +	/* Running switches off */
> >> +	if ((old_state & VFIO_DEVICE_STATE_RUNNING) !=
> >> +	    (state & VFIO_DEVICE_STATE_RUNNING) &&
> >> +	    (old_state & VFIO_DEVICE_STATE_RUNNING)) {
> >> +		ret = mlx5vf_pci_quiesce_device(mvdev);
> >> +		if (ret)
> >> +			return ret;
> >> +		ret = mlx5vf_pci_freeze_device(mvdev);
> >> +		if (ret) {
> >> +			vmig->vfio_dev_state = VFIO_DEVICE_STATE_INVALID;
> >> +			return ret;
> >> +		}
> >> +	}
> >> +
> >> +	/* Resuming switches off */
> >> +	if ((old_state & VFIO_DEVICE_STATE_RESUMING) !=
> >> +	    (state & VFIO_DEVICE_STATE_RESUMING) &&
> >> +	    (old_state & VFIO_DEVICE_STATE_RESUMING)) {
> >> +		/* deserialize state into the device */
> >> +		ret = mlx5vf_load_state(mvdev);
> >> +		if (ret) {
> >> +			vmig->vfio_dev_state = VFIO_DEVICE_STATE_INVALID;
> >> +			return ret;
> >> +		}
> >> +	}
> >> +
> >> +	/* Resuming switches on */
> >> +	if ((old_state & VFIO_DEVICE_STATE_RESUMING) !=
> >> +	    (state & VFIO_DEVICE_STATE_RESUMING) &&
> >> +	    (state & VFIO_DEVICE_STATE_RESUMING)) {
> >> +		mlx5vf_reset_mig_state(mvdev);
> >> +		ret = mlx5vf_pci_new_write_window(mvdev);
> >> +		if (ret)
> >> +			return ret;
> >> +	}
> >> +
> >> +	/* Saving switches on */
> >> +	if ((old_state & VFIO_DEVICE_STATE_SAVING) !=
> >> +	    (state & VFIO_DEVICE_STATE_SAVING) &&
> >> +	    (state & VFIO_DEVICE_STATE_SAVING)) {
> >> +		if (!(state & VFIO_DEVICE_STATE_RUNNING)) {
> >> +			/* serialize post copy */
> >> +			ret = mlx5vf_pci_save_device_data(mvdev);
> > Does it actually get into post-copy here? The pre-copy state (old_state)
> > has the _SAVING bit set already, and the post-copy state (new state) also
> > has _SAVING set. It looks like we need to handle the post-copy case in the
> > "Running switches off" block above and check for (state & _SAVING).
> >
> > Or am I missing something?
> >
> 
> The above checks for a change in the SAVING bit: if it was turned on and
> we are not RUNNING, it means post-copy.
> 
> Turning on SAVING while we are RUNNING will end up returning zero bytes
> for pending_bytes, as we don't support dirty page tracking for now.
> 
> See mlx5vf_pci_handle_migration_pending_bytes().

So what you are saying is that QEMU won't set a pre-copy state prior to post-copy
here. IIRC, that was not the case in our setup: QEMU does set the state to
pre-copy (_RUNNING | _SAVING), reads pending_bytes, and then sets it to
post-copy (_SAVING).

Thanks,
Shameer

> 
> Yishai


Thread overview: 44+ messages
2021-10-13  9:46 [PATCH V1 mlx5-next 00/13] Add mlx5 live migration driver Yishai Hadas
2021-10-13  9:46 ` [PATCH V1 mlx5-next 01/13] PCI/IOV: Provide internal VF index Yishai Hadas
2021-10-13 18:14   ` Bjorn Helgaas
2021-10-14  9:08     ` Yishai Hadas
2021-10-13  9:46 ` [PATCH V1 mlx5-next 02/13] net/mlx5: Reuse exported virtfn index function call Yishai Hadas
2021-10-13  9:46 ` [PATCH V1 mlx5-next 03/13] net/mlx5: Disable SRIOV before PF removal Yishai Hadas
2021-10-13  9:46 ` [PATCH V1 mlx5-next 04/13] PCI/IOV: Allow SRIOV VF drivers to reach the drvdata of a PF Yishai Hadas
2021-10-13 18:27   ` Bjorn Helgaas
2021-10-14 22:11   ` Alex Williamson
2021-10-17 13:43     ` Yishai Hadas
2021-10-13  9:46 ` [PATCH V1 mlx5-next 05/13] net/mlx5: Expose APIs to get/put the mlx5 core device Yishai Hadas
2021-10-13  9:47 ` [PATCH V1 mlx5-next 06/13] vdpa/mlx5: Use mlx5_vf_get_core_dev() to get PF device Yishai Hadas
2021-10-13  9:47 ` [PATCH V1 mlx5-next 07/13] vfio: Add 'invalid' state definitions Yishai Hadas
2021-10-15 16:38   ` Alex Williamson
2021-10-17 14:07     ` Yishai Hadas
2021-10-13  9:47 ` [PATCH V1 mlx5-next 08/13] vfio/pci_core: Make the region->release() function optional Yishai Hadas
2021-10-13  9:47 ` [PATCH V1 mlx5-next 09/13] net/mlx5: Introduce migration bits and structures Yishai Hadas
2021-10-13  9:47 ` [PATCH V1 mlx5-next 10/13] vfio/mlx5: Expose migration commands over mlx5 device Yishai Hadas
2021-10-13  9:47 ` [PATCH V1 mlx5-next 11/13] vfio/mlx5: Implement vfio_pci driver for mlx5 devices Yishai Hadas
2021-10-15 19:48   ` Alex Williamson
2021-10-15 19:59     ` Jason Gunthorpe
2021-10-15 20:12       ` Alex Williamson
2021-10-15 20:16         ` Jason Gunthorpe
2021-10-15 20:59           ` Alex Williamson
2021-10-17 14:03             ` Yishai Hadas
2021-10-18 11:51               ` Jason Gunthorpe
2021-10-18 13:26                 ` Yishai Hadas
2021-10-18 13:42                   ` Alex Williamson
2021-10-18 13:46                     ` Yishai Hadas
2021-10-19  9:59   ` Shameerali Kolothum Thodi
2021-10-19 10:30     ` Yishai Hadas
2021-10-19 11:26       ` Shameerali Kolothum Thodi [this message]
2021-10-19 11:24     ` Jason Gunthorpe
2021-10-13  9:47 ` [PATCH V1 mlx5-next 12/13] vfio/pci: Add infrastructure to let vfio_pci_core drivers trap device RESET Yishai Hadas
2021-10-15 19:52   ` Alex Williamson
2021-10-15 20:03     ` Jason Gunthorpe
2021-10-15 21:12       ` Alex Williamson
2021-10-17 14:29         ` Yishai Hadas
2021-10-18 12:02           ` Jason Gunthorpe
2021-10-18 13:41             ` Yishai Hadas
2021-10-13  9:47 ` [PATCH V1 mlx5-next 13/13] vfio/mlx5: Trap device RESET and update state accordingly Yishai Hadas
2021-10-13 18:06   ` Jason Gunthorpe
2021-10-14  9:18     ` Yishai Hadas
2021-10-15 19:54       ` Alex Williamson
