* [PATCH RFC v2 virtio 0/7] pds_vdpa driver
From: Shannon Nelson @ 2023-03-09  1:30 UTC (permalink / raw)
  To: jasowang, mst, virtualization, shannon.nelson, brett.creeley,
	davem, netdev, kuba
  Cc: drivers

This patchset implements a new module for the AMD/Pensando DSC that
supports vDPA services on PDS Core VF devices.  The pds_core driver
described here[0] creates the auxiliary_bus devices that this module
connects to, and this driver in turn creates vdpa devices for use by the vdpa subsystem.

The first version of this driver was a part of the original pds_core RFC
[1] but has since been reworked to pull out the PCI driver and to make
better use of the virtio and virtio_net configuration spaces made available
by the DSC's PCI configuration.  As the device development has progressed,
the ability to rely on the virtio config spaces has grown.

To use this module, enable the VFs and turn on the vDPA services in the
pds_core PF, then use the 'vdpa' utility to create devices for use by
virtio_vdpa or vhost_vdpa:
   echo 1 > /sys/bus/pci/drivers/pds_core/$PF_BDF/sriov_numvfs
   devlink dev param set pci/$PF_BDF name enable_vnet value true cmode runtime
   PDS_VDPA_MGMT=`vdpa mgmtdev show | grep vDPA | head -1 | cut -d: -f1`
   vdpa dev add name vdpa1 mgmtdev $PDS_VDPA_MGMT mac 00:11:22:33:44:55
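
As a quick sanity check, the new device should then show up in 'vdpa dev show'
and can be bound to a backend such as vhost_vdpa.  The device name and backend
used below are only examples, not something this patchset prescribes:
   vdpa dev show vdpa1
   modprobe vhost_vdpa                 # or virtio_vdpa
   ls /dev/vhost-vdpa-*                # appears once vhost_vdpa has bound the device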

[0]: https://lore.kernel.org/netdev/20230308051310.12544-1-shannon.nelson@amd.com/
[1]: https://lore.kernel.org/netdev/20221118225656.48309-1-snelson@pensando.io/

Changes:
 v2:
 - removed PCI driver code
 - replaced home-grown event listener with notifier
 - replaced many adminq uses with direct virtio_net config access
 - reworked irqs to follow virtio layout
 - removed local_mac_bit logic
 - replaced uses of devm_ interfaces as suggested in pds_core reviews
 - updated copyright strings to reflect the new owner

Shannon Nelson (7):
  pds_vdpa: Add new vDPA driver for AMD/Pensando DSC
  pds_vdpa: get vdpa management info
  pds_vdpa: virtio bar setup for vdpa
  pds_vdpa: add vdpa config client commands
  pds_vdpa: add support for vdpa and vdpamgmt interfaces
  pds_vdpa: subscribe to the pds_core events
  pds_vdpa: pds_vdpa.rst and Kconfig

 .../ethernet/pensando/pds_vdpa.rst            |  84 ++
 MAINTAINERS                                   |   4 +
 drivers/vdpa/Kconfig                          |   8 +
 drivers/vdpa/Makefile                         |   1 +
 drivers/vdpa/pds/Makefile                     |  11 +
 drivers/vdpa/pds/aux_drv.c                    | 141 ++++
 drivers/vdpa/pds/aux_drv.h                    |  24 +
 drivers/vdpa/pds/cmds.c                       | 207 +++++
 drivers/vdpa/pds/cmds.h                       |  16 +
 drivers/vdpa/pds/debugfs.c                    | 201 +++++
 drivers/vdpa/pds/debugfs.h                    |  26 +
 drivers/vdpa/pds/vdpa_dev.c                   | 723 ++++++++++++++++++
 drivers/vdpa/pds/vdpa_dev.h                   |  50 ++
 drivers/vdpa/pds/virtio_pci.c                 | 281 +++++++
 drivers/vdpa/pds/virtio_pci.h                 |   8 +
 include/linux/pds/pds_vdpa.h                  | 279 +++++++
 16 files changed, 2064 insertions(+)
 create mode 100644 Documentation/networking/device_drivers/ethernet/pensando/pds_vdpa.rst
 create mode 100644 drivers/vdpa/pds/Makefile
 create mode 100644 drivers/vdpa/pds/aux_drv.c
 create mode 100644 drivers/vdpa/pds/aux_drv.h
 create mode 100644 drivers/vdpa/pds/cmds.c
 create mode 100644 drivers/vdpa/pds/cmds.h
 create mode 100644 drivers/vdpa/pds/debugfs.c
 create mode 100644 drivers/vdpa/pds/debugfs.h
 create mode 100644 drivers/vdpa/pds/vdpa_dev.c
 create mode 100644 drivers/vdpa/pds/vdpa_dev.h
 create mode 100644 drivers/vdpa/pds/virtio_pci.c
 create mode 100644 drivers/vdpa/pds/virtio_pci.h
 create mode 100644 include/linux/pds/pds_vdpa.h

-- 
2.17.1



* [PATCH RFC v2 virtio 1/7] pds_vdpa: Add new vDPA driver for AMD/Pensando DSC
From: Shannon Nelson @ 2023-03-09  1:30 UTC (permalink / raw)
  To: jasowang, mst, virtualization, shannon.nelson, brett.creeley,
	davem, netdev, kuba
  Cc: drivers

This is the initial auxiliary driver framework for a new vDPA
device driver, an auxiliary_bus client of the pds_core driver.
The pds_core driver supplies the PCI services for the VF device
and for accessing the adminq in the PF device.

This patch adds the very basics of registering for the auxiliary
device, setting up debugfs entries, and registering as a client
of the pds_core driver.

Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
---
 drivers/vdpa/Makefile        |  1 +
 drivers/vdpa/pds/Makefile    |  8 +++
 drivers/vdpa/pds/aux_drv.c   | 99 ++++++++++++++++++++++++++++++++++++
 drivers/vdpa/pds/aux_drv.h   | 15 ++++++
 drivers/vdpa/pds/debugfs.c   | 25 +++++++++
 drivers/vdpa/pds/debugfs.h   | 18 +++++++
 include/linux/pds/pds_vdpa.h | 12 +++++
 7 files changed, 178 insertions(+)
 create mode 100644 drivers/vdpa/pds/Makefile
 create mode 100644 drivers/vdpa/pds/aux_drv.c
 create mode 100644 drivers/vdpa/pds/aux_drv.h
 create mode 100644 drivers/vdpa/pds/debugfs.c
 create mode 100644 drivers/vdpa/pds/debugfs.h
 create mode 100644 include/linux/pds/pds_vdpa.h

diff --git a/drivers/vdpa/Makefile b/drivers/vdpa/Makefile
index 59396ff2a318..8f53c6f3cca7 100644
--- a/drivers/vdpa/Makefile
+++ b/drivers/vdpa/Makefile
@@ -7,3 +7,4 @@ obj-$(CONFIG_MLX5_VDPA) += mlx5/
 obj-$(CONFIG_VP_VDPA)    += virtio_pci/
 obj-$(CONFIG_ALIBABA_ENI_VDPA) += alibaba/
 obj-$(CONFIG_SNET_VDPA) += solidrun/
+obj-$(CONFIG_PDS_VDPA) += pds/
diff --git a/drivers/vdpa/pds/Makefile b/drivers/vdpa/pds/Makefile
new file mode 100644
index 000000000000..a9cd2f450ae1
--- /dev/null
+++ b/drivers/vdpa/pds/Makefile
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: GPL-2.0-only
+# Copyright(c) 2023 Advanced Micro Devices, Inc
+
+obj-$(CONFIG_PDS_VDPA) := pds_vdpa.o
+
+pds_vdpa-y := aux_drv.o
+
+pds_vdpa-$(CONFIG_DEBUG_FS) += debugfs.o
diff --git a/drivers/vdpa/pds/aux_drv.c b/drivers/vdpa/pds/aux_drv.c
new file mode 100644
index 000000000000..b3f36170253c
--- /dev/null
+++ b/drivers/vdpa/pds/aux_drv.c
@@ -0,0 +1,99 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Advanced Micro Devices, Inc */
+
+#include <linux/auxiliary_bus.h>
+
+#include <linux/pds/pds_core.h>
+#include <linux/pds/pds_auxbus.h>
+#include <linux/pds/pds_vdpa.h>
+
+#include "aux_drv.h"
+#include "debugfs.h"
+
+static const struct auxiliary_device_id pds_vdpa_id_table[] = {
+	{ .name = PDS_VDPA_DEV_NAME, },
+	{},
+};
+
+static int pds_vdpa_probe(struct auxiliary_device *aux_dev,
+			  const struct auxiliary_device_id *id)
+
+{
+	struct pds_auxiliary_dev *padev =
+		container_of(aux_dev, struct pds_auxiliary_dev, aux_dev);
+	struct device *dev = &aux_dev->dev;
+	struct pds_vdpa_aux *vdpa_aux;
+	int err;
+
+	vdpa_aux = kzalloc(sizeof(*vdpa_aux), GFP_KERNEL);
+	if (!vdpa_aux)
+		return -ENOMEM;
+
+	vdpa_aux->padev = padev;
+	auxiliary_set_drvdata(aux_dev, vdpa_aux);
+
+	/* Register our PDS client with the pds_core */
+	err = padev->ops->register_client(padev);
+	if (err) {
+		dev_err(dev, "%s: Failed to register as client: %pe\n",
+			__func__, ERR_PTR(err));
+		goto err_free_mem;
+	}
+
+	return 0;
+
+err_free_mem:
+	kfree(vdpa_aux);
+	auxiliary_set_drvdata(aux_dev, NULL);
+
+	return err;
+}
+
+static void pds_vdpa_remove(struct auxiliary_device *aux_dev)
+{
+	struct pds_vdpa_aux *vdpa_aux = auxiliary_get_drvdata(aux_dev);
+	struct device *dev = &aux_dev->dev;
+
+	vdpa_aux->padev->ops->unregister_client(vdpa_aux->padev);
+
+	kfree(vdpa_aux);
+	auxiliary_set_drvdata(aux_dev, NULL);
+
+	dev_info(dev, "Removed\n");
+}
+
+static struct auxiliary_driver pds_vdpa_driver = {
+	.name = PDS_DEV_TYPE_VDPA_STR,
+	.probe = pds_vdpa_probe,
+	.remove = pds_vdpa_remove,
+	.id_table = pds_vdpa_id_table,
+};
+
+static void __exit pds_vdpa_cleanup(void)
+{
+	auxiliary_driver_unregister(&pds_vdpa_driver);
+
+	pds_vdpa_debugfs_destroy();
+}
+module_exit(pds_vdpa_cleanup);
+
+static int __init pds_vdpa_init(void)
+{
+	int err;
+
+	pds_vdpa_debugfs_create();
+
+	err = auxiliary_driver_register(&pds_vdpa_driver);
+	if (err) {
+		pr_err("%s: aux driver register failed: %pe\n",
+		       PDS_VDPA_DRV_NAME, ERR_PTR(err));
+		pds_vdpa_debugfs_destroy();
+	}
+
+	return err;
+}
+module_init(pds_vdpa_init);
+
+MODULE_DESCRIPTION(PDS_VDPA_DRV_DESCRIPTION);
+MODULE_AUTHOR("AMD/Pensando Systems, Inc");
+MODULE_LICENSE("GPL");
diff --git a/drivers/vdpa/pds/aux_drv.h b/drivers/vdpa/pds/aux_drv.h
new file mode 100644
index 000000000000..14e465944dfd
--- /dev/null
+++ b/drivers/vdpa/pds/aux_drv.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Advanced Micro Devices, Inc */
+
+#ifndef _AUX_DRV_H_
+#define _AUX_DRV_H_
+
+#define PDS_VDPA_DRV_DESCRIPTION    "AMD/Pensando vDPA VF Device Driver"
+#define PDS_VDPA_DRV_NAME           "pds_vdpa"
+
+struct pds_vdpa_aux {
+	struct pds_auxiliary_dev *padev;
+
+	struct dentry *dentry;
+};
+#endif /* _AUX_DRV_H_ */
diff --git a/drivers/vdpa/pds/debugfs.c b/drivers/vdpa/pds/debugfs.c
new file mode 100644
index 000000000000..3c163dc7b66f
--- /dev/null
+++ b/drivers/vdpa/pds/debugfs.c
@@ -0,0 +1,25 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Advanced Micro Devices, Inc */
+
+#include <linux/pds/pds_core.h>
+#include <linux/pds/pds_auxbus.h>
+
+#include "aux_drv.h"
+#include "debugfs.h"
+
+#ifdef CONFIG_DEBUG_FS
+
+static struct dentry *dbfs_dir;
+
+void pds_vdpa_debugfs_create(void)
+{
+	dbfs_dir = debugfs_create_dir(PDS_VDPA_DRV_NAME, NULL);
+}
+
+void pds_vdpa_debugfs_destroy(void)
+{
+	debugfs_remove_recursive(dbfs_dir);
+	dbfs_dir = NULL;
+}
+
+#endif /* CONFIG_DEBUG_FS */
diff --git a/drivers/vdpa/pds/debugfs.h b/drivers/vdpa/pds/debugfs.h
new file mode 100644
index 000000000000..fff078a869e5
--- /dev/null
+++ b/drivers/vdpa/pds/debugfs.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2023 Advanced Micro Devices, Inc */
+
+#ifndef _PDS_VDPA_DEBUGFS_H_
+#define _PDS_VDPA_DEBUGFS_H_
+
+#include <linux/debugfs.h>
+
+#ifdef CONFIG_DEBUG_FS
+
+void pds_vdpa_debugfs_create(void);
+void pds_vdpa_debugfs_destroy(void);
+#else
+static inline void pds_vdpa_debugfs_create(void) { }
+static inline void pds_vdpa_debugfs_destroy(void) { }
+#endif
+
+#endif /* _PDS_VDPA_DEBUGFS_H_ */
diff --git a/include/linux/pds/pds_vdpa.h b/include/linux/pds/pds_vdpa.h
new file mode 100644
index 000000000000..b5154e3b298e
--- /dev/null
+++ b/include/linux/pds/pds_vdpa.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Advanced Micro Devices, Inc */
+
+#ifndef _PDS_VDPA_IF_H_
+#define _PDS_VDPA_IF_H_
+
+#include <linux/pds/pds_common.h>
+
+#define PDS_DEV_TYPE_VDPA_STR	"vDPA"
+#define PDS_VDPA_DEV_NAME	PDS_CORE_DRV_NAME "." PDS_DEV_TYPE_VDPA_STR
+
+#endif /* _PDS_VDPA_IF_H_ */
-- 
2.17.1



* [PATCH RFC v2 virtio 2/7] pds_vdpa: get vdpa management info
From: Shannon Nelson @ 2023-03-09  1:30 UTC (permalink / raw)
  To: jasowang, mst, virtualization, shannon.nelson, brett.creeley,
	davem, netdev, kuba
  Cc: drivers

Find the vDPA management information from the DSC in order to
advertise it to the vdpa subsystem.

Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
---
 drivers/vdpa/pds/Makefile    |   3 +-
 drivers/vdpa/pds/aux_drv.c   |  13 ++++
 drivers/vdpa/pds/aux_drv.h   |   7 +++
 drivers/vdpa/pds/debugfs.c   |   3 +
 drivers/vdpa/pds/vdpa_dev.c  | 113 +++++++++++++++++++++++++++++++++++
 drivers/vdpa/pds/vdpa_dev.h  |  15 +++++
 include/linux/pds/pds_vdpa.h |  92 ++++++++++++++++++++++++++++
 7 files changed, 245 insertions(+), 1 deletion(-)
 create mode 100644 drivers/vdpa/pds/vdpa_dev.c
 create mode 100644 drivers/vdpa/pds/vdpa_dev.h

diff --git a/drivers/vdpa/pds/Makefile b/drivers/vdpa/pds/Makefile
index a9cd2f450ae1..13b50394ec64 100644
--- a/drivers/vdpa/pds/Makefile
+++ b/drivers/vdpa/pds/Makefile
@@ -3,6 +3,7 @@
 
 obj-$(CONFIG_PDS_VDPA) := pds_vdpa.o
 
-pds_vdpa-y := aux_drv.o
+pds_vdpa-y := aux_drv.o \
+	      vdpa_dev.o
 
 pds_vdpa-$(CONFIG_DEBUG_FS) += debugfs.o
diff --git a/drivers/vdpa/pds/aux_drv.c b/drivers/vdpa/pds/aux_drv.c
index b3f36170253c..63e40ae68211 100644
--- a/drivers/vdpa/pds/aux_drv.c
+++ b/drivers/vdpa/pds/aux_drv.c
@@ -2,6 +2,8 @@
 /* Copyright(c) 2023 Advanced Micro Devices, Inc */
 
 #include <linux/auxiliary_bus.h>
+#include <linux/pci.h>
+#include <linux/vdpa.h>
 
 #include <linux/pds/pds_core.h>
 #include <linux/pds/pds_auxbus.h>
@@ -9,6 +11,7 @@
 
 #include "aux_drv.h"
 #include "debugfs.h"
+#include "vdpa_dev.h"
 
 static const struct auxiliary_device_id pds_vdpa_id_table[] = {
 	{ .name = PDS_VDPA_DEV_NAME, },
@@ -30,6 +33,7 @@ static int pds_vdpa_probe(struct auxiliary_device *aux_dev,
 		return -ENOMEM;
 
 	vdpa_aux->padev = padev;
+	vdpa_aux->vf_id = pci_iov_vf_id(padev->vf->pdev);
 	auxiliary_set_drvdata(aux_dev, vdpa_aux);
 
 	/* Register our PDS client with the pds_core */
@@ -40,8 +44,15 @@ static int pds_vdpa_probe(struct auxiliary_device *aux_dev,
 		goto err_free_mem;
 	}
 
+	/* Get device ident info and set up the vdpa_mgmt_dev */
+	err = pds_vdpa_get_mgmt_info(vdpa_aux);
+	if (err)
+		goto err_aux_unreg;
+
 	return 0;
 
+err_aux_unreg:
+	padev->ops->unregister_client(padev);
 err_free_mem:
 	kfree(vdpa_aux);
 	auxiliary_set_drvdata(aux_dev, NULL);
@@ -54,6 +65,8 @@ static void pds_vdpa_remove(struct auxiliary_device *aux_dev)
 	struct pds_vdpa_aux *vdpa_aux = auxiliary_get_drvdata(aux_dev);
 	struct device *dev = &aux_dev->dev;
 
+	pci_free_irq_vectors(vdpa_aux->padev->vf->pdev);
+
 	vdpa_aux->padev->ops->unregister_client(vdpa_aux->padev);
 
 	kfree(vdpa_aux);
diff --git a/drivers/vdpa/pds/aux_drv.h b/drivers/vdpa/pds/aux_drv.h
index 14e465944dfd..94ba7abcaa43 100644
--- a/drivers/vdpa/pds/aux_drv.h
+++ b/drivers/vdpa/pds/aux_drv.h
@@ -10,6 +10,13 @@
 struct pds_vdpa_aux {
 	struct pds_auxiliary_dev *padev;
 
+	struct vdpa_mgmt_dev vdpa_mdev;
+
+	struct pds_vdpa_ident ident;
+
+	int vf_id;
 	struct dentry *dentry;
+
+	int nintrs;
 };
 #endif /* _AUX_DRV_H_ */
diff --git a/drivers/vdpa/pds/debugfs.c b/drivers/vdpa/pds/debugfs.c
index 3c163dc7b66f..7b7e90fd6578 100644
--- a/drivers/vdpa/pds/debugfs.c
+++ b/drivers/vdpa/pds/debugfs.c
@@ -1,7 +1,10 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /* Copyright(c) 2023 Advanced Micro Devices, Inc */
 
+#include <linux/vdpa.h>
+
 #include <linux/pds/pds_core.h>
+#include <linux/pds/pds_vdpa.h>
 #include <linux/pds/pds_auxbus.h>
 
 #include "aux_drv.h"
diff --git a/drivers/vdpa/pds/vdpa_dev.c b/drivers/vdpa/pds/vdpa_dev.c
new file mode 100644
index 000000000000..bd840688503c
--- /dev/null
+++ b/drivers/vdpa/pds/vdpa_dev.c
@@ -0,0 +1,113 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Advanced Micro Devices, Inc */
+
+#include <linux/pci.h>
+#include <linux/vdpa.h>
+#include <uapi/linux/vdpa.h>
+
+#include <linux/pds/pds_core.h>
+#include <linux/pds/pds_adminq.h>
+#include <linux/pds/pds_auxbus.h>
+#include <linux/pds/pds_vdpa.h>
+
+#include "vdpa_dev.h"
+#include "aux_drv.h"
+
+static struct virtio_device_id pds_vdpa_id_table[] = {
+	{VIRTIO_ID_NET, VIRTIO_DEV_ANY_ID},
+	{0},
+};
+
+static int pds_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
+			    const struct vdpa_dev_set_config *add_config)
+{
+	return -EOPNOTSUPP;
+}
+
+static void pds_vdpa_dev_del(struct vdpa_mgmt_dev *mdev,
+			     struct vdpa_device *vdpa_dev)
+{
+}
+
+static const struct vdpa_mgmtdev_ops pds_vdpa_mgmt_dev_ops = {
+	.dev_add = pds_vdpa_dev_add,
+	.dev_del = pds_vdpa_dev_del
+};
+
+int pds_vdpa_get_mgmt_info(struct pds_vdpa_aux *vdpa_aux)
+{
+	struct pds_vdpa_ident_cmd ident_cmd = {
+		.opcode = PDS_VDPA_CMD_IDENT,
+		.vf_id = cpu_to_le16(vdpa_aux->vf_id),
+	};
+	struct pds_vdpa_comp ident_comp = {0};
+	struct vdpa_mgmt_dev *mgmt;
+	struct device *pf_dev;
+	struct pci_dev *pdev;
+	dma_addr_t ident_pa;
+	struct device *dev;
+	u16 max_vqs;
+	int err;
+
+	dev = &vdpa_aux->padev->aux_dev.dev;
+	pdev = vdpa_aux->padev->vf->pdev;
+	mgmt = &vdpa_aux->vdpa_mdev;
+
+	/* Get resource info through the PF's adminq.  It is a block of info,
+	 * so we need to map some memory that the PF can make available to the
+	 * firmware for writing the data.
+	 */
+	pf_dev = vdpa_aux->padev->pf->dev;
+	ident_pa = dma_map_single(pf_dev, &vdpa_aux->ident,
+				  sizeof(vdpa_aux->ident), DMA_FROM_DEVICE);
+	if (dma_mapping_error(pf_dev, ident_pa)) {
+		dev_err(dev, "Failed to map ident space\n");
+		return -ENOMEM;
+	}
+
+	ident_cmd.ident_pa = cpu_to_le64(ident_pa);
+	ident_cmd.len = cpu_to_le32(sizeof(vdpa_aux->ident));
+	err = vdpa_aux->padev->ops->adminq_cmd(vdpa_aux->padev,
+					       (union pds_core_adminq_cmd *)&ident_cmd,
+					       sizeof(ident_cmd),
+					       (union pds_core_adminq_comp *)&ident_comp,
+					       0);
+	dma_unmap_single(pf_dev, ident_pa,
+			 sizeof(vdpa_aux->ident), DMA_FROM_DEVICE);
+	if (err) {
+		dev_err(dev, "Failed to ident hw, status %d: %pe\n",
+			ident_comp.status, ERR_PTR(err));
+		return err;
+	}
+
+	max_vqs = le16_to_cpu(vdpa_aux->ident.max_vqs);
+	mgmt->max_supported_vqs = min_t(u16, PDS_VDPA_MAX_QUEUES, max_vqs);
+	if (max_vqs > PDS_VDPA_MAX_QUEUES)
+		dev_info(dev, "FYI - Device supports more vqs (%d) than driver (%d)\n",
+			 max_vqs, PDS_VDPA_MAX_QUEUES);
+
+	mgmt->ops = &pds_vdpa_mgmt_dev_ops;
+	mgmt->id_table = pds_vdpa_id_table;
+	mgmt->device = dev;
+	mgmt->supported_features = le64_to_cpu(vdpa_aux->ident.hw_features);
+	mgmt->config_attr_mask = BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MACADDR);
+	mgmt->config_attr_mask |= BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP);
+
+	/* Set up interrupts now that we know how many we might want:
+	 * each vq gets one, then add another for a control queue if supported
+	 */
+	vdpa_aux->nintrs = mgmt->max_supported_vqs;
+	if (mgmt->supported_features & BIT_ULL(VIRTIO_NET_F_CTRL_VQ))
+		vdpa_aux->nintrs++;
+
+	err = pci_alloc_irq_vectors(pdev, vdpa_aux->nintrs, vdpa_aux->nintrs,
+				    PCI_IRQ_MSIX);
+	if (err < 0) {
+		dev_err(dev, "Couldn't get %d msix vectors: %pe\n",
+			vdpa_aux->nintrs, ERR_PTR(err));
+		return err;
+	}
+	vdpa_aux->nintrs = err;
+
+	return 0;
+}
diff --git a/drivers/vdpa/pds/vdpa_dev.h b/drivers/vdpa/pds/vdpa_dev.h
new file mode 100644
index 000000000000..97fab833a0aa
--- /dev/null
+++ b/drivers/vdpa/pds/vdpa_dev.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Advanced Micro Devices, Inc */
+
+#ifndef _VDPA_DEV_H_
+#define _VDPA_DEV_H_
+
+#define PDS_VDPA_MAX_QUEUES	65
+
+struct pds_vdpa_device {
+	struct vdpa_device vdpa_dev;
+	struct pds_vdpa_aux *vdpa_aux;
+};
+
+int pds_vdpa_get_mgmt_info(struct pds_vdpa_aux *vdpa_aux);
+#endif /* _VDPA_DEV_H_ */
diff --git a/include/linux/pds/pds_vdpa.h b/include/linux/pds/pds_vdpa.h
index b5154e3b298e..3f7c08551163 100644
--- a/include/linux/pds/pds_vdpa.h
+++ b/include/linux/pds/pds_vdpa.h
@@ -9,4 +9,96 @@
 #define PDS_DEV_TYPE_VDPA_STR	"vDPA"
 #define PDS_VDPA_DEV_NAME	PDS_CORE_DRV_NAME "." PDS_DEV_TYPE_VDPA_STR
 
+/*
+ * enum pds_vdpa_cmd_opcode - vDPA Device commands
+ */
+enum pds_vdpa_cmd_opcode {
+	PDS_VDPA_CMD_INIT		= 48,
+	PDS_VDPA_CMD_IDENT		= 49,
+	PDS_VDPA_CMD_RESET		= 51,
+	PDS_VDPA_CMD_VQ_RESET		= 52,
+	PDS_VDPA_CMD_VQ_INIT		= 53,
+	PDS_VDPA_CMD_STATUS_UPDATE	= 54,
+	PDS_VDPA_CMD_SET_FEATURES	= 55,
+	PDS_VDPA_CMD_SET_ATTR		= 56,
+	PDS_VDPA_CMD_VQ_SET_STATE	= 57,
+	PDS_VDPA_CMD_VQ_GET_STATE	= 58,
+};
+
+/**
+ * struct pds_vdpa_cmd - generic command
+ * @opcode:	Opcode
+ * @vdpa_index:	Index for vdpa subdevice
+ * @vf_id:	VF id
+ */
+struct pds_vdpa_cmd {
+	u8     opcode;
+	u8     vdpa_index;
+	__le16 vf_id;
+};
+
+/**
+ * struct pds_vdpa_comp - generic command completion
+ * @status:	Status of the command (enum pds_core_status_code)
+ * @rsvd:	Word boundary padding
+ * @color:	Color bit
+ */
+struct pds_vdpa_comp {
+	u8 status;
+	u8 rsvd[14];
+	u8 color;
+};
+
+/**
+ * struct pds_vdpa_init_cmd - INIT command
+ * @opcode:	Opcode PDS_VDPA_CMD_INIT
+ * @vdpa_index: Index for vdpa subdevice
+ * @vf_id:	VF id
+ * @len:	length of config info DMA space
+ * @config_pa:	address for DMA of virtio config struct
+ */
+struct pds_vdpa_init_cmd {
+	u8     opcode;
+	u8     vdpa_index;
+	__le16 vf_id;
+	__le32 len;
+	__le64 config_pa;
+};
+
+/**
+ * struct pds_vdpa_ident - vDPA identification data
+ * @hw_features:	vDPA features supported by device
+ * @max_vqs:		max queues available (2 queues for a single queuepair)
+ * @max_qlen:		log(2) of maximum number of descriptors
+ * @min_qlen:		log(2) of minimum number of descriptors
+ *
+ * This struct is used in a DMA block that is set up for the PDS_VDPA_CMD_IDENT
+ * transaction.  Set up the DMA block and send the address in the IDENT cmd
+ * data; the DSC will write the ident information, and we can remove the DMA
+ * block after reading the answer.  If the completion status is 0 the data is
+ * valid, else there was an error and the data should be considered invalid.
+ */
+struct pds_vdpa_ident {
+	__le64 hw_features;
+	__le16 max_vqs;
+	__le16 max_qlen;
+	__le16 min_qlen;
+};
+
+/**
+ * struct pds_vdpa_ident_cmd - IDENT command
+ * @opcode:	Opcode PDS_VDPA_CMD_IDENT
+ * @rsvd:       Word boundary padding
+ * @vf_id:	VF id
+ * @len:	length of ident info DMA space
+ * @ident_pa:	address for DMA of ident info (struct pds_vdpa_ident)
+ *			only used for this transaction, then forgotten by DSC
+ */
+struct pds_vdpa_ident_cmd {
+	u8     opcode;
+	u8     rsvd;
+	__le16 vf_id;
+	__le32 len;
+	__le64 ident_pa;
+};
 #endif /* _PDS_VDPA_IF_H_ */
-- 
2.17.1



* [PATCH RFC v2 virtio 3/7] pds_vdpa: virtio bar setup for vdpa
From: Shannon Nelson @ 2023-03-09  1:30 UTC (permalink / raw)
  To: jasowang, mst, virtualization, shannon.nelson, brett.creeley,
	davem, netdev, kuba
  Cc: drivers

The PDS vDPA device has a virtio BAR for describing itself, and
the pds_vdpa driver needs to access it.  Here we copy liberally
from the existing drivers/virtio/virtio_pci_modern_dev.c as it
has what we need, but we need to modify it so that it can work
with our device id and so we can use our own DMA mask.

We suspect there is room for discussion here about making the
existing code a little more flexible, but we thought we'd at
least start the discussion here.

Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
---
 drivers/vdpa/pds/Makefile     |   1 +
 drivers/vdpa/pds/aux_drv.c    |  14 ++
 drivers/vdpa/pds/aux_drv.h    |   1 +
 drivers/vdpa/pds/debugfs.c    |   1 +
 drivers/vdpa/pds/vdpa_dev.c   |   1 +
 drivers/vdpa/pds/virtio_pci.c | 281 ++++++++++++++++++++++++++++++++++
 drivers/vdpa/pds/virtio_pci.h |   8 +
 7 files changed, 307 insertions(+)
 create mode 100644 drivers/vdpa/pds/virtio_pci.c
 create mode 100644 drivers/vdpa/pds/virtio_pci.h

diff --git a/drivers/vdpa/pds/Makefile b/drivers/vdpa/pds/Makefile
index 13b50394ec64..ca2efa8c6eb5 100644
--- a/drivers/vdpa/pds/Makefile
+++ b/drivers/vdpa/pds/Makefile
@@ -4,6 +4,7 @@
 obj-$(CONFIG_PDS_VDPA) := pds_vdpa.o
 
 pds_vdpa-y := aux_drv.o \
+	      virtio_pci.o \
 	      vdpa_dev.o
 
 pds_vdpa-$(CONFIG_DEBUG_FS) += debugfs.o
diff --git a/drivers/vdpa/pds/aux_drv.c b/drivers/vdpa/pds/aux_drv.c
index 63e40ae68211..28158d0d98a5 100644
--- a/drivers/vdpa/pds/aux_drv.c
+++ b/drivers/vdpa/pds/aux_drv.c
@@ -4,6 +4,7 @@
 #include <linux/auxiliary_bus.h>
 #include <linux/pci.h>
 #include <linux/vdpa.h>
+#include <linux/virtio_pci_modern.h>
 
 #include <linux/pds/pds_core.h>
 #include <linux/pds/pds_auxbus.h>
@@ -12,6 +13,7 @@
 #include "aux_drv.h"
 #include "debugfs.h"
 #include "vdpa_dev.h"
+#include "virtio_pci.h"
 
 static const struct auxiliary_device_id pds_vdpa_id_table[] = {
 	{ .name = PDS_VDPA_DEV_NAME, },
@@ -49,8 +51,19 @@ static int pds_vdpa_probe(struct auxiliary_device *aux_dev,
 	if (err)
 		goto err_aux_unreg;
 
+	/* Find the virtio configuration */
+	vdpa_aux->vd_mdev.pci_dev = padev->vf->pdev;
+	err = pds_vdpa_probe_virtio(&vdpa_aux->vd_mdev);
+	if (err) {
+		dev_err(dev, "Unable to probe for virtio configuration: %pe\n",
+			ERR_PTR(err));
+		goto err_free_mgmt_info;
+	}
+
 	return 0;
 
+err_free_mgmt_info:
+	pci_free_irq_vectors(padev->vf->pdev);
 err_aux_unreg:
 	padev->ops->unregister_client(padev);
 err_free_mem:
@@ -65,6 +78,7 @@ static void pds_vdpa_remove(struct auxiliary_device *aux_dev)
 	struct pds_vdpa_aux *vdpa_aux = auxiliary_get_drvdata(aux_dev);
 	struct device *dev = &aux_dev->dev;
 
+	pds_vdpa_remove_virtio(&vdpa_aux->vd_mdev);
 	pci_free_irq_vectors(vdpa_aux->padev->vf->pdev);
 
 	vdpa_aux->padev->ops->unregister_client(vdpa_aux->padev);
diff --git a/drivers/vdpa/pds/aux_drv.h b/drivers/vdpa/pds/aux_drv.h
index 94ba7abcaa43..87ac3c01c476 100644
--- a/drivers/vdpa/pds/aux_drv.h
+++ b/drivers/vdpa/pds/aux_drv.h
@@ -16,6 +16,7 @@ struct pds_vdpa_aux {
 
 	int vf_id;
 	struct dentry *dentry;
+	struct virtio_pci_modern_device vd_mdev;
 
 	int nintrs;
 };
diff --git a/drivers/vdpa/pds/debugfs.c b/drivers/vdpa/pds/debugfs.c
index 7b7e90fd6578..aa5e9677fe74 100644
--- a/drivers/vdpa/pds/debugfs.c
+++ b/drivers/vdpa/pds/debugfs.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /* Copyright(c) 2023 Advanced Micro Devices, Inc */
 
+#include <linux/virtio_pci_modern.h>
 #include <linux/vdpa.h>
 
 #include <linux/pds/pds_core.h>
diff --git a/drivers/vdpa/pds/vdpa_dev.c b/drivers/vdpa/pds/vdpa_dev.c
index bd840688503c..15d623297203 100644
--- a/drivers/vdpa/pds/vdpa_dev.c
+++ b/drivers/vdpa/pds/vdpa_dev.c
@@ -4,6 +4,7 @@
 #include <linux/pci.h>
 #include <linux/vdpa.h>
 #include <uapi/linux/vdpa.h>
+#include <linux/virtio_pci_modern.h>
 
 #include <linux/pds/pds_core.h>
 #include <linux/pds/pds_adminq.h>
diff --git a/drivers/vdpa/pds/virtio_pci.c b/drivers/vdpa/pds/virtio_pci.c
new file mode 100644
index 000000000000..cb879619dac3
--- /dev/null
+++ b/drivers/vdpa/pds/virtio_pci.c
@@ -0,0 +1,281 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+
+/*
+ * adapted from drivers/virtio/virtio_pci_modern_dev.c, v6.0-rc1
+ */
+
+#include <linux/virtio_pci_modern.h>
+#include <linux/pci.h>
+
+#include "virtio_pci.h"
+
+/*
+ * pds_vdpa_map_capability - map a part of virtio pci capability
+ * @mdev: the modern virtio-pci device
+ * @off: offset of the capability
+ * @minlen: minimal length of the capability
+ * @align: align requirement
+ * @start: start from the capability
+ * @size: map size
+ * @len: the length that is actually mapped
+ * @pa: physical address of the capability
+ *
+ * Returns the io address for the part of the capability
+ */
+static void __iomem *
+pds_vdpa_map_capability(struct virtio_pci_modern_device *mdev, int off,
+			size_t minlen, u32 align, u32 start, u32 size,
+			size_t *len, resource_size_t *pa)
+{
+	struct pci_dev *dev = mdev->pci_dev;
+	u8 bar;
+	u32 offset, length;
+	void __iomem *p;
+
+	pci_read_config_byte(dev, off + offsetof(struct virtio_pci_cap,
+						 bar),
+			     &bar);
+	pci_read_config_dword(dev, off + offsetof(struct virtio_pci_cap, offset),
+			      &offset);
+	pci_read_config_dword(dev, off + offsetof(struct virtio_pci_cap, length),
+			      &length);
+
+	/* Check if the BAR may have changed since we requested the region. */
+	if (bar >= PCI_STD_NUM_BARS || !(mdev->modern_bars & (1 << bar))) {
+		dev_err(&dev->dev,
+			"virtio_pci: bar unexpectedly changed to %u\n", bar);
+		return NULL;
+	}
+
+	if (length <= start) {
+		dev_err(&dev->dev,
+			"virtio_pci: bad capability len %u (>%u expected)\n",
+			length, start);
+		return NULL;
+	}
+
+	if (length - start < minlen) {
+		dev_err(&dev->dev,
+			"virtio_pci: bad capability len %u (>=%zu expected)\n",
+			length, minlen);
+		return NULL;
+	}
+
+	length -= start;
+
+	if (start + offset < offset) {
+		dev_err(&dev->dev,
+			"virtio_pci: map wrap-around %u+%u\n",
+			start, offset);
+		return NULL;
+	}
+
+	offset += start;
+
+	if (offset & (align - 1)) {
+		dev_err(&dev->dev,
+			"virtio_pci: offset %u not aligned to %u\n",
+			offset, align);
+		return NULL;
+	}
+
+	if (length > size)
+		length = size;
+
+	if (len)
+		*len = length;
+
+	if (minlen + offset < minlen ||
+	    minlen + offset > pci_resource_len(dev, bar)) {
+		dev_err(&dev->dev,
+			"virtio_pci: map virtio %zu@%u out of range on bar %i length %lu\n",
+			minlen, offset,
+			bar, (unsigned long)pci_resource_len(dev, bar));
+		return NULL;
+	}
+
+	p = pci_iomap_range(dev, bar, offset, length);
+	if (!p)
+		dev_err(&dev->dev,
+			"virtio_pci: unable to map virtio %u@%u on bar %i\n",
+			length, offset, bar);
+	else if (pa)
+		*pa = pci_resource_start(dev, bar) + offset;
+
+	return p;
+}
+
+/**
+ * virtio_pci_find_capability - walk capabilities to find device info.
+ * @dev: the pci device
+ * @cfg_type: the VIRTIO_PCI_CAP_* value we seek
+ * @ioresource_types: IORESOURCE_MEM and/or IORESOURCE_IO.
+ * @bars: the bitmask of BARs
+ *
+ * Returns offset of the capability, or 0.
+ */
+static inline int virtio_pci_find_capability(struct pci_dev *dev, u8 cfg_type,
+					     u32 ioresource_types, int *bars)
+{
+	int pos;
+
+	for (pos = pci_find_capability(dev, PCI_CAP_ID_VNDR);
+	     pos > 0;
+	     pos = pci_find_next_capability(dev, pos, PCI_CAP_ID_VNDR)) {
+		u8 type, bar;
+
+		pci_read_config_byte(dev, pos + offsetof(struct virtio_pci_cap,
+							 cfg_type),
+				     &type);
+		pci_read_config_byte(dev, pos + offsetof(struct virtio_pci_cap,
+							 bar),
+				     &bar);
+
+		/* Ignore structures with reserved BAR values */
+		if (bar >= PCI_STD_NUM_BARS)
+			continue;
+
+		if (type == cfg_type) {
+			if (pci_resource_len(dev, bar) &&
+			    pci_resource_flags(dev, bar) & ioresource_types) {
+				*bars |= (1 << bar);
+				return pos;
+			}
+		}
+	}
+	return 0;
+}
+
+/*
+ * pds_vdpa_probe_virtio: probe the modern virtio pci device; note that the
+ * caller is required to enable the PCI device before calling this function.
+ * @mdev: the modern virtio-pci device
+ *
+ * Return 0 on success, negative error code on failure
+ */
+int pds_vdpa_probe_virtio(struct virtio_pci_modern_device *mdev)
+{
+	struct pci_dev *pci_dev = mdev->pci_dev;
+	int err, common, isr, notify, device;
+	u32 notify_length;
+	u32 notify_offset;
+
+	/* check for a common config: if not, use legacy mode (bar 0). */
+	common = virtio_pci_find_capability(pci_dev, VIRTIO_PCI_CAP_COMMON_CFG,
+					    IORESOURCE_IO | IORESOURCE_MEM,
+					    &mdev->modern_bars);
+	if (!common) {
+		dev_info(&pci_dev->dev,
+			 "virtio_pci: missing common config\n");
+		return -ENODEV;
+	}
+
+	/* If common is there, these should be too... */
+	isr = virtio_pci_find_capability(pci_dev, VIRTIO_PCI_CAP_ISR_CFG,
+					 IORESOURCE_IO | IORESOURCE_MEM,
+					 &mdev->modern_bars);
+	notify = virtio_pci_find_capability(pci_dev, VIRTIO_PCI_CAP_NOTIFY_CFG,
+					    IORESOURCE_IO | IORESOURCE_MEM,
+					    &mdev->modern_bars);
+	if (!isr || !notify) {
+		dev_err(&pci_dev->dev,
+			"virtio_pci: missing capabilities %i/%i/%i\n",
+			common, isr, notify);
+		return -EINVAL;
+	}
+
+	/* Device capability is only mandatory for devices that have
+	 * device-specific configuration.
+	 */
+	device = virtio_pci_find_capability(pci_dev, VIRTIO_PCI_CAP_DEVICE_CFG,
+					    IORESOURCE_IO | IORESOURCE_MEM,
+					    &mdev->modern_bars);
+
+	err = pci_request_selected_regions(pci_dev, mdev->modern_bars,
+					   "virtio-pci-modern");
+	if (err)
+		return err;
+
+	err = -EINVAL;
+	mdev->common = pds_vdpa_map_capability(mdev, common,
+					       sizeof(struct virtio_pci_common_cfg),
+					       4, 0,
+					       sizeof(struct virtio_pci_common_cfg),
+					       NULL, NULL);
+	if (!mdev->common)
+		goto err_map_common;
+	mdev->isr = pds_vdpa_map_capability(mdev, isr, sizeof(u8), 1,
+					    0, 1, NULL, NULL);
+	if (!mdev->isr)
+		goto err_map_isr;
+
+	/* Read notify_off_multiplier from config space. */
+	pci_read_config_dword(pci_dev,
+			      notify + offsetof(struct virtio_pci_notify_cap,
+						notify_off_multiplier),
+			      &mdev->notify_offset_multiplier);
+	/* Read notify length and offset from config space. */
+	pci_read_config_dword(pci_dev,
+			      notify + offsetof(struct virtio_pci_notify_cap,
+						cap.length),
+			      &notify_length);
+
+	pci_read_config_dword(pci_dev,
+			      notify + offsetof(struct virtio_pci_notify_cap,
+						cap.offset),
+			      &notify_offset);
+
+	/* We don't know how many VQs we'll map ahead of time.
+	 * If notify length is small, map it all now.
+	 * Otherwise, map each VQ individually later.
+	 */
+	if ((u64)notify_length + (notify_offset % PAGE_SIZE) <= PAGE_SIZE) {
+		mdev->notify_base = pds_vdpa_map_capability(mdev, notify,
+							    2, 2,
+							    0, notify_length,
+							    &mdev->notify_len,
+							    &mdev->notify_pa);
+		if (!mdev->notify_base)
+			goto err_map_notify;
+	} else {
+		mdev->notify_map_cap = notify;
+	}
+
+	/* Again, we don't know how much we should map, but PAGE_SIZE
+	 * is more than enough for all existing devices.
+	 */
+	if (device) {
+		mdev->device = pds_vdpa_map_capability(mdev, device, 0, 4,
+						       0, PAGE_SIZE,
+						       &mdev->device_len,
+						       NULL);
+		if (!mdev->device)
+			goto err_map_device;
+	}
+
+	return 0;
+
+err_map_device:
+	if (mdev->notify_base)
+		pci_iounmap(pci_dev, mdev->notify_base);
+err_map_notify:
+	pci_iounmap(pci_dev, mdev->isr);
+err_map_isr:
+	pci_iounmap(pci_dev, mdev->common);
+err_map_common:
+	pci_release_selected_regions(pci_dev, mdev->modern_bars);
+	return err;
+}
+
+void pds_vdpa_remove_virtio(struct virtio_pci_modern_device *mdev)
+{
+	struct pci_dev *pci_dev = mdev->pci_dev;
+
+	if (mdev->device)
+		pci_iounmap(pci_dev, mdev->device);
+	if (mdev->notify_base)
+		pci_iounmap(pci_dev, mdev->notify_base);
+	pci_iounmap(pci_dev, mdev->isr);
+	pci_iounmap(pci_dev, mdev->common);
+	pci_release_selected_regions(pci_dev, mdev->modern_bars);
+}
diff --git a/drivers/vdpa/pds/virtio_pci.h b/drivers/vdpa/pds/virtio_pci.h
new file mode 100644
index 000000000000..f017cfa1173c
--- /dev/null
+++ b/drivers/vdpa/pds/virtio_pci.h
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Advanced Micro Devices, Inc */
+
+#ifndef _PDS_VIRTIO_PCI_H_
+#define _PDS_VIRTIO_PCI_H_
+int pds_vdpa_probe_virtio(struct virtio_pci_modern_device *mdev);
+void pds_vdpa_remove_virtio(struct virtio_pci_modern_device *mdev);
+#endif /* _PDS_VIRTIO_PCI_H_ */
-- 
2.17.1



* [PATCH RFC v2 virtio 4/7] pds_vdpa: add vdpa config client commands
From: Shannon Nelson @ 2023-03-09  1:30 UTC (permalink / raw)
  To: jasowang, mst, virtualization, shannon.nelson, brett.creeley,
	davem, netdev, kuba
  Cc: drivers

These are the adminq commands that will be needed for
setting up and using the vDPA device.

Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
---
 drivers/vdpa/pds/Makefile    |   1 +
 drivers/vdpa/pds/cmds.c      | 207 +++++++++++++++++++++++++++++++++++
 drivers/vdpa/pds/cmds.h      |  16 +++
 drivers/vdpa/pds/vdpa_dev.h  |  36 +++++-
 include/linux/pds/pds_vdpa.h | 175 +++++++++++++++++++++++++++++
 5 files changed, 434 insertions(+), 1 deletion(-)
 create mode 100644 drivers/vdpa/pds/cmds.c
 create mode 100644 drivers/vdpa/pds/cmds.h

diff --git a/drivers/vdpa/pds/Makefile b/drivers/vdpa/pds/Makefile
index ca2efa8c6eb5..7211eba3d942 100644
--- a/drivers/vdpa/pds/Makefile
+++ b/drivers/vdpa/pds/Makefile
@@ -4,6 +4,7 @@
 obj-$(CONFIG_PDS_VDPA) := pds_vdpa.o
 
 pds_vdpa-y := aux_drv.o \
+	      cmds.o \
 	      virtio_pci.o \
 	      vdpa_dev.o
 
diff --git a/drivers/vdpa/pds/cmds.c b/drivers/vdpa/pds/cmds.c
new file mode 100644
index 000000000000..45410739107c
--- /dev/null
+++ b/drivers/vdpa/pds/cmds.c
@@ -0,0 +1,207 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Advanced Micro Devices, Inc */
+
+#include <linux/vdpa.h>
+#include <linux/virtio_pci_modern.h>
+
+#include <linux/pds/pds_core_if.h>
+#include <linux/pds/pds_adminq.h>
+#include <linux/pds/pds_auxbus.h>
+#include <linux/pds/pds_vdpa.h>
+
+#include "vdpa_dev.h"
+#include "aux_drv.h"
+#include "cmds.h"
+
+int pds_vdpa_init_hw(struct pds_vdpa_device *pdsv)
+{
+	struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
+	struct device *dev = &padev->aux_dev.dev;
+	struct pds_vdpa_init_cmd init_cmd = {
+		.opcode = PDS_VDPA_CMD_INIT,
+		.vdpa_index = pdsv->vdpa_index,
+		.vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
+		.len = cpu_to_le32(sizeof(struct virtio_net_config)),
+		.config_pa = 0,   /* we use the PCI space, not an alternate space */
+	};
+	struct pds_vdpa_comp init_comp = {0};
+	int err;
+
+	/* Initialize the vdpa/virtio device */
+	err = padev->ops->adminq_cmd(padev,
+				     (union pds_core_adminq_cmd *)&init_cmd,
+				     sizeof(init_cmd),
+				     (union pds_core_adminq_comp *)&init_comp,
+				     0);
+	if (err)
+		dev_err(dev, "Failed to init hw, status %d: %pe\n",
+			init_comp.status, ERR_PTR(err));
+
+	return err;
+}
+
+int pds_vdpa_cmd_reset(struct pds_vdpa_device *pdsv)
+{
+	struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
+	struct device *dev = &padev->aux_dev.dev;
+	struct pds_vdpa_cmd cmd = {
+		.opcode = PDS_VDPA_CMD_RESET,
+		.vdpa_index = pdsv->vdpa_index,
+		.vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
+	};
+	struct pds_vdpa_comp comp = {0};
+	int err;
+
+	err = padev->ops->adminq_cmd(padev,
+				     (union pds_core_adminq_cmd *)&cmd,
+				     sizeof(cmd),
+				     (union pds_core_adminq_comp *)&comp,
+				     0);
+	if (err)
+		dev_err(dev, "Failed to reset hw, status %d: %pe\n",
+			comp.status, ERR_PTR(err));
+
+	return err;
+}
+
+int pds_vdpa_cmd_set_mac(struct pds_vdpa_device *pdsv, u8 *mac)
+{
+	struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
+	struct device *dev = &padev->aux_dev.dev;
+	struct pds_vdpa_setattr_cmd cmd = {
+		.opcode = PDS_VDPA_CMD_SET_ATTR,
+		.vdpa_index = pdsv->vdpa_index,
+		.vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
+		.attr = PDS_VDPA_ATTR_MAC,
+	};
+	struct pds_vdpa_comp comp = {0};
+	int err;
+
+	ether_addr_copy(cmd.mac, mac);
+	err = padev->ops->adminq_cmd(padev,
+				     (union pds_core_adminq_cmd *)&cmd,
+				     sizeof(cmd),
+				     (union pds_core_adminq_comp *)&comp,
+				     0);
+	if (err)
+		dev_err(dev, "Failed to set mac address %pM, status %d: %pe\n",
+			mac, comp.status, ERR_PTR(err));
+
+	return err;
+}
+
+int pds_vdpa_cmd_set_max_vq_pairs(struct pds_vdpa_device *pdsv, u16 max_vqp)
+{
+	struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
+	struct device *dev = &padev->aux_dev.dev;
+	struct pds_vdpa_setattr_cmd cmd = {
+		.opcode = PDS_VDPA_CMD_SET_ATTR,
+		.vdpa_index = pdsv->vdpa_index,
+		.vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
+		.attr = PDS_VDPA_ATTR_MAX_VQ_PAIRS,
+		.max_vq_pairs = cpu_to_le16(max_vqp),
+	};
+	struct pds_vdpa_comp comp = {0};
+	int err;
+
+	err = padev->ops->adminq_cmd(padev,
+				     (union pds_core_adminq_cmd *)&cmd,
+				     sizeof(cmd),
+				     (union pds_core_adminq_comp *)&comp,
+				     0);
+	if (err)
+		dev_err(dev, "Failed to set max vq pairs %u, status %d: %pe\n",
+			max_vqp, comp.status, ERR_PTR(err));
+
+	return err;
+}
+
+int pds_vdpa_cmd_init_vq(struct pds_vdpa_device *pdsv, u16 qid,
+			 struct pds_vdpa_vq_info *vq_info)
+{
+	struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
+	struct device *dev = &padev->aux_dev.dev;
+	struct pds_vdpa_vq_init_comp comp = {0};
+	struct pds_vdpa_vq_init_cmd cmd = {
+		.opcode = PDS_VDPA_CMD_VQ_INIT,
+		.vdpa_index = pdsv->vdpa_index,
+		.vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
+		.qid = cpu_to_le16(qid),
+		.len = cpu_to_le16(ilog2(vq_info->q_len)),
+		.desc_addr = cpu_to_le64(vq_info->desc_addr),
+		.avail_addr = cpu_to_le64(vq_info->avail_addr),
+		.used_addr = cpu_to_le64(vq_info->used_addr),
+		.intr_index = cpu_to_le16(qid),
+	};
+	int err;
+
+	dev_dbg(dev, "%s: qid %d len %d desc_addr %#llx avail_addr %#llx used_addr %#llx\n",
+		__func__, qid, ilog2(vq_info->q_len),
+		vq_info->desc_addr, vq_info->avail_addr, vq_info->used_addr);
+
+	err = padev->ops->adminq_cmd(padev,
+				     (union pds_core_adminq_cmd *)&cmd,
+				     sizeof(cmd),
+				     (union pds_core_adminq_comp *)&comp,
+				     0);
+	if (err) {
+		dev_err(dev, "Failed to init vq %d, status %d: %pe\n",
+			qid, comp.status, ERR_PTR(err));
+		return err;
+	}
+
+	vq_info->hw_qtype = comp.hw_qtype;
+	vq_info->hw_qindex = le16_to_cpu(comp.hw_qindex);
+
+	return 0;
+}
+
+int pds_vdpa_cmd_reset_vq(struct pds_vdpa_device *pdsv, u16 qid)
+{
+	struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
+	struct device *dev = &padev->aux_dev.dev;
+	struct pds_vdpa_vq_reset_cmd cmd = {
+		.opcode = PDS_VDPA_CMD_VQ_RESET,
+		.vdpa_index = pdsv->vdpa_index,
+		.vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
+		.qid = cpu_to_le16(qid),
+	};
+	struct pds_vdpa_comp comp = {0};
+	int err;
+
+	err = padev->ops->adminq_cmd(padev,
+				     (union pds_core_adminq_cmd *)&cmd,
+				     sizeof(cmd),
+				     (union pds_core_adminq_comp *)&comp,
+				     0);
+	if (err)
+		dev_err(dev, "Failed to reset vq %d, status %d: %pe\n",
+			qid, comp.status, ERR_PTR(err));
+
+	return err;
+}
+
+int pds_vdpa_cmd_set_features(struct pds_vdpa_device *pdsv, u64 features)
+{
+	struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
+	struct device *dev = &padev->aux_dev.dev;
+	struct pds_vdpa_set_features_cmd cmd = {
+		.opcode = PDS_VDPA_CMD_SET_FEATURES,
+		.vdpa_index = pdsv->vdpa_index,
+		.vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
+		.features = cpu_to_le64(features),
+	};
+	struct pds_vdpa_comp comp = {0};
+	int err;
+
+	err = padev->ops->adminq_cmd(padev,
+				     (union pds_core_adminq_cmd *)&cmd,
+				     sizeof(cmd),
+				     (union pds_core_adminq_comp *)&comp,
+				     0);
+	if (err)
+		dev_err(dev, "Failed to set features %#llx, status %d: %pe\n",
+			features, comp.status, ERR_PTR(err));
+
+	return err;
+}
diff --git a/drivers/vdpa/pds/cmds.h b/drivers/vdpa/pds/cmds.h
new file mode 100644
index 000000000000..72e19f4efde6
--- /dev/null
+++ b/drivers/vdpa/pds/cmds.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Advanced Micro Devices, Inc */
+
+#ifndef _VDPA_CMDS_H_
+#define _VDPA_CMDS_H_
+
+int pds_vdpa_init_hw(struct pds_vdpa_device *pdsv);
+
+int pds_vdpa_cmd_reset(struct pds_vdpa_device *pdsv);
+int pds_vdpa_cmd_set_mac(struct pds_vdpa_device *pdsv, u8 *mac);
+int pds_vdpa_cmd_set_max_vq_pairs(struct pds_vdpa_device *pdsv, u16 max_vqp);
+int pds_vdpa_cmd_init_vq(struct pds_vdpa_device *pdsv, u16 qid,
+			 struct pds_vdpa_vq_info *vq_info);
+int pds_vdpa_cmd_reset_vq(struct pds_vdpa_device *pdsv, u16 qid);
+int pds_vdpa_cmd_set_features(struct pds_vdpa_device *pdsv, u64 features);
+#endif /* _VDPA_CMDS_H_ */
diff --git a/drivers/vdpa/pds/vdpa_dev.h b/drivers/vdpa/pds/vdpa_dev.h
index 97fab833a0aa..33284ebe538c 100644
--- a/drivers/vdpa/pds/vdpa_dev.h
+++ b/drivers/vdpa/pds/vdpa_dev.h
@@ -4,11 +4,45 @@
 #ifndef _VDPA_DEV_H_
 #define _VDPA_DEV_H_
 
-#define PDS_VDPA_MAX_QUEUES	65
+#include <linux/pci.h>
+#include <linux/vdpa.h>
+
+struct pds_vdpa_vq_info {
+	bool ready;
+	u64 desc_addr;
+	u64 avail_addr;
+	u64 used_addr;
+	u32 q_len;
+	u16 qid;
+	int irq;
+	char irq_name[32];
+
+	void __iomem *notify;
+	dma_addr_t notify_pa;
+
+	u64 doorbell;
+	u16 avail_idx;
+	u16 used_idx;
+
+	u8 hw_qtype;
+	u16 hw_qindex;
 
+	struct vdpa_callback event_cb;
+	struct pds_vdpa_device *pdsv;
+};
+
+#define PDS_VDPA_MAX_QUEUES	65
+#define PDS_VDPA_MAX_QLEN	32768
 struct pds_vdpa_device {
 	struct vdpa_device vdpa_dev;
 	struct pds_vdpa_aux *vdpa_aux;
+
+	struct pds_vdpa_vq_info vqs[PDS_VDPA_MAX_QUEUES];
+	u64 req_features;		/* features requested by vdpa */
+	u64 actual_features;		/* features negotiated and in use */
+	u8 vdpa_index;			/* rsvd for future subdevice use */
+	u8 num_vqs;			/* num vqs in use */
+	struct vdpa_callback config_cb;
 };
 
 int pds_vdpa_get_mgmt_info(struct pds_vdpa_aux *vdpa_aux);
diff --git a/include/linux/pds/pds_vdpa.h b/include/linux/pds/pds_vdpa.h
index 3f7c08551163..b6a4cb4d3c6b 100644
--- a/include/linux/pds/pds_vdpa.h
+++ b/include/linux/pds/pds_vdpa.h
@@ -101,4 +101,179 @@ struct pds_vdpa_ident_cmd {
 	__le32 len;
 	__le64 ident_pa;
 };
+
+/**
+ * struct pds_vdpa_status_cmd - STATUS_UPDATE command
+ * @opcode:	Opcode PDS_VDPA_CMD_STATUS_UPDATE
+ * @vdpa_index: Index for vdpa subdevice
+ * @vf_id:	VF id
+ * @status:	new status bits
+ */
+struct pds_vdpa_status_cmd {
+	u8     opcode;
+	u8     vdpa_index;
+	__le16 vf_id;
+	u8     status;
+};
+
+/**
+ * enum pds_vdpa_attr - List of VDPA device attributes
+ * @PDS_VDPA_ATTR_MAC:          MAC address
+ * @PDS_VDPA_ATTR_MAX_VQ_PAIRS: Max virtqueue pairs
+ */
+enum pds_vdpa_attr {
+	PDS_VDPA_ATTR_MAC          = 1,
+	PDS_VDPA_ATTR_MAX_VQ_PAIRS = 2,
+};
+
+/**
+ * struct pds_vdpa_setattr_cmd - SET_ATTR command
+ * @opcode:		Opcode PDS_VDPA_CMD_SET_ATTR
+ * @vdpa_index:		Index for vdpa subdevice
+ * @vf_id:		VF id
+ * @attr:		attribute to be changed (enum pds_vdpa_attr)
+ * @pad:		Word boundary padding
+ * @mac:		new mac address to be assigned as vdpa device address
+ * @max_vq_pairs:	new limit of virtqueue pairs
+ */
+struct pds_vdpa_setattr_cmd {
+	u8     opcode;
+	u8     vdpa_index;
+	__le16 vf_id;
+	u8     attr;
+	u8     pad[3];
+	union {
+		u8 mac[6];
+		__le16 max_vq_pairs;
+	} __packed;
+};
+
+/**
+ * struct pds_vdpa_vq_init_cmd - queue init command
+ * @opcode: Opcode PDS_VDPA_CMD_VQ_INIT
+ * @vdpa_index:	Index for vdpa subdevice
+ * @vf_id:	VF id
+ * @qid:	Queue id (bit0 clear = rx, bit0 set = tx, qid=N is ctrlq)
+ * @len:	log(2) of max descriptor count
+ * @desc_addr:	DMA address of descriptor area
+ * @avail_addr:	DMA address of available descriptors (aka driver area)
+ * @used_addr:	DMA address of used descriptors (aka device area)
+ * @intr_index:	interrupt index
+ */
+struct pds_vdpa_vq_init_cmd {
+	u8     opcode;
+	u8     vdpa_index;
+	__le16 vf_id;
+	__le16 qid;
+	__le16 len;
+	__le64 desc_addr;
+	__le64 avail_addr;
+	__le64 used_addr;
+	__le16 intr_index;
+};
+
+/**
+ * struct pds_vdpa_vq_init_comp - queue init completion
+ * @status:	Status of the command (enum pds_core_status_code)
+ * @hw_qtype:	HW queue type, used in doorbell selection
+ * @hw_qindex:	HW queue index, used in doorbell selection
+ * @rsvd:	Word boundary padding
+ * @color:	Color bit
+ */
+struct pds_vdpa_vq_init_comp {
+	u8     status;
+	u8     hw_qtype;
+	__le16 hw_qindex;
+	u8     rsvd[11];
+	u8     color;
+};
+
+/**
+ * struct pds_vdpa_vq_reset_cmd - queue reset command
+ * @opcode:	Opcode PDS_VDPA_CMD_VQ_RESET
+ * @vdpa_index:	Index for vdpa subdevice
+ * @vf_id:	VF id
+ * @qid:	Queue id
+ */
+struct pds_vdpa_vq_reset_cmd {
+	u8     opcode;
+	u8     vdpa_index;
+	__le16 vf_id;
+	__le16 qid;
+};
+
+/**
+ * struct pds_vdpa_set_features_cmd - set hw features
+ * @opcode: Opcode PDS_VDPA_CMD_SET_FEATURES
+ * @vdpa_index:	Index for vdpa subdevice
+ * @vf_id:	VF id
+ * @rsvd:       Word boundary padding
+ * @features:	Feature bit mask
+ */
+struct pds_vdpa_set_features_cmd {
+	u8     opcode;
+	u8     vdpa_index;
+	__le16 vf_id;
+	__le32 rsvd;
+	__le64 features;
+};
+
+/**
+ * struct pds_vdpa_vq_set_state_cmd - set vq state
+ * @opcode:	Opcode PDS_VDPA_CMD_VQ_SET_STATE
+ * @vdpa_index:	Index for vdpa subdevice
+ * @vf_id:	VF id
+ * @qid:	Queue id
+ * @avail:	Device avail index.
+ * @used:	Device used index.
+ *
+ * If the virtqueue uses packed descriptor format, then the avail and used
+ * index must have a wrap count.  The bits should be arranged like the upper
+ * 16 bits in the device available notification data: 15 bit index, 1 bit wrap.
+ */
+struct pds_vdpa_vq_set_state_cmd {
+	u8     opcode;
+	u8     vdpa_index;
+	__le16 vf_id;
+	__le16 qid;
+	__le16 avail;
+	__le16 used;
+};
+
+/**
+ * struct pds_vdpa_vq_get_state_cmd - get vq state
+ * @opcode:	Opcode PDS_VDPA_CMD_VQ_GET_STATE
+ * @vdpa_index:	Index for vdpa subdevice
+ * @vf_id:	VF id
+ * @qid:	Queue id
+ */
+struct pds_vdpa_vq_get_state_cmd {
+	u8     opcode;
+	u8     vdpa_index;
+	__le16 vf_id;
+	__le16 qid;
+};
+
+/**
+ * struct pds_vdpa_vq_get_state_comp - get vq state completion
+ * @status:	Status of the command (enum pds_core_status_code)
+ * @rsvd0:      Word boundary padding
+ * @avail:	Device avail index.
+ * @used:	Device used index.
+ * @rsvd:       Word boundary padding
+ * @color:	Color bit
+ *
+ * If the virtqueue uses packed descriptor format, then the avail and used
+ * index will have a wrap count.  The bits will be arranged like the "next"
+ * part of device available notification data: 15 bit index, 1 bit wrap.
+ */
+struct pds_vdpa_vq_get_state_comp {
+	u8     status;
+	u8     rsvd0;
+	__le16 avail;
+	__le16 used;
+	u8     rsvd[9];
+	u8     color;
+};
+
 #endif /* _PDS_VDPA_IF_H_ */
-- 
2.17.1



* [PATCH RFC v2 virtio 5/7] pds_vdpa: add support for vdpa and vdpamgmt interfaces
From: Shannon Nelson @ 2023-03-09  1:30 UTC (permalink / raw)
  To: jasowang, mst, virtualization, shannon.nelson, brett.creeley,
	davem, netdev, kuba
  Cc: drivers

This is the vDPA device support, where we advertise that we can
support the virtio queues and deal with the configuration work
through the pds_core's adminq.

Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
---
 drivers/vdpa/pds/aux_drv.c  |  15 +
 drivers/vdpa/pds/aux_drv.h  |   1 +
 drivers/vdpa/pds/debugfs.c  | 172 ++++++++++++
 drivers/vdpa/pds/debugfs.h  |   8 +
 drivers/vdpa/pds/vdpa_dev.c | 545 +++++++++++++++++++++++++++++++++++-
 5 files changed, 740 insertions(+), 1 deletion(-)

diff --git a/drivers/vdpa/pds/aux_drv.c b/drivers/vdpa/pds/aux_drv.c
index 28158d0d98a5..d706f06f7400 100644
--- a/drivers/vdpa/pds/aux_drv.c
+++ b/drivers/vdpa/pds/aux_drv.c
@@ -60,8 +60,21 @@ static int pds_vdpa_probe(struct auxiliary_device *aux_dev,
 		goto err_free_mgmt_info;
 	}
 
+	/* Let vdpa know that we can provide devices */
+	err = vdpa_mgmtdev_register(&vdpa_aux->vdpa_mdev);
+	if (err) {
+		dev_err(dev, "%s: Failed to initialize vdpa_mgmt interface: %pe\n",
+			__func__, ERR_PTR(err));
+		goto err_free_virtio;
+	}
+
+	pds_vdpa_debugfs_add_pcidev(vdpa_aux);
+	pds_vdpa_debugfs_add_ident(vdpa_aux);
+
 	return 0;
 
+err_free_virtio:
+	pds_vdpa_remove_virtio(&vdpa_aux->vd_mdev);
 err_free_mgmt_info:
 	pci_free_irq_vectors(padev->vf->pdev);
 err_aux_unreg:
@@ -78,11 +91,13 @@ static void pds_vdpa_remove(struct auxiliary_device *aux_dev)
 	struct pds_vdpa_aux *vdpa_aux = auxiliary_get_drvdata(aux_dev);
 	struct device *dev = &aux_dev->dev;
 
+	vdpa_mgmtdev_unregister(&vdpa_aux->vdpa_mdev);
 	pds_vdpa_remove_virtio(&vdpa_aux->vd_mdev);
 	pci_free_irq_vectors(vdpa_aux->padev->vf->pdev);
 
 	vdpa_aux->padev->ops->unregister_client(vdpa_aux->padev);
 
+	pds_vdpa_debugfs_del_vdpadev(vdpa_aux);
 	kfree(vdpa_aux);
 	auxiliary_set_drvdata(aux_dev, NULL);
 
diff --git a/drivers/vdpa/pds/aux_drv.h b/drivers/vdpa/pds/aux_drv.h
index 87ac3c01c476..1ab1ce64da7c 100644
--- a/drivers/vdpa/pds/aux_drv.h
+++ b/drivers/vdpa/pds/aux_drv.h
@@ -11,6 +11,7 @@ struct pds_vdpa_aux {
 	struct pds_auxiliary_dev *padev;
 
 	struct vdpa_mgmt_dev vdpa_mdev;
+	struct pds_vdpa_device *pdsv;
 
 	struct pds_vdpa_ident ident;
 
diff --git a/drivers/vdpa/pds/debugfs.c b/drivers/vdpa/pds/debugfs.c
index aa5e9677fe74..b3ee4f42f3b6 100644
--- a/drivers/vdpa/pds/debugfs.c
+++ b/drivers/vdpa/pds/debugfs.c
@@ -9,6 +9,7 @@
 #include <linux/pds/pds_auxbus.h>
 
 #include "aux_drv.h"
+#include "vdpa_dev.h"
 #include "debugfs.h"
 
 #ifdef CONFIG_DEBUG_FS
@@ -26,4 +27,175 @@ void pds_vdpa_debugfs_destroy(void)
 	dbfs_dir = NULL;
 }
 
+#define PRINT_SBIT_NAME(__seq, __f, __name)                     \
+	do {                                                    \
+		if ((__f) & (__name))                               \
+			seq_printf(__seq, " %s", &#__name[16]); \
+	} while (0)
+
+static void print_status_bits(struct seq_file *seq, u16 status)
+{
+	seq_puts(seq, "status:");
+	PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_ACKNOWLEDGE);
+	PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_DRIVER);
+	PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_DRIVER_OK);
+	PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_FEATURES_OK);
+	PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_NEEDS_RESET);
+	PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_FAILED);
+	seq_puts(seq, "\n");
+}
+
+#define PRINT_FBIT_NAME(__seq, __f, __name)                \
+	do {                                               \
+		if ((__f) & BIT_ULL(__name))                 \
+			seq_printf(__seq, " %s", #__name); \
+	} while (0)
+
+static void print_feature_bits(struct seq_file *seq, u64 features)
+{
+	seq_puts(seq, "features:");
+	PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_CSUM);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_GUEST_CSUM);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_CTRL_GUEST_OFFLOADS);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_MTU);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_MAC);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_GUEST_TSO4);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_GUEST_TSO6);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_GUEST_ECN);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_GUEST_UFO);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_HOST_TSO4);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_HOST_TSO6);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_HOST_ECN);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_HOST_UFO);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_MRG_RXBUF);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_STATUS);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_CTRL_VQ);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_CTRL_RX);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_CTRL_VLAN);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_CTRL_RX_EXTRA);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_GUEST_ANNOUNCE);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_MQ);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_CTRL_MAC_ADDR);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_HASH_REPORT);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_RSS);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_RSC_EXT);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_STANDBY);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_SPEED_DUPLEX);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_F_NOTIFY_ON_EMPTY);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_F_ANY_LAYOUT);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_F_VERSION_1);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_F_ACCESS_PLATFORM);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_F_RING_PACKED);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_F_ORDER_PLATFORM);
+	PRINT_FBIT_NAME(seq, features, VIRTIO_F_SR_IOV);
+	seq_puts(seq, "\n");
+}
+
+void pds_vdpa_debugfs_add_pcidev(struct pds_vdpa_aux *vdpa_aux)
+{
+	vdpa_aux->dentry = debugfs_create_dir(pci_name(vdpa_aux->padev->vf->pdev), dbfs_dir);
+}
+
+static int identity_show(struct seq_file *seq, void *v)
+{
+	struct pds_vdpa_aux *vdpa_aux = seq->private;
+	struct vdpa_mgmt_dev *mgmt;
+
+	seq_printf(seq, "aux_dev:            %s\n",
+		   dev_name(&vdpa_aux->padev->aux_dev.dev));
+
+	mgmt = &vdpa_aux->vdpa_mdev;
+	seq_printf(seq, "max_vqs:            %d\n", mgmt->max_supported_vqs);
+	seq_printf(seq, "config_attr_mask:   %#llx\n", mgmt->config_attr_mask);
+	seq_printf(seq, "supported_features: %#llx\n", mgmt->supported_features);
+	print_feature_bits(seq, mgmt->supported_features);
+
+	return 0;
+}
+DEFINE_SHOW_ATTRIBUTE(identity);
+
+void pds_vdpa_debugfs_add_ident(struct pds_vdpa_aux *vdpa_aux)
+{
+	debugfs_create_file("identity", 0400, vdpa_aux->dentry,
+			    vdpa_aux, &identity_fops);
+}
+
+static int config_show(struct seq_file *seq, void *v)
+{
+	struct pds_vdpa_device *pdsv = seq->private;
+	struct virtio_net_config vc;
+
+	memcpy_fromio(&vc, pdsv->vdpa_aux->vd_mdev.device,
+		      sizeof(struct virtio_net_config));
+
+	seq_printf(seq, "mac:                  %pM\n", vc.mac);
+	seq_printf(seq, "max_virtqueue_pairs:  %d\n",
+		   __virtio16_to_cpu(true, vc.max_virtqueue_pairs));
+	seq_printf(seq, "mtu:                  %d\n", __virtio16_to_cpu(true, vc.mtu));
+	seq_printf(seq, "speed:                %d\n", le32_to_cpu(vc.speed));
+	seq_printf(seq, "duplex:               %d\n", vc.duplex);
+	seq_printf(seq, "rss_max_key_size:     %d\n", vc.rss_max_key_size);
+	seq_printf(seq, "rss_max_indirection_table_length: %d\n",
+		   le16_to_cpu(vc.rss_max_indirection_table_length));
+	seq_printf(seq, "supported_hash_types: %#x\n",
+		   le32_to_cpu(vc.supported_hash_types));
+	seq_printf(seq, "vn_status:            %#x\n",
+		   __virtio16_to_cpu(true, vc.status));
+	print_status_bits(seq, __virtio16_to_cpu(true, vc.status));
+
+	seq_printf(seq, "req_features:         %#llx\n", pdsv->req_features);
+	print_feature_bits(seq, pdsv->req_features);
+	seq_printf(seq, "actual_features:      %#llx\n", pdsv->actual_features);
+	print_feature_bits(seq, pdsv->actual_features);
+	seq_printf(seq, "vdpa_index:           %d\n", pdsv->vdpa_index);
+	seq_printf(seq, "num_vqs:              %d\n", pdsv->num_vqs);
+
+	return 0;
+}
+DEFINE_SHOW_ATTRIBUTE(config);
+
+static int vq_show(struct seq_file *seq, void *v)
+{
+	struct pds_vdpa_vq_info *vq = seq->private;
+
+	seq_printf(seq, "ready:      %d\n", vq->ready);
+	seq_printf(seq, "desc_addr:  %#llx\n", vq->desc_addr);
+	seq_printf(seq, "avail_addr: %#llx\n", vq->avail_addr);
+	seq_printf(seq, "used_addr:  %#llx\n", vq->used_addr);
+	seq_printf(seq, "q_len:      %d\n", vq->q_len);
+	seq_printf(seq, "qid:        %d\n", vq->qid);
+
+	seq_printf(seq, "doorbell:   %#llx\n", vq->doorbell);
+	seq_printf(seq, "avail_idx:  %d\n", vq->avail_idx);
+	seq_printf(seq, "used_idx:   %d\n", vq->used_idx);
+	seq_printf(seq, "irq:        %d\n", vq->irq);
+	seq_printf(seq, "irq-name:   %s\n", vq->irq_name);
+
+	seq_printf(seq, "hw_qtype:   %d\n", vq->hw_qtype);
+	seq_printf(seq, "hw_qindex:  %d\n", vq->hw_qindex);
+
+	return 0;
+}
+DEFINE_SHOW_ATTRIBUTE(vq);
+
+void pds_vdpa_debugfs_add_vdpadev(struct pds_vdpa_aux *vdpa_aux)
+{
+	int i;
+
+	debugfs_create_file("config", 0400, vdpa_aux->dentry, vdpa_aux->pdsv, &config_fops);
+
+	for (i = 0; i < vdpa_aux->pdsv->num_vqs; i++) {
+		char name[8];
+
+		snprintf(name, sizeof(name), "vq%02d", i);
+		debugfs_create_file(name, 0400, vdpa_aux->dentry,
+				    &vdpa_aux->pdsv->vqs[i], &vq_fops);
+	}
+}
+
+void pds_vdpa_debugfs_del_vdpadev(struct pds_vdpa_aux *vdpa_aux)
+{
+	debugfs_remove_recursive(vdpa_aux->dentry);
+	vdpa_aux->dentry = NULL;
+}
 #endif /* CONFIG_DEBUG_FS */
diff --git a/drivers/vdpa/pds/debugfs.h b/drivers/vdpa/pds/debugfs.h
index fff078a869e5..23e8345add0d 100644
--- a/drivers/vdpa/pds/debugfs.h
+++ b/drivers/vdpa/pds/debugfs.h
@@ -10,9 +10,17 @@
 
 void pds_vdpa_debugfs_create(void);
 void pds_vdpa_debugfs_destroy(void);
+void pds_vdpa_debugfs_add_pcidev(struct pds_vdpa_aux *vdpa_aux);
+void pds_vdpa_debugfs_add_ident(struct pds_vdpa_aux *vdpa_aux);
+void pds_vdpa_debugfs_add_vdpadev(struct pds_vdpa_aux *vdpa_aux);
+void pds_vdpa_debugfs_del_vdpadev(struct pds_vdpa_aux *vdpa_aux);
 #else
 static inline void pds_vdpa_debugfs_create(void) { }
 static inline void pds_vdpa_debugfs_destroy(void) { }
+static inline void pds_vdpa_debugfs_add_pcidev(struct pds_vdpa_aux *vdpa_aux) { }
+static inline void pds_vdpa_debugfs_add_ident(struct pds_vdpa_aux *vdpa_aux) { }
+static inline void pds_vdpa_debugfs_add_vdpadev(struct pds_vdpa_aux *vdpa_aux) { }
+static inline void pds_vdpa_debugfs_del_vdpadev(struct pds_vdpa_aux *vdpa_aux) { }
 #endif
 
 #endif /* _PDS_VDPA_DEBUGFS_H_ */
diff --git a/drivers/vdpa/pds/vdpa_dev.c b/drivers/vdpa/pds/vdpa_dev.c
index 15d623297203..2e0a5078d379 100644
--- a/drivers/vdpa/pds/vdpa_dev.c
+++ b/drivers/vdpa/pds/vdpa_dev.c
@@ -5,6 +5,7 @@
 #include <linux/vdpa.h>
 #include <uapi/linux/vdpa.h>
 #include <linux/virtio_pci_modern.h>
+#include <uapi/linux/virtio_pci.h>
 
 #include <linux/pds/pds_core.h>
 #include <linux/pds/pds_adminq.h>
@@ -13,7 +14,426 @@
 
 #include "vdpa_dev.h"
 #include "aux_drv.h"
+#include "cmds.h"
+#include "debugfs.h"
 
+static struct pds_vdpa_device *vdpa_to_pdsv(struct vdpa_device *vdpa_dev)
+{
+	return container_of(vdpa_dev, struct pds_vdpa_device, vdpa_dev);
+}
+
+static int pds_vdpa_set_vq_address(struct vdpa_device *vdpa_dev, u16 qid,
+				   u64 desc_addr, u64 driver_addr, u64 device_addr)
+{
+	struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
+
+	pdsv->vqs[qid].desc_addr = desc_addr;
+	pdsv->vqs[qid].avail_addr = driver_addr;
+	pdsv->vqs[qid].used_addr = device_addr;
+
+	return 0;
+}
+
+static void pds_vdpa_set_vq_num(struct vdpa_device *vdpa_dev, u16 qid, u32 num)
+{
+	struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
+
+	pdsv->vqs[qid].q_len = num;
+}
+
+static void pds_vdpa_kick_vq(struct vdpa_device *vdpa_dev, u16 qid)
+{
+	struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
+
+	iowrite16(qid, pdsv->vqs[qid].notify);
+}
+
+static void pds_vdpa_set_vq_cb(struct vdpa_device *vdpa_dev, u16 qid,
+			       struct vdpa_callback *cb)
+{
+	struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
+
+	pdsv->vqs[qid].event_cb = *cb;
+}
+
+static irqreturn_t pds_vdpa_isr(int irq, void *data)
+{
+	struct pds_vdpa_vq_info *vq;
+
+	vq = data;
+	if (vq->event_cb.callback)
+		vq->event_cb.callback(vq->event_cb.private);
+
+	return IRQ_HANDLED;
+}
+
+static void pds_vdpa_release_irq(struct pds_vdpa_device *pdsv, int qid)
+{
+	if (pdsv->vqs[qid].irq == VIRTIO_MSI_NO_VECTOR)
+		return;
+
+	free_irq(pdsv->vqs[qid].irq, &pdsv->vqs[qid]);
+	pdsv->vqs[qid].irq = VIRTIO_MSI_NO_VECTOR;
+}
+
+static void pds_vdpa_set_vq_ready(struct vdpa_device *vdpa_dev, u16 qid, bool ready)
+{
+	struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
+	struct pci_dev *pdev = pdsv->vdpa_aux->padev->vf->pdev;
+	struct device *dev = &pdsv->vdpa_dev.dev;
+	int irq;
+	int err;
+
+	dev_dbg(dev, "%s: qid %d ready %d => %d\n",
+		__func__, qid, pdsv->vqs[qid].ready, ready);
+	if (ready == pdsv->vqs[qid].ready)
+		return;
+
+	if (ready) {
+		irq = pci_irq_vector(pdev, qid);
+		snprintf(pdsv->vqs[qid].irq_name, sizeof(pdsv->vqs[qid].irq_name),
+			 "vdpa-%s-%d", dev_name(dev), qid);
+
+		err = request_irq(irq, pds_vdpa_isr, 0,
+				  pdsv->vqs[qid].irq_name, &pdsv->vqs[qid]);
+		if (err) {
+			dev_err(dev, "%s: no irq for qid %d: %pe\n",
+				__func__, qid, ERR_PTR(err));
+			return;
+		}
+		pdsv->vqs[qid].irq = irq;
+
+		/* Pass vq setup info to DSC */
+		err = pds_vdpa_cmd_init_vq(pdsv, qid, &pdsv->vqs[qid]);
+		if (err) {
+			pds_vdpa_release_irq(pdsv, qid);
+			ready = false;
+		}
+	} else {
+		err = pds_vdpa_cmd_reset_vq(pdsv, qid);
+		if (err)
+			dev_err(dev, "%s: reset_vq failed qid %d: %pe\n",
+				__func__, qid, ERR_PTR(err));
+		pds_vdpa_release_irq(pdsv, qid);
+	}
+
+	pdsv->vqs[qid].ready = ready;
+}
+
+static bool pds_vdpa_get_vq_ready(struct vdpa_device *vdpa_dev, u16 qid)
+{
+	struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
+
+	return pdsv->vqs[qid].ready;
+}
+
+static int pds_vdpa_set_vq_state(struct vdpa_device *vdpa_dev, u16 qid,
+				 const struct vdpa_vq_state *state)
+{
+	struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
+	struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
+	struct device *dev = &padev->aux_dev.dev;
+	struct pds_vdpa_vq_set_state_cmd cmd = {
+		.opcode = PDS_VDPA_CMD_VQ_SET_STATE,
+		.vdpa_index = pdsv->vdpa_index,
+		.vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
+		.qid = cpu_to_le16(qid),
+	};
+	struct pds_vdpa_comp comp = {0};
+	int err;
+
+	dev_dbg(dev, "%s: qid %d avail %#x\n",
+		__func__, qid, state->packed.last_avail_idx);
+
+	if (pdsv->actual_features & BIT_ULL(VIRTIO_F_RING_PACKED)) {
+		cmd.avail = cpu_to_le16(state->packed.last_avail_idx |
+					(state->packed.last_avail_counter << 15));
+		cmd.used = cpu_to_le16(state->packed.last_used_idx |
+				       (state->packed.last_used_counter << 15));
+	} else {
+		cmd.avail = cpu_to_le16(state->split.avail_index);
+		/* state->split does not provide a used_index:
+		 * the vq will be set to "empty" here, and the vq will read
+		 * the current used index the next time the vq is kicked.
+		 */
+		cmd.used = cpu_to_le16(state->split.avail_index);
+	}
+
+	err = padev->ops->adminq_cmd(padev,
+				     (union pds_core_adminq_cmd *)&cmd,
+				     sizeof(cmd),
+				     (union pds_core_adminq_comp *)&comp,
+				     0);
+	if (err)
+		dev_err(dev, "Failed to set vq state qid %u, status %d: %pe\n",
+			qid, comp.status, ERR_PTR(err));
+
+	return err;
+}
+
+static int pds_vdpa_get_vq_state(struct vdpa_device *vdpa_dev, u16 qid,
+				 struct vdpa_vq_state *state)
+{
+	struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
+	struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
+	struct device *dev = &padev->aux_dev.dev;
+	struct pds_vdpa_vq_get_state_cmd cmd = {
+		.opcode = PDS_VDPA_CMD_VQ_GET_STATE,
+		.vdpa_index = pdsv->vdpa_index,
+		.vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
+		.qid = cpu_to_le16(qid),
+	};
+	struct pds_vdpa_vq_get_state_comp comp = {0};
+	int err;
+
+	dev_dbg(dev, "%s: qid %d\n", __func__, qid);
+
+	err = padev->ops->adminq_cmd(padev,
+				     (union pds_core_adminq_cmd *)&cmd,
+				     sizeof(cmd),
+				     (union pds_core_adminq_comp *)&comp,
+				     0);
+	if (err) {
+		dev_err(dev, "Failed to get vq state qid %u, status %d: %pe\n",
+			qid, comp.status, ERR_PTR(err));
+		return err;
+	}
+
+	if (pdsv->actual_features & BIT_ULL(VIRTIO_F_RING_PACKED)) {
+		state->packed.last_avail_idx = le16_to_cpu(comp.avail) & 0x7fff;
+		state->packed.last_avail_counter = le16_to_cpu(comp.avail) >> 15;
+	} else {
+		state->split.avail_index = le16_to_cpu(comp.avail);
+		/* state->split does not provide a used_index. */
+	}
+
+	return err;
+}
+
+static struct vdpa_notification_area
+pds_vdpa_get_vq_notification(struct vdpa_device *vdpa_dev, u16 qid)
+{
+	struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
+	struct virtio_pci_modern_device *vd_mdev;
+	struct vdpa_notification_area area;
+
+	area.addr = pdsv->vqs[qid].notify_pa;
+
+	vd_mdev = &pdsv->vdpa_aux->vd_mdev;
+	if (!vd_mdev->notify_offset_multiplier)
+		area.size = PAGE_SIZE;
+	else
+		area.size = vd_mdev->notify_offset_multiplier;
+
+	return area;
+}
+
+static int pds_vdpa_get_vq_irq(struct vdpa_device *vdpa_dev, u16 qid)
+{
+	struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
+
+	return pdsv->vqs[qid].irq;
+}
+
+static u32 pds_vdpa_get_vq_align(struct vdpa_device *vdpa_dev)
+{
+	return PAGE_SIZE;
+}
+
+static u32 pds_vdpa_get_vq_group(struct vdpa_device *vdpa_dev, u16 idx)
+{
+	return 0;
+}
+
+static u64 pds_vdpa_get_device_features(struct vdpa_device *vdpa_dev)
+{
+	struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
+
+	return le64_to_cpu(pdsv->vdpa_aux->ident.hw_features);
+}
+
+static int pds_vdpa_set_driver_features(struct vdpa_device *vdpa_dev, u64 features)
+{
+	struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
+	struct device *dev = &pdsv->vdpa_dev.dev;
+	u64 nego_features;
+	u64 missing;
+	int err;
+
+	if (!(features & BIT_ULL(VIRTIO_F_ACCESS_PLATFORM)) && features) {
+		dev_err(dev, "VIRTIO_F_ACCESS_PLATFORM is not negotiated\n");
+		return -EOPNOTSUPP;
+	}
+
+	pdsv->req_features = features;
+
+	/* Check for valid feature bits */
+	nego_features = features & le64_to_cpu(pdsv->vdpa_aux->ident.hw_features);
+	missing = pdsv->req_features & ~nego_features;
+	if (missing) {
+		dev_err(dev, "Can't support all requested features in %#llx, missing %#llx features\n",
+			pdsv->req_features, missing);
+		return -EOPNOTSUPP;
+	}
+
+	dev_dbg(dev, "%s: %#llx => %#llx\n",
+		__func__, pdsv->actual_features, nego_features);
+
+	if (pdsv->actual_features == nego_features)
+		return 0;
+
+	err = pds_vdpa_cmd_set_features(pdsv, nego_features);
+	if (!err)
+		pdsv->actual_features = nego_features;
+
+	return err;
+}
+
+static u64 pds_vdpa_get_driver_features(struct vdpa_device *vdpa_dev)
+{
+	struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
+
+	return pdsv->actual_features;
+}
+
+static void pds_vdpa_set_config_cb(struct vdpa_device *vdpa_dev,
+				   struct vdpa_callback *cb)
+{
+	struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
+
+	pdsv->config_cb.callback = cb->callback;
+	pdsv->config_cb.private = cb->private;
+}
+
+static u16 pds_vdpa_get_vq_num_max(struct vdpa_device *vdpa_dev)
+{
+	struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
+
+	/* qemu has assert() that vq_num_max <= VIRTQUEUE_MAX_SIZE (1024) */
+	return min_t(u16, 1024, BIT(le16_to_cpu(pdsv->vdpa_aux->ident.max_qlen)));
+}
+
+static u32 pds_vdpa_get_device_id(struct vdpa_device *vdpa_dev)
+{
+	return VIRTIO_ID_NET;
+}
+
+static u32 pds_vdpa_get_vendor_id(struct vdpa_device *vdpa_dev)
+{
+	return PCI_VENDOR_ID_PENSANDO;
+}
+
+static u8 pds_vdpa_get_status(struct vdpa_device *vdpa_dev)
+{
+	struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
+
+	return vp_modern_get_status(&pdsv->vdpa_aux->vd_mdev);
+}
+
+static void pds_vdpa_set_status(struct vdpa_device *vdpa_dev, u8 status)
+{
+	struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
+
+	vp_modern_set_status(&pdsv->vdpa_aux->vd_mdev, status);
+}
+
+static int pds_vdpa_reset(struct vdpa_device *vdpa_dev)
+{
+	struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
+	struct device *dev = pdsv->vdpa_aux->padev->vf->dev;
+	int err = 0;
+	u8 status;
+	int i;
+
+	status = pds_vdpa_get_status(vdpa_dev);
+
+	if (status == 0)
+		return 0;
+
+	if (status & VIRTIO_CONFIG_S_DRIVER_OK) {
+		/* Reset the vqs */
+		for (i = 0; i < pdsv->num_vqs && !err; i++) {
+			err = pds_vdpa_cmd_reset_vq(pdsv, i);
+			if (err)
+				dev_err(dev, "%s: reset_vq failed qid %d: %pe\n",
+					__func__, i, ERR_PTR(err));
+			pds_vdpa_release_irq(pdsv, i);
+			memset(&pdsv->vqs[i], 0, sizeof(pdsv->vqs[0]));
+			pdsv->vqs[i].ready = false;
+		}
+	}
+
+	if (err != -ETIMEDOUT && err != -ENXIO)
+		pds_vdpa_set_status(vdpa_dev, 0);
+
+	return 0;
+}
+
+static size_t pds_vdpa_get_config_size(struct vdpa_device *vdpa_dev)
+{
+	return sizeof(struct virtio_net_config);
+}
+
+static void pds_vdpa_get_config(struct vdpa_device *vdpa_dev,
+				unsigned int offset,
+				void *buf, unsigned int len)
+{
+	struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
+	void __iomem *device;
+
+	if (offset + len > sizeof(struct virtio_net_config)) {
+		WARN(true, "%s: bad read, offset %d len %d\n", __func__, offset, len);
+		return;
+	}
+
+	device = pdsv->vdpa_aux->vd_mdev.device;
+	memcpy_fromio(buf, device + offset, len);
+}
+
+static void pds_vdpa_set_config(struct vdpa_device *vdpa_dev,
+				unsigned int offset, const void *buf,
+				unsigned int len)
+{
+	struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
+	void __iomem *device;
+
+	if (offset + len > sizeof(struct virtio_net_config)) {
+		WARN(true, "%s: bad write, offset %d len %d\n", __func__, offset, len);
+		return;
+	}
+
+	device = pdsv->vdpa_aux->vd_mdev.device;
+	memcpy_toio(device + offset, buf, len);
+}
+
+static const struct vdpa_config_ops pds_vdpa_ops = {
+	.set_vq_address		= pds_vdpa_set_vq_address,
+	.set_vq_num		= pds_vdpa_set_vq_num,
+	.kick_vq		= pds_vdpa_kick_vq,
+	.set_vq_cb		= pds_vdpa_set_vq_cb,
+	.set_vq_ready		= pds_vdpa_set_vq_ready,
+	.get_vq_ready		= pds_vdpa_get_vq_ready,
+	.set_vq_state		= pds_vdpa_set_vq_state,
+	.get_vq_state		= pds_vdpa_get_vq_state,
+	.get_vq_notification	= pds_vdpa_get_vq_notification,
+	.get_vq_irq		= pds_vdpa_get_vq_irq,
+	.get_vq_align		= pds_vdpa_get_vq_align,
+	.get_vq_group		= pds_vdpa_get_vq_group,
+
+	.get_device_features	= pds_vdpa_get_device_features,
+	.set_driver_features	= pds_vdpa_set_driver_features,
+	.get_driver_features	= pds_vdpa_get_driver_features,
+	.set_config_cb		= pds_vdpa_set_config_cb,
+	.get_vq_num_max		= pds_vdpa_get_vq_num_max,
+	.get_device_id		= pds_vdpa_get_device_id,
+	.get_vendor_id		= pds_vdpa_get_vendor_id,
+	.get_status		= pds_vdpa_get_status,
+	.set_status		= pds_vdpa_set_status,
+	.reset			= pds_vdpa_reset,
+	.get_config_size	= pds_vdpa_get_config_size,
+	.get_config		= pds_vdpa_get_config,
+	.set_config		= pds_vdpa_set_config,
+};
 static struct virtio_device_id pds_vdpa_id_table[] = {
 	{VIRTIO_ID_NET, VIRTIO_DEV_ANY_ID},
 	{0},
@@ -22,12 +442,135 @@ static struct virtio_device_id pds_vdpa_id_table[] = {
 static int pds_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
 			    const struct vdpa_dev_set_config *add_config)
 {
-	return -EOPNOTSUPP;
+	struct pds_vdpa_aux *vdpa_aux;
+	struct pds_vdpa_device *pdsv;
+	struct vdpa_mgmt_dev *mgmt;
+	u16 fw_max_vqs, vq_pairs;
+	struct device *dma_dev;
+	struct pci_dev *pdev;
+	struct device *dev;
+	u8 mac[ETH_ALEN];
+	int err;
+	int i;
+
+	vdpa_aux = container_of(mdev, struct pds_vdpa_aux, vdpa_mdev);
+	dev = &vdpa_aux->padev->aux_dev.dev;
+	mgmt = &vdpa_aux->vdpa_mdev;
+
+	if (vdpa_aux->pdsv) {
+		dev_warn(dev, "Multiple vDPA devices on a VF is not supported.\n");
+		return -EOPNOTSUPP;
+	}
+
+	pdsv = vdpa_alloc_device(struct pds_vdpa_device, vdpa_dev,
+				 dev, &pds_vdpa_ops, 1, 1, name, false);
+	if (IS_ERR(pdsv)) {
+		dev_err(dev, "Failed to allocate vDPA structure: %pe\n", pdsv);
+		return PTR_ERR(pdsv);
+	}
+
+	vdpa_aux->pdsv = pdsv;
+	vdpa_aux->padev->priv = pdsv;
+	pdsv->vdpa_aux = vdpa_aux;
+
+	pdev = vdpa_aux->padev->vf->pdev;
+	dma_dev = &pdev->dev;
+	pdsv->vdpa_dev.dma_dev = dma_dev;
+
+	err = pds_vdpa_init_hw(pdsv);
+	if (err) {
+		dev_err(dev, "Failed to init hw: %pe\n", ERR_PTR(err));
+		goto err_unmap;
+	}
+
+	fw_max_vqs = le16_to_cpu(pdsv->vdpa_aux->ident.max_vqs);
+	vq_pairs = fw_max_vqs / 2;
+
+	/* Make sure we have the queues being requested */
+	if (add_config->mask & (1 << VDPA_ATTR_DEV_NET_CFG_MAX_VQP))
+		vq_pairs = add_config->net.max_vq_pairs;
+
+	pdsv->num_vqs = 2 * vq_pairs;
+	if (mgmt->supported_features & BIT_ULL(VIRTIO_NET_F_CTRL_VQ))
+		pdsv->num_vqs++;
+
+	if (pdsv->num_vqs > fw_max_vqs) {
+		dev_err(dev, "%s: queue count requested %u greater than max %u\n",
+			__func__, pdsv->num_vqs, fw_max_vqs);
+		err = -ENOSPC;
+		goto err_unmap;
+	}
+
+	if (pdsv->num_vqs != fw_max_vqs) {
+		err = pds_vdpa_cmd_set_max_vq_pairs(pdsv, vq_pairs);
+		if (err) {
+			dev_err(dev, "Failed to set max_vq_pairs: %pe\n",
+				ERR_PTR(err));
+			goto err_unmap;
+		}
+	}
+
+	/* Set a mac, either from the user config if provided
+	 * or set a random mac if default is 00:..:00
+	 */
+	if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MACADDR)) {
+		ether_addr_copy(mac, add_config->net.mac);
+		pds_vdpa_cmd_set_mac(pdsv, mac);
+	} else {
+		struct virtio_net_config __iomem *vc;
+
+		vc = pdsv->vdpa_aux->vd_mdev.device;
+		memcpy_fromio(mac, vc->mac, sizeof(mac));
+		if (is_zero_ether_addr(mac)) {
+			eth_random_addr(mac);
+			dev_info(dev, "setting random mac %pM\n", mac);
+			pds_vdpa_cmd_set_mac(pdsv, mac);
+		}
+	}
+
+	for (i = 0; i < pdsv->num_vqs; i++) {
+		pdsv->vqs[i].qid = i;
+		pdsv->vqs[i].pdsv = pdsv;
+		pdsv->vqs[i].irq = VIRTIO_MSI_NO_VECTOR;
+		pdsv->vqs[i].notify = vp_modern_map_vq_notify(&pdsv->vdpa_aux->vd_mdev,
+							      i, &pdsv->vqs[i].notify_pa);
+	}
+
+	pdsv->vdpa_dev.mdev = &vdpa_aux->vdpa_mdev;
+
+	/* We use the _vdpa_register_device() call rather than the
+	 * vdpa_register_device() to avoid a deadlock because our
+	 * dev_add() is called with the vdpa_dev_lock already set
+	 * by vdpa_nl_cmd_dev_add_set_doit()
+	 */
+	err = _vdpa_register_device(&pdsv->vdpa_dev, pdsv->num_vqs);
+	if (err) {
+		dev_err(dev, "Failed to register to vDPA bus: %pe\n", ERR_PTR(err));
+		goto err_unmap;
+	}
+
+	pds_vdpa_debugfs_add_vdpadev(vdpa_aux);
+
+	return 0;
+
+err_unmap:
+	put_device(&pdsv->vdpa_dev.dev);
+	vdpa_aux->pdsv = NULL;
+	return err;
 }
 
 static void pds_vdpa_dev_del(struct vdpa_mgmt_dev *mdev,
 			     struct vdpa_device *vdpa_dev)
 {
+	struct pds_vdpa_aux *vdpa_aux;
+
+	vdpa_aux = container_of(mdev, struct pds_vdpa_aux, vdpa_mdev);
+	_vdpa_unregister_device(vdpa_dev);
+	pds_vdpa_debugfs_del_vdpadev(vdpa_aux);
+
+	vdpa_aux->pdsv = NULL;
+
+	dev_info(vdpa_aux->padev->vf->dev, "Removed vdpa device\n");
 }
 
 static const struct vdpa_mgmtdev_ops pds_vdpa_mgmt_dev_ops = {
-- 
2.17.1



* [PATCH RFC v2 virtio 6/7] pds_vdpa: subscribe to the pds_core events
  2023-03-09  1:30 [PATCH RFC v2 virtio 0/7] pds_vdpa driver Shannon Nelson
                   ` (4 preceding siblings ...)
  2023-03-09  1:30 ` [PATCH RFC v2 virtio 5/7] pds_vdpa: add support for vdpa and vdpamgmt interfaces Shannon Nelson
@ 2023-03-09  1:30 ` Shannon Nelson
  2023-03-09  1:30 ` [PATCH RFC v2 virtio 7/7] pds_vdpa: pds_vdps.rst and Kconfig Shannon Nelson
  6 siblings, 0 replies; 36+ messages in thread
From: Shannon Nelson @ 2023-03-09  1:30 UTC (permalink / raw)
  To: jasowang, mst, virtualization, shannon.nelson, brett.creeley,
	davem, netdev, kuba
  Cc: drivers

Register for the pds_core's notification events, primarily to
find out when the FW has been reset so we can pass this on
up the chain.

Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
---
 drivers/vdpa/pds/vdpa_dev.c | 68 ++++++++++++++++++++++++++++++++++++-
 drivers/vdpa/pds/vdpa_dev.h |  1 +
 2 files changed, 68 insertions(+), 1 deletion(-)

diff --git a/drivers/vdpa/pds/vdpa_dev.c b/drivers/vdpa/pds/vdpa_dev.c
index 2e0a5078d379..d99adb4f9fb1 100644
--- a/drivers/vdpa/pds/vdpa_dev.c
+++ b/drivers/vdpa/pds/vdpa_dev.c
@@ -22,6 +22,61 @@ static struct pds_vdpa_device *vdpa_to_pdsv(struct vdpa_device *vdpa_dev)
 	return container_of(vdpa_dev, struct pds_vdpa_device, vdpa_dev);
 }
 
+static int pds_vdpa_notify_handler(struct notifier_block *nb,
+				   unsigned long ecode,
+				   void *data)
+{
+	struct pds_vdpa_device *pdsv = container_of(nb, struct pds_vdpa_device, nb);
+	struct device *dev = pdsv->vdpa_aux->padev->vf->dev;
+
+	dev_dbg(dev, "%s: event code %lu\n", __func__, ecode);
+
+	/* Give the upper layers a hint that something interesting
+	 * may have happened.  It seems that the only thing this
+	 * triggers in the virtio-net drivers above us is a check
+	 * of link status.
+	 *
+	 * We don't set the NEEDS_RESET flag for EVENT_RESET
+	 * because we're likely going through a recovery or
+	 * fw_update and will be back up and running soon.
+	 */
+	if (ecode == PDS_EVENT_RESET || ecode == PDS_EVENT_LINK_CHANGE) {
+		if (pdsv->config_cb.callback)
+			pdsv->config_cb.callback(pdsv->config_cb.private);
+	}
+
+	return 0;
+}
+
+static int pds_vdpa_register_event_handler(struct pds_vdpa_device *pdsv)
+{
+	struct device *dev = pdsv->vdpa_aux->padev->vf->dev;
+	struct notifier_block *nb = &pdsv->nb;
+	int err;
+
+	if (!nb->notifier_call) {
+		nb->notifier_call = pds_vdpa_notify_handler;
+		err = pdsc_register_notify(nb);
+		if (err) {
+			nb->notifier_call = NULL;
+			dev_err(dev, "failed to register pds event handler: %pe\n",
+				ERR_PTR(err));
+			return -EINVAL;
+		}
+		dev_dbg(dev, "pds event handler registered\n");
+	}
+
+	return 0;
+}
+
+static void pds_vdpa_unregister_event_handler(struct pds_vdpa_device *pdsv)
+{
+	if (pdsv->nb.notifier_call) {
+		pdsc_unregister_notify(&pdsv->nb);
+		pdsv->nb.notifier_call = NULL;
+	}
+}
+
 static int pds_vdpa_set_vq_address(struct vdpa_device *vdpa_dev, u16 qid,
 				   u64 desc_addr, u64 driver_addr, u64 device_addr)
 {
@@ -538,6 +593,12 @@ static int pds_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
 
 	pdsv->vdpa_dev.mdev = &vdpa_aux->vdpa_mdev;
 
+	err = pds_vdpa_register_event_handler(pdsv);
+	if (err) {
+		dev_err(dev, "Failed to register for PDS events: %pe\n", ERR_PTR(err));
+		goto err_unmap;
+	}
+
 	/* We use the _vdpa_register_device() call rather than the
 	 * vdpa_register_device() to avoid a deadlock because our
 	 * dev_add() is called with the vdpa_dev_lock already set
@@ -546,13 +607,15 @@ static int pds_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
 	err = _vdpa_register_device(&pdsv->vdpa_dev, pdsv->num_vqs);
 	if (err) {
 		dev_err(dev, "Failed to register to vDPA bus: %pe\n", ERR_PTR(err));
-		goto err_unmap;
+		goto err_unevent;
 	}
 
 	pds_vdpa_debugfs_add_vdpadev(vdpa_aux);
 
 	return 0;
 
+err_unevent:
+	pds_vdpa_unregister_event_handler(pdsv);
 err_unmap:
 	put_device(&pdsv->vdpa_dev.dev);
 	vdpa_aux->pdsv = NULL;
@@ -562,8 +625,11 @@ static int pds_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
 static void pds_vdpa_dev_del(struct vdpa_mgmt_dev *mdev,
 			     struct vdpa_device *vdpa_dev)
 {
+	struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
 	struct pds_vdpa_aux *vdpa_aux;
 
+	pds_vdpa_unregister_event_handler(pdsv);
+
 	vdpa_aux = container_of(mdev, struct pds_vdpa_aux, vdpa_mdev);
 	_vdpa_unregister_device(vdpa_dev);
 	pds_vdpa_debugfs_del_vdpadev(vdpa_aux);
diff --git a/drivers/vdpa/pds/vdpa_dev.h b/drivers/vdpa/pds/vdpa_dev.h
index 33284ebe538c..4e7a1b04a12a 100644
--- a/drivers/vdpa/pds/vdpa_dev.h
+++ b/drivers/vdpa/pds/vdpa_dev.h
@@ -43,6 +43,7 @@ struct pds_vdpa_device {
 	u8 vdpa_index;			/* rsvd for future subdevice use */
 	u8 num_vqs;			/* num vqs in use */
 	struct vdpa_callback config_cb;
+	struct notifier_block nb;
 };
 
 int pds_vdpa_get_mgmt_info(struct pds_vdpa_aux *vdpa_aux);
-- 
2.17.1



* [PATCH RFC v2 virtio 7/7] pds_vdpa: pds_vdps.rst and Kconfig
  2023-03-09  1:30 [PATCH RFC v2 virtio 0/7] pds_vdpa driver Shannon Nelson
                   ` (5 preceding siblings ...)
  2023-03-09  1:30 ` [PATCH RFC v2 virtio 6/7] pds_vdpa: subscribe to the pds_core events Shannon Nelson
@ 2023-03-09  1:30 ` Shannon Nelson
  2023-03-15  7:05     ` Jason Wang
  2023-03-15 18:10   ` kernel test robot
  6 siblings, 2 replies; 36+ messages in thread
From: Shannon Nelson @ 2023-03-09  1:30 UTC (permalink / raw)
  To: jasowang, mst, virtualization, shannon.nelson, brett.creeley,
	davem, netdev, kuba
  Cc: drivers

Add the documentation and Kconfig entry for pds_vdpa driver.

Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
---
 .../ethernet/pensando/pds_vdpa.rst            | 84 +++++++++++++++++++
 MAINTAINERS                                   |  4 +
 drivers/vdpa/Kconfig                          |  8 ++
 3 files changed, 96 insertions(+)
 create mode 100644 Documentation/networking/device_drivers/ethernet/pensando/pds_vdpa.rst

diff --git a/Documentation/networking/device_drivers/ethernet/pensando/pds_vdpa.rst b/Documentation/networking/device_drivers/ethernet/pensando/pds_vdpa.rst
new file mode 100644
index 000000000000..d41f6dd66e3e
--- /dev/null
+++ b/Documentation/networking/device_drivers/ethernet/pensando/pds_vdpa.rst
@@ -0,0 +1,84 @@
+.. SPDX-License-Identifier: GPL-2.0+
+.. note: can be edited and viewed with /usr/bin/formiko-vim
+
+==========================================================
+PCI vDPA driver for the AMD/Pensando(R) DSC adapter family
+==========================================================
+
+AMD/Pensando vDPA VF Device Driver
+Copyright(c) 2023 Advanced Micro Devices, Inc
+
+Overview
+========
+
+The ``pds_vdpa`` driver is an auxiliary bus driver that supplies
+a vDPA device for use by the virtio network stack.  It is used with
+the Pensando Virtual Function devices that offer vDPA and virtio queue
+services.  It depends on the ``pds_core`` driver and hardware for the PF
+and VF PCI handling as well as for device configuration services.
+
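+As a quick sanity check, the pairing of the auxiliary device created by
+``pds_core`` and this driver can be seen on the auxiliary bus once both
+modules are loaded (the names shown are only an example; the real names
+come from ``pds_core``)::
+
+  ls /sys/bus/auxiliary/devices/
+  # e.g. pds_core.vDPA.1
+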
+Using the device
+================
+
+The ``pds_vdpa`` device is enabled via multiple configuration steps and
+depends on the ``pds_core`` driver to create and enable SR-IOV Virtual
+Function devices.
+
+Shown below are the steps to bind the driver to a VF and also to the
+associated auxiliary device created by the ``pds_core`` driver.
+
+.. code-block:: bash
+
+  #!/bin/bash
+
+  modprobe pds_core
+  modprobe vdpa
+  modprobe pds_vdpa
+
+  PF_BDF=`grep -H "vDPA.*1" /sys/kernel/debug/pds_core/*/viftypes | head -1 | awk -F / '{print $6}'`
+
+  # Enable vDPA VF auxiliary device(s) in the PF
+  devlink dev param set pci/$PF_BDF name enable_vnet value true cmode runtime
+
+  # Create a VF for vDPA use
+  echo 1 > /sys/bus/pci/drivers/pds_core/$PF_BDF/sriov_numvfs
+
+  # Find the vDPA services/devices available
+  PDS_VDPA_MGMT=`vdpa mgmtdev show | grep vDPA | head -1 | cut -d: -f1`
+
+  # Create a vDPA device for use in virtio network configurations
+  vdpa dev add name vdpa1 mgmtdev $PDS_VDPA_MGMT mac 00:11:22:33:44:55
+
+  # Set up an ethernet interface on the vdpa device
+  modprobe virtio_vdpa
+
+
+
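+Once the vdpa device has been created and ``virtio_vdpa`` is loaded, it
+can be checked with the usual tools; for example (``vdpa1`` is just the
+device name created above)::
+
+  vdpa dev show vdpa1
+  ip link show
+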
+Enabling the driver
+===================
+
+The driver is enabled via the standard kernel configuration system,
+using the make command::
+
+  make oldconfig/menuconfig/etc.
+
+The driver is located in the menu structure at:
+
+  -> Device Drivers
+    -> vDPA drivers (VDPA [=y])
+      -> vDPA driver for AMD/Pensando DSC devices
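+
+With the driver and its dependency selected as modules, the resulting
+kernel configuration would contain something like (a sketch only; the
+full dependency list comes from the Kconfig entry in this patch)::
+
+  CONFIG_PDS_CORE=m
+  CONFIG_PDS_VDPA=m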
+
+Support
+=======
+
+For general Linux networking support, please use the netdev mailing
+list, which is monitored by Pensando personnel::
+
+  netdev@vger.kernel.org
+
+For more specific support needs, please use the Pensando driver support
+email::
+
+  drivers@pensando.io
diff --git a/MAINTAINERS b/MAINTAINERS
index cb21dcd3a02a..da981c5bc830 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -22120,6 +22120,10 @@ SNET DPU VIRTIO DATA PATH ACCELERATOR
 R:	Alvaro Karsz <alvaro.karsz@solid-run.com>
 F:	drivers/vdpa/solidrun/
 
+PDS DSC VIRTIO DATA PATH ACCELERATOR
+R:	Shannon Nelson <shannon.nelson@amd.com>
+F:	drivers/vdpa/pds/
+
 VIRTIO BALLOON
 M:	"Michael S. Tsirkin" <mst@redhat.com>
 M:	David Hildenbrand <david@redhat.com>
diff --git a/drivers/vdpa/Kconfig b/drivers/vdpa/Kconfig
index cd6ad92f3f05..c910cb119c1b 100644
--- a/drivers/vdpa/Kconfig
+++ b/drivers/vdpa/Kconfig
@@ -116,4 +116,12 @@ config ALIBABA_ENI_VDPA
 	  This driver includes a HW monitor device that
 	  reads health values from the DPU.
 
+config PDS_VDPA
+	tristate "vDPA driver for AMD/Pensando DSC devices"
+	depends on PDS_CORE
+	help
+	  VDPA network driver for AMD/Pensando's PDS Core devices.
+	  With this driver, the VirtIO dataplane can be
+	  offloaded to an AMD/Pensando DSC device.
+
 endif # VDPA
-- 
2.17.1



* Re: [PATCH RFC v2 virtio 1/7] pds_vdpa: Add new vDPA driver for AMD/Pensando DSC
  2023-03-09  1:30 ` [PATCH RFC v2 virtio 1/7] pds_vdpa: Add new vDPA driver for AMD/Pensando DSC Shannon Nelson
@ 2023-03-12 14:06   ` Simon Horman
  2023-03-12 14:35     ` Simon Horman
  0 siblings, 1 reply; 36+ messages in thread
From: Simon Horman @ 2023-03-12 14:06 UTC (permalink / raw)
  To: Shannon Nelson
  Cc: jasowang, mst, virtualization, brett.creeley, davem, netdev,
	kuba, drivers

On Wed, Mar 08, 2023 at 05:30:40PM -0800, Shannon Nelson wrote:
> This is the initial auxiliary driver framework for a new vDPA
> device driver, an auxiliary_bus client of the pds_core driver.
> The pds_core driver supplies the PCI services for the VF device
> and for accessing the adminq in the PF device.
> 
> This patch adds the very basics of registering for the auxiliary
> device, setting up debugfs entries, and registering with devlink.
> 
> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>

...

> diff --git a/drivers/vdpa/pds/Makefile b/drivers/vdpa/pds/Makefile
> new file mode 100644
> index 000000000000..a9cd2f450ae1
> --- /dev/null
> +++ b/drivers/vdpa/pds/Makefile
> @@ -0,0 +1,8 @@
> +# SPDX-License-Identifier: GPL-2.0-only
> +# Copyright(c) 2023 Advanced Micro Devices, Inc
> +
> +obj-$(CONFIG_PDS_VDPA) := pds_vdpa.o
> +
> +pds_vdpa-y := aux_drv.o
> +
> +pds_vdpa-$(CONFIG_DEBUG_FS) += debugfs.o
> diff --git a/drivers/vdpa/pds/aux_drv.c b/drivers/vdpa/pds/aux_drv.c
> new file mode 100644
> index 000000000000..b3f36170253c
> --- /dev/null
> +++ b/drivers/vdpa/pds/aux_drv.c
> @@ -0,0 +1,99 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/* Copyright(c) 2023 Advanced Micro Devices, Inc */
> +
> +#include <linux/auxiliary_bus.h>
> +
> +#include <linux/pds/pds_core.h>

Perhaps I'm missing something obvious, but
pds_core.h doesn't exist (yet).

> +#include <linux/pds/pds_auxbus.h>
> +#include <linux/pds/pds_vdpa.h>

...

> diff --git a/drivers/vdpa/pds/debugfs.c b/drivers/vdpa/pds/debugfs.c
> new file mode 100644
> index 000000000000..3c163dc7b66f
> --- /dev/null
> +++ b/drivers/vdpa/pds/debugfs.c
> @@ -0,0 +1,25 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/* Copyright(c) 2023 Advanced Micro Devices, Inc */
> +
> +#include <linux/pds/pds_core.h>
> +#include <linux/pds/pds_auxbus.h>
> +
> +#include "aux_drv.h"
> +#include "debugfs.h"
> +
> +#ifdef CONFIG_DEBUG_FS

Again, perhaps I'm missing something obvious, but
compilation of this file is guarded by CONFIG_DEBUG_FS (in ./Makefile).
So I don't think this guard is needed here.

> +
> +static struct dentry *dbfs_dir;
> +
> +void pds_vdpa_debugfs_create(void)
> +{
> +	dbfs_dir = debugfs_create_dir(PDS_VDPA_DRV_NAME, NULL);
> +}
> +
> +void pds_vdpa_debugfs_destroy(void)
> +{
> +	debugfs_remove_recursive(dbfs_dir);
> +	dbfs_dir = NULL;
> +}
> +
> +#endif /* CONFIG_DEBUG_FS */


* Re: [PATCH RFC v2 virtio 1/7] pds_vdpa: Add new vDPA driver for AMD/Pensando DSC
  2023-03-12 14:06   ` Simon Horman
@ 2023-03-12 14:35     ` Simon Horman
  2023-03-13 16:13       ` Shannon Nelson
  0 siblings, 1 reply; 36+ messages in thread
From: Simon Horman @ 2023-03-12 14:35 UTC (permalink / raw)
  To: Shannon Nelson
  Cc: jasowang, mst, virtualization, brett.creeley, davem, netdev,
	kuba, drivers

On Sun, Mar 12, 2023 at 03:06:39PM +0100, Simon Horman wrote:
> On Wed, Mar 08, 2023 at 05:30:40PM -0800, Shannon Nelson wrote:
> > This is the initial auxiliary driver framework for a new vDPA
> > device driver, an auxiliary_bus client of the pds_core driver.
> > The pds_core driver supplies the PCI services for the VF device
> > and for accessing the adminq in the PF device.
> > 
> > This patch adds the very basics of registering for the auxiliary
> > device, setting up debugfs entries, and registering with devlink.
> > 
> > Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
> 
> ...
> 
> > diff --git a/drivers/vdpa/pds/Makefile b/drivers/vdpa/pds/Makefile
> > new file mode 100644
> > index 000000000000..a9cd2f450ae1
> > --- /dev/null
> > +++ b/drivers/vdpa/pds/Makefile
> > @@ -0,0 +1,8 @@
> > +# SPDX-License-Identifier: GPL-2.0-only
> > +# Copyright(c) 2023 Advanced Micro Devices, Inc
> > +
> > +obj-$(CONFIG_PDS_VDPA) := pds_vdpa.o
> > +
> > +pds_vdpa-y := aux_drv.o
> > +
> > +pds_vdpa-$(CONFIG_DEBUG_FS) += debugfs.o
> > diff --git a/drivers/vdpa/pds/aux_drv.c b/drivers/vdpa/pds/aux_drv.c
> > new file mode 100644
> > index 000000000000..b3f36170253c
> > --- /dev/null
> > +++ b/drivers/vdpa/pds/aux_drv.c
> > @@ -0,0 +1,99 @@
> > +// SPDX-License-Identifier: GPL-2.0-only
> > +/* Copyright(c) 2023 Advanced Micro Devices, Inc */
> > +
> > +#include <linux/auxiliary_bus.h>
> > +
> > +#include <linux/pds/pds_core.h>
> 
> Perhaps I'm missing something obvious, but
> pds_core.h doesn't exist (yet).

The obvious thing that I was missing is that it is added by

* [PATCH RFC v4 net-next 00/13] pds_core driver


* Re: [PATCH RFC v2 virtio 1/7] pds_vdpa: Add new vDPA driver for AMD/Pensando DSC
  2023-03-12 14:35     ` Simon Horman
@ 2023-03-13 16:13       ` Shannon Nelson
  2023-03-13 16:26         ` Simon Horman
  0 siblings, 1 reply; 36+ messages in thread
From: Shannon Nelson @ 2023-03-13 16:13 UTC (permalink / raw)
  To: Simon Horman
  Cc: jasowang, mst, virtualization, brett.creeley, davem, netdev,
	kuba, drivers

On 3/12/23 7:35 AM, Simon Horman wrote:
> On Sun, Mar 12, 2023 at 03:06:39PM +0100, Simon Horman wrote:
>> On Wed, Mar 08, 2023 at 05:30:40PM -0800, Shannon Nelson wrote:
>>> This is the initial auxiliary driver framework for a new vDPA
>>> device driver, an auxiliary_bus client of the pds_core driver.
>>> The pds_core driver supplies the PCI services for the VF device
>>> and for accessing the adminq in the PF device.
>>>
>>> This patch adds the very basics of registering for the auxiliary
>>> device, setting up debugfs entries, and registering with devlink.
>>>
>>> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
>>
>> ...
>>
>>> diff --git a/drivers/vdpa/pds/Makefile b/drivers/vdpa/pds/Makefile
>>> new file mode 100644
>>> index 000000000000..a9cd2f450ae1
>>> --- /dev/null
>>> +++ b/drivers/vdpa/pds/Makefile
>>> @@ -0,0 +1,8 @@
>>> +# SPDX-License-Identifier: GPL-2.0-only
>>> +# Copyright(c) 2023 Advanced Micro Devices, Inc
>>> +
>>> +obj-$(CONFIG_PDS_VDPA) := pds_vdpa.o
>>> +
>>> +pds_vdpa-y := aux_drv.o
>>> +
>>> +pds_vdpa-$(CONFIG_DEBUG_FS) += debugfs.o
>>> diff --git a/drivers/vdpa/pds/aux_drv.c b/drivers/vdpa/pds/aux_drv.c
>>> new file mode 100644
>>> index 000000000000..b3f36170253c
>>> --- /dev/null
>>> +++ b/drivers/vdpa/pds/aux_drv.c
>>> @@ -0,0 +1,99 @@
>>> +// SPDX-License-Identifier: GPL-2.0-only
>>> +/* Copyright(c) 2023 Advanced Micro Devices, Inc */
>>> +
>>> +#include <linux/auxiliary_bus.h>
>>> +
>>> +#include <linux/pds/pds_core.h>
>>
>> Perhaps I'm missing something obvious, but
>> pds_core.h doesn't exist (yet).
> 
> The obvious thing that I was missing is that it is added by
> 
> * [PATCH RFC v4 net-next 00/13] pds_core driver

Sorry about that - I can try to make that dependency more obvious in the 
next round.

sln


* Re: [PATCH RFC v2 virtio 1/7] pds_vdpa: Add new vDPA driver for AMD/Pensando DSC
  2023-03-13 16:13       ` Shannon Nelson
@ 2023-03-13 16:26         ` Simon Horman
  0 siblings, 0 replies; 36+ messages in thread
From: Simon Horman @ 2023-03-13 16:26 UTC (permalink / raw)
  To: Shannon Nelson
  Cc: jasowang, mst, virtualization, brett.creeley, davem, netdev,
	kuba, drivers

On Mon, Mar 13, 2023 at 09:13:11AM -0700, Shannon Nelson wrote:
> On 3/12/23 7:35 AM, Simon Horman wrote:
> > On Sun, Mar 12, 2023 at 03:06:39PM +0100, Simon Horman wrote:
> > > On Wed, Mar 08, 2023 at 05:30:40PM -0800, Shannon Nelson wrote:
> > > > This is the initial auxiliary driver framework for a new vDPA
> > > > device driver, an auxiliary_bus client of the pds_core driver.
> > > > The pds_core driver supplies the PCI services for the VF device
> > > > and for accessing the adminq in the PF device.
> > > > 
> > > > This patch adds the very basics of registering for the auxiliary
> > > > device, setting up debugfs entries, and registering with devlink.
> > > > 
> > > > Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
> > > 
> > > ...
> > > 
> > > > diff --git a/drivers/vdpa/pds/Makefile b/drivers/vdpa/pds/Makefile
> > > > new file mode 100644
> > > > index 000000000000..a9cd2f450ae1
> > > > --- /dev/null
> > > > +++ b/drivers/vdpa/pds/Makefile
> > > > @@ -0,0 +1,8 @@
> > > > +# SPDX-License-Identifier: GPL-2.0-only
> > > > +# Copyright(c) 2023 Advanced Micro Devices, Inc
> > > > +
> > > > +obj-$(CONFIG_PDS_VDPA) := pds_vdpa.o
> > > > +
> > > > +pds_vdpa-y := aux_drv.o
> > > > +
> > > > +pds_vdpa-$(CONFIG_DEBUG_FS) += debugfs.o
> > > > diff --git a/drivers/vdpa/pds/aux_drv.c b/drivers/vdpa/pds/aux_drv.c
> > > > new file mode 100644
> > > > index 000000000000..b3f36170253c
> > > > --- /dev/null
> > > > +++ b/drivers/vdpa/pds/aux_drv.c
> > > > @@ -0,0 +1,99 @@
> > > > +// SPDX-License-Identifier: GPL-2.0-only
> > > > +/* Copyright(c) 2023 Advanced Micro Devices, Inc */
> > > > +
> > > > +#include <linux/auxiliary_bus.h>
> > > > +
> > > > +#include <linux/pds/pds_core.h>
> > > 
> > > Perhaps I'm missing something obvious, but
> > > pds_core.h doesn't exist (yet).
> > 
> > The obvious thing that I was missing is that it is added by
> > 
> > * [PATCH RFC v4 net-next 00/13] pds_core driver
> 
> Sorry about that - I can try to make that dependency more obvious in the
> next round.

That might be a good idea.
But I am likewise sorry for jumping the gun with my email yesterday.


* Re: [PATCH RFC v2 virtio 2/7] pds_vdpa: get vdpa management info
  2023-03-09  1:30 ` [PATCH RFC v2 virtio 2/7] pds_vdpa: get vdpa management info Shannon Nelson
@ 2023-03-15  7:05     ` Jason Wang
  0 siblings, 0 replies; 36+ messages in thread
From: Jason Wang @ 2023-03-15  7:05 UTC (permalink / raw)
  To: Shannon Nelson
  Cc: brett.creeley, mst, netdev, virtualization, kuba, drivers, davem

On Thu, Mar 9, 2023 at 9:31 AM Shannon Nelson <shannon.nelson@amd.com> wrote:
>
> Find the vDPA management information from the DSC in order to
> advertise it to the vdpa subsystem.
>
> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
> ---
>  drivers/vdpa/pds/Makefile    |   3 +-
>  drivers/vdpa/pds/aux_drv.c   |  13 ++++
>  drivers/vdpa/pds/aux_drv.h   |   7 +++
>  drivers/vdpa/pds/debugfs.c   |   3 +
>  drivers/vdpa/pds/vdpa_dev.c  | 113 +++++++++++++++++++++++++++++++++++
>  drivers/vdpa/pds/vdpa_dev.h  |  15 +++++
>  include/linux/pds/pds_vdpa.h |  92 ++++++++++++++++++++++++++++
>  7 files changed, 245 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/vdpa/pds/vdpa_dev.c
>  create mode 100644 drivers/vdpa/pds/vdpa_dev.h
>
> diff --git a/drivers/vdpa/pds/Makefile b/drivers/vdpa/pds/Makefile
> index a9cd2f450ae1..13b50394ec64 100644
> --- a/drivers/vdpa/pds/Makefile
> +++ b/drivers/vdpa/pds/Makefile
> @@ -3,6 +3,7 @@
>
>  obj-$(CONFIG_PDS_VDPA) := pds_vdpa.o
>
> -pds_vdpa-y := aux_drv.o
> +pds_vdpa-y := aux_drv.o \
> +             vdpa_dev.o
>
>  pds_vdpa-$(CONFIG_DEBUG_FS) += debugfs.o
> diff --git a/drivers/vdpa/pds/aux_drv.c b/drivers/vdpa/pds/aux_drv.c
> index b3f36170253c..63e40ae68211 100644
> --- a/drivers/vdpa/pds/aux_drv.c
> +++ b/drivers/vdpa/pds/aux_drv.c
> @@ -2,6 +2,8 @@
>  /* Copyright(c) 2023 Advanced Micro Devices, Inc */
>
>  #include <linux/auxiliary_bus.h>
> +#include <linux/pci.h>
> +#include <linux/vdpa.h>
>
>  #include <linux/pds/pds_core.h>
>  #include <linux/pds/pds_auxbus.h>
> @@ -9,6 +11,7 @@
>
>  #include "aux_drv.h"
>  #include "debugfs.h"
> +#include "vdpa_dev.h"
>
>  static const struct auxiliary_device_id pds_vdpa_id_table[] = {
>         { .name = PDS_VDPA_DEV_NAME, },
> @@ -30,6 +33,7 @@ static int pds_vdpa_probe(struct auxiliary_device *aux_dev,
>                 return -ENOMEM;
>
>         vdpa_aux->padev = padev;
> +       vdpa_aux->vf_id = pci_iov_vf_id(padev->vf->pdev);
>         auxiliary_set_drvdata(aux_dev, vdpa_aux);
>
>         /* Register our PDS client with the pds_core */
> @@ -40,8 +44,15 @@ static int pds_vdpa_probe(struct auxiliary_device *aux_dev,
>                 goto err_free_mem;
>         }
>
> +       /* Get device ident info and set up the vdpa_mgmt_dev */
> +       err = pds_vdpa_get_mgmt_info(vdpa_aux);
> +       if (err)
> +               goto err_aux_unreg;
> +
>         return 0;
>
> +err_aux_unreg:
> +       padev->ops->unregister_client(padev);
>  err_free_mem:
>         kfree(vdpa_aux);
>         auxiliary_set_drvdata(aux_dev, NULL);
> @@ -54,6 +65,8 @@ static void pds_vdpa_remove(struct auxiliary_device *aux_dev)
>         struct pds_vdpa_aux *vdpa_aux = auxiliary_get_drvdata(aux_dev);
>         struct device *dev = &aux_dev->dev;
>
> +       pci_free_irq_vectors(vdpa_aux->padev->vf->pdev);
> +
>         vdpa_aux->padev->ops->unregister_client(vdpa_aux->padev);
>
>         kfree(vdpa_aux);
> diff --git a/drivers/vdpa/pds/aux_drv.h b/drivers/vdpa/pds/aux_drv.h
> index 14e465944dfd..94ba7abcaa43 100644
> --- a/drivers/vdpa/pds/aux_drv.h
> +++ b/drivers/vdpa/pds/aux_drv.h
> @@ -10,6 +10,13 @@
>  struct pds_vdpa_aux {
>         struct pds_auxiliary_dev *padev;
>
> +       struct vdpa_mgmt_dev vdpa_mdev;
> +
> +       struct pds_vdpa_ident ident;
> +
> +       int vf_id;
>         struct dentry *dentry;
> +
> +       int nintrs;
>  };
>  #endif /* _AUX_DRV_H_ */
> diff --git a/drivers/vdpa/pds/debugfs.c b/drivers/vdpa/pds/debugfs.c
> index 3c163dc7b66f..7b7e90fd6578 100644
> --- a/drivers/vdpa/pds/debugfs.c
> +++ b/drivers/vdpa/pds/debugfs.c
> @@ -1,7 +1,10 @@
>  // SPDX-License-Identifier: GPL-2.0-only
>  /* Copyright(c) 2023 Advanced Micro Devices, Inc */
>
> +#include <linux/vdpa.h>
> +
>  #include <linux/pds/pds_core.h>
> +#include <linux/pds/pds_vdpa.h>
>  #include <linux/pds/pds_auxbus.h>
>
>  #include "aux_drv.h"
> diff --git a/drivers/vdpa/pds/vdpa_dev.c b/drivers/vdpa/pds/vdpa_dev.c
> new file mode 100644
> index 000000000000..bd840688503c
> --- /dev/null
> +++ b/drivers/vdpa/pds/vdpa_dev.c
> @@ -0,0 +1,113 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/* Copyright(c) 2023 Advanced Micro Devices, Inc */
> +
> +#include <linux/pci.h>
> +#include <linux/vdpa.h>
> +#include <uapi/linux/vdpa.h>
> +
> +#include <linux/pds/pds_core.h>
> +#include <linux/pds/pds_adminq.h>
> +#include <linux/pds/pds_auxbus.h>
> +#include <linux/pds/pds_vdpa.h>
> +
> +#include "vdpa_dev.h"
> +#include "aux_drv.h"
> +
> +static struct virtio_device_id pds_vdpa_id_table[] = {
> +       {VIRTIO_ID_NET, VIRTIO_DEV_ANY_ID},
> +       {0},
> +};
> +
> +static int pds_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
> +                           const struct vdpa_dev_set_config *add_config)
> +{
> +       return -EOPNOTSUPP;
> +}
> +
> +static void pds_vdpa_dev_del(struct vdpa_mgmt_dev *mdev,
> +                            struct vdpa_device *vdpa_dev)
> +{
> +}
> +
> +static const struct vdpa_mgmtdev_ops pds_vdpa_mgmt_dev_ops = {
> +       .dev_add = pds_vdpa_dev_add,
> +       .dev_del = pds_vdpa_dev_del
> +};
> +
> +int pds_vdpa_get_mgmt_info(struct pds_vdpa_aux *vdpa_aux)
> +{
> +       struct pds_vdpa_ident_cmd ident_cmd = {
> +               .opcode = PDS_VDPA_CMD_IDENT,
> +               .vf_id = cpu_to_le16(vdpa_aux->vf_id),
> +       };
> +       struct pds_vdpa_comp ident_comp = {0};
> +       struct vdpa_mgmt_dev *mgmt;
> +       struct device *pf_dev;
> +       struct pci_dev *pdev;
> +       dma_addr_t ident_pa;
> +       struct device *dev;
> +       u16 max_vqs;
> +       int err;
> +
> +       dev = &vdpa_aux->padev->aux_dev.dev;
> +       pdev = vdpa_aux->padev->vf->pdev;
> +       mgmt = &vdpa_aux->vdpa_mdev;
> +
> +       /* Get resource info through the PF's adminq.  It is a block of info,
> +        * so we need to map some memory for PF to make available to the
> +        * firmware for writing the data.
> +        */

It looks to me pds_vdpa_ident is not very large:

struct pds_vdpa_ident {
        __le64 hw_features;
        __le16 max_vqs;
        __le16 max_qlen;
        __le16 min_qlen;
};

Any reason it is not packed into some type of the comp structure of adminq?

Others look good.

Thanks




* Re: [PATCH RFC v2 virtio 3/7] pds_vdpa: virtio bar setup for vdpa
  2023-03-09  1:30 ` [PATCH RFC v2 virtio 3/7] pds_vdpa: virtio bar setup for vdpa Shannon Nelson
@ 2023-03-15  7:05     ` Jason Wang
  0 siblings, 0 replies; 36+ messages in thread
From: Jason Wang @ 2023-03-15  7:05 UTC (permalink / raw)
  To: Shannon Nelson
  Cc: brett.creeley, mst, netdev, virtualization, kuba, drivers, davem

On Thu, Mar 9, 2023 at 9:31 AM Shannon Nelson <shannon.nelson@amd.com> wrote:
>
> The PDS vDPA device has a virtio BAR for describing itself, and
> the pds_vdpa driver needs to access it.  Here we copy liberally
> from the existing drivers/virtio/virtio_pci_modern_dev.c as it
> has what we need, but we need to modify it so that it can work
> with our device id and so we can use our own DMA mask.

By passing a pointer to a customized id probing routine to vp_modern_probe()?
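
Something along those lines could look roughly like this (purely a
sketch of the idea, not an existing interface; the device_id_check hook
and the dma_mask field below are made up to illustrate it):

/* Hypothetical hooks: vp_modern_probe() would call a driver-supplied
 * id check and honor a driver-supplied DMA mask, so pds_vdpa would not
 * need its own copy of the modern-device probe code.
 */
static int pds_vdpa_device_id_check(struct pci_dev *pdev)
{
	/* accept only the DSC vDPA VF; return the virtio device id to use */
	if (pdev->vendor != PCI_VENDOR_ID_PENSANDO)
		return -ENODEV;

	return VIRTIO_ID_NET;
}

and then in pds_vdpa_probe() something like:

	vdpa_aux->vd_mdev.pci_dev = padev->vf->pdev;
	vdpa_aux->vd_mdev.device_id_check = pds_vdpa_device_id_check;
	vdpa_aux->vd_mdev.dma_mask = DMA_BIT_MASK(64);	/* or whatever the DSC supports */
	err = vp_modern_probe(&vdpa_aux->vd_mdev);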

Thanks


>
> We suspect there is room for discussion here about making the
> existing code a little more flexible, but we thought we'd at
> least start the discussion here.
>
> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
> ---
>  drivers/vdpa/pds/Makefile     |   1 +
>  drivers/vdpa/pds/aux_drv.c    |  14 ++
>  drivers/vdpa/pds/aux_drv.h    |   1 +
>  drivers/vdpa/pds/debugfs.c    |   1 +
>  drivers/vdpa/pds/vdpa_dev.c   |   1 +
>  drivers/vdpa/pds/virtio_pci.c | 281 ++++++++++++++++++++++++++++++++++
>  drivers/vdpa/pds/virtio_pci.h |   8 +
>  7 files changed, 307 insertions(+)
>  create mode 100644 drivers/vdpa/pds/virtio_pci.c
>  create mode 100644 drivers/vdpa/pds/virtio_pci.h
>
> diff --git a/drivers/vdpa/pds/Makefile b/drivers/vdpa/pds/Makefile
> index 13b50394ec64..ca2efa8c6eb5 100644
> --- a/drivers/vdpa/pds/Makefile
> +++ b/drivers/vdpa/pds/Makefile
> @@ -4,6 +4,7 @@
>  obj-$(CONFIG_PDS_VDPA) := pds_vdpa.o
>
>  pds_vdpa-y := aux_drv.o \
> +             virtio_pci.o \
>               vdpa_dev.o
>
>  pds_vdpa-$(CONFIG_DEBUG_FS) += debugfs.o
> diff --git a/drivers/vdpa/pds/aux_drv.c b/drivers/vdpa/pds/aux_drv.c
> index 63e40ae68211..28158d0d98a5 100644
> --- a/drivers/vdpa/pds/aux_drv.c
> +++ b/drivers/vdpa/pds/aux_drv.c
> @@ -4,6 +4,7 @@
>  #include <linux/auxiliary_bus.h>
>  #include <linux/pci.h>
>  #include <linux/vdpa.h>
> +#include <linux/virtio_pci_modern.h>
>
>  #include <linux/pds/pds_core.h>
>  #include <linux/pds/pds_auxbus.h>
> @@ -12,6 +13,7 @@
>  #include "aux_drv.h"
>  #include "debugfs.h"
>  #include "vdpa_dev.h"
> +#include "virtio_pci.h"
>
>  static const struct auxiliary_device_id pds_vdpa_id_table[] = {
>         { .name = PDS_VDPA_DEV_NAME, },
> @@ -49,8 +51,19 @@ static int pds_vdpa_probe(struct auxiliary_device *aux_dev,
>         if (err)
>                 goto err_aux_unreg;
>
> +       /* Find the virtio configuration */
> +       vdpa_aux->vd_mdev.pci_dev = padev->vf->pdev;
> +       err = pds_vdpa_probe_virtio(&vdpa_aux->vd_mdev);
> +       if (err) {
> +               dev_err(dev, "Unable to probe for virtio configuration: %pe\n",
> +                       ERR_PTR(err));
> +               goto err_free_mgmt_info;
> +       }
> +
>         return 0;
>
> +err_free_mgmt_info:
> +       pci_free_irq_vectors(padev->vf->pdev);
>  err_aux_unreg:
>         padev->ops->unregister_client(padev);
>  err_free_mem:
> @@ -65,6 +78,7 @@ static void pds_vdpa_remove(struct auxiliary_device *aux_dev)
>         struct pds_vdpa_aux *vdpa_aux = auxiliary_get_drvdata(aux_dev);
>         struct device *dev = &aux_dev->dev;
>
> +       pds_vdpa_remove_virtio(&vdpa_aux->vd_mdev);
>         pci_free_irq_vectors(vdpa_aux->padev->vf->pdev);
>
>         vdpa_aux->padev->ops->unregister_client(vdpa_aux->padev);
> diff --git a/drivers/vdpa/pds/aux_drv.h b/drivers/vdpa/pds/aux_drv.h
> index 94ba7abcaa43..87ac3c01c476 100644
> --- a/drivers/vdpa/pds/aux_drv.h
> +++ b/drivers/vdpa/pds/aux_drv.h
> @@ -16,6 +16,7 @@ struct pds_vdpa_aux {
>
>         int vf_id;
>         struct dentry *dentry;
> +       struct virtio_pci_modern_device vd_mdev;
>
>         int nintrs;
>  };
> diff --git a/drivers/vdpa/pds/debugfs.c b/drivers/vdpa/pds/debugfs.c
> index 7b7e90fd6578..aa5e9677fe74 100644
> --- a/drivers/vdpa/pds/debugfs.c
> +++ b/drivers/vdpa/pds/debugfs.c
> @@ -1,6 +1,7 @@
>  // SPDX-License-Identifier: GPL-2.0-only
>  /* Copyright(c) 2023 Advanced Micro Devices, Inc */
>
> +#include <linux/virtio_pci_modern.h>
>  #include <linux/vdpa.h>
>
>  #include <linux/pds/pds_core.h>
> diff --git a/drivers/vdpa/pds/vdpa_dev.c b/drivers/vdpa/pds/vdpa_dev.c
> index bd840688503c..15d623297203 100644
> --- a/drivers/vdpa/pds/vdpa_dev.c
> +++ b/drivers/vdpa/pds/vdpa_dev.c
> @@ -4,6 +4,7 @@
>  #include <linux/pci.h>
>  #include <linux/vdpa.h>
>  #include <uapi/linux/vdpa.h>
> +#include <linux/virtio_pci_modern.h>
>
>  #include <linux/pds/pds_core.h>
>  #include <linux/pds/pds_adminq.h>
> diff --git a/drivers/vdpa/pds/virtio_pci.c b/drivers/vdpa/pds/virtio_pci.c
> new file mode 100644
> index 000000000000..cb879619dac3
> --- /dev/null
> +++ b/drivers/vdpa/pds/virtio_pci.c
> @@ -0,0 +1,281 @@
> +// SPDX-License-Identifier: GPL-2.0-or-later
> +
> +/*
> + * adapted from drivers/virtio/virtio_pci_modern_dev.c, v6.0-rc1
> + */
> +
> +#include <linux/virtio_pci_modern.h>
> +#include <linux/pci.h>
> +
> +#include "virtio_pci.h"
> +
> +/*
> + * pds_vdpa_map_capability - map a part of virtio pci capability
> + * @mdev: the modern virtio-pci device
> + * @off: offset of the capability
> + * @minlen: minimal length of the capability
> + * @align: align requirement
> + * @start: start from the capability
> + * @size: map size
> + * @len: the length that is actually mapped
> + * @pa: physical address of the capability
> + *
> + * Returns the io address for the part of the capability
> + */
> +static void __iomem *
> +pds_vdpa_map_capability(struct virtio_pci_modern_device *mdev, int off,
> +                       size_t minlen, u32 align, u32 start, u32 size,
> +                       size_t *len, resource_size_t *pa)
> +{
> +       struct pci_dev *dev = mdev->pci_dev;
> +       u8 bar;
> +       u32 offset, length;
> +       void __iomem *p;
> +
> +       pci_read_config_byte(dev, off + offsetof(struct virtio_pci_cap,
> +                                                bar),
> +                            &bar);
> +       pci_read_config_dword(dev, off + offsetof(struct virtio_pci_cap, offset),
> +                             &offset);
> +       pci_read_config_dword(dev, off + offsetof(struct virtio_pci_cap, length),
> +                             &length);
> +
> +       /* Check if the BAR may have changed since we requested the region. */
> +       if (bar >= PCI_STD_NUM_BARS || !(mdev->modern_bars & (1 << bar))) {
> +               dev_err(&dev->dev,
> +                       "virtio_pci: bar unexpectedly changed to %u\n", bar);
> +               return NULL;
> +       }
> +
> +       if (length <= start) {
> +               dev_err(&dev->dev,
> +                       "virtio_pci: bad capability len %u (>%u expected)\n",
> +                       length, start);
> +               return NULL;
> +       }
> +
> +       if (length - start < minlen) {
> +               dev_err(&dev->dev,
> +                       "virtio_pci: bad capability len %u (>=%zu expected)\n",
> +                       length, minlen);
> +               return NULL;
> +       }
> +
> +       length -= start;
> +
> +       if (start + offset < offset) {
> +               dev_err(&dev->dev,
> +                       "virtio_pci: map wrap-around %u+%u\n",
> +                       start, offset);
> +               return NULL;
> +       }
> +
> +       offset += start;
> +
> +       if (offset & (align - 1)) {
> +               dev_err(&dev->dev,
> +                       "virtio_pci: offset %u not aligned to %u\n",
> +                       offset, align);
> +               return NULL;
> +       }
> +
> +       if (length > size)
> +               length = size;
> +
> +       if (len)
> +               *len = length;
> +
> +       if (minlen + offset < minlen ||
> +           minlen + offset > pci_resource_len(dev, bar)) {
> +               dev_err(&dev->dev,
> +                       "virtio_pci: map virtio %zu@%u out of range on bar %i length %lu\n",
> +                       minlen, offset,
> +                       bar, (unsigned long)pci_resource_len(dev, bar));
> +               return NULL;
> +       }
> +
> +       p = pci_iomap_range(dev, bar, offset, length);
> +       if (!p)
> +               dev_err(&dev->dev,
> +                       "virtio_pci: unable to map virtio %u@%u on bar %i\n",
> +                       length, offset, bar);
> +       else if (pa)
> +               *pa = pci_resource_start(dev, bar) + offset;
> +
> +       return p;
> +}
> +
> +/**
> + * virtio_pci_find_capability - walk capabilities to find device info.
> + * @dev: the pci device
> + * @cfg_type: the VIRTIO_PCI_CAP_* value we seek
> + * @ioresource_types: IORESOURCE_MEM and/or IORESOURCE_IO.
> + * @bars: the bitmask of BARs
> + *
> + * Returns offset of the capability, or 0.
> + */
> +static inline int virtio_pci_find_capability(struct pci_dev *dev, u8 cfg_type,
> +                                            u32 ioresource_types, int *bars)
> +{
> +       int pos;
> +
> +       for (pos = pci_find_capability(dev, PCI_CAP_ID_VNDR);
> +            pos > 0;
> +            pos = pci_find_next_capability(dev, pos, PCI_CAP_ID_VNDR)) {
> +               u8 type, bar;
> +
> +               pci_read_config_byte(dev, pos + offsetof(struct virtio_pci_cap,
> +                                                        cfg_type),
> +                                    &type);
> +               pci_read_config_byte(dev, pos + offsetof(struct virtio_pci_cap,
> +                                                        bar),
> +                                    &bar);
> +
> +               /* Ignore structures with reserved BAR values */
> +               if (bar >= PCI_STD_NUM_BARS)
> +                       continue;
> +
> +               if (type == cfg_type) {
> +                       if (pci_resource_len(dev, bar) &&
> +                           pci_resource_flags(dev, bar) & ioresource_types) {
> +                               *bars |= (1 << bar);
> +                               return pos;
> +                       }
> +               }
> +       }
> +       return 0;
> +}
> +
> +/*
> + * pds_vdpa_probe_virtio: probe the modern virtio pci device, note that the
> + * caller is required to enable the PCI device before calling this function.
> + * @mdev: the modern virtio-pci device
> + *
> + * Return 0 on success, negative errno on failure
> + */
> +int pds_vdpa_probe_virtio(struct virtio_pci_modern_device *mdev)
> +{
> +       struct pci_dev *pci_dev = mdev->pci_dev;
> +       int err, common, isr, notify, device;
> +       u32 notify_length;
> +       u32 notify_offset;
> +
> +       /* check for a common config: if not, use legacy mode (bar 0). */
> +       common = virtio_pci_find_capability(pci_dev, VIRTIO_PCI_CAP_COMMON_CFG,
> +                                           IORESOURCE_IO | IORESOURCE_MEM,
> +                                           &mdev->modern_bars);
> +       if (!common) {
> +               dev_info(&pci_dev->dev,
> +                        "virtio_pci: missing common config\n");
> +               return -ENODEV;
> +       }
> +
> +       /* If common is there, these should be too... */
> +       isr = virtio_pci_find_capability(pci_dev, VIRTIO_PCI_CAP_ISR_CFG,
> +                                        IORESOURCE_IO | IORESOURCE_MEM,
> +                                        &mdev->modern_bars);
> +       notify = virtio_pci_find_capability(pci_dev, VIRTIO_PCI_CAP_NOTIFY_CFG,
> +                                           IORESOURCE_IO | IORESOURCE_MEM,
> +                                           &mdev->modern_bars);
> +       if (!isr || !notify) {
> +               dev_err(&pci_dev->dev,
> +                       "virtio_pci: missing capabilities %i/%i/%i\n",
> +                       common, isr, notify);
> +               return -EINVAL;
> +       }
> +
> +       /* Device capability is only mandatory for devices that have
> +        * device-specific configuration.
> +        */
> +       device = virtio_pci_find_capability(pci_dev, VIRTIO_PCI_CAP_DEVICE_CFG,
> +                                           IORESOURCE_IO | IORESOURCE_MEM,
> +                                           &mdev->modern_bars);
> +
> +       err = pci_request_selected_regions(pci_dev, mdev->modern_bars,
> +                                          "virtio-pci-modern");
> +       if (err)
> +               return err;
> +
> +       err = -EINVAL;
> +       mdev->common = pds_vdpa_map_capability(mdev, common,
> +                                              sizeof(struct virtio_pci_common_cfg),
> +                                              4, 0,
> +                                              sizeof(struct virtio_pci_common_cfg),
> +                                              NULL, NULL);
> +       if (!mdev->common)
> +               goto err_map_common;
> +       mdev->isr = pds_vdpa_map_capability(mdev, isr, sizeof(u8), 1,
> +                                           0, 1, NULL, NULL);
> +       if (!mdev->isr)
> +               goto err_map_isr;
> +
> +       /* Read notify_off_multiplier from config space. */
> +       pci_read_config_dword(pci_dev,
> +                             notify + offsetof(struct virtio_pci_notify_cap,
> +                                               notify_off_multiplier),
> +                             &mdev->notify_offset_multiplier);
> +       /* Read notify length and offset from config space. */
> +       pci_read_config_dword(pci_dev,
> +                             notify + offsetof(struct virtio_pci_notify_cap,
> +                                               cap.length),
> +                             &notify_length);
> +
> +       pci_read_config_dword(pci_dev,
> +                             notify + offsetof(struct virtio_pci_notify_cap,
> +                                               cap.offset),
> +                             &notify_offset);
> +
> +       /* We don't know how many VQs we'll map ahead of time.
> +        * If notify length is small, map it all now.
> +        * Otherwise, map each VQ individually later.
> +        */
> +       if ((u64)notify_length + (notify_offset % PAGE_SIZE) <= PAGE_SIZE) {
> +               mdev->notify_base = pds_vdpa_map_capability(mdev, notify,
> +                                                           2, 2,
> +                                                           0, notify_length,
> +                                                           &mdev->notify_len,
> +                                                           &mdev->notify_pa);
> +               if (!mdev->notify_base)
> +                       goto err_map_notify;
> +       } else {
> +               mdev->notify_map_cap = notify;
> +       }
> +
> +       /* Again, we don't know how much we should map, but PAGE_SIZE
> +        * is more than enough for all existing devices.
> +        */
> +       if (device) {
> +               mdev->device = pds_vdpa_map_capability(mdev, device, 0, 4,
> +                                                      0, PAGE_SIZE,
> +                                                      &mdev->device_len,
> +                                                      NULL);
> +               if (!mdev->device)
> +                       goto err_map_device;
> +       }
> +
> +       return 0;
> +
> +err_map_device:
> +       if (mdev->notify_base)
> +               pci_iounmap(pci_dev, mdev->notify_base);
> +err_map_notify:
> +       pci_iounmap(pci_dev, mdev->isr);
> +err_map_isr:
> +       pci_iounmap(pci_dev, mdev->common);
> +err_map_common:
> +       pci_release_selected_regions(pci_dev, mdev->modern_bars);
> +       return err;
> +}
> +
> +void pds_vdpa_remove_virtio(struct virtio_pci_modern_device *mdev)
> +{
> +       struct pci_dev *pci_dev = mdev->pci_dev;
> +
> +       if (mdev->device)
> +               pci_iounmap(pci_dev, mdev->device);
> +       if (mdev->notify_base)
> +               pci_iounmap(pci_dev, mdev->notify_base);
> +       pci_iounmap(pci_dev, mdev->isr);
> +       pci_iounmap(pci_dev, mdev->common);
> +       pci_release_selected_regions(pci_dev, mdev->modern_bars);
> +}
> diff --git a/drivers/vdpa/pds/virtio_pci.h b/drivers/vdpa/pds/virtio_pci.h
> new file mode 100644
> index 000000000000..f017cfa1173c
> --- /dev/null
> +++ b/drivers/vdpa/pds/virtio_pci.h
> @@ -0,0 +1,8 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/* Copyright(c) 2023 Advanced Micro Devices, Inc */
> +
> +#ifndef _PDS_VIRTIO_PCI_H_
> +#define _PDS_VIRTIO_PCI_H_
> +int pds_vdpa_probe_virtio(struct virtio_pci_modern_device *mdev);
> +void pds_vdpa_remove_virtio(struct virtio_pci_modern_device *mdev);
> +#endif /* _PDS_VIRTIO_PCI_H_ */
> --
> 2.17.1
>

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH RFC v2 virtio 4/7] pds_vdpa: add vdpa config client commands
  2023-03-09  1:30 ` [PATCH RFC v2 virtio 4/7] pds_vdpa: add vdpa config client commands Shannon Nelson
@ 2023-03-15  7:05     ` Jason Wang
  0 siblings, 0 replies; 36+ messages in thread
From: Jason Wang @ 2023-03-15  7:05 UTC (permalink / raw)
  To: Shannon Nelson
  Cc: brett.creeley, mst, netdev, virtualization, kuba, drivers, davem

On Thu, Mar 9, 2023 at 9:31 AM Shannon Nelson <shannon.nelson@amd.com> wrote:
>
> These are the adminq commands that will be needed for
> setting up and using the vDPA device.

It would be better to explain in which cases the driver should use the adminq;
I see some functions that overlap with the common configuration capability.
More below.

>
> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
> ---
>  drivers/vdpa/pds/Makefile    |   1 +
>  drivers/vdpa/pds/cmds.c      | 207 +++++++++++++++++++++++++++++++++++
>  drivers/vdpa/pds/cmds.h      |  16 +++
>  drivers/vdpa/pds/vdpa_dev.h  |  36 +++++-
>  include/linux/pds/pds_vdpa.h | 175 +++++++++++++++++++++++++++++
>  5 files changed, 434 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/vdpa/pds/cmds.c
>  create mode 100644 drivers/vdpa/pds/cmds.h
>
> diff --git a/drivers/vdpa/pds/Makefile b/drivers/vdpa/pds/Makefile
> index ca2efa8c6eb5..7211eba3d942 100644
> --- a/drivers/vdpa/pds/Makefile
> +++ b/drivers/vdpa/pds/Makefile
> @@ -4,6 +4,7 @@
>  obj-$(CONFIG_PDS_VDPA) := pds_vdpa.o
>
>  pds_vdpa-y := aux_drv.o \
> +             cmds.o \
>               virtio_pci.o \
>               vdpa_dev.o
>
> diff --git a/drivers/vdpa/pds/cmds.c b/drivers/vdpa/pds/cmds.c
> new file mode 100644
> index 000000000000..45410739107c
> --- /dev/null
> +++ b/drivers/vdpa/pds/cmds.c
> @@ -0,0 +1,207 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/* Copyright(c) 2023 Advanced Micro Devices, Inc */
> +
> +#include <linux/vdpa.h>
> +#include <linux/virtio_pci_modern.h>
> +
> +#include <linux/pds/pds_core_if.h>
> +#include <linux/pds/pds_adminq.h>
> +#include <linux/pds/pds_auxbus.h>
> +#include <linux/pds/pds_vdpa.h>
> +
> +#include "vdpa_dev.h"
> +#include "aux_drv.h"
> +#include "cmds.h"
> +
> +int pds_vdpa_init_hw(struct pds_vdpa_device *pdsv)
> +{
> +       struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
> +       struct device *dev = &padev->aux_dev.dev;
> +       struct pds_vdpa_init_cmd init_cmd = {
> +               .opcode = PDS_VDPA_CMD_INIT,
> +               .vdpa_index = pdsv->vdpa_index,
> +               .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
> +               .len = cpu_to_le32(sizeof(struct virtio_net_config)),
> +               .config_pa = 0,   /* we use the PCI space, not an alternate space */
> +       };
> +       struct pds_vdpa_comp init_comp = {0};
> +       int err;
> +
> +       /* Initialize the vdpa/virtio device */
> +       err = padev->ops->adminq_cmd(padev,
> +                                    (union pds_core_adminq_cmd *)&init_cmd,
> +                                    sizeof(init_cmd),
> +                                    (union pds_core_adminq_comp *)&init_comp,
> +                                    0);
> +       if (err)
> +               dev_err(dev, "Failed to init hw, status %d: %pe\n",
> +                       init_comp.status, ERR_PTR(err));
> +
> +       return err;
> +}
> +
> +int pds_vdpa_cmd_reset(struct pds_vdpa_device *pdsv)
> +{

This function is not used.

And I wonder what the difference is between a reset via the adminq and a
reset via pds_vdpa_set_status(0)?

> +       struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
> +       struct device *dev = &padev->aux_dev.dev;
> +       struct pds_vdpa_cmd cmd = {
> +               .opcode = PDS_VDPA_CMD_RESET,
> +               .vdpa_index = pdsv->vdpa_index,
> +               .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
> +       };
> +       struct pds_vdpa_comp comp = {0};
> +       int err;
> +
> +       err = padev->ops->adminq_cmd(padev,
> +                                    (union pds_core_adminq_cmd *)&cmd,
> +                                    sizeof(cmd),
> +                                    (union pds_core_adminq_comp *)&comp,
> +                                    0);
> +       if (err)
> +               dev_err(dev, "Failed to reset hw, status %d: %pe\n",
> +                       comp.status, ERR_PTR(err));

It might be better to use dev_dbg() here since it can be triggered by the guest.
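
i.e. something like (same message, just demoted):

if (err)
        dev_dbg(dev, "Failed to reset hw, status %d: %pe\n",
                comp.status, ERR_PTR(err));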

> +
> +       return err;
> +}
> +
> +int pds_vdpa_cmd_set_mac(struct pds_vdpa_device *pdsv, u8 *mac)
> +{
> +       struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
> +       struct device *dev = &padev->aux_dev.dev;
> +       struct pds_vdpa_setattr_cmd cmd = {
> +               .opcode = PDS_VDPA_CMD_SET_ATTR,
> +               .vdpa_index = pdsv->vdpa_index,
> +               .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
> +               .attr = PDS_VDPA_ATTR_MAC,
> +       };
> +       struct pds_vdpa_comp comp = {0};
> +       int err;
> +
> +       ether_addr_copy(cmd.mac, mac);
> +       err = padev->ops->adminq_cmd(padev,
> +                                    (union pds_core_adminq_cmd *)&cmd,
> +                                    sizeof(cmd),
> +                                    (union pds_core_adminq_comp *)&comp,
> +                                    0);
> +       if (err)
> +               dev_err(dev, "Failed to set mac address %pM, status %d: %pe\n",
> +                       mac, comp.status, ERR_PTR(err));
> +
> +       return err;
> +}
> +
> +int pds_vdpa_cmd_set_max_vq_pairs(struct pds_vdpa_device *pdsv, u16 max_vqp)
> +{
> +       struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
> +       struct device *dev = &padev->aux_dev.dev;
> +       struct pds_vdpa_setattr_cmd cmd = {
> +               .opcode = PDS_VDPA_CMD_SET_ATTR,
> +               .vdpa_index = pdsv->vdpa_index,
> +               .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
> +               .attr = PDS_VDPA_ATTR_MAX_VQ_PAIRS,
> +               .max_vq_pairs = cpu_to_le16(max_vqp),
> +       };
> +       struct pds_vdpa_comp comp = {0};
> +       int err;
> +
> +       err = padev->ops->adminq_cmd(padev,
> +                                    (union pds_core_adminq_cmd *)&cmd,
> +                                    sizeof(cmd),
> +                                    (union pds_core_adminq_comp *)&comp,
> +                                    0);
> +       if (err)
> +               dev_err(dev, "Failed to set max vq pairs %u, status %d: %pe\n",
> +                       max_vqp, comp.status, ERR_PTR(err));
> +
> +       return err;
> +}
> +
> +int pds_vdpa_cmd_init_vq(struct pds_vdpa_device *pdsv, u16 qid,
> +                        struct pds_vdpa_vq_info *vq_info)
> +{
> +       struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
> +       struct device *dev = &padev->aux_dev.dev;
> +       struct pds_vdpa_vq_init_comp comp = {0};
> +       struct pds_vdpa_vq_init_cmd cmd = {
> +               .opcode = PDS_VDPA_CMD_VQ_INIT,
> +               .vdpa_index = pdsv->vdpa_index,
> +               .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
> +               .qid = cpu_to_le16(qid),
> +               .len = cpu_to_le16(ilog2(vq_info->q_len)),
> +               .desc_addr = cpu_to_le64(vq_info->desc_addr),
> +               .avail_addr = cpu_to_le64(vq_info->avail_addr),
> +               .used_addr = cpu_to_le64(vq_info->used_addr),
> +               .intr_index = cpu_to_le16(qid),
> +       };
> +       int err;
> +
> +       dev_dbg(dev, "%s: qid %d len %d desc_addr %#llx avail_addr %#llx used_addr %#llx\n",
> +               __func__, qid, ilog2(vq_info->q_len),
> +               vq_info->desc_addr, vq_info->avail_addr, vq_info->used_addr);
> +
> +       err = padev->ops->adminq_cmd(padev,
> +                                    (union pds_core_adminq_cmd *)&cmd,
> +                                    sizeof(cmd),
> +                                    (union pds_core_adminq_comp *)&comp,
> +                                    0);

We map the common cfg in pds_vdpa_probe_virtio(), so is there any reason for
using the adminq here? (I guess it might be faster?)
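
If the common cfg path were used instead, the vq setup could presumably go
through the existing vp_modern helpers; a rough sketch only, using the
vd_mdev added in patch 3/7:

struct virtio_pci_modern_device *vd_mdev = &pdsv->vdpa_aux->vd_mdev;

vp_modern_set_queue_size(vd_mdev, qid, vq_info->q_len);
vp_modern_queue_address(vd_mdev, qid, vq_info->desc_addr,
                        vq_info->avail_addr, vq_info->used_addr);
vp_modern_set_queue_enable(vd_mdev, qid, true);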

> +       if (err) {
> +               dev_err(dev, "Failed to init vq %d, status %d: %pe\n",
> +                       qid, comp.status, ERR_PTR(err));
> +               return err;
> +       }
> +
> +       vq_info->hw_qtype = comp.hw_qtype;

What does hw_qtype mean?

> +       vq_info->hw_qindex = le16_to_cpu(comp.hw_qindex);
> +
> +       return 0;
> +}
> +
> +int pds_vdpa_cmd_reset_vq(struct pds_vdpa_device *pdsv, u16 qid)
> +{
> +       struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
> +       struct device *dev = &padev->aux_dev.dev;
> +       struct pds_vdpa_vq_reset_cmd cmd = {
> +               .opcode = PDS_VDPA_CMD_VQ_RESET,
> +               .vdpa_index = pdsv->vdpa_index,
> +               .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
> +               .qid = cpu_to_le16(qid),
> +       };
> +       struct pds_vdpa_comp comp = {0};
> +       int err;
> +
> +       err = padev->ops->adminq_cmd(padev,
> +                                    (union pds_core_adminq_cmd *)&cmd,
> +                                    sizeof(cmd),
> +                                    (union pds_core_adminq_comp *)&comp,
> +                                    0);
> +       if (err)
> +               dev_err(dev, "Failed to reset vq %d, status %d: %pe\n",
> +                       qid, comp.status, ERR_PTR(err));
> +
> +       return err;
> +}
> +
> +int pds_vdpa_cmd_set_features(struct pds_vdpa_device *pdsv, u64 features)
> +{
> +       struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
> +       struct device *dev = &padev->aux_dev.dev;
> +       struct pds_vdpa_set_features_cmd cmd = {
> +               .opcode = PDS_VDPA_CMD_SET_FEATURES,
> +               .vdpa_index = pdsv->vdpa_index,
> +               .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
> +               .features = cpu_to_le64(features),
> +       };
> +       struct pds_vdpa_comp comp = {0};
> +       int err;
> +
> +       err = padev->ops->adminq_cmd(padev,
> +                                    (union pds_core_adminq_cmd *)&cmd,
> +                                    sizeof(cmd),
> +                                    (union pds_core_adminq_comp *)&comp,
> +                                    0);
> +       if (err)
> +               dev_err(dev, "Failed to set features %#llx, status %d: %pe\n",
> +                       features, comp.status, ERR_PTR(err));
> +
> +       return err;
> +}
> diff --git a/drivers/vdpa/pds/cmds.h b/drivers/vdpa/pds/cmds.h
> new file mode 100644
> index 000000000000..72e19f4efde6
> --- /dev/null
> +++ b/drivers/vdpa/pds/cmds.h
> @@ -0,0 +1,16 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/* Copyright(c) 2023 Advanced Micro Devices, Inc */
> +
> +#ifndef _VDPA_CMDS_H_
> +#define _VDPA_CMDS_H_
> +
> +int pds_vdpa_init_hw(struct pds_vdpa_device *pdsv);
> +
> +int pds_vdpa_cmd_reset(struct pds_vdpa_device *pdsv);
> +int pds_vdpa_cmd_set_mac(struct pds_vdpa_device *pdsv, u8 *mac);
> +int pds_vdpa_cmd_set_max_vq_pairs(struct pds_vdpa_device *pdsv, u16 max_vqp);
> +int pds_vdpa_cmd_init_vq(struct pds_vdpa_device *pdsv, u16 qid,
> +                        struct pds_vdpa_vq_info *vq_info);
> +int pds_vdpa_cmd_reset_vq(struct pds_vdpa_device *pdsv, u16 qid);
> +int pds_vdpa_cmd_set_features(struct pds_vdpa_device *pdsv, u64 features);
> +#endif /* _VDPA_CMDS_H_ */
> diff --git a/drivers/vdpa/pds/vdpa_dev.h b/drivers/vdpa/pds/vdpa_dev.h
> index 97fab833a0aa..33284ebe538c 100644
> --- a/drivers/vdpa/pds/vdpa_dev.h
> +++ b/drivers/vdpa/pds/vdpa_dev.h
> @@ -4,11 +4,45 @@
>  #ifndef _VDPA_DEV_H_
>  #define _VDPA_DEV_H_
>
> -#define PDS_VDPA_MAX_QUEUES    65
> +#include <linux/pci.h>
> +#include <linux/vdpa.h>
> +
> +struct pds_vdpa_vq_info {
> +       bool ready;
> +       u64 desc_addr;
> +       u64 avail_addr;
> +       u64 used_addr;
> +       u32 q_len;
> +       u16 qid;
> +       int irq;
> +       char irq_name[32];
> +
> +       void __iomem *notify;
> +       dma_addr_t notify_pa;
> +
> +       u64 doorbell;
> +       u16 avail_idx;
> +       u16 used_idx;
> +
> +       u8 hw_qtype;
> +       u16 hw_qindex;
>
> +       struct vdpa_callback event_cb;
> +       struct pds_vdpa_device *pdsv;
> +};
> +
> +#define PDS_VDPA_MAX_QUEUES    65
> +#define PDS_VDPA_MAX_QLEN      32768
>  struct pds_vdpa_device {
>         struct vdpa_device vdpa_dev;
>         struct pds_vdpa_aux *vdpa_aux;
> +
> +       struct pds_vdpa_vq_info vqs[PDS_VDPA_MAX_QUEUES];
> +       u64 req_features;               /* features requested by vdpa */
> +       u64 actual_features;            /* features negotiated and in use */
> +       u8 vdpa_index;                  /* rsvd for future subdevice use */
> +       u8 num_vqs;                     /* num vqs in use */
> +       struct vdpa_callback config_cb;
>  };
>
>  int pds_vdpa_get_mgmt_info(struct pds_vdpa_aux *vdpa_aux);
> diff --git a/include/linux/pds/pds_vdpa.h b/include/linux/pds/pds_vdpa.h
> index 3f7c08551163..b6a4cb4d3c6b 100644
> --- a/include/linux/pds/pds_vdpa.h
> +++ b/include/linux/pds/pds_vdpa.h
> @@ -101,4 +101,179 @@ struct pds_vdpa_ident_cmd {
>         __le32 len;
>         __le64 ident_pa;
>  };
> +
> +/**
> + * struct pds_vdpa_status_cmd - STATUS_UPDATE command
> + * @opcode:    Opcode PDS_VDPA_CMD_STATUS_UPDATE
> + * @vdpa_index: Index for vdpa subdevice
> + * @vf_id:     VF id
> + * @status:    new status bits
> + */
> +struct pds_vdpa_status_cmd {
> +       u8     opcode;
> +       u8     vdpa_index;
> +       __le16 vf_id;
> +       u8     status;
> +};
> +
> +/**
> + * enum pds_vdpa_attr - List of VDPA device attributes
> + * @PDS_VDPA_ATTR_MAC:          MAC address
> + * @PDS_VDPA_ATTR_MAX_VQ_PAIRS: Max virtqueue pairs
> + */
> +enum pds_vdpa_attr {
> +       PDS_VDPA_ATTR_MAC          = 1,
> +       PDS_VDPA_ATTR_MAX_VQ_PAIRS = 2,
> +};
> +
> +/**
> + * struct pds_vdpa_setattr_cmd - SET_ATTR command
> + * @opcode:            Opcode PDS_VDPA_CMD_SET_ATTR
> + * @vdpa_index:                Index for vdpa subdevice
> + * @vf_id:             VF id
> + * @attr:              attribute to be changed (enum pds_vdpa_attr)
> + * @pad:               Word boundary padding
> + * @mac:               new mac address to be assigned as vdpa device address
> + * @max_vq_pairs:      new limit of virtqueue pairs
> + */
> +struct pds_vdpa_setattr_cmd {
> +       u8     opcode;
> +       u8     vdpa_index;
> +       __le16 vf_id;
> +       u8     attr;
> +       u8     pad[3];
> +       union {
> +               u8 mac[6];
> +               __le16 max_vq_pairs;
> +       } __packed;
> +};
> +
> +/**
> + * struct pds_vdpa_vq_init_cmd - queue init command
> + * @opcode: Opcode PDS_VDPA_CMD_VQ_INIT
> + * @vdpa_index:        Index for vdpa subdevice
> + * @vf_id:     VF id
> + * @qid:       Queue id (bit0 clear = rx, bit0 set = tx, qid=N is ctrlq)
> + * @len:       log(2) of max descriptor count
> + * @desc_addr: DMA address of descriptor area
> + * @avail_addr:        DMA address of available descriptors (aka driver area)
> + * @used_addr: DMA address of used descriptors (aka device area)
> + * @intr_index:        interrupt index
> + */
> +struct pds_vdpa_vq_init_cmd {
> +       u8     opcode;
> +       u8     vdpa_index;
> +       __le16 vf_id;
> +       __le16 qid;
> +       __le16 len;
> +       __le64 desc_addr;
> +       __le64 avail_addr;
> +       __le64 used_addr;
> +       __le16 intr_index;

Just wondering in which cases intr_index can be different from qid; in
pds_vdpa_cmd_init_vq() we had:

                .intr_index = cpu_to_le16(qid),

Thanks


> +};
> +
> +/**
> + * struct pds_vdpa_vq_init_comp - queue init completion
> + * @status:    Status of the command (enum pds_core_status_code)
> + * @hw_qtype:  HW queue type, used in doorbell selection
> + * @hw_qindex: HW queue index, used in doorbell selection
> + * @rsvd:      Word boundary padding
> + * @color:     Color bit
> + */
> +struct pds_vdpa_vq_init_comp {
> +       u8     status;
> +       u8     hw_qtype;
> +       __le16 hw_qindex;
> +       u8     rsvd[11];
> +       u8     color;
> +};
> +
> +/**
> + * struct pds_vdpa_vq_reset_cmd - queue reset command
> + * @opcode:    Opcode PDS_VDPA_CMD_VQ_RESET
> + * @vdpa_index:        Index for vdpa subdevice
> + * @vf_id:     VF id
> + * @qid:       Queue id
> + */
> +struct pds_vdpa_vq_reset_cmd {
> +       u8     opcode;
> +       u8     vdpa_index;
> +       __le16 vf_id;
> +       __le16 qid;
> +};
> +
> +/**
> + * struct pds_vdpa_set_features_cmd - set hw features
> + * @opcode: Opcode PDS_VDPA_CMD_SET_FEATURES
> + * @vdpa_index:        Index for vdpa subdevice
> + * @vf_id:     VF id
> + * @rsvd:       Word boundary padding
> + * @features:  Feature bit mask
> + */
> +struct pds_vdpa_set_features_cmd {
> +       u8     opcode;
> +       u8     vdpa_index;
> +       __le16 vf_id;
> +       __le32 rsvd;
> +       __le64 features;
> +};
> +
> +/**
> + * struct pds_vdpa_vq_set_state_cmd - set vq state
> + * @opcode:    Opcode PDS_VDPA_CMD_VQ_SET_STATE
> + * @vdpa_index:        Index for vdpa subdevice
> + * @vf_id:     VF id
> + * @qid:       Queue id
> + * @avail:     Device avail index.
> + * @used:      Device used index.
> + *
> + * If the virtqueue uses packed descriptor format, then the avail and used
> + * index must have a wrap count.  The bits should be arranged like the upper
> + * 16 bits in the device available notification data: 15 bit index, 1 bit wrap.
> + */
> +struct pds_vdpa_vq_set_state_cmd {
> +       u8     opcode;
> +       u8     vdpa_index;
> +       __le16 vf_id;
> +       __le16 qid;
> +       __le16 avail;
> +       __le16 used;
> +};
> +
> +/**
> + * struct pds_vdpa_vq_get_state_cmd - get vq state
> + * @opcode:    Opcode PDS_VDPA_CMD_VQ_GET_STATE
> + * @vdpa_index:        Index for vdpa subdevice
> + * @vf_id:     VF id
> + * @qid:       Queue id
> + */
> +struct pds_vdpa_vq_get_state_cmd {
> +       u8     opcode;
> +       u8     vdpa_index;
> +       __le16 vf_id;
> +       __le16 qid;
> +};
> +
> +/**
> + * struct pds_vdpa_vq_get_state_comp - get vq state completion
> + * @status:    Status of the command (enum pds_core_status_code)
> + * @rsvd0:      Word boundary padding
> + * @avail:     Device avail index.
> + * @used:      Device used index.
> + * @rsvd:       Word boundary padding
> + * @color:     Color bit
> + *
> + * If the virtqueue uses packed descriptor format, then the avail and used
> + * index will have a wrap count.  The bits will be arranged like the "next"
> + * part of device available notification data: 15 bit index, 1 bit wrap.
> + */
> +struct pds_vdpa_vq_get_state_comp {
> +       u8     status;
> +       u8     rsvd0;
> +       __le16 avail;
> +       __le16 used;
> +       u8     rsvd[9];
> +       u8     color;
> +};
> +
>  #endif /* _PDS_VDPA_IF_H_ */
> --
> 2.17.1
>

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH RFC v2 virtio 4/7] pds_vdpa: add vdpa config client commands
@ 2023-03-15  7:05     ` Jason Wang
  0 siblings, 0 replies; 36+ messages in thread
From: Jason Wang @ 2023-03-15  7:05 UTC (permalink / raw)
  To: Shannon Nelson
  Cc: mst, virtualization, brett.creeley, davem, netdev, kuba, drivers

On Thu, Mar 9, 2023 at 9:31 AM Shannon Nelson <shannon.nelson@amd.com> wrote:
>
> These are the adminq commands that will be needed for
> setting up and using the vDPA device.

It's better to explain under which case the driver should use adminq,
I see some functions overlap with common configuration capability.
More below.

>
> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
> ---
>  drivers/vdpa/pds/Makefile    |   1 +
>  drivers/vdpa/pds/cmds.c      | 207 +++++++++++++++++++++++++++++++++++
>  drivers/vdpa/pds/cmds.h      |  16 +++
>  drivers/vdpa/pds/vdpa_dev.h  |  36 +++++-
>  include/linux/pds/pds_vdpa.h | 175 +++++++++++++++++++++++++++++
>  5 files changed, 434 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/vdpa/pds/cmds.c
>  create mode 100644 drivers/vdpa/pds/cmds.h
>
> diff --git a/drivers/vdpa/pds/Makefile b/drivers/vdpa/pds/Makefile
> index ca2efa8c6eb5..7211eba3d942 100644
> --- a/drivers/vdpa/pds/Makefile
> +++ b/drivers/vdpa/pds/Makefile
> @@ -4,6 +4,7 @@
>  obj-$(CONFIG_PDS_VDPA) := pds_vdpa.o
>
>  pds_vdpa-y := aux_drv.o \
> +             cmds.o \
>               virtio_pci.o \
>               vdpa_dev.o
>
> diff --git a/drivers/vdpa/pds/cmds.c b/drivers/vdpa/pds/cmds.c
> new file mode 100644
> index 000000000000..45410739107c
> --- /dev/null
> +++ b/drivers/vdpa/pds/cmds.c
> @@ -0,0 +1,207 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/* Copyright(c) 2023 Advanced Micro Devices, Inc */
> +
> +#include <linux/vdpa.h>
> +#include <linux/virtio_pci_modern.h>
> +
> +#include <linux/pds/pds_core_if.h>
> +#include <linux/pds/pds_adminq.h>
> +#include <linux/pds/pds_auxbus.h>
> +#include <linux/pds/pds_vdpa.h>
> +
> +#include "vdpa_dev.h"
> +#include "aux_drv.h"
> +#include "cmds.h"
> +
> +int pds_vdpa_init_hw(struct pds_vdpa_device *pdsv)
> +{
> +       struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
> +       struct device *dev = &padev->aux_dev.dev;
> +       struct pds_vdpa_init_cmd init_cmd = {
> +               .opcode = PDS_VDPA_CMD_INIT,
> +               .vdpa_index = pdsv->vdpa_index,
> +               .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
> +               .len = cpu_to_le32(sizeof(struct virtio_net_config)),
> +               .config_pa = 0,   /* we use the PCI space, not an alternate space */
> +       };
> +       struct pds_vdpa_comp init_comp = {0};
> +       int err;
> +
> +       /* Initialize the vdpa/virtio device */
> +       err = padev->ops->adminq_cmd(padev,
> +                                    (union pds_core_adminq_cmd *)&init_cmd,
> +                                    sizeof(init_cmd),
> +                                    (union pds_core_adminq_comp *)&init_comp,
> +                                    0);
> +       if (err)
> +               dev_err(dev, "Failed to init hw, status %d: %pe\n",
> +                       init_comp.status, ERR_PTR(err));
> +
> +       return err;
> +}
> +
> +int pds_vdpa_cmd_reset(struct pds_vdpa_device *pdsv)
> +{

This function is not used.

And I wonder what's the difference between reset via adminq and reset
via pds_vdpa_set_status(0) ?

> +       struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
> +       struct device *dev = &padev->aux_dev.dev;
> +       struct pds_vdpa_cmd cmd = {
> +               .opcode = PDS_VDPA_CMD_RESET,
> +               .vdpa_index = pdsv->vdpa_index,
> +               .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
> +       };
> +       struct pds_vdpa_comp comp = {0};
> +       int err;
> +
> +       err = padev->ops->adminq_cmd(padev,
> +                                    (union pds_core_adminq_cmd *)&cmd,
> +                                    sizeof(cmd),
> +                                    (union pds_core_adminq_comp *)&comp,
> +                                    0);
> +       if (err)
> +               dev_err(dev, "Failed to reset hw, status %d: %pe\n",
> +                       comp.status, ERR_PTR(err));

It might be better to use deb_dbg() here since it can be triggered by the guest.

> +
> +       return err;
> +}
> +
> +int pds_vdpa_cmd_set_mac(struct pds_vdpa_device *pdsv, u8 *mac)
> +{
> +       struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
> +       struct device *dev = &padev->aux_dev.dev;
> +       struct pds_vdpa_setattr_cmd cmd = {
> +               .opcode = PDS_VDPA_CMD_SET_ATTR,
> +               .vdpa_index = pdsv->vdpa_index,
> +               .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
> +               .attr = PDS_VDPA_ATTR_MAC,
> +       };
> +       struct pds_vdpa_comp comp = {0};
> +       int err;
> +
> +       ether_addr_copy(cmd.mac, mac);
> +       err = padev->ops->adminq_cmd(padev,
> +                                    (union pds_core_adminq_cmd *)&cmd,
> +                                    sizeof(cmd),
> +                                    (union pds_core_adminq_comp *)&comp,
> +                                    0);
> +       if (err)
> +               dev_err(dev, "Failed to set mac address %pM, status %d: %pe\n",
> +                       mac, comp.status, ERR_PTR(err));
> +
> +       return err;
> +}
> +
> +int pds_vdpa_cmd_set_max_vq_pairs(struct pds_vdpa_device *pdsv, u16 max_vqp)
> +{
> +       struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
> +       struct device *dev = &padev->aux_dev.dev;
> +       struct pds_vdpa_setattr_cmd cmd = {
> +               .opcode = PDS_VDPA_CMD_SET_ATTR,
> +               .vdpa_index = pdsv->vdpa_index,
> +               .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
> +               .attr = PDS_VDPA_ATTR_MAX_VQ_PAIRS,
> +               .max_vq_pairs = cpu_to_le16(max_vqp),
> +       };
> +       struct pds_vdpa_comp comp = {0};
> +       int err;
> +
> +       err = padev->ops->adminq_cmd(padev,
> +                                    (union pds_core_adminq_cmd *)&cmd,
> +                                    sizeof(cmd),
> +                                    (union pds_core_adminq_comp *)&comp,
> +                                    0);
> +       if (err)
> +               dev_err(dev, "Failed to set max vq pairs %u, status %d: %pe\n",
> +                       max_vqp, comp.status, ERR_PTR(err));
> +
> +       return err;
> +}
> +
> +int pds_vdpa_cmd_init_vq(struct pds_vdpa_device *pdsv, u16 qid,
> +                        struct pds_vdpa_vq_info *vq_info)
> +{
> +       struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
> +       struct device *dev = &padev->aux_dev.dev;
> +       struct pds_vdpa_vq_init_comp comp = {0};
> +       struct pds_vdpa_vq_init_cmd cmd = {
> +               .opcode = PDS_VDPA_CMD_VQ_INIT,
> +               .vdpa_index = pdsv->vdpa_index,
> +               .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
> +               .qid = cpu_to_le16(qid),
> +               .len = cpu_to_le16(ilog2(vq_info->q_len)),
> +               .desc_addr = cpu_to_le64(vq_info->desc_addr),
> +               .avail_addr = cpu_to_le64(vq_info->avail_addr),
> +               .used_addr = cpu_to_le64(vq_info->used_addr),
> +               .intr_index = cpu_to_le16(qid),
> +       };
> +       int err;
> +
> +       dev_dbg(dev, "%s: qid %d len %d desc_addr %#llx avail_addr %#llx used_addr %#llx\n",
> +               __func__, qid, ilog2(vq_info->q_len),
> +               vq_info->desc_addr, vq_info->avail_addr, vq_info->used_addr);
> +
> +       err = padev->ops->adminq_cmd(padev,
> +                                    (union pds_core_adminq_cmd *)&cmd,
> +                                    sizeof(cmd),
> +                                    (union pds_core_adminq_comp *)&comp,
> +                                    0);

We map the common cfg in pds_vdpa_probe_virtio(); is there any reason for
using the adminq here? (I guess it might be faster?)

> +       if (err) {
> +               dev_err(dev, "Failed to init vq %d, status %d: %pe\n",
> +                       qid, comp.status, ERR_PTR(err));
> +               return err;
> +       }
> +
> +       vq_info->hw_qtype = comp.hw_qtype;

What does hw_qtype mean?

> +       vq_info->hw_qindex = le16_to_cpu(comp.hw_qindex);
> +
> +       return 0;
> +}
> +
> +int pds_vdpa_cmd_reset_vq(struct pds_vdpa_device *pdsv, u16 qid)
> +{
> +       struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
> +       struct device *dev = &padev->aux_dev.dev;
> +       struct pds_vdpa_vq_reset_cmd cmd = {
> +               .opcode = PDS_VDPA_CMD_VQ_RESET,
> +               .vdpa_index = pdsv->vdpa_index,
> +               .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
> +               .qid = cpu_to_le16(qid),
> +       };
> +       struct pds_vdpa_comp comp = {0};
> +       int err;
> +
> +       err = padev->ops->adminq_cmd(padev,
> +                                    (union pds_core_adminq_cmd *)&cmd,
> +                                    sizeof(cmd),
> +                                    (union pds_core_adminq_comp *)&comp,
> +                                    0);
> +       if (err)
> +               dev_err(dev, "Failed to reset vq %d, status %d: %pe\n",
> +                       qid, comp.status, ERR_PTR(err));
> +
> +       return err;
> +}
> +
> +int pds_vdpa_cmd_set_features(struct pds_vdpa_device *pdsv, u64 features)
> +{
> +       struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
> +       struct device *dev = &padev->aux_dev.dev;
> +       struct pds_vdpa_set_features_cmd cmd = {
> +               .opcode = PDS_VDPA_CMD_SET_FEATURES,
> +               .vdpa_index = pdsv->vdpa_index,
> +               .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
> +               .features = cpu_to_le64(features),
> +       };
> +       struct pds_vdpa_comp comp = {0};
> +       int err;
> +
> +       err = padev->ops->adminq_cmd(padev,
> +                                    (union pds_core_adminq_cmd *)&cmd,
> +                                    sizeof(cmd),
> +                                    (union pds_core_adminq_comp *)&comp,
> +                                    0);
> +       if (err)
> +               dev_err(dev, "Failed to set features %#llx, status %d: %pe\n",
> +                       features, comp.status, ERR_PTR(err));
> +
> +       return err;
> +}
> diff --git a/drivers/vdpa/pds/cmds.h b/drivers/vdpa/pds/cmds.h
> new file mode 100644
> index 000000000000..72e19f4efde6
> --- /dev/null
> +++ b/drivers/vdpa/pds/cmds.h
> @@ -0,0 +1,16 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/* Copyright(c) 2023 Advanced Micro Devices, Inc */
> +
> +#ifndef _VDPA_CMDS_H_
> +#define _VDPA_CMDS_H_
> +
> +int pds_vdpa_init_hw(struct pds_vdpa_device *pdsv);
> +
> +int pds_vdpa_cmd_reset(struct pds_vdpa_device *pdsv);
> +int pds_vdpa_cmd_set_mac(struct pds_vdpa_device *pdsv, u8 *mac);
> +int pds_vdpa_cmd_set_max_vq_pairs(struct pds_vdpa_device *pdsv, u16 max_vqp);
> +int pds_vdpa_cmd_init_vq(struct pds_vdpa_device *pdsv, u16 qid,
> +                        struct pds_vdpa_vq_info *vq_info);
> +int pds_vdpa_cmd_reset_vq(struct pds_vdpa_device *pdsv, u16 qid);
> +int pds_vdpa_cmd_set_features(struct pds_vdpa_device *pdsv, u64 features);
> +#endif /* _VDPA_CMDS_H_ */
> diff --git a/drivers/vdpa/pds/vdpa_dev.h b/drivers/vdpa/pds/vdpa_dev.h
> index 97fab833a0aa..33284ebe538c 100644
> --- a/drivers/vdpa/pds/vdpa_dev.h
> +++ b/drivers/vdpa/pds/vdpa_dev.h
> @@ -4,11 +4,45 @@
>  #ifndef _VDPA_DEV_H_
>  #define _VDPA_DEV_H_
>
> -#define PDS_VDPA_MAX_QUEUES    65
> +#include <linux/pci.h>
> +#include <linux/vdpa.h>
> +
> +struct pds_vdpa_vq_info {
> +       bool ready;
> +       u64 desc_addr;
> +       u64 avail_addr;
> +       u64 used_addr;
> +       u32 q_len;
> +       u16 qid;
> +       int irq;
> +       char irq_name[32];
> +
> +       void __iomem *notify;
> +       dma_addr_t notify_pa;
> +
> +       u64 doorbell;
> +       u16 avail_idx;
> +       u16 used_idx;
> +
> +       u8 hw_qtype;
> +       u16 hw_qindex;
>
> +       struct vdpa_callback event_cb;
> +       struct pds_vdpa_device *pdsv;
> +};
> +
> +#define PDS_VDPA_MAX_QUEUES    65
> +#define PDS_VDPA_MAX_QLEN      32768
>  struct pds_vdpa_device {
>         struct vdpa_device vdpa_dev;
>         struct pds_vdpa_aux *vdpa_aux;
> +
> +       struct pds_vdpa_vq_info vqs[PDS_VDPA_MAX_QUEUES];
> +       u64 req_features;               /* features requested by vdpa */
> +       u64 actual_features;            /* features negotiated and in use */
> +       u8 vdpa_index;                  /* rsvd for future subdevice use */
> +       u8 num_vqs;                     /* num vqs in use */
> +       struct vdpa_callback config_cb;
>  };
>
>  int pds_vdpa_get_mgmt_info(struct pds_vdpa_aux *vdpa_aux);
> diff --git a/include/linux/pds/pds_vdpa.h b/include/linux/pds/pds_vdpa.h
> index 3f7c08551163..b6a4cb4d3c6b 100644
> --- a/include/linux/pds/pds_vdpa.h
> +++ b/include/linux/pds/pds_vdpa.h
> @@ -101,4 +101,179 @@ struct pds_vdpa_ident_cmd {
>         __le32 len;
>         __le64 ident_pa;
>  };
> +
> +/**
> + * struct pds_vdpa_status_cmd - STATUS_UPDATE command
> + * @opcode:    Opcode PDS_VDPA_CMD_STATUS_UPDATE
> + * @vdpa_index: Index for vdpa subdevice
> + * @vf_id:     VF id
> + * @status:    new status bits
> + */
> +struct pds_vdpa_status_cmd {
> +       u8     opcode;
> +       u8     vdpa_index;
> +       __le16 vf_id;
> +       u8     status;
> +};
> +
> +/**
> + * enum pds_vdpa_attr - List of VDPA device attributes
> + * @PDS_VDPA_ATTR_MAC:          MAC address
> + * @PDS_VDPA_ATTR_MAX_VQ_PAIRS: Max virtqueue pairs
> + */
> +enum pds_vdpa_attr {
> +       PDS_VDPA_ATTR_MAC          = 1,
> +       PDS_VDPA_ATTR_MAX_VQ_PAIRS = 2,
> +};
> +
> +/**
> + * struct pds_vdpa_setattr_cmd - SET_ATTR command
> + * @opcode:            Opcode PDS_VDPA_CMD_SET_ATTR
> + * @vdpa_index:                Index for vdpa subdevice
> + * @vf_id:             VF id
> + * @attr:              attribute to be changed (enum pds_vdpa_attr)
> + * @pad:               Word boundary padding
> + * @mac:               new mac address to be assigned as vdpa device address
> + * @max_vq_pairs:      new limit of virtqueue pairs
> + */
> +struct pds_vdpa_setattr_cmd {
> +       u8     opcode;
> +       u8     vdpa_index;
> +       __le16 vf_id;
> +       u8     attr;
> +       u8     pad[3];
> +       union {
> +               u8 mac[6];
> +               __le16 max_vq_pairs;
> +       } __packed;
> +};
> +
> +/**
> + * struct pds_vdpa_vq_init_cmd - queue init command
> + * @opcode: Opcode PDS_VDPA_CMD_VQ_INIT
> + * @vdpa_index:        Index for vdpa subdevice
> + * @vf_id:     VF id
> + * @qid:       Queue id (bit0 clear = rx, bit0 set = tx, qid=N is ctrlq)
> + * @len:       log(2) of max descriptor count
> + * @desc_addr: DMA address of descriptor area
> + * @avail_addr:        DMA address of available descriptors (aka driver area)
> + * @used_addr: DMA address of used descriptors (aka device area)
> + * @intr_index:        interrupt index
> + */
> +struct pds_vdpa_vq_init_cmd {
> +       u8     opcode;
> +       u8     vdpa_index;
> +       __le16 vf_id;
> +       __le16 qid;
> +       __le16 len;
> +       __le64 desc_addr;
> +       __le64 avail_addr;
> +       __le64 used_addr;
> +       __le16 intr_index;

Just wondering in which case intr_index can be different from qid; in
pds_vdpa_cmd_init_vq() we had:

                .intr_index = cpu_to_le16(qid),
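
One hypothetical case (not implied by the patch itself) would be a device
with fewer interrupt vectors than virtqueues, where queues have to share
vectors; a sketch of what that might look like, with nintrs standing in
for however many vectors were actually allocated:

                .intr_index = cpu_to_le16(qid % nintrs),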

Thanks


> +};
> +
> +/**
> + * struct pds_vdpa_vq_init_comp - queue init completion
> + * @status:    Status of the command (enum pds_core_status_code)
> + * @hw_qtype:  HW queue type, used in doorbell selection
> + * @hw_qindex: HW queue index, used in doorbell selection
> + * @rsvd:      Word boundary padding
> + * @color:     Color bit
> + */
> +struct pds_vdpa_vq_init_comp {
> +       u8     status;
> +       u8     hw_qtype;
> +       __le16 hw_qindex;
> +       u8     rsvd[11];
> +       u8     color;
> +};
> +
> +/**
> + * struct pds_vdpa_vq_reset_cmd - queue reset command
> + * @opcode:    Opcode PDS_VDPA_CMD_VQ_RESET
> + * @vdpa_index:        Index for vdpa subdevice
> + * @vf_id:     VF id
> + * @qid:       Queue id
> + */
> +struct pds_vdpa_vq_reset_cmd {
> +       u8     opcode;
> +       u8     vdpa_index;
> +       __le16 vf_id;
> +       __le16 qid;
> +};
> +
> +/**
> + * struct pds_vdpa_set_features_cmd - set hw features
> + * @opcode: Opcode PDS_VDPA_CMD_SET_FEATURES
> + * @vdpa_index:        Index for vdpa subdevice
> + * @vf_id:     VF id
> + * @rsvd:       Word boundary padding
> + * @features:  Feature bit mask
> + */
> +struct pds_vdpa_set_features_cmd {
> +       u8     opcode;
> +       u8     vdpa_index;
> +       __le16 vf_id;
> +       __le32 rsvd;
> +       __le64 features;
> +};
> +
> +/**
> + * struct pds_vdpa_vq_set_state_cmd - set vq state
> + * @opcode:    Opcode PDS_VDPA_CMD_VQ_SET_STATE
> + * @vdpa_index:        Index for vdpa subdevice
> + * @vf_id:     VF id
> + * @qid:       Queue id
> + * @avail:     Device avail index.
> + * @used:      Device used index.
> + *
> + * If the virtqueue uses packed descriptor format, then the avail and used
> + * index must have a wrap count.  The bits should be arranged like the upper
> + * 16 bits in the device available notification data: 15 bit index, 1 bit wrap.
> + */
> +struct pds_vdpa_vq_set_state_cmd {
> +       u8     opcode;
> +       u8     vdpa_index;
> +       __le16 vf_id;
> +       __le16 qid;
> +       __le16 avail;
> +       __le16 used;
> +};
> +
> +/**
> + * struct pds_vdpa_vq_get_state_cmd - get vq state
> + * @opcode:    Opcode PDS_VDPA_CMD_VQ_GET_STATE
> + * @vdpa_index:        Index for vdpa subdevice
> + * @vf_id:     VF id
> + * @qid:       Queue id
> + */
> +struct pds_vdpa_vq_get_state_cmd {
> +       u8     opcode;
> +       u8     vdpa_index;
> +       __le16 vf_id;
> +       __le16 qid;
> +};
> +
> +/**
> + * struct pds_vdpa_vq_get_state_comp - get vq state completion
> + * @status:    Status of the command (enum pds_core_status_code)
> + * @rsvd0:      Word boundary padding
> + * @avail:     Device avail index.
> + * @used:      Device used index.
> + * @rsvd:       Word boundary padding
> + * @color:     Color bit
> + *
> + * If the virtqueue uses packed descriptor format, then the avail and used
> + * index will have a wrap count.  The bits will be arranged like the "next"
> + * part of device available notification data: 15 bit index, 1 bit wrap.
> + */
> +struct pds_vdpa_vq_get_state_comp {
> +       u8     status;
> +       u8     rsvd0;
> +       __le16 avail;
> +       __le16 used;
> +       u8     rsvd[9];
> +       u8     color;
> +};
> +
>  #endif /* _PDS_VDPA_IF_H_ */
> --
> 2.17.1
>


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH RFC v2 virtio 5/7] pds_vdpa: add support for vdpa and vdpamgmt interfaces
  2023-03-09  1:30 ` [PATCH RFC v2 virtio 5/7] pds_vdpa: add support for vdpa and vdpamgmt interfaces Shannon Nelson
@ 2023-03-15  7:05     ` Jason Wang
  0 siblings, 0 replies; 36+ messages in thread
From: Jason Wang @ 2023-03-15  7:05 UTC (permalink / raw)
  To: Shannon Nelson
  Cc: brett.creeley, mst, netdev, virtualization, kuba, drivers, davem

On Thu, Mar 9, 2023 at 9:31 AM Shannon Nelson <shannon.nelson@amd.com> wrote:
>
> This is the vDPA device support, where we advertise that we can
> support the virtio queues and deal with the configuration work
> through the pds_core's adminq.
>
> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
> ---
>  drivers/vdpa/pds/aux_drv.c  |  15 +
>  drivers/vdpa/pds/aux_drv.h  |   1 +
>  drivers/vdpa/pds/debugfs.c  | 172 ++++++++++++
>  drivers/vdpa/pds/debugfs.h  |   8 +
>  drivers/vdpa/pds/vdpa_dev.c | 545 +++++++++++++++++++++++++++++++++++-
>  5 files changed, 740 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/vdpa/pds/aux_drv.c b/drivers/vdpa/pds/aux_drv.c
> index 28158d0d98a5..d706f06f7400 100644
> --- a/drivers/vdpa/pds/aux_drv.c
> +++ b/drivers/vdpa/pds/aux_drv.c
> @@ -60,8 +60,21 @@ static int pds_vdpa_probe(struct auxiliary_device *aux_dev,
>                 goto err_free_mgmt_info;
>         }
>
> +       /* Let vdpa know that we can provide devices */
> +       err = vdpa_mgmtdev_register(&vdpa_aux->vdpa_mdev);
> +       if (err) {
> +               dev_err(dev, "%s: Failed to initialize vdpa_mgmt interface: %pe\n",
> +                       __func__, ERR_PTR(err));
> +               goto err_free_virtio;
> +       }
> +
> +       pds_vdpa_debugfs_add_pcidev(vdpa_aux);
> +       pds_vdpa_debugfs_add_ident(vdpa_aux);
> +
>         return 0;
>
> +err_free_virtio:
> +       pds_vdpa_remove_virtio(&vdpa_aux->vd_mdev);
>  err_free_mgmt_info:
>         pci_free_irq_vectors(padev->vf->pdev);
>  err_aux_unreg:
> @@ -78,11 +91,13 @@ static void pds_vdpa_remove(struct auxiliary_device *aux_dev)
>         struct pds_vdpa_aux *vdpa_aux = auxiliary_get_drvdata(aux_dev);
>         struct device *dev = &aux_dev->dev;
>
> +       vdpa_mgmtdev_unregister(&vdpa_aux->vdpa_mdev);
>         pds_vdpa_remove_virtio(&vdpa_aux->vd_mdev);
>         pci_free_irq_vectors(vdpa_aux->padev->vf->pdev);
>
>         vdpa_aux->padev->ops->unregister_client(vdpa_aux->padev);
>
> +       pds_vdpa_debugfs_del_vdpadev(vdpa_aux);
>         kfree(vdpa_aux);
>         auxiliary_set_drvdata(aux_dev, NULL);
>
> diff --git a/drivers/vdpa/pds/aux_drv.h b/drivers/vdpa/pds/aux_drv.h
> index 87ac3c01c476..1ab1ce64da7c 100644
> --- a/drivers/vdpa/pds/aux_drv.h
> +++ b/drivers/vdpa/pds/aux_drv.h
> @@ -11,6 +11,7 @@ struct pds_vdpa_aux {
>         struct pds_auxiliary_dev *padev;
>
>         struct vdpa_mgmt_dev vdpa_mdev;
> +       struct pds_vdpa_device *pdsv;
>
>         struct pds_vdpa_ident ident;
>
> diff --git a/drivers/vdpa/pds/debugfs.c b/drivers/vdpa/pds/debugfs.c
> index aa5e9677fe74..b3ee4f42f3b6 100644
> --- a/drivers/vdpa/pds/debugfs.c
> +++ b/drivers/vdpa/pds/debugfs.c
> @@ -9,6 +9,7 @@
>  #include <linux/pds/pds_auxbus.h>
>
>  #include "aux_drv.h"
> +#include "vdpa_dev.h"
>  #include "debugfs.h"
>
>  #ifdef CONFIG_DEBUG_FS
> @@ -26,4 +27,175 @@ void pds_vdpa_debugfs_destroy(void)
>         dbfs_dir = NULL;
>  }
>
> +#define PRINT_SBIT_NAME(__seq, __f, __name)                     \
> +       do {                                                    \
> +               if ((__f) & (__name))                               \
> +                       seq_printf(__seq, " %s", &#__name[16]); \
> +       } while (0)
> +
> +static void print_status_bits(struct seq_file *seq, u16 status)
> +{
> +       seq_puts(seq, "status:");
> +       PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_ACKNOWLEDGE);
> +       PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_DRIVER);
> +       PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_DRIVER_OK);
> +       PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_FEATURES_OK);
> +       PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_NEEDS_RESET);
> +       PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_FAILED);
> +       seq_puts(seq, "\n");
> +}
> +
> +#define PRINT_FBIT_NAME(__seq, __f, __name)                \
> +       do {                                               \
> +               if ((__f) & BIT_ULL(__name))                 \
> +                       seq_printf(__seq, " %s", #__name); \
> +       } while (0)
> +
> +static void print_feature_bits(struct seq_file *seq, u64 features)
> +{
> +       seq_puts(seq, "features:");
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_CSUM);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_GUEST_CSUM);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_CTRL_GUEST_OFFLOADS);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_MTU);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_MAC);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_GUEST_TSO4);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_GUEST_TSO6);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_GUEST_ECN);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_GUEST_UFO);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_HOST_TSO4);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_HOST_TSO6);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_HOST_ECN);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_HOST_UFO);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_MRG_RXBUF);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_STATUS);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_CTRL_VQ);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_CTRL_RX);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_CTRL_VLAN);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_CTRL_RX_EXTRA);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_GUEST_ANNOUNCE);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_MQ);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_CTRL_MAC_ADDR);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_HASH_REPORT);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_RSS);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_RSC_EXT);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_STANDBY);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_SPEED_DUPLEX);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_F_NOTIFY_ON_EMPTY);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_F_ANY_LAYOUT);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_F_VERSION_1);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_F_ACCESS_PLATFORM);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_F_RING_PACKED);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_F_ORDER_PLATFORM);
> +       PRINT_FBIT_NAME(seq, features, VIRTIO_F_SR_IOV);
> +       seq_puts(seq, "\n");

Should we print the features that are not understood here?
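
A possible sketch of that (PDS_VDPA_PRINTED_FEATURES is a hypothetical mask
collecting the bits named above, not something defined in the patch):

        u64 unknown = features & ~PDS_VDPA_PRINTED_FEATURES;

        if (unknown)
                seq_printf(seq, " unrecognized: %#llx", unknown);
        seq_puts(seq, "\n");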

> +}
> +
> +void pds_vdpa_debugfs_add_pcidev(struct pds_vdpa_aux *vdpa_aux)
> +{
> +       vdpa_aux->dentry = debugfs_create_dir(pci_name(vdpa_aux->padev->vf->pdev), dbfs_dir);
> +}
> +
> +static int identity_show(struct seq_file *seq, void *v)
> +{
> +       struct pds_vdpa_aux *vdpa_aux = seq->private;
> +       struct vdpa_mgmt_dev *mgmt;
> +
> +       seq_printf(seq, "aux_dev:            %s\n",
> +                  dev_name(&vdpa_aux->padev->aux_dev.dev));
> +
> +       mgmt = &vdpa_aux->vdpa_mdev;
> +       seq_printf(seq, "max_vqs:            %d\n", mgmt->max_supported_vqs);
> +       seq_printf(seq, "config_attr_mask:   %#llx\n", mgmt->config_attr_mask);
> +       seq_printf(seq, "supported_features: %#llx\n", mgmt->supported_features);
> +       print_feature_bits(seq, mgmt->supported_features);
> +
> +       return 0;
> +}
> +DEFINE_SHOW_ATTRIBUTE(identity);
> +
> +void pds_vdpa_debugfs_add_ident(struct pds_vdpa_aux *vdpa_aux)
> +{
> +       debugfs_create_file("identity", 0400, vdpa_aux->dentry,
> +                           vdpa_aux, &identity_fops);
> +}
> +
> +static int config_show(struct seq_file *seq, void *v)
> +{
> +       struct pds_vdpa_device *pdsv = seq->private;
> +       struct virtio_net_config vc;
> +
> +       memcpy_fromio(&vc, pdsv->vdpa_aux->vd_mdev.device,
> +                     sizeof(struct virtio_net_config));
> +
> +       seq_printf(seq, "mac:                  %pM\n", vc.mac);
> +       seq_printf(seq, "max_virtqueue_pairs:  %d\n",
> +                  __virtio16_to_cpu(true, vc.max_virtqueue_pairs));
> +       seq_printf(seq, "mtu:                  %d\n", __virtio16_to_cpu(true, vc.mtu));
> +       seq_printf(seq, "speed:                %d\n", le32_to_cpu(vc.speed));
> +       seq_printf(seq, "duplex:               %d\n", vc.duplex);
> +       seq_printf(seq, "rss_max_key_size:     %d\n", vc.rss_max_key_size);
> +       seq_printf(seq, "rss_max_indirection_table_length: %d\n",
> +                  le16_to_cpu(vc.rss_max_indirection_table_length));
> +       seq_printf(seq, "supported_hash_types: %#x\n",
> +                  le32_to_cpu(vc.supported_hash_types));
> +       seq_printf(seq, "vn_status:            %#x\n",
> +                  __virtio16_to_cpu(true, vc.status));
> +       print_status_bits(seq, __virtio16_to_cpu(true, vc.status));
> +
> +       seq_printf(seq, "req_features:         %#llx\n", pdsv->req_features);
> +       print_feature_bits(seq, pdsv->req_features);
> +       seq_printf(seq, "actual_features:      %#llx\n", pdsv->actual_features);
> +       print_feature_bits(seq, pdsv->actual_features);
> +       seq_printf(seq, "vdpa_index:           %d\n", pdsv->vdpa_index);
> +       seq_printf(seq, "num_vqs:              %d\n", pdsv->num_vqs);
> +
> +       return 0;
> +}
> +DEFINE_SHOW_ATTRIBUTE(config);
> +
> +static int vq_show(struct seq_file *seq, void *v)
> +{
> +       struct pds_vdpa_vq_info *vq = seq->private;
> +
> +       seq_printf(seq, "ready:      %d\n", vq->ready);
> +       seq_printf(seq, "desc_addr:  %#llx\n", vq->desc_addr);
> +       seq_printf(seq, "avail_addr: %#llx\n", vq->avail_addr);
> +       seq_printf(seq, "used_addr:  %#llx\n", vq->used_addr);
> +       seq_printf(seq, "q_len:      %d\n", vq->q_len);
> +       seq_printf(seq, "qid:        %d\n", vq->qid);
> +
> +       seq_printf(seq, "doorbell:   %#llx\n", vq->doorbell);
> +       seq_printf(seq, "avail_idx:  %d\n", vq->avail_idx);
> +       seq_printf(seq, "used_idx:   %d\n", vq->used_idx);
> +       seq_printf(seq, "irq:        %d\n", vq->irq);
> +       seq_printf(seq, "irq-name:   %s\n", vq->irq_name);
> +
> +       seq_printf(seq, "hw_qtype:   %d\n", vq->hw_qtype);
> +       seq_printf(seq, "hw_qindex:  %d\n", vq->hw_qindex);
> +
> +       return 0;
> +}
> +DEFINE_SHOW_ATTRIBUTE(vq);
> +
> +void pds_vdpa_debugfs_add_vdpadev(struct pds_vdpa_aux *vdpa_aux)
> +{
> +       int i;
> +
> +       debugfs_create_file("config", 0400, vdpa_aux->dentry, vdpa_aux->pdsv, &config_fops);
> +
> +       for (i = 0; i < vdpa_aux->pdsv->num_vqs; i++) {
> +               char name[8];
> +
> +               snprintf(name, sizeof(name), "vq%02d", i);
> +               debugfs_create_file(name, 0400, vdpa_aux->dentry,
> +                                   &vdpa_aux->pdsv->vqs[i], &vq_fops);
> +       }
> +}
> +
> +void pds_vdpa_debugfs_del_vdpadev(struct pds_vdpa_aux *vdpa_aux)
> +{
> +       debugfs_remove_recursive(vdpa_aux->dentry);
> +       vdpa_aux->dentry = NULL;
> +}
>  #endif /* CONFIG_DEBUG_FS */
> diff --git a/drivers/vdpa/pds/debugfs.h b/drivers/vdpa/pds/debugfs.h
> index fff078a869e5..23e8345add0d 100644
> --- a/drivers/vdpa/pds/debugfs.h
> +++ b/drivers/vdpa/pds/debugfs.h
> @@ -10,9 +10,17 @@
>
>  void pds_vdpa_debugfs_create(void);
>  void pds_vdpa_debugfs_destroy(void);
> +void pds_vdpa_debugfs_add_pcidev(struct pds_vdpa_aux *vdpa_aux);
> +void pds_vdpa_debugfs_add_ident(struct pds_vdpa_aux *vdpa_aux);
> +void pds_vdpa_debugfs_add_vdpadev(struct pds_vdpa_aux *vdpa_aux);
> +void pds_vdpa_debugfs_del_vdpadev(struct pds_vdpa_aux *vdpa_aux);
>  #else
>  static inline void pds_vdpa_debugfs_create(void) { }
>  static inline void pds_vdpa_debugfs_destroy(void) { }
> +static inline void pds_vdpa_debugfs_add_pcidev(struct pds_vdpa_aux *vdpa_aux) { }
> +static inline void pds_vdpa_debugfs_add_ident(struct pds_vdpa_aux *vdpa_aux) { }
> +static inline void pds_vdpa_debugfs_add_vdpadev(struct pds_vdpa_aux *vdpa_aux) { }
> +static inline void pds_vdpa_debugfs_del_vdpadev(struct pds_vdpa_aux *vdpa_aux) { }
>  #endif
>
>  #endif /* _PDS_VDPA_DEBUGFS_H_ */
> diff --git a/drivers/vdpa/pds/vdpa_dev.c b/drivers/vdpa/pds/vdpa_dev.c
> index 15d623297203..2e0a5078d379 100644
> --- a/drivers/vdpa/pds/vdpa_dev.c
> +++ b/drivers/vdpa/pds/vdpa_dev.c
> @@ -5,6 +5,7 @@
>  #include <linux/vdpa.h>
>  #include <uapi/linux/vdpa.h>
>  #include <linux/virtio_pci_modern.h>
> +#include <uapi/linux/virtio_pci.h>
>
>  #include <linux/pds/pds_core.h>
>  #include <linux/pds/pds_adminq.h>
> @@ -13,7 +14,426 @@
>
>  #include "vdpa_dev.h"
>  #include "aux_drv.h"
> +#include "cmds.h"
> +#include "debugfs.h"
>
> +static struct pds_vdpa_device *vdpa_to_pdsv(struct vdpa_device *vdpa_dev)
> +{
> +       return container_of(vdpa_dev, struct pds_vdpa_device, vdpa_dev);
> +}
> +
> +static int pds_vdpa_set_vq_address(struct vdpa_device *vdpa_dev, u16 qid,
> +                                  u64 desc_addr, u64 driver_addr, u64 device_addr)
> +{
> +       struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
> +
> +       pdsv->vqs[qid].desc_addr = desc_addr;
> +       pdsv->vqs[qid].avail_addr = driver_addr;
> +       pdsv->vqs[qid].used_addr = device_addr;
> +
> +       return 0;
> +}
> +
> +static void pds_vdpa_set_vq_num(struct vdpa_device *vdpa_dev, u16 qid, u32 num)
> +{
> +       struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
> +
> +       pdsv->vqs[qid].q_len = num;
> +}
> +
> +static void pds_vdpa_kick_vq(struct vdpa_device *vdpa_dev, u16 qid)
> +{
> +       struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
> +
> +       iowrite16(qid, pdsv->vqs[qid].notify);
> +}
> +
> +static void pds_vdpa_set_vq_cb(struct vdpa_device *vdpa_dev, u16 qid,
> +                              struct vdpa_callback *cb)
> +{
> +       struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
> +
> +       pdsv->vqs[qid].event_cb = *cb;
> +}
> +
> +static irqreturn_t pds_vdpa_isr(int irq, void *data)
> +{
> +       struct pds_vdpa_vq_info *vq;
> +
> +       vq = data;
> +       if (vq->event_cb.callback)
> +               vq->event_cb.callback(vq->event_cb.private);
> +
> +       return IRQ_HANDLED;
> +}
> +
> +static void pds_vdpa_release_irq(struct pds_vdpa_device *pdsv, int qid)
> +{
> +       if (pdsv->vqs[qid].irq == VIRTIO_MSI_NO_VECTOR)
> +               return;
> +
> +       free_irq(pdsv->vqs[qid].irq, &pdsv->vqs[qid]);
> +       pdsv->vqs[qid].irq = VIRTIO_MSI_NO_VECTOR;
> +}
> +
> +static void pds_vdpa_set_vq_ready(struct vdpa_device *vdpa_dev, u16 qid, bool ready)
> +{
> +       struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
> +       struct pci_dev *pdev = pdsv->vdpa_aux->padev->vf->pdev;
> +       struct device *dev = &pdsv->vdpa_dev.dev;
> +       int irq;
> +       int err;
> +
> +       dev_dbg(dev, "%s: qid %d ready %d => %d\n",
> +               __func__, qid, pdsv->vqs[qid].ready, ready);
> +       if (ready == pdsv->vqs[qid].ready)
> +               return;
> +
> +       if (ready) {
> +               irq = pci_irq_vector(pdev, qid);
> +               snprintf(pdsv->vqs[qid].irq_name, sizeof(pdsv->vqs[qid].irq_name),
> +                        "vdpa-%s-%d", dev_name(dev), qid);
> +
> +               err = request_irq(irq, pds_vdpa_isr, 0,
> +                                 pdsv->vqs[qid].irq_name, &pdsv->vqs[qid]);
> +               if (err) {
> +                       dev_err(dev, "%s: no irq for qid %d: %pe\n",
> +                               __func__, qid, ERR_PTR(err));
> +                       return;
> +               }
> +               pdsv->vqs[qid].irq = irq;
> +
> +               /* Pass vq setup info to DSC */
> +               err = pds_vdpa_cmd_init_vq(pdsv, qid, &pdsv->vqs[qid]);
> +               if (err) {
> +                       pds_vdpa_release_irq(pdsv, qid);
> +                       ready = false;
> +               }
> +       } else {
> +               err = pds_vdpa_cmd_reset_vq(pdsv, qid);
> +               if (err)
> +                       dev_err(dev, "%s: reset_vq failed qid %d: %pe\n",
> +                               __func__, qid, ERR_PTR(err));
> +               pds_vdpa_release_irq(pdsv, qid);
> +       }
> +
> +       pdsv->vqs[qid].ready = ready;
> +}
> +
> +static bool pds_vdpa_get_vq_ready(struct vdpa_device *vdpa_dev, u16 qid)
> +{
> +       struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
> +
> +       return pdsv->vqs[qid].ready;
> +}
> +
> +static int pds_vdpa_set_vq_state(struct vdpa_device *vdpa_dev, u16 qid,
> +                                const struct vdpa_vq_state *state)
> +{
> +       struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
> +       struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
> +       struct device *dev = &padev->aux_dev.dev;
> +       struct pds_vdpa_vq_set_state_cmd cmd = {
> +               .opcode = PDS_VDPA_CMD_VQ_SET_STATE,
> +               .vdpa_index = pdsv->vdpa_index,
> +               .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
> +               .qid = cpu_to_le16(qid),
> +       };
> +       struct pds_vdpa_comp comp = {0};
> +       int err;
> +
> +       dev_dbg(dev, "%s: qid %d avail %#x\n",
> +               __func__, qid, state->packed.last_avail_idx);
> +
> +       if (pdsv->actual_features & VIRTIO_F_RING_PACKED) {
> +               cmd.avail = cpu_to_le16(state->packed.last_avail_idx |
> +                                       (state->packed.last_avail_counter << 15));
> +               cmd.used = cpu_to_le16(state->packed.last_used_idx |
> +                                      (state->packed.last_used_counter << 15));
> +       } else {
> +               cmd.avail = cpu_to_le16(state->split.avail_index);
> +               /* state->split does not provide a used_index:
> +                * the vq will be set to "empty" here, and the vq will read
> +                * the current used index the next time the vq is kicked.
> +                */
> +               cmd.used = cpu_to_le16(state->split.avail_index);
> +       }
> +
> +       err = padev->ops->adminq_cmd(padev,
> +                                    (union pds_core_adminq_cmd *)&cmd,
> +                                    sizeof(cmd),
> +                                    (union pds_core_adminq_comp *)&comp,
> +                                    0);

I had one question about the adminq command path. I think we should use the
PF instead of the VF, but in __pdsc_adminq_post() I saw:

        q_info->dest = comp;
        memcpy(q_info->desc, cmd, sizeof(*cmd));

So cmd should be fine since it is copied into q_info->desc, which is
already mapped. But q_info->dest looks suspicious; where is it mapped?

Thanks


> +       if (err)
> +               dev_err(dev, "Failed to set vq state qid %u, status %d: %pe\n",
> +                       qid, comp.status, ERR_PTR(err));
> +
> +       return err;
> +}
> +
> +static int pds_vdpa_get_vq_state(struct vdpa_device *vdpa_dev, u16 qid,
> +                                struct vdpa_vq_state *state)
> +{
> +       struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
> +       struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
> +       struct device *dev = &padev->aux_dev.dev;
> +       struct pds_vdpa_vq_get_state_cmd cmd = {
> +               .opcode = PDS_VDPA_CMD_VQ_GET_STATE,
> +               .vdpa_index = pdsv->vdpa_index,
> +               .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
> +               .qid = cpu_to_le16(qid),
> +       };
> +       struct pds_vdpa_vq_get_state_comp comp = {0};
> +       int err;
> +
> +       dev_dbg(dev, "%s: qid %d\n", __func__, qid);
> +
> +       err = padev->ops->adminq_cmd(padev,
> +                                    (union pds_core_adminq_cmd *)&cmd,
> +                                    sizeof(cmd),
> +                                    (union pds_core_adminq_comp *)&comp,
> +                                    0);
> +       if (err) {
> +               dev_err(dev, "Failed to get vq state qid %u, status %d: %pe\n",
> +                       qid, comp.status, ERR_PTR(err));
> +               return err;
> +       }
> +
> +       if (pdsv->actual_features & VIRTIO_F_RING_PACKED) {
> +               state->packed.last_avail_idx = le16_to_cpu(comp.avail) & 0x7fff;
> +               state->packed.last_avail_counter = le16_to_cpu(comp.avail) >> 15;
> +       } else {
> +               state->split.avail_index = le16_to_cpu(comp.avail);
> +               /* state->split does not provide a used_index. */
> +       }
> +
> +       return err;
> +}
> +
> +static struct vdpa_notification_area
> +pds_vdpa_get_vq_notification(struct vdpa_device *vdpa_dev, u16 qid)
> +{
> +       struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
> +       struct virtio_pci_modern_device *vd_mdev;
> +       struct vdpa_notification_area area;
> +
> +       area.addr = pdsv->vqs[qid].notify_pa;
> +
> +       vd_mdev = &pdsv->vdpa_aux->vd_mdev;
> +       if (!vd_mdev->notify_offset_multiplier)
> +               area.size = PAGE_SIZE;

Note that PAGE_SIZE varies among architectures; I doubt we should use a fixed size here.
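
One way to avoid baking in the host page size (a sketch only; the SZ_4K
fallback is an assumption about what the device maps per queue, not taken
from the patch):

        vd_mdev = &pdsv->vdpa_aux->vd_mdev;
        if (vd_mdev->notify_offset_multiplier)
                area.size = vd_mdev->notify_offset_multiplier;
        else
                area.size = SZ_4K;      /* assumed per-queue notify window */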

Others look good.

Thanks

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH RFC v2 virtio 7/7] pds_vdpa: pds_vdps.rst and Kconfig
  2023-03-09  1:30 ` [PATCH RFC v2 virtio 7/7] pds_vdpa: pds_vdps.rst and Kconfig Shannon Nelson
@ 2023-03-15  7:05     ` Jason Wang
  2023-03-15 18:10   ` kernel test robot
  1 sibling, 0 replies; 36+ messages in thread
From: Jason Wang @ 2023-03-15  7:05 UTC (permalink / raw)
  To: Shannon Nelson
  Cc: brett.creeley, mst, netdev, virtualization, kuba, drivers, davem

On Thu, Mar 9, 2023 at 9:31 AM Shannon Nelson <shannon.nelson@amd.com> wrote:
>
> Add the documentation and Kconfig entry for pds_vdpa driver.
>
> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
> ---
>  .../ethernet/pensando/pds_vdpa.rst            | 84 +++++++++++++++++++
>  MAINTAINERS                                   |  4 +
>  drivers/vdpa/Kconfig                          |  8 ++
>  3 files changed, 96 insertions(+)
>  create mode 100644 Documentation/networking/device_drivers/ethernet/pensando/pds_vdpa.rst
>
> diff --git a/Documentation/networking/device_drivers/ethernet/pensando/pds_vdpa.rst b/Documentation/networking/device_drivers/ethernet/pensando/pds_vdpa.rst
> new file mode 100644
> index 000000000000..d41f6dd66e3e
> --- /dev/null
> +++ b/Documentation/networking/device_drivers/ethernet/pensando/pds_vdpa.rst
> @@ -0,0 +1,84 @@
> +.. SPDX-License-Identifier: GPL-2.0+
> +.. note: can be edited and viewed with /usr/bin/formiko-vim
> +
> +==========================================================
> +PCI vDPA driver for the AMD/Pensando(R) DSC adapter family
> +==========================================================
> +
> +AMD/Pensando vDPA VF Device Driver
> +Copyright(c) 2023 Advanced Micro Devices, Inc
> +
> +Overview
> +========
> +
> +The ``pds_vdpa`` driver is an auxiliary bus driver that supplies
> +a vDPA device for use by the virtio network stack.  It is used with
> +the Pensando Virtual Function devices that offer vDPA and virtio queue
> +services.  It depends on the ``pds_core`` driver and hardware for the PF
> +and VF PCI handling as well as for device configuration services.
> +
> +Using the device
> +================
> +
> +The ``pds_vdpa`` device is enabled via multiple configuration steps and
> +depends on the ``pds_core`` driver to create and enable SR-IOV Virtual
> +Function devices.
> +
> +Shown below are the steps to bind the driver to a VF and also to the
> +associated auxiliary device created by the ``pds_core`` driver.
> +
> +.. code-block:: bash
> +
> +  #!/bin/bash
> +
> +  modprobe pds_core
> +  modprobe vdpa
> +  modprobe pds_vdpa
> +
> +  PF_BDF=`grep -H "vDPA.*1" /sys/kernel/debug/pds_core/*/viftypes | head -1 | awk -F / '{print $6}'`
> +
> +  # Enable vDPA VF auxiliary device(s) in the PF
> +  devlink dev param set pci/$PF_BDF name enable_vnet value true cmode runtime
> +

Does this mean we can't do per VF configuration for vDPA enablement
(e.g VF0 for vdpa VF1 to other type)?

Thanks


> +  # Create a VF for vDPA use
> +  echo 1 > /sys/bus/pci/drivers/pds_core/$PF_BDF/sriov_numvfs
> +
> +  # Find the vDPA services/devices available
> +  PDS_VDPA_MGMT=`vdpa mgmtdev show | grep vDPA | head -1 | cut -d: -f1`
> +
> +  # Create a vDPA device for use in virtio network configurations
> +  vdpa dev add name vdpa1 mgmtdev $PDS_VDPA_MGMT mac 00:11:22:33:44:55
> +
> +  # Set up an ethernet interface on the vdpa device
> +  modprobe virtio_vdpa
> +
> +
> +
> +Enabling the driver
> +===================
> +
> +The driver is enabled via the standard kernel configuration system,
> +using the make command::
> +
> +  make oldconfig/menuconfig/etc.
> +
> +The driver is located in the menu structure at:
> +
> +  -> Device Drivers
> +    -> Network device support (NETDEVICES [=y])
> +      -> Ethernet driver support
> +        -> Pensando devices
> +          -> Pensando Ethernet PDS_VDPA Support
> +
> +Support
> +=======
> +
> +For general Linux networking support, please use the netdev mailing
> +list, which is monitored by Pensando personnel::
> +
> +  netdev@vger.kernel.org
> +
> +For more specific support needs, please use the Pensando driver support
> +email::
> +
> +  drivers@pensando.io
> diff --git a/MAINTAINERS b/MAINTAINERS
> index cb21dcd3a02a..da981c5bc830 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -22120,6 +22120,10 @@ SNET DPU VIRTIO DATA PATH ACCELERATOR
>  R:     Alvaro Karsz <alvaro.karsz@solid-run.com>
>  F:     drivers/vdpa/solidrun/
>
> +PDS DSC VIRTIO DATA PATH ACCELERATOR
> +R:     Shannon Nelson <shannon.nelson@amd.com>
> +F:     drivers/vdpa/pds/
> +
>  VIRTIO BALLOON
>  M:     "Michael S. Tsirkin" <mst@redhat.com>
>  M:     David Hildenbrand <david@redhat.com>
> diff --git a/drivers/vdpa/Kconfig b/drivers/vdpa/Kconfig
> index cd6ad92f3f05..c910cb119c1b 100644
> --- a/drivers/vdpa/Kconfig
> +++ b/drivers/vdpa/Kconfig
> @@ -116,4 +116,12 @@ config ALIBABA_ENI_VDPA
>           This driver includes a HW monitor device that
>           reads health values from the DPU.
>
> +config PDS_VDPA
> +       tristate "vDPA driver for AMD/Pensando DSC devices"
> +       depends on PDS_CORE
> +       help
> +         VDPA network driver for AMD/Pensando's PDS Core devices.
> +         With this driver, the VirtIO dataplane can be
> +         offloaded to an AMD/Pensando DSC device.
> +
>  endif # VDPA
> --
> 2.17.1
>


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH RFC v2 virtio 7/7] pds_vdpa: pds_vdps.rst and Kconfig
  2023-03-09  1:30 ` [PATCH RFC v2 virtio 7/7] pds_vdpa: pds_vdps.rst and Kconfig Shannon Nelson
  2023-03-15  7:05     ` Jason Wang
@ 2023-03-15 18:10   ` kernel test robot
  1 sibling, 0 replies; 36+ messages in thread
From: kernel test robot @ 2023-03-15 18:10 UTC (permalink / raw)
  To: Shannon Nelson; +Cc: oe-kbuild-all

Hi Shannon,

[FYI, it's a private test report for your RFC patch.]
[auto build test WARNING on linus/master]
[also build test WARNING on v6.3-rc2 next-20230315]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Shannon-Nelson/pds_vdpa-Add-new-vDPA-driver-for-AMD-Pensando-DSC/20230309-093236
patch link:    https://lore.kernel.org/r/20230309013046.23523-8-shannon.nelson%40amd.com
patch subject: [PATCH RFC v2 virtio 7/7] pds_vdpa: pds_vdps.rst and Kconfig
reproduce:
        # https://github.com/intel-lab-lkp/linux/commit/62e66fe7b18c78aab45a3aad6ef9925a932f75c9
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Shannon-Nelson/pds_vdpa-Add-new-vDPA-driver-for-AMD-Pensando-DSC/20230309-093236
        git checkout 62e66fe7b18c78aab45a3aad6ef9925a932f75c9
        make menuconfig
        # enable CONFIG_COMPILE_TEST, CONFIG_WARN_MISSING_DOCUMENTS, CONFIG_WARN_ABI_ERRORS
        make htmldocs

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>
| Link: https://lore.kernel.org/oe-kbuild-all/202303160136.S3u5ryaW-lkp@intel.com/

All warnings (new ones prefixed by >>):

>> Documentation/networking/device_drivers/ethernet/pensando/pds_vdpa.rst: WARNING: document isn't included in any toctree
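
(In other words, the new pds_vdpa.rst presumably just needs to be listed
in a toctree, e.g. the ethernet device_drivers index, for this warning to
go away.)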

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH RFC v2 virtio 2/7] pds_vdpa: get vdpa management info
  2023-03-15  7:05     ` Jason Wang
  (?)
@ 2023-03-16  3:25     ` Shannon Nelson
  2023-03-17  3:33         ` Jason Wang
  -1 siblings, 1 reply; 36+ messages in thread
From: Shannon Nelson @ 2023-03-16  3:25 UTC (permalink / raw)
  To: Jason Wang
  Cc: mst, virtualization, brett.creeley, davem, netdev, kuba, drivers

On 3/15/23 12:05 AM, Jason Wang wrote:
> On Thu, Mar 9, 2023 at 9:31 AM Shannon Nelson <shannon.nelson@amd.com> wrote:
>>
>> Find the vDPA management information from the DSC in order to
>> advertise it to the vdpa subsystem.
>>
>> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
>> ---
>>   drivers/vdpa/pds/Makefile    |   3 +-
>>   drivers/vdpa/pds/aux_drv.c   |  13 ++++
>>   drivers/vdpa/pds/aux_drv.h   |   7 +++
>>   drivers/vdpa/pds/debugfs.c   |   3 +
>>   drivers/vdpa/pds/vdpa_dev.c  | 113 +++++++++++++++++++++++++++++++++++
>>   drivers/vdpa/pds/vdpa_dev.h  |  15 +++++
>>   include/linux/pds/pds_vdpa.h |  92 ++++++++++++++++++++++++++++
>>   7 files changed, 245 insertions(+), 1 deletion(-)
>>   create mode 100644 drivers/vdpa/pds/vdpa_dev.c
>>   create mode 100644 drivers/vdpa/pds/vdpa_dev.h
>>
>> diff --git a/drivers/vdpa/pds/Makefile b/drivers/vdpa/pds/Makefile
>> index a9cd2f450ae1..13b50394ec64 100644
>> --- a/drivers/vdpa/pds/Makefile
>> +++ b/drivers/vdpa/pds/Makefile
>> @@ -3,6 +3,7 @@
>>
>>   obj-$(CONFIG_PDS_VDPA) := pds_vdpa.o
>>
>> -pds_vdpa-y := aux_drv.o
>> +pds_vdpa-y := aux_drv.o \
>> +             vdpa_dev.o
>>
>>   pds_vdpa-$(CONFIG_DEBUG_FS) += debugfs.o
>> diff --git a/drivers/vdpa/pds/aux_drv.c b/drivers/vdpa/pds/aux_drv.c
>> index b3f36170253c..63e40ae68211 100644
>> --- a/drivers/vdpa/pds/aux_drv.c
>> +++ b/drivers/vdpa/pds/aux_drv.c
>> @@ -2,6 +2,8 @@
>>   /* Copyright(c) 2023 Advanced Micro Devices, Inc */
>>
>>   #include <linux/auxiliary_bus.h>
>> +#include <linux/pci.h>
>> +#include <linux/vdpa.h>
>>
>>   #include <linux/pds/pds_core.h>
>>   #include <linux/pds/pds_auxbus.h>
>> @@ -9,6 +11,7 @@
>>
>>   #include "aux_drv.h"
>>   #include "debugfs.h"
>> +#include "vdpa_dev.h"
>>
>>   static const struct auxiliary_device_id pds_vdpa_id_table[] = {
>>          { .name = PDS_VDPA_DEV_NAME, },
>> @@ -30,6 +33,7 @@ static int pds_vdpa_probe(struct auxiliary_device *aux_dev,
>>                  return -ENOMEM;
>>
>>          vdpa_aux->padev = padev;
>> +       vdpa_aux->vf_id = pci_iov_vf_id(padev->vf->pdev);
>>          auxiliary_set_drvdata(aux_dev, vdpa_aux);
>>
>>          /* Register our PDS client with the pds_core */
>> @@ -40,8 +44,15 @@ static int pds_vdpa_probe(struct auxiliary_device *aux_dev,
>>                  goto err_free_mem;
>>          }
>>
>> +       /* Get device ident info and set up the vdpa_mgmt_dev */
>> +       err = pds_vdpa_get_mgmt_info(vdpa_aux);
>> +       if (err)
>> +               goto err_aux_unreg;
>> +
>>          return 0;
>>
>> +err_aux_unreg:
>> +       padev->ops->unregister_client(padev);
>>   err_free_mem:
>>          kfree(vdpa_aux);
>>          auxiliary_set_drvdata(aux_dev, NULL);
>> @@ -54,6 +65,8 @@ static void pds_vdpa_remove(struct auxiliary_device *aux_dev)
>>          struct pds_vdpa_aux *vdpa_aux = auxiliary_get_drvdata(aux_dev);
>>          struct device *dev = &aux_dev->dev;
>>
>> +       pci_free_irq_vectors(vdpa_aux->padev->vf->pdev);
>> +
>>          vdpa_aux->padev->ops->unregister_client(vdpa_aux->padev);
>>
>>          kfree(vdpa_aux);
>> diff --git a/drivers/vdpa/pds/aux_drv.h b/drivers/vdpa/pds/aux_drv.h
>> index 14e465944dfd..94ba7abcaa43 100644
>> --- a/drivers/vdpa/pds/aux_drv.h
>> +++ b/drivers/vdpa/pds/aux_drv.h
>> @@ -10,6 +10,13 @@
>>   struct pds_vdpa_aux {
>>          struct pds_auxiliary_dev *padev;
>>
>> +       struct vdpa_mgmt_dev vdpa_mdev;
>> +
>> +       struct pds_vdpa_ident ident;
>> +
>> +       int vf_id;
>>          struct dentry *dentry;
>> +
>> +       int nintrs;
>>   };
>>   #endif /* _AUX_DRV_H_ */
>> diff --git a/drivers/vdpa/pds/debugfs.c b/drivers/vdpa/pds/debugfs.c
>> index 3c163dc7b66f..7b7e90fd6578 100644
>> --- a/drivers/vdpa/pds/debugfs.c
>> +++ b/drivers/vdpa/pds/debugfs.c
>> @@ -1,7 +1,10 @@
>>   // SPDX-License-Identifier: GPL-2.0-only
>>   /* Copyright(c) 2023 Advanced Micro Devices, Inc */
>>
>> +#include <linux/vdpa.h>
>> +
>>   #include <linux/pds/pds_core.h>
>> +#include <linux/pds/pds_vdpa.h>
>>   #include <linux/pds/pds_auxbus.h>
>>
>>   #include "aux_drv.h"
>> diff --git a/drivers/vdpa/pds/vdpa_dev.c b/drivers/vdpa/pds/vdpa_dev.c
>> new file mode 100644
>> index 000000000000..bd840688503c
>> --- /dev/null
>> +++ b/drivers/vdpa/pds/vdpa_dev.c
>> @@ -0,0 +1,113 @@
>> +// SPDX-License-Identifier: GPL-2.0-only
>> +/* Copyright(c) 2023 Advanced Micro Devices, Inc */
>> +
>> +#include <linux/pci.h>
>> +#include <linux/vdpa.h>
>> +#include <uapi/linux/vdpa.h>
>> +
>> +#include <linux/pds/pds_core.h>
>> +#include <linux/pds/pds_adminq.h>
>> +#include <linux/pds/pds_auxbus.h>
>> +#include <linux/pds/pds_vdpa.h>
>> +
>> +#include "vdpa_dev.h"
>> +#include "aux_drv.h"
>> +
>> +static struct virtio_device_id pds_vdpa_id_table[] = {
>> +       {VIRTIO_ID_NET, VIRTIO_DEV_ANY_ID},
>> +       {0},
>> +};
>> +
>> +static int pds_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
>> +                           const struct vdpa_dev_set_config *add_config)
>> +{
>> +       return -EOPNOTSUPP;
>> +}
>> +
>> +static void pds_vdpa_dev_del(struct vdpa_mgmt_dev *mdev,
>> +                            struct vdpa_device *vdpa_dev)
>> +{
>> +}
>> +
>> +static const struct vdpa_mgmtdev_ops pds_vdpa_mgmt_dev_ops = {
>> +       .dev_add = pds_vdpa_dev_add,
>> +       .dev_del = pds_vdpa_dev_del
>> +};
>> +
>> +int pds_vdpa_get_mgmt_info(struct pds_vdpa_aux *vdpa_aux)
>> +{
>> +       struct pds_vdpa_ident_cmd ident_cmd = {
>> +               .opcode = PDS_VDPA_CMD_IDENT,
>> +               .vf_id = cpu_to_le16(vdpa_aux->vf_id),
>> +       };
>> +       struct pds_vdpa_comp ident_comp = {0};
>> +       struct vdpa_mgmt_dev *mgmt;
>> +       struct device *pf_dev;
>> +       struct pci_dev *pdev;
>> +       dma_addr_t ident_pa;
>> +       struct device *dev;
>> +       u16 max_vqs;
>> +       int err;
>> +
>> +       dev = &vdpa_aux->padev->aux_dev.dev;
>> +       pdev = vdpa_aux->padev->vf->pdev;
>> +       mgmt = &vdpa_aux->vdpa_mdev;
>> +
>> +       /* Get resource info through the PF's adminq.  It is a block of info,
>> +        * so we need to map some memory for PF to make available to the
>> +        * firmware for writing the data.
>> +        */
> 
> It looks to me pds_vdpa_ident is not very large:
> 
> struct pds_vdpa_ident {
>          __le64 hw_features;
>          __le16 max_vqs;
>          __le16 max_qlen;
>          __le16 min_qlen;
> };
> 
> Any reason it is not packed into some type of the comp structure of adminq?

Unfortunately, the completion structs are limited to 16 bytes, with 4 up 
front and 1 at the end already spoken for.  I suppose we could shrink 
max_vqs to a single byte and squeeze this into the comp, but then we'd 
have no ability to add to it if needed.  I'd rather leave it as it is 
for now.
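
(For reference: by my count pds_vdpa_ident needs 8 + 2 + 2 + 2 = 14 bytes,
while a 16-byte completion with 4 bytes spoken for up front and the color
byte at the end leaves only 11 bytes of payload, so it would only fit if
something were shrunk.)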

sln

> 
> Others look good.
> 
> Thanks
> 

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH RFC v2 virtio 4/7] pds_vdpa: add vdpa config client commands
  2023-03-15  7:05     ` Jason Wang
  (?)
@ 2023-03-16  3:25     ` Shannon Nelson
  2023-03-17  3:36         ` Jason Wang
  -1 siblings, 1 reply; 36+ messages in thread
From: Shannon Nelson @ 2023-03-16  3:25 UTC (permalink / raw)
  To: Jason Wang
  Cc: mst, virtualization, brett.creeley, davem, netdev, kuba, drivers

On 3/15/23 12:05 AM, Jason Wang wrote:
> On Thu, Mar 9, 2023 at 9:31 AM Shannon Nelson <shannon.nelson@amd.com> wrote:
>>
>> These are the adminq commands that will be needed for
>> setting up and using the vDPA device.
> 
> It's better to explain under which case the driver should use adminq,
> I see some functions overlap with common configuration capability.
> More below.

Yes, I agree this needs to be more clearly stated.  The overlap is there 
because the original FW didn't model the virtio device as completely, so 
we had to go through adminq calls to get things done.  Now that we have a 
reasonable virtio emulation and can use virtio_net_config, we have a lot 
less need for the adminq calls.
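
For example, fields like mac and max_virtqueue_pairs that
virtio_net_config already exposes can now be read straight from the
device config space instead of needing an adminq round trip.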


> 
>>
>> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
>> ---
>>   drivers/vdpa/pds/Makefile    |   1 +
>>   drivers/vdpa/pds/cmds.c      | 207 +++++++++++++++++++++++++++++++++++
>>   drivers/vdpa/pds/cmds.h      |  16 +++
>>   drivers/vdpa/pds/vdpa_dev.h  |  36 +++++-
>>   include/linux/pds/pds_vdpa.h | 175 +++++++++++++++++++++++++++++
>>   5 files changed, 434 insertions(+), 1 deletion(-)
>>   create mode 100644 drivers/vdpa/pds/cmds.c
>>   create mode 100644 drivers/vdpa/pds/cmds.h
>>
>> diff --git a/drivers/vdpa/pds/Makefile b/drivers/vdpa/pds/Makefile
>> index ca2efa8c6eb5..7211eba3d942 100644
>> --- a/drivers/vdpa/pds/Makefile
>> +++ b/drivers/vdpa/pds/Makefile
>> @@ -4,6 +4,7 @@
>>   obj-$(CONFIG_PDS_VDPA) := pds_vdpa.o
>>
>>   pds_vdpa-y := aux_drv.o \
>> +             cmds.o \
>>                virtio_pci.o \
>>                vdpa_dev.o
>>
>> diff --git a/drivers/vdpa/pds/cmds.c b/drivers/vdpa/pds/cmds.c
>> new file mode 100644
>> index 000000000000..45410739107c
>> --- /dev/null
>> +++ b/drivers/vdpa/pds/cmds.c
>> @@ -0,0 +1,207 @@
>> +// SPDX-License-Identifier: GPL-2.0-only
>> +/* Copyright(c) 2023 Advanced Micro Devices, Inc */
>> +
>> +#include <linux/vdpa.h>
>> +#include <linux/virtio_pci_modern.h>
>> +
>> +#include <linux/pds/pds_core_if.h>
>> +#include <linux/pds/pds_adminq.h>
>> +#include <linux/pds/pds_auxbus.h>
>> +#include <linux/pds/pds_vdpa.h>
>> +
>> +#include "vdpa_dev.h"
>> +#include "aux_drv.h"
>> +#include "cmds.h"
>> +
>> +int pds_vdpa_init_hw(struct pds_vdpa_device *pdsv)
>> +{
>> +       struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
>> +       struct device *dev = &padev->aux_dev.dev;
>> +       struct pds_vdpa_init_cmd init_cmd = {
>> +               .opcode = PDS_VDPA_CMD_INIT,
>> +               .vdpa_index = pdsv->vdpa_index,
>> +               .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
>> +               .len = cpu_to_le32(sizeof(struct virtio_net_config)),
>> +               .config_pa = 0,   /* we use the PCI space, not an alternate space */
>> +       };
>> +       struct pds_vdpa_comp init_comp = {0};
>> +       int err;
>> +
>> +       /* Initialize the vdpa/virtio device */
>> +       err = padev->ops->adminq_cmd(padev,
>> +                                    (union pds_core_adminq_cmd *)&init_cmd,
>> +                                    sizeof(init_cmd),
>> +                                    (union pds_core_adminq_comp *)&init_comp,
>> +                                    0);
>> +       if (err)
>> +               dev_err(dev, "Failed to init hw, status %d: %pe\n",
>> +                       init_comp.status, ERR_PTR(err));
>> +
>> +       return err;
>> +}
>> +
>> +int pds_vdpa_cmd_reset(struct pds_vdpa_device *pdsv)
>> +{
> 
> This function is not used.
> 
> And I wonder what's the difference between reset via adminq and reset
> via pds_vdpa_set_status(0) ?

Ideally no difference.


> 
>> +       struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
>> +       struct device *dev = &padev->aux_dev.dev;
>> +       struct pds_vdpa_cmd cmd = {
>> +               .opcode = PDS_VDPA_CMD_RESET,
>> +               .vdpa_index = pdsv->vdpa_index,
>> +               .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
>> +       };
>> +       struct pds_vdpa_comp comp = {0};
>> +       int err;
>> +
>> +       err = padev->ops->adminq_cmd(padev,
>> +                                    (union pds_core_adminq_cmd *)&cmd,
>> +                                    sizeof(cmd),
>> +                                    (union pds_core_adminq_comp *)&comp,
>> +                                    0);
>> +       if (err)
>> +               dev_err(dev, "Failed to reset hw, status %d: %pe\n",
>> +                       comp.status, ERR_PTR(err));
> 
> It might be better to use deb_dbg() here since it can be triggered by the guest.

Sure.

> 
>> +
>> +       return err;
>> +}
>> +
>> +int pds_vdpa_cmd_set_mac(struct pds_vdpa_device *pdsv, u8 *mac)
>> +{
>> +       struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
>> +       struct device *dev = &padev->aux_dev.dev;
>> +       struct pds_vdpa_setattr_cmd cmd = {
>> +               .opcode = PDS_VDPA_CMD_SET_ATTR,
>> +               .vdpa_index = pdsv->vdpa_index,
>> +               .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
>> +               .attr = PDS_VDPA_ATTR_MAC,
>> +       };
>> +       struct pds_vdpa_comp comp = {0};
>> +       int err;
>> +
>> +       ether_addr_copy(cmd.mac, mac);
>> +       err = padev->ops->adminq_cmd(padev,
>> +                                    (union pds_core_adminq_cmd *)&cmd,
>> +                                    sizeof(cmd),
>> +                                    (union pds_core_adminq_comp *)&comp,
>> +                                    0);
>> +       if (err)
>> +               dev_err(dev, "Failed to set mac address %pM, status %d: %pe\n",
>> +                       mac, comp.status, ERR_PTR(err));
>> +
>> +       return err;
>> +}
>> +
>> +int pds_vdpa_cmd_set_max_vq_pairs(struct pds_vdpa_device *pdsv, u16 max_vqp)
>> +{
>> +       struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
>> +       struct device *dev = &padev->aux_dev.dev;
>> +       struct pds_vdpa_setattr_cmd cmd = {
>> +               .opcode = PDS_VDPA_CMD_SET_ATTR,
>> +               .vdpa_index = pdsv->vdpa_index,
>> +               .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
>> +               .attr = PDS_VDPA_ATTR_MAX_VQ_PAIRS,
>> +               .max_vq_pairs = cpu_to_le16(max_vqp),
>> +       };
>> +       struct pds_vdpa_comp comp = {0};
>> +       int err;
>> +
>> +       err = padev->ops->adminq_cmd(padev,
>> +                                    (union pds_core_adminq_cmd *)&cmd,
>> +                                    sizeof(cmd),
>> +                                    (union pds_core_adminq_comp *)&comp,
>> +                                    0);
>> +       if (err)
>> +               dev_err(dev, "Failed to set max vq pairs %u, status %d: %pe\n",
>> +                       max_vqp, comp.status, ERR_PTR(err));
>> +
>> +       return err;
>> +}
>> +
>> +int pds_vdpa_cmd_init_vq(struct pds_vdpa_device *pdsv, u16 qid,
>> +                        struct pds_vdpa_vq_info *vq_info)
>> +{
>> +       struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
>> +       struct device *dev = &padev->aux_dev.dev;
>> +       struct pds_vdpa_vq_init_comp comp = {0};
>> +       struct pds_vdpa_vq_init_cmd cmd = {
>> +               .opcode = PDS_VDPA_CMD_VQ_INIT,
>> +               .vdpa_index = pdsv->vdpa_index,
>> +               .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
>> +               .qid = cpu_to_le16(qid),
>> +               .len = cpu_to_le16(ilog2(vq_info->q_len)),
>> +               .desc_addr = cpu_to_le64(vq_info->desc_addr),
>> +               .avail_addr = cpu_to_le64(vq_info->avail_addr),
>> +               .used_addr = cpu_to_le64(vq_info->used_addr),
>> +               .intr_index = cpu_to_le16(qid),
>> +       };
>> +       int err;
>> +
>> +       dev_dbg(dev, "%s: qid %d len %d desc_addr %#llx avail_addr %#llx used_addr %#llx\n",
>> +               __func__, qid, ilog2(vq_info->q_len),
>> +               vq_info->desc_addr, vq_info->avail_addr, vq_info->used_addr);
>> +
>> +       err = padev->ops->adminq_cmd(padev,
>> +                                    (union pds_core_adminq_cmd *)&cmd,
>> +                                    sizeof(cmd),
>> +                                    (union pds_core_adminq_comp *)&comp,
>> +                                    0);
> 
> We map common cfg in pds_vdpa_probe_virtio, any reason for using
> adminq here? (I guess it might be faster?)

It's just easier to hand the values to the FW in a single package and 
let it sort things out as it needs, and it will complain with a handy 
error code if necessary.

> 
>> +       if (err) {
>> +               dev_err(dev, "Failed to init vq %d, status %d: %pe\n",
>> +                       qid, comp.status, ERR_PTR(err));
>> +               return err;
>> +       }
>> +
>> +       vq_info->hw_qtype = comp.hw_qtype;
> 
> What does hw_qtype mean?

Hmmm... this and hw_qindex are hardware-specific values that I don't 
think we need any longer.  I'll pull them out.

> 
>> +       vq_info->hw_qindex = le16_to_cpu(comp.hw_qindex);
>> +
>> +       return 0;
>> +}
>> +
>> +int pds_vdpa_cmd_reset_vq(struct pds_vdpa_device *pdsv, u16 qid)
>> +{
>> +       struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
>> +       struct device *dev = &padev->aux_dev.dev;
>> +       struct pds_vdpa_vq_reset_cmd cmd = {
>> +               .opcode = PDS_VDPA_CMD_VQ_RESET,
>> +               .vdpa_index = pdsv->vdpa_index,
>> +               .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
>> +               .qid = cpu_to_le16(qid),
>> +       };
>> +       struct pds_vdpa_comp comp = {0};
>> +       int err;
>> +
>> +       err = padev->ops->adminq_cmd(padev,
>> +                                    (union pds_core_adminq_cmd *)&cmd,
>> +                                    sizeof(cmd),
>> +                                    (union pds_core_adminq_comp *)&comp,
>> +                                    0);
>> +       if (err)
>> +               dev_err(dev, "Failed to reset vq %d, status %d: %pe\n",
>> +                       qid, comp.status, ERR_PTR(err));
>> +
>> +       return err;
>> +}
>> +
>> +int pds_vdpa_cmd_set_features(struct pds_vdpa_device *pdsv, u64 features)
>> +{
>> +       struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
>> +       struct device *dev = &padev->aux_dev.dev;
>> +       struct pds_vdpa_set_features_cmd cmd = {
>> +               .opcode = PDS_VDPA_CMD_SET_FEATURES,
>> +               .vdpa_index = pdsv->vdpa_index,
>> +               .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
>> +               .features = cpu_to_le64(features),
>> +       };
>> +       struct pds_vdpa_comp comp = {0};
>> +       int err;
>> +
>> +       err = padev->ops->adminq_cmd(padev,
>> +                                    (union pds_core_adminq_cmd *)&cmd,
>> +                                    sizeof(cmd),
>> +                                    (union pds_core_adminq_comp *)&comp,
>> +                                    0);
>> +       if (err)
>> +               dev_err(dev, "Failed to set features %#llx, status %d: %pe\n",
>> +                       features, comp.status, ERR_PTR(err));
>> +
>> +       return err;
>> +}
>> diff --git a/drivers/vdpa/pds/cmds.h b/drivers/vdpa/pds/cmds.h
>> new file mode 100644
>> index 000000000000..72e19f4efde6
>> --- /dev/null
>> +++ b/drivers/vdpa/pds/cmds.h
>> @@ -0,0 +1,16 @@
>> +/* SPDX-License-Identifier: GPL-2.0-only */
>> +/* Copyright(c) 2023 Advanced Micro Devices, Inc */
>> +
>> +#ifndef _VDPA_CMDS_H_
>> +#define _VDPA_CMDS_H_
>> +
>> +int pds_vdpa_init_hw(struct pds_vdpa_device *pdsv);
>> +
>> +int pds_vdpa_cmd_reset(struct pds_vdpa_device *pdsv);
>> +int pds_vdpa_cmd_set_mac(struct pds_vdpa_device *pdsv, u8 *mac);
>> +int pds_vdpa_cmd_set_max_vq_pairs(struct pds_vdpa_device *pdsv, u16 max_vqp);
>> +int pds_vdpa_cmd_init_vq(struct pds_vdpa_device *pdsv, u16 qid,
>> +                        struct pds_vdpa_vq_info *vq_info);
>> +int pds_vdpa_cmd_reset_vq(struct pds_vdpa_device *pdsv, u16 qid);
>> +int pds_vdpa_cmd_set_features(struct pds_vdpa_device *pdsv, u64 features);
>> +#endif /* _VDPA_CMDS_H_ */
>> diff --git a/drivers/vdpa/pds/vdpa_dev.h b/drivers/vdpa/pds/vdpa_dev.h
>> index 97fab833a0aa..33284ebe538c 100644
>> --- a/drivers/vdpa/pds/vdpa_dev.h
>> +++ b/drivers/vdpa/pds/vdpa_dev.h
>> @@ -4,11 +4,45 @@
>>   #ifndef _VDPA_DEV_H_
>>   #define _VDPA_DEV_H_
>>
>> -#define PDS_VDPA_MAX_QUEUES    65
>> +#include <linux/pci.h>
>> +#include <linux/vdpa.h>
>> +
>> +struct pds_vdpa_vq_info {
>> +       bool ready;
>> +       u64 desc_addr;
>> +       u64 avail_addr;
>> +       u64 used_addr;
>> +       u32 q_len;
>> +       u16 qid;
>> +       int irq;
>> +       char irq_name[32];
>> +
>> +       void __iomem *notify;
>> +       dma_addr_t notify_pa;
>> +
>> +       u64 doorbell;
>> +       u16 avail_idx;
>> +       u16 used_idx;
>> +
>> +       u8 hw_qtype;
>> +       u16 hw_qindex;
>>
>> +       struct vdpa_callback event_cb;
>> +       struct pds_vdpa_device *pdsv;
>> +};
>> +
>> +#define PDS_VDPA_MAX_QUEUES    65
>> +#define PDS_VDPA_MAX_QLEN      32768
>>   struct pds_vdpa_device {
>>          struct vdpa_device vdpa_dev;
>>          struct pds_vdpa_aux *vdpa_aux;
>> +
>> +       struct pds_vdpa_vq_info vqs[PDS_VDPA_MAX_QUEUES];
>> +       u64 req_features;               /* features requested by vdpa */
>> +       u64 actual_features;            /* features negotiated and in use */
>> +       u8 vdpa_index;                  /* rsvd for future subdevice use */
>> +       u8 num_vqs;                     /* num vqs in use */
>> +       struct vdpa_callback config_cb;
>>   };
>>
>>   int pds_vdpa_get_mgmt_info(struct pds_vdpa_aux *vdpa_aux);
>> diff --git a/include/linux/pds/pds_vdpa.h b/include/linux/pds/pds_vdpa.h
>> index 3f7c08551163..b6a4cb4d3c6b 100644
>> --- a/include/linux/pds/pds_vdpa.h
>> +++ b/include/linux/pds/pds_vdpa.h
>> @@ -101,4 +101,179 @@ struct pds_vdpa_ident_cmd {
>>          __le32 len;
>>          __le64 ident_pa;
>>   };
>> +
>> +/**
>> + * struct pds_vdpa_status_cmd - STATUS_UPDATE command
>> + * @opcode:    Opcode PDS_VDPA_CMD_STATUS_UPDATE
>> + * @vdpa_index: Index for vdpa subdevice
>> + * @vf_id:     VF id
>> + * @status:    new status bits
>> + */
>> +struct pds_vdpa_status_cmd {
>> +       u8     opcode;
>> +       u8     vdpa_index;
>> +       __le16 vf_id;
>> +       u8     status;
>> +};
>> +
>> +/**
>> + * enum pds_vdpa_attr - List of VDPA device attributes
>> + * @PDS_VDPA_ATTR_MAC:          MAC address
>> + * @PDS_VDPA_ATTR_MAX_VQ_PAIRS: Max virtqueue pairs
>> + */
>> +enum pds_vdpa_attr {
>> +       PDS_VDPA_ATTR_MAC          = 1,
>> +       PDS_VDPA_ATTR_MAX_VQ_PAIRS = 2,
>> +};
>> +
>> +/**
>> + * struct pds_vdpa_setattr_cmd - SET_ATTR command
>> + * @opcode:            Opcode PDS_VDPA_CMD_SET_ATTR
>> + * @vdpa_index:                Index for vdpa subdevice
>> + * @vf_id:             VF id
>> + * @attr:              attribute to be changed (enum pds_vdpa_attr)
>> + * @pad:               Word boundary padding
>> + * @mac:               new mac address to be assigned as vdpa device address
>> + * @max_vq_pairs:      new limit of virtqueue pairs
>> + */
>> +struct pds_vdpa_setattr_cmd {
>> +       u8     opcode;
>> +       u8     vdpa_index;
>> +       __le16 vf_id;
>> +       u8     attr;
>> +       u8     pad[3];
>> +       union {
>> +               u8 mac[6];
>> +               __le16 max_vq_pairs;
>> +       } __packed;
>> +};
>> +
>> +/**
>> + * struct pds_vdpa_vq_init_cmd - queue init command
>> + * @opcode: Opcode PDS_VDPA_CMD_VQ_INIT
>> + * @vdpa_index:        Index for vdpa subdevice
>> + * @vf_id:     VF id
>> + * @qid:       Queue id (bit0 clear = rx, bit0 set = tx, qid=N is ctrlq)
>> + * @len:       log(2) of max descriptor count
>> + * @desc_addr: DMA address of descriptor area
>> + * @avail_addr:        DMA address of available descriptors (aka driver area)
>> + * @used_addr: DMA address of used descriptors (aka device area)
>> + * @intr_index:        interrupt index
>> + */
>> +struct pds_vdpa_vq_init_cmd {
>> +       u8     opcode;
>> +       u8     vdpa_index;
>> +       __le16 vf_id;
>> +       __le16 qid;
>> +       __le16 len;
>> +       __le64 desc_addr;
>> +       __le64 avail_addr;
>> +       __le64 used_addr;
>> +       __le16 intr_index;
> 
> Just wonder in which case intr_index can be different from qid, in
> pds_vdpa_cmd_init_vq() we had:
> 
>                  .intr_index = cpu_to_le16(qid),

Yes, it normally is going to be the same.  The FW allows us to specify it 
separately from the qid to allow flexibility in setting up interrupts 
when we want to experiment with it.  For now we just plug in the qid.

sln

> 
> Thanks
> 
> 
>> +};
>> +
>> +/**
>> + * struct pds_vdpa_vq_init_comp - queue init completion
>> + * @status:    Status of the command (enum pds_core_status_code)
>> + * @hw_qtype:  HW queue type, used in doorbell selection
>> + * @hw_qindex: HW queue index, used in doorbell selection
>> + * @rsvd:      Word boundary padding
>> + * @color:     Color bit
>> + */
>> +struct pds_vdpa_vq_init_comp {
>> +       u8     status;
>> +       u8     hw_qtype;
>> +       __le16 hw_qindex;
>> +       u8     rsvd[11];
>> +       u8     color;
>> +};
>> +
>> +/**
>> + * struct pds_vdpa_vq_reset_cmd - queue reset command
>> + * @opcode:    Opcode PDS_VDPA_CMD_VQ_RESET
>> + * @vdpa_index:        Index for vdpa subdevice
>> + * @vf_id:     VF id
>> + * @qid:       Queue id
>> + */
>> +struct pds_vdpa_vq_reset_cmd {
>> +       u8     opcode;
>> +       u8     vdpa_index;
>> +       __le16 vf_id;
>> +       __le16 qid;
>> +};
>> +
>> +/**
>> + * struct pds_vdpa_set_features_cmd - set hw features
>> + * @opcode: Opcode PDS_VDPA_CMD_SET_FEATURES
>> + * @vdpa_index:        Index for vdpa subdevice
>> + * @vf_id:     VF id
>> + * @rsvd:       Word boundary padding
>> + * @features:  Feature bit mask
>> + */
>> +struct pds_vdpa_set_features_cmd {
>> +       u8     opcode;
>> +       u8     vdpa_index;
>> +       __le16 vf_id;
>> +       __le32 rsvd;
>> +       __le64 features;
>> +};
>> +
>> +/**
>> + * struct pds_vdpa_vq_set_state_cmd - set vq state
>> + * @opcode:    Opcode PDS_VDPA_CMD_VQ_SET_STATE
>> + * @vdpa_index:        Index for vdpa subdevice
>> + * @vf_id:     VF id
>> + * @qid:       Queue id
>> + * @avail:     Device avail index.
>> + * @used:      Device used index.
>> + *
>> + * If the virtqueue uses packed descriptor format, then the avail and used
>> + * index must have a wrap count.  The bits should be arranged like the upper
>> + * 16 bits in the device available notification data: 15 bit index, 1 bit wrap.
>> + */
>> +struct pds_vdpa_vq_set_state_cmd {
>> +       u8     opcode;
>> +       u8     vdpa_index;
>> +       __le16 vf_id;
>> +       __le16 qid;
>> +       __le16 avail;
>> +       __le16 used;
>> +};
>> +
>> +/**
>> + * struct pds_vdpa_vq_get_state_cmd - get vq state
>> + * @opcode:    Opcode PDS_VDPA_CMD_VQ_GET_STATE
>> + * @vdpa_index:        Index for vdpa subdevice
>> + * @vf_id:     VF id
>> + * @qid:       Queue id
>> + */
>> +struct pds_vdpa_vq_get_state_cmd {
>> +       u8     opcode;
>> +       u8     vdpa_index;
>> +       __le16 vf_id;
>> +       __le16 qid;
>> +};
>> +
>> +/**
>> + * struct pds_vdpa_vq_get_state_comp - get vq state completion
>> + * @status:    Status of the command (enum pds_core_status_code)
>> + * @rsvd0:      Word boundary padding
>> + * @avail:     Device avail index.
>> + * @used:      Device used index.
>> + * @rsvd:       Word boundary padding
>> + * @color:     Color bit
>> + *
>> + * If the virtqueue uses packed descriptor format, then the avail and used
>> + * index will have a wrap count.  The bits will be arranged like the "next"
>> + * part of device available notification data: 15 bit index, 1 bit wrap.
>> + */
>> +struct pds_vdpa_vq_get_state_comp {
>> +       u8     status;
>> +       u8     rsvd0;
>> +       __le16 avail;
>> +       __le16 used;
>> +       u8     rsvd[9];
>> +       u8     color;
>> +};
>> +
>>   #endif /* _PDS_VDPA_IF_H_ */
>> --
>> 2.17.1
>>
> 

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH RFC v2 virtio 3/7] pds_vdpa: virtio bar setup for vdpa
  2023-03-15  7:05     ` Jason Wang
  (?)
@ 2023-03-16  3:25     ` Shannon Nelson
  2023-03-17  3:37         ` Jason Wang
  -1 siblings, 1 reply; 36+ messages in thread
From: Shannon Nelson @ 2023-03-16  3:25 UTC (permalink / raw)
  To: Jason Wang
  Cc: mst, virtualization, brett.creeley, davem, netdev, kuba, drivers

On 3/15/23 12:05 AM, Jason Wang wrote:
> On Thu, Mar 9, 2023 at 9:31 AM Shannon Nelson <shannon.nelson@amd.com> wrote:
>>
>> The PDS vDPA device has a virtio BAR for describing itself, and
>> the pds_vdpa driver needs to access it.  Here we copy liberally
>> from the existing drivers/virtio/virtio_pci_modern_dev.c as it
>> has what we need, but we need to modify it so that it can work
>> with our device id and so we can use our own DMA mask.
> 
> By passing a pointer to a customized id probing routine to vp_modern_probe()?

The only real differences are that we needed to cut out the device id 
checks so we can use our vDPA VF device id, and to remove the 
dma_set_mask_and_coherent() call because we need a different DMA_BIT_MASK().

Maybe a function pointer to something that can validate the device id, 
plus a DMA mask to use in place of the default; if they are NULL/0, fall 
back to the existing device id check and DMA mask.

Adding them as extra arguments to the function call seems a bit messy; 
maybe add them to struct virtio_pci_modern_device so the caller can set 
them as overrides if needed?

struct virtio_pci_modern_device {

	...

	int (*device_id_check_override)(struct pci_dev *pdev);
	u64 dma_mask_override;
};
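
A rough sketch of how vp_modern_probe() might honor those two fields (just
the idea, not the actual virtio_pci_modern_dev.c code; the default
id-range check and the 64/32-bit DMA fallback are abbreviated here):

	if (mdev->device_id_check_override) {
		err = mdev->device_id_check_override(pci_dev);
		if (err)
			return err;
	} else if (pci_dev->device < 0x1000 || pci_dev->device > 0x107f) {
		/* default: only standard modern virtio device ids */
		return -ENODEV;
	}

	err = dma_set_mask_and_coherent(&pci_dev->dev,
					mdev->dma_mask_override ?:
					DMA_BIT_MASK(64));
	if (err)
		return err;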

sln


> 
> Thanks
> 
> 
>>
>> We suspect there is room for discussion here about making the
>> existing code a little more flexible, but we thought we'd at
>> least start the discussion here.
>>
>> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
>> ---
>>   drivers/vdpa/pds/Makefile     |   1 +
>>   drivers/vdpa/pds/aux_drv.c    |  14 ++
>>   drivers/vdpa/pds/aux_drv.h    |   1 +
>>   drivers/vdpa/pds/debugfs.c    |   1 +
>>   drivers/vdpa/pds/vdpa_dev.c   |   1 +
>>   drivers/vdpa/pds/virtio_pci.c | 281 ++++++++++++++++++++++++++++++++++
>>   drivers/vdpa/pds/virtio_pci.h |   8 +
>>   7 files changed, 307 insertions(+)
>>   create mode 100644 drivers/vdpa/pds/virtio_pci.c
>>   create mode 100644 drivers/vdpa/pds/virtio_pci.h
>>
>> diff --git a/drivers/vdpa/pds/Makefile b/drivers/vdpa/pds/Makefile
>> index 13b50394ec64..ca2efa8c6eb5 100644
>> --- a/drivers/vdpa/pds/Makefile
>> +++ b/drivers/vdpa/pds/Makefile
>> @@ -4,6 +4,7 @@
>>   obj-$(CONFIG_PDS_VDPA) := pds_vdpa.o
>>
>>   pds_vdpa-y := aux_drv.o \
>> +             virtio_pci.o \
>>                vdpa_dev.o
>>
>>   pds_vdpa-$(CONFIG_DEBUG_FS) += debugfs.o
>> diff --git a/drivers/vdpa/pds/aux_drv.c b/drivers/vdpa/pds/aux_drv.c
>> index 63e40ae68211..28158d0d98a5 100644
>> --- a/drivers/vdpa/pds/aux_drv.c
>> +++ b/drivers/vdpa/pds/aux_drv.c
>> @@ -4,6 +4,7 @@
>>   #include <linux/auxiliary_bus.h>
>>   #include <linux/pci.h>
>>   #include <linux/vdpa.h>
>> +#include <linux/virtio_pci_modern.h>
>>
>>   #include <linux/pds/pds_core.h>
>>   #include <linux/pds/pds_auxbus.h>
>> @@ -12,6 +13,7 @@
>>   #include "aux_drv.h"
>>   #include "debugfs.h"
>>   #include "vdpa_dev.h"
>> +#include "virtio_pci.h"
>>
>>   static const struct auxiliary_device_id pds_vdpa_id_table[] = {
>>          { .name = PDS_VDPA_DEV_NAME, },
>> @@ -49,8 +51,19 @@ static int pds_vdpa_probe(struct auxiliary_device *aux_dev,
>>          if (err)
>>                  goto err_aux_unreg;
>>
>> +       /* Find the virtio configuration */
>> +       vdpa_aux->vd_mdev.pci_dev = padev->vf->pdev;
>> +       err = pds_vdpa_probe_virtio(&vdpa_aux->vd_mdev);
>> +       if (err) {
>> +               dev_err(dev, "Unable to probe for virtio configuration: %pe\n",
>> +                       ERR_PTR(err));
>> +               goto err_free_mgmt_info;
>> +       }
>> +
>>          return 0;
>>
>> +err_free_mgmt_info:
>> +       pci_free_irq_vectors(padev->vf->pdev);
>>   err_aux_unreg:
>>          padev->ops->unregister_client(padev);
>>   err_free_mem:
>> @@ -65,6 +78,7 @@ static void pds_vdpa_remove(struct auxiliary_device *aux_dev)
>>          struct pds_vdpa_aux *vdpa_aux = auxiliary_get_drvdata(aux_dev);
>>          struct device *dev = &aux_dev->dev;
>>
>> +       pds_vdpa_remove_virtio(&vdpa_aux->vd_mdev);
>>          pci_free_irq_vectors(vdpa_aux->padev->vf->pdev);
>>
>>          vdpa_aux->padev->ops->unregister_client(vdpa_aux->padev);
>> diff --git a/drivers/vdpa/pds/aux_drv.h b/drivers/vdpa/pds/aux_drv.h
>> index 94ba7abcaa43..87ac3c01c476 100644
>> --- a/drivers/vdpa/pds/aux_drv.h
>> +++ b/drivers/vdpa/pds/aux_drv.h
>> @@ -16,6 +16,7 @@ struct pds_vdpa_aux {
>>
>>          int vf_id;
>>          struct dentry *dentry;
>> +       struct virtio_pci_modern_device vd_mdev;
>>
>>          int nintrs;
>>   };
>> diff --git a/drivers/vdpa/pds/debugfs.c b/drivers/vdpa/pds/debugfs.c
>> index 7b7e90fd6578..aa5e9677fe74 100644
>> --- a/drivers/vdpa/pds/debugfs.c
>> +++ b/drivers/vdpa/pds/debugfs.c
>> @@ -1,6 +1,7 @@
>>   // SPDX-License-Identifier: GPL-2.0-only
>>   /* Copyright(c) 2023 Advanced Micro Devices, Inc */
>>
>> +#include <linux/virtio_pci_modern.h>
>>   #include <linux/vdpa.h>
>>
>>   #include <linux/pds/pds_core.h>
>> diff --git a/drivers/vdpa/pds/vdpa_dev.c b/drivers/vdpa/pds/vdpa_dev.c
>> index bd840688503c..15d623297203 100644
>> --- a/drivers/vdpa/pds/vdpa_dev.c
>> +++ b/drivers/vdpa/pds/vdpa_dev.c
>> @@ -4,6 +4,7 @@
>>   #include <linux/pci.h>
>>   #include <linux/vdpa.h>
>>   #include <uapi/linux/vdpa.h>
>> +#include <linux/virtio_pci_modern.h>
>>
>>   #include <linux/pds/pds_core.h>
>>   #include <linux/pds/pds_adminq.h>
>> diff --git a/drivers/vdpa/pds/virtio_pci.c b/drivers/vdpa/pds/virtio_pci.c
>> new file mode 100644
>> index 000000000000..cb879619dac3
>> --- /dev/null
>> +++ b/drivers/vdpa/pds/virtio_pci.c
>> @@ -0,0 +1,281 @@
>> +// SPDX-License-Identifier: GPL-2.0-or-later
>> +
>> +/*
>> + * adapted from drivers/virtio/virtio_pci_modern_dev.c, v6.0-rc1
>> + */
>> +
>> +#include <linux/virtio_pci_modern.h>
>> +#include <linux/pci.h>
>> +
>> +#include "virtio_pci.h"
>> +
>> +/*
>> + * pds_vdpa_map_capability - map a part of virtio pci capability
>> + * @mdev: the modern virtio-pci device
>> + * @off: offset of the capability
>> + * @minlen: minimal length of the capability
>> + * @align: align requirement
>> + * @start: start from the capability
>> + * @size: map size
>> + * @len: the length that is actually mapped
>> + * @pa: physical address of the capability
>> + *
>> + * Returns the io address of for the part of the capability
>> + */
>> +static void __iomem *
>> +pds_vdpa_map_capability(struct virtio_pci_modern_device *mdev, int off,
>> +                       size_t minlen, u32 align, u32 start, u32 size,
>> +                       size_t *len, resource_size_t *pa)
>> +{
>> +       struct pci_dev *dev = mdev->pci_dev;
>> +       u8 bar;
>> +       u32 offset, length;
>> +       void __iomem *p;
>> +
>> +       pci_read_config_byte(dev, off + offsetof(struct virtio_pci_cap,
>> +                                                bar),
>> +                            &bar);
>> +       pci_read_config_dword(dev, off + offsetof(struct virtio_pci_cap, offset),
>> +                             &offset);
>> +       pci_read_config_dword(dev, off + offsetof(struct virtio_pci_cap, length),
>> +                             &length);
>> +
>> +       /* Check if the BAR may have changed since we requested the region. */
>> +       if (bar >= PCI_STD_NUM_BARS || !(mdev->modern_bars & (1 << bar))) {
>> +               dev_err(&dev->dev,
>> +                       "virtio_pci: bar unexpectedly changed to %u\n", bar);
>> +               return NULL;
>> +       }
>> +
>> +       if (length <= start) {
>> +               dev_err(&dev->dev,
>> +                       "virtio_pci: bad capability len %u (>%u expected)\n",
>> +                       length, start);
>> +               return NULL;
>> +       }
>> +
>> +       if (length - start < minlen) {
>> +               dev_err(&dev->dev,
>> +                       "virtio_pci: bad capability len %u (>=%zu expected)\n",
>> +                       length, minlen);
>> +               return NULL;
>> +       }
>> +
>> +       length -= start;
>> +
>> +       if (start + offset < offset) {
>> +               dev_err(&dev->dev,
>> +                       "virtio_pci: map wrap-around %u+%u\n",
>> +                       start, offset);
>> +               return NULL;
>> +       }
>> +
>> +       offset += start;
>> +
>> +       if (offset & (align - 1)) {
>> +               dev_err(&dev->dev,
>> +                       "virtio_pci: offset %u not aligned to %u\n",
>> +                       offset, align);
>> +               return NULL;
>> +       }
>> +
>> +       if (length > size)
>> +               length = size;
>> +
>> +       if (len)
>> +               *len = length;
>> +
>> +       if (minlen + offset < minlen ||
>> +           minlen + offset > pci_resource_len(dev, bar)) {
>> +               dev_err(&dev->dev,
>> +                       "virtio_pci: map virtio %zu@%u out of range on bar %i length %lu\n",
>> +                       minlen, offset,
>> +                       bar, (unsigned long)pci_resource_len(dev, bar));
>> +               return NULL;
>> +       }
>> +
>> +       p = pci_iomap_range(dev, bar, offset, length);
>> +       if (!p)
>> +               dev_err(&dev->dev,
>> +                       "virtio_pci: unable to map virtio %u@%u on bar %i\n",
>> +                       length, offset, bar);
>> +       else if (pa)
>> +               *pa = pci_resource_start(dev, bar) + offset;
>> +
>> +       return p;
>> +}
>> +
>> +/**
>> + * virtio_pci_find_capability - walk capabilities to find device info.
>> + * @dev: the pci device
>> + * @cfg_type: the VIRTIO_PCI_CAP_* value we seek
>> + * @ioresource_types: IORESOURCE_MEM and/or IORESOURCE_IO.
>> + * @bars: the bitmask of BARs
>> + *
>> + * Returns offset of the capability, or 0.
>> + */
>> +static inline int virtio_pci_find_capability(struct pci_dev *dev, u8 cfg_type,
>> +                                            u32 ioresource_types, int *bars)
>> +{
>> +       int pos;
>> +
>> +       for (pos = pci_find_capability(dev, PCI_CAP_ID_VNDR);
>> +            pos > 0;
>> +            pos = pci_find_next_capability(dev, pos, PCI_CAP_ID_VNDR)) {
>> +               u8 type, bar;
>> +
>> +               pci_read_config_byte(dev, pos + offsetof(struct virtio_pci_cap,
>> +                                                        cfg_type),
>> +                                    &type);
>> +               pci_read_config_byte(dev, pos + offsetof(struct virtio_pci_cap,
>> +                                                        bar),
>> +                                    &bar);
>> +
>> +               /* Ignore structures with reserved BAR values */
>> +               if (bar >= PCI_STD_NUM_BARS)
>> +                       continue;
>> +
>> +               if (type == cfg_type) {
>> +                       if (pci_resource_len(dev, bar) &&
>> +                           pci_resource_flags(dev, bar) & ioresource_types) {
>> +                               *bars |= (1 << bar);
>> +                               return pos;
>> +                       }
>> +               }
>> +       }
>> +       return 0;
>> +}
>> +
>> +/*
>> + * pds_vdpa_probe_virtio: probe the modern virtio pci device, note that the
>> + * caller is required to enable PCI device before calling this function.
>> + * @mdev: the modern virtio-pci device
>> + *
>> + * Return 0 on succeed otherwise fail
>> + */
>> +int pds_vdpa_probe_virtio(struct virtio_pci_modern_device *mdev)
>> +{
>> +       struct pci_dev *pci_dev = mdev->pci_dev;
>> +       int err, common, isr, notify, device;
>> +       u32 notify_length;
>> +       u32 notify_offset;
>> +
>> +       /* check for a common config: if not, use legacy mode (bar 0). */
>> +       common = virtio_pci_find_capability(pci_dev, VIRTIO_PCI_CAP_COMMON_CFG,
>> +                                           IORESOURCE_IO | IORESOURCE_MEM,
>> +                                           &mdev->modern_bars);
>> +       if (!common) {
>> +               dev_info(&pci_dev->dev,
>> +                        "virtio_pci: missing common config\n");
>> +               return -ENODEV;
>> +       }
>> +
>> +       /* If common is there, these should be too... */
>> +       isr = virtio_pci_find_capability(pci_dev, VIRTIO_PCI_CAP_ISR_CFG,
>> +                                        IORESOURCE_IO | IORESOURCE_MEM,
>> +                                        &mdev->modern_bars);
>> +       notify = virtio_pci_find_capability(pci_dev, VIRTIO_PCI_CAP_NOTIFY_CFG,
>> +                                           IORESOURCE_IO | IORESOURCE_MEM,
>> +                                           &mdev->modern_bars);
>> +       if (!isr || !notify) {
>> +               dev_err(&pci_dev->dev,
>> +                       "virtio_pci: missing capabilities %i/%i/%i\n",
>> +                       common, isr, notify);
>> +               return -EINVAL;
>> +       }
>> +
>> +       /* Device capability is only mandatory for devices that have
>> +        * device-specific configuration.
>> +        */
>> +       device = virtio_pci_find_capability(pci_dev, VIRTIO_PCI_CAP_DEVICE_CFG,
>> +                                           IORESOURCE_IO | IORESOURCE_MEM,
>> +                                           &mdev->modern_bars);
>> +
>> +       err = pci_request_selected_regions(pci_dev, mdev->modern_bars,
>> +                                          "virtio-pci-modern");
>> +       if (err)
>> +               return err;
>> +
>> +       err = -EINVAL;
>> +       mdev->common = pds_vdpa_map_capability(mdev, common,
>> +                                              sizeof(struct virtio_pci_common_cfg),
>> +                                              4, 0,
>> +                                              sizeof(struct virtio_pci_common_cfg),
>> +                                              NULL, NULL);
>> +       if (!mdev->common)
>> +               goto err_map_common;
>> +       mdev->isr = pds_vdpa_map_capability(mdev, isr, sizeof(u8), 1,
>> +                                           0, 1, NULL, NULL);
>> +       if (!mdev->isr)
>> +               goto err_map_isr;
>> +
>> +       /* Read notify_off_multiplier from config space. */
>> +       pci_read_config_dword(pci_dev,
>> +                             notify + offsetof(struct virtio_pci_notify_cap,
>> +                                               notify_off_multiplier),
>> +                             &mdev->notify_offset_multiplier);
>> +       /* Read notify length and offset from config space. */
>> +       pci_read_config_dword(pci_dev,
>> +                             notify + offsetof(struct virtio_pci_notify_cap,
>> +                                               cap.length),
>> +                             &notify_length);
>> +
>> +       pci_read_config_dword(pci_dev,
>> +                             notify + offsetof(struct virtio_pci_notify_cap,
>> +                                               cap.offset),
>> +                             &notify_offset);
>> +
>> +       /* We don't know how many VQs we'll map, ahead of the time.
>> +        * If notify length is small, map it all now.
>> +        * Otherwise, map each VQ individually later.
>> +        */
>> +       if ((u64)notify_length + (notify_offset % PAGE_SIZE) <= PAGE_SIZE) {
>> +               mdev->notify_base = pds_vdpa_map_capability(mdev, notify,
>> +                                                           2, 2,
>> +                                                           0, notify_length,
>> +                                                           &mdev->notify_len,
>> +                                                           &mdev->notify_pa);
>> +               if (!mdev->notify_base)
>> +                       goto err_map_notify;
>> +       } else {
>> +               mdev->notify_map_cap = notify;
>> +       }
>> +
>> +       /* Again, we don't know how much we should map, but PAGE_SIZE
>> +        * is more than enough for all existing devices.
>> +        */
>> +       if (device) {
>> +               mdev->device = pds_vdpa_map_capability(mdev, device, 0, 4,
>> +                                                      0, PAGE_SIZE,
>> +                                                      &mdev->device_len,
>> +                                                      NULL);
>> +               if (!mdev->device)
>> +                       goto err_map_device;
>> +       }
>> +
>> +       return 0;
>> +
>> +err_map_device:
>> +       if (mdev->notify_base)
>> +               pci_iounmap(pci_dev, mdev->notify_base);
>> +err_map_notify:
>> +       pci_iounmap(pci_dev, mdev->isr);
>> +err_map_isr:
>> +       pci_iounmap(pci_dev, mdev->common);
>> +err_map_common:
>> +       pci_release_selected_regions(pci_dev, mdev->modern_bars);
>> +       return err;
>> +}
>> +
>> +void pds_vdpa_remove_virtio(struct virtio_pci_modern_device *mdev)
>> +{
>> +       struct pci_dev *pci_dev = mdev->pci_dev;
>> +
>> +       if (mdev->device)
>> +               pci_iounmap(pci_dev, mdev->device);
>> +       if (mdev->notify_base)
>> +               pci_iounmap(pci_dev, mdev->notify_base);
>> +       pci_iounmap(pci_dev, mdev->isr);
>> +       pci_iounmap(pci_dev, mdev->common);
>> +       pci_release_selected_regions(pci_dev, mdev->modern_bars);
>> +}
>> diff --git a/drivers/vdpa/pds/virtio_pci.h b/drivers/vdpa/pds/virtio_pci.h
>> new file mode 100644
>> index 000000000000..f017cfa1173c
>> --- /dev/null
>> +++ b/drivers/vdpa/pds/virtio_pci.h
>> @@ -0,0 +1,8 @@
>> +/* SPDX-License-Identifier: GPL-2.0-only */
>> +/* Copyright(c) 2023 Advanced Micro Devices, Inc */
>> +
>> +#ifndef _PDS_VIRTIO_PCI_H_
>> +#define _PDS_VIRTIO_PCI_H_
>> +int pds_vdpa_probe_virtio(struct virtio_pci_modern_device *mdev);
>> +void pds_vdpa_remove_virtio(struct virtio_pci_modern_device *mdev);
>> +#endif /* _PDS_VIRTIO_PCI_H_ */
>> --
>> 2.17.1
>>
> 

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH RFC v2 virtio 5/7] pds_vdpa: add support for vdpa and vdpamgmt interfaces
  2023-03-15  7:05     ` Jason Wang
  (?)
@ 2023-03-16  3:25     ` Shannon Nelson
  -1 siblings, 0 replies; 36+ messages in thread
From: Shannon Nelson @ 2023-03-16  3:25 UTC (permalink / raw)
  To: Jason Wang
  Cc: mst, virtualization, brett.creeley, davem, netdev, kuba, drivers

On 3/15/23 12:05 AM, Jason Wang wrote:
> On Thu, Mar 9, 2023 at 9:31 AM Shannon Nelson <shannon.nelson@amd.com> wrote:
>>
>> This is the vDPA device support, where we advertise that we can
>> support the virtio queues and deal with the configuration work
>> through the pds_core's adminq.
>>
>> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
>> ---
>>   drivers/vdpa/pds/aux_drv.c  |  15 +
>>   drivers/vdpa/pds/aux_drv.h  |   1 +
>>   drivers/vdpa/pds/debugfs.c  | 172 ++++++++++++
>>   drivers/vdpa/pds/debugfs.h  |   8 +
>>   drivers/vdpa/pds/vdpa_dev.c | 545 +++++++++++++++++++++++++++++++++++-
>>   5 files changed, 740 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/vdpa/pds/aux_drv.c b/drivers/vdpa/pds/aux_drv.c
>> index 28158d0d98a5..d706f06f7400 100644
>> --- a/drivers/vdpa/pds/aux_drv.c
>> +++ b/drivers/vdpa/pds/aux_drv.c
>> @@ -60,8 +60,21 @@ static int pds_vdpa_probe(struct auxiliary_device *aux_dev,
>>                  goto err_free_mgmt_info;
>>          }
>>
>> +       /* Let vdpa know that we can provide devices */
>> +       err = vdpa_mgmtdev_register(&vdpa_aux->vdpa_mdev);
>> +       if (err) {
>> +               dev_err(dev, "%s: Failed to initialize vdpa_mgmt interface: %pe\n",
>> +                       __func__, ERR_PTR(err));
>> +               goto err_free_virtio;
>> +       }
>> +
>> +       pds_vdpa_debugfs_add_pcidev(vdpa_aux);
>> +       pds_vdpa_debugfs_add_ident(vdpa_aux);
>> +
>>          return 0;
>>
>> +err_free_virtio:
>> +       pds_vdpa_remove_virtio(&vdpa_aux->vd_mdev);
>>   err_free_mgmt_info:
>>          pci_free_irq_vectors(padev->vf->pdev);
>>   err_aux_unreg:
>> @@ -78,11 +91,13 @@ static void pds_vdpa_remove(struct auxiliary_device *aux_dev)
>>          struct pds_vdpa_aux *vdpa_aux = auxiliary_get_drvdata(aux_dev);
>>          struct device *dev = &aux_dev->dev;
>>
>> +       vdpa_mgmtdev_unregister(&vdpa_aux->vdpa_mdev);
>>          pds_vdpa_remove_virtio(&vdpa_aux->vd_mdev);
>>          pci_free_irq_vectors(vdpa_aux->padev->vf->pdev);
>>
>>          vdpa_aux->padev->ops->unregister_client(vdpa_aux->padev);
>>
>> +       pds_vdpa_debugfs_del_vdpadev(vdpa_aux);
>>          kfree(vdpa_aux);
>>          auxiliary_set_drvdata(aux_dev, NULL);
>>
>> diff --git a/drivers/vdpa/pds/aux_drv.h b/drivers/vdpa/pds/aux_drv.h
>> index 87ac3c01c476..1ab1ce64da7c 100644
>> --- a/drivers/vdpa/pds/aux_drv.h
>> +++ b/drivers/vdpa/pds/aux_drv.h
>> @@ -11,6 +11,7 @@ struct pds_vdpa_aux {
>>          struct pds_auxiliary_dev *padev;
>>
>>          struct vdpa_mgmt_dev vdpa_mdev;
>> +       struct pds_vdpa_device *pdsv;
>>
>>          struct pds_vdpa_ident ident;
>>
>> diff --git a/drivers/vdpa/pds/debugfs.c b/drivers/vdpa/pds/debugfs.c
>> index aa5e9677fe74..b3ee4f42f3b6 100644
>> --- a/drivers/vdpa/pds/debugfs.c
>> +++ b/drivers/vdpa/pds/debugfs.c
>> @@ -9,6 +9,7 @@
>>   #include <linux/pds/pds_auxbus.h>
>>
>>   #include "aux_drv.h"
>> +#include "vdpa_dev.h"
>>   #include "debugfs.h"
>>
>>   #ifdef CONFIG_DEBUG_FS
>> @@ -26,4 +27,175 @@ void pds_vdpa_debugfs_destroy(void)
>>          dbfs_dir = NULL;
>>   }
>>
>> +#define PRINT_SBIT_NAME(__seq, __f, __name)                     \
>> +       do {                                                    \
>> +               if ((__f) & (__name))                               \
>> +                       seq_printf(__seq, " %s", &#__name[16]); \
>> +       } while (0)
>> +
>> +static void print_status_bits(struct seq_file *seq, u16 status)
>> +{
>> +       seq_puts(seq, "status:");
>> +       PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_ACKNOWLEDGE);
>> +       PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_DRIVER);
>> +       PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_DRIVER_OK);
>> +       PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_FEATURES_OK);
>> +       PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_NEEDS_RESET);
>> +       PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_FAILED);
>> +       seq_puts(seq, "\n");
>> +}
>> +
>> +#define PRINT_FBIT_NAME(__seq, __f, __name)                \
>> +       do {                                               \
>> +               if ((__f) & BIT_ULL(__name))                 \
>> +                       seq_printf(__seq, " %s", #__name); \
>> +       } while (0)
>> +
>> +static void print_feature_bits(struct seq_file *seq, u64 features)
>> +{
>> +       seq_puts(seq, "features:");
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_CSUM);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_GUEST_CSUM);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_CTRL_GUEST_OFFLOADS);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_MTU);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_MAC);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_GUEST_TSO4);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_GUEST_TSO6);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_GUEST_ECN);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_GUEST_UFO);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_HOST_TSO4);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_HOST_TSO6);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_HOST_ECN);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_HOST_UFO);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_MRG_RXBUF);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_STATUS);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_CTRL_VQ);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_CTRL_RX);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_CTRL_VLAN);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_CTRL_RX_EXTRA);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_GUEST_ANNOUNCE);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_MQ);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_CTRL_MAC_ADDR);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_HASH_REPORT);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_RSS);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_RSC_EXT);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_STANDBY);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_SPEED_DUPLEX);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_F_NOTIFY_ON_EMPTY);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_F_ANY_LAYOUT);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_F_VERSION_1);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_F_ACCESS_PLATFORM);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_F_RING_PACKED);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_F_ORDER_PLATFORM);
>> +       PRINT_FBIT_NAME(seq, features, VIRTIO_F_SR_IOV);
>> +       seq_puts(seq, "\n");
> 
> Should we print the features that are not understood here?

Probably not a bad idea, if we keep this around.  I might end up just 
yanking it out.
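
If it does stay, something along these lines could catch the leftover
bits (untested sketch, just to show the idea; the macro tweak and the
"unrecognized" label are only one possible way to do it):

   #define PRINT_FBIT_NAME(__seq, __f, __name)                \
           do {                                               \
                   if ((__f) & BIT_ULL(__name)) {             \
                           seq_printf(__seq, " %s", #__name); \
                           (__f) &= ~BIT_ULL(__name);         \
                   }                                          \
           } while (0)

   ...
           PRINT_FBIT_NAME(seq, features, VIRTIO_F_SR_IOV);
           if (features)
                   seq_printf(seq, " unrecognized(%#llx)", features);
           seq_puts(seq, "\n");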

> 
>> +}
>> +
>> +void pds_vdpa_debugfs_add_pcidev(struct pds_vdpa_aux *vdpa_aux)
>> +{
>> +       vdpa_aux->dentry = debugfs_create_dir(pci_name(vdpa_aux->padev->vf->pdev), dbfs_dir);
>> +}
>> +
>> +static int identity_show(struct seq_file *seq, void *v)
>> +{
>> +       struct pds_vdpa_aux *vdpa_aux = seq->private;
>> +       struct vdpa_mgmt_dev *mgmt;
>> +
>> +       seq_printf(seq, "aux_dev:            %s\n",
>> +                  dev_name(&vdpa_aux->padev->aux_dev.dev));
>> +
>> +       mgmt = &vdpa_aux->vdpa_mdev;
>> +       seq_printf(seq, "max_vqs:            %d\n", mgmt->max_supported_vqs);
>> +       seq_printf(seq, "config_attr_mask:   %#llx\n", mgmt->config_attr_mask);
>> +       seq_printf(seq, "supported_features: %#llx\n", mgmt->supported_features);
>> +       print_feature_bits(seq, mgmt->supported_features);
>> +
>> +       return 0;
>> +}
>> +DEFINE_SHOW_ATTRIBUTE(identity);
>> +
>> +void pds_vdpa_debugfs_add_ident(struct pds_vdpa_aux *vdpa_aux)
>> +{
>> +       debugfs_create_file("identity", 0400, vdpa_aux->dentry,
>> +                           vdpa_aux, &identity_fops);
>> +}
>> +
>> +static int config_show(struct seq_file *seq, void *v)
>> +{
>> +       struct pds_vdpa_device *pdsv = seq->private;
>> +       struct virtio_net_config vc;
>> +
>> +       memcpy_fromio(&vc, pdsv->vdpa_aux->vd_mdev.device,
>> +                     sizeof(struct virtio_net_config));
>> +
>> +       seq_printf(seq, "mac:                  %pM\n", vc.mac);
>> +       seq_printf(seq, "max_virtqueue_pairs:  %d\n",
>> +                  __virtio16_to_cpu(true, vc.max_virtqueue_pairs));
>> +       seq_printf(seq, "mtu:                  %d\n", __virtio16_to_cpu(true, vc.mtu));
>> +       seq_printf(seq, "speed:                %d\n", le32_to_cpu(vc.speed));
>> +       seq_printf(seq, "duplex:               %d\n", vc.duplex);
>> +       seq_printf(seq, "rss_max_key_size:     %d\n", vc.rss_max_key_size);
>> +       seq_printf(seq, "rss_max_indirection_table_length: %d\n",
>> +                  le16_to_cpu(vc.rss_max_indirection_table_length));
>> +       seq_printf(seq, "supported_hash_types: %#x\n",
>> +                  le32_to_cpu(vc.supported_hash_types));
>> +       seq_printf(seq, "vn_status:            %#x\n",
>> +                  __virtio16_to_cpu(true, vc.status));
>> +       print_status_bits(seq, __virtio16_to_cpu(true, vc.status));
>> +
>> +       seq_printf(seq, "req_features:         %#llx\n", pdsv->req_features);
>> +       print_feature_bits(seq, pdsv->req_features);
>> +       seq_printf(seq, "actual_features:      %#llx\n", pdsv->actual_features);
>> +       print_feature_bits(seq, pdsv->actual_features);
>> +       seq_printf(seq, "vdpa_index:           %d\n", pdsv->vdpa_index);
>> +       seq_printf(seq, "num_vqs:              %d\n", pdsv->num_vqs);
>> +
>> +       return 0;
>> +}
>> +DEFINE_SHOW_ATTRIBUTE(config);
>> +
>> +static int vq_show(struct seq_file *seq, void *v)
>> +{
>> +       struct pds_vdpa_vq_info *vq = seq->private;
>> +
>> +       seq_printf(seq, "ready:      %d\n", vq->ready);
>> +       seq_printf(seq, "desc_addr:  %#llx\n", vq->desc_addr);
>> +       seq_printf(seq, "avail_addr: %#llx\n", vq->avail_addr);
>> +       seq_printf(seq, "used_addr:  %#llx\n", vq->used_addr);
>> +       seq_printf(seq, "q_len:      %d\n", vq->q_len);
>> +       seq_printf(seq, "qid:        %d\n", vq->qid);
>> +
>> +       seq_printf(seq, "doorbell:   %#llx\n", vq->doorbell);
>> +       seq_printf(seq, "avail_idx:  %d\n", vq->avail_idx);
>> +       seq_printf(seq, "used_idx:   %d\n", vq->used_idx);
>> +       seq_printf(seq, "irq:        %d\n", vq->irq);
>> +       seq_printf(seq, "irq-name:   %s\n", vq->irq_name);
>> +
>> +       seq_printf(seq, "hw_qtype:   %d\n", vq->hw_qtype);
>> +       seq_printf(seq, "hw_qindex:  %d\n", vq->hw_qindex);
>> +
>> +       return 0;
>> +}
>> +DEFINE_SHOW_ATTRIBUTE(vq);
>> +
>> +void pds_vdpa_debugfs_add_vdpadev(struct pds_vdpa_aux *vdpa_aux)
>> +{
>> +       int i;
>> +
>> +       debugfs_create_file("config", 0400, vdpa_aux->dentry, vdpa_aux->pdsv, &config_fops);
>> +
>> +       for (i = 0; i < vdpa_aux->pdsv->num_vqs; i++) {
>> +               char name[8];
>> +
>> +               snprintf(name, sizeof(name), "vq%02d", i);
>> +               debugfs_create_file(name, 0400, vdpa_aux->dentry,
>> +                                   &vdpa_aux->pdsv->vqs[i], &vq_fops);
>> +       }
>> +}
>> +
>> +void pds_vdpa_debugfs_del_vdpadev(struct pds_vdpa_aux *vdpa_aux)
>> +{
>> +       debugfs_remove_recursive(vdpa_aux->dentry);
>> +       vdpa_aux->dentry = NULL;
>> +}
>>   #endif /* CONFIG_DEBUG_FS */
>> diff --git a/drivers/vdpa/pds/debugfs.h b/drivers/vdpa/pds/debugfs.h
>> index fff078a869e5..23e8345add0d 100644
>> --- a/drivers/vdpa/pds/debugfs.h
>> +++ b/drivers/vdpa/pds/debugfs.h
>> @@ -10,9 +10,17 @@
>>
>>   void pds_vdpa_debugfs_create(void);
>>   void pds_vdpa_debugfs_destroy(void);
>> +void pds_vdpa_debugfs_add_pcidev(struct pds_vdpa_aux *vdpa_aux);
>> +void pds_vdpa_debugfs_add_ident(struct pds_vdpa_aux *vdpa_aux);
>> +void pds_vdpa_debugfs_add_vdpadev(struct pds_vdpa_aux *vdpa_aux);
>> +void pds_vdpa_debugfs_del_vdpadev(struct pds_vdpa_aux *vdpa_aux);
>>   #else
>>   static inline void pds_vdpa_debugfs_create(void) { }
>>   static inline void pds_vdpa_debugfs_destroy(void) { }
>> +static inline void pds_vdpa_debugfs_add_pcidev(struct pds_vdpa_aux *vdpa_aux) { }
>> +static inline void pds_vdpa_debugfs_add_ident(struct pds_vdpa_aux *vdpa_aux) { }
>> +static inline void pds_vdpa_debugfs_add_vdpadev(struct pds_vdpa_aux *vdpa_aux) { }
>> +static inline void pds_vdpa_debugfs_del_vdpadev(struct pds_vdpa_aux *vdpa_aux) { }
>>   #endif
>>
>>   #endif /* _PDS_VDPA_DEBUGFS_H_ */
>> diff --git a/drivers/vdpa/pds/vdpa_dev.c b/drivers/vdpa/pds/vdpa_dev.c
>> index 15d623297203..2e0a5078d379 100644
>> --- a/drivers/vdpa/pds/vdpa_dev.c
>> +++ b/drivers/vdpa/pds/vdpa_dev.c
>> @@ -5,6 +5,7 @@
>>   #include <linux/vdpa.h>
>>   #include <uapi/linux/vdpa.h>
>>   #include <linux/virtio_pci_modern.h>
>> +#include <uapi/linux/virtio_pci.h>
>>
>>   #include <linux/pds/pds_core.h>
>>   #include <linux/pds/pds_adminq.h>
>> @@ -13,7 +14,426 @@
>>
>>   #include "vdpa_dev.h"
>>   #include "aux_drv.h"
>> +#include "cmds.h"
>> +#include "debugfs.h"
>>
>> +static struct pds_vdpa_device *vdpa_to_pdsv(struct vdpa_device *vdpa_dev)
>> +{
>> +       return container_of(vdpa_dev, struct pds_vdpa_device, vdpa_dev);
>> +}
>> +
>> +static int pds_vdpa_set_vq_address(struct vdpa_device *vdpa_dev, u16 qid,
>> +                                  u64 desc_addr, u64 driver_addr, u64 device_addr)
>> +{
>> +       struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
>> +
>> +       pdsv->vqs[qid].desc_addr = desc_addr;
>> +       pdsv->vqs[qid].avail_addr = driver_addr;
>> +       pdsv->vqs[qid].used_addr = device_addr;
>> +
>> +       return 0;
>> +}
>> +
>> +static void pds_vdpa_set_vq_num(struct vdpa_device *vdpa_dev, u16 qid, u32 num)
>> +{
>> +       struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
>> +
>> +       pdsv->vqs[qid].q_len = num;
>> +}
>> +
>> +static void pds_vdpa_kick_vq(struct vdpa_device *vdpa_dev, u16 qid)
>> +{
>> +       struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
>> +
>> +       iowrite16(qid, pdsv->vqs[qid].notify);
>> +}
>> +
>> +static void pds_vdpa_set_vq_cb(struct vdpa_device *vdpa_dev, u16 qid,
>> +                              struct vdpa_callback *cb)
>> +{
>> +       struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
>> +
>> +       pdsv->vqs[qid].event_cb = *cb;
>> +}
>> +
>> +static irqreturn_t pds_vdpa_isr(int irq, void *data)
>> +{
>> +       struct pds_vdpa_vq_info *vq;
>> +
>> +       vq = data;
>> +       if (vq->event_cb.callback)
>> +               vq->event_cb.callback(vq->event_cb.private);
>> +
>> +       return IRQ_HANDLED;
>> +}
>> +
>> +static void pds_vdpa_release_irq(struct pds_vdpa_device *pdsv, int qid)
>> +{
>> +       if (pdsv->vqs[qid].irq == VIRTIO_MSI_NO_VECTOR)
>> +               return;
>> +
>> +       free_irq(pdsv->vqs[qid].irq, &pdsv->vqs[qid]);
>> +       pdsv->vqs[qid].irq = VIRTIO_MSI_NO_VECTOR;
>> +}
>> +
>> +static void pds_vdpa_set_vq_ready(struct vdpa_device *vdpa_dev, u16 qid, bool ready)
>> +{
>> +       struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
>> +       struct pci_dev *pdev = pdsv->vdpa_aux->padev->vf->pdev;
>> +       struct device *dev = &pdsv->vdpa_dev.dev;
>> +       int irq;
>> +       int err;
>> +
>> +       dev_dbg(dev, "%s: qid %d ready %d => %d\n",
>> +               __func__, qid, pdsv->vqs[qid].ready, ready);
>> +       if (ready == pdsv->vqs[qid].ready)
>> +               return;
>> +
>> +       if (ready) {
>> +               irq = pci_irq_vector(pdev, qid);
>> +               snprintf(pdsv->vqs[qid].irq_name, sizeof(pdsv->vqs[qid].irq_name),
>> +                        "vdpa-%s-%d", dev_name(dev), qid);
>> +
>> +               err = request_irq(irq, pds_vdpa_isr, 0,
>> +                                 pdsv->vqs[qid].irq_name, &pdsv->vqs[qid]);
>> +               if (err) {
>> +                       dev_err(dev, "%s: no irq for qid %d: %pe\n",
>> +                               __func__, qid, ERR_PTR(err));
>> +                       return;
>> +               }
>> +               pdsv->vqs[qid].irq = irq;
>> +
>> +               /* Pass vq setup info to DSC */
>> +               err = pds_vdpa_cmd_init_vq(pdsv, qid, &pdsv->vqs[qid]);
>> +               if (err) {
>> +                       pds_vdpa_release_irq(pdsv, qid);
>> +                       ready = false;
>> +               }
>> +       } else {
>> +               err = pds_vdpa_cmd_reset_vq(pdsv, qid);
>> +               if (err)
>> +                       dev_err(dev, "%s: reset_vq failed qid %d: %pe\n",
>> +                               __func__, qid, ERR_PTR(err));
>> +               pds_vdpa_release_irq(pdsv, qid);
>> +       }
>> +
>> +       pdsv->vqs[qid].ready = ready;
>> +}
>> +
>> +static bool pds_vdpa_get_vq_ready(struct vdpa_device *vdpa_dev, u16 qid)
>> +{
>> +       struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
>> +
>> +       return pdsv->vqs[qid].ready;
>> +}
>> +
>> +static int pds_vdpa_set_vq_state(struct vdpa_device *vdpa_dev, u16 qid,
>> +                                const struct vdpa_vq_state *state)
>> +{
>> +       struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
>> +       struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
>> +       struct device *dev = &padev->aux_dev.dev;
>> +       struct pds_vdpa_vq_set_state_cmd cmd = {
>> +               .opcode = PDS_VDPA_CMD_VQ_SET_STATE,
>> +               .vdpa_index = pdsv->vdpa_index,
>> +               .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
>> +               .qid = cpu_to_le16(qid),
>> +       };
>> +       struct pds_vdpa_comp comp = {0};
>> +       int err;
>> +
>> +       dev_dbg(dev, "%s: qid %d avail %#x\n",
>> +               __func__, qid, state->packed.last_avail_idx);
>> +
>> +       if (pdsv->actual_features & VIRTIO_F_RING_PACKED) {
>> +               cmd.avail = cpu_to_le16(state->packed.last_avail_idx |
>> +                                       (state->packed.last_avail_counter << 15));
>> +               cmd.used = cpu_to_le16(state->packed.last_used_idx |
>> +                                      (state->packed.last_used_counter << 15));
>> +       } else {
>> +               cmd.avail = cpu_to_le16(state->split.avail_index);
>> +               /* state->split does not provide a used_index:
>> +                * the vq will be set to "empty" here, and the vq will read
>> +                * the current used index the next time the vq is kicked.
>> +                */
>> +               cmd.used = cpu_to_le16(state->split.avail_index);
>> +       }
>> +
>> +       err = padev->ops->adminq_cmd(padev,
>> +                                    (union pds_core_adminq_cmd *)&cmd,
>> +                                    sizeof(cmd),
>> +                                    (union pds_core_adminq_comp *)&comp,
>> +                                    0);
> 
> I had one question about the adminq command. I think we should use the PF
> instead of the VF, but in __pdsc_adminq_post() I saw:
> 
>          q_info->dest = comp;
>          memcpy(q_info->desc, cmd, sizeof(*cmd));
> 
> So cmd should be fine since it is copied into q_info->desc, which is
> already mapped. But q_info->dest looks suspicious; where is it mapped?

The queue descriptors get allocated and mapped as a large single block 
in pdsc_qcq_alloc() with a call to dma_alloc_coherent(), then 
pdsc_q_map() sets up the q_info[].dest pointers.
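
In rough outline it is the usual coherent-ring pattern, something like
this (heavily simplified, not the actual pds_core code):

   /* one dma_alloc_coherent() block covers the whole descriptor ring... */
   q_base = dma_alloc_coherent(dev, q_size, &q_base_pa, GFP_KERNEL);

   /* ...and the per-entry info pointers are carved out of that block */
   for (i = 0; i < num_descs; i++)
           q_info[i].desc = q_base + i * desc_size;

so the memory that __pdsc_adminq_post() copies the command into is
already visible to the device.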


> 
> Thanks
> 
> 
>> +       if (err)
>> +               dev_err(dev, "Failed to set vq state qid %u, status %d: %pe\n",
>> +                       qid, comp.status, ERR_PTR(err));
>> +
>> +       return err;
>> +}
>> +
>> +static int pds_vdpa_get_vq_state(struct vdpa_device *vdpa_dev, u16 qid,
>> +                                struct vdpa_vq_state *state)
>> +{
>> +       struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
>> +       struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev;
>> +       struct device *dev = &padev->aux_dev.dev;
>> +       struct pds_vdpa_vq_get_state_cmd cmd = {
>> +               .opcode = PDS_VDPA_CMD_VQ_GET_STATE,
>> +               .vdpa_index = pdsv->vdpa_index,
>> +               .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id),
>> +               .qid = cpu_to_le16(qid),
>> +       };
>> +       struct pds_vdpa_vq_get_state_comp comp = {0};
>> +       int err;
>> +
>> +       dev_dbg(dev, "%s: qid %d\n", __func__, qid);
>> +
>> +       err = padev->ops->adminq_cmd(padev,
>> +                                    (union pds_core_adminq_cmd *)&cmd,
>> +                                    sizeof(cmd),
>> +                                    (union pds_core_adminq_comp *)&comp,
>> +                                    0);
>> +       if (err) {
>> +               dev_err(dev, "Failed to get vq state qid %u, status %d: %pe\n",
>> +                       qid, comp.status, ERR_PTR(err));
>> +               return err;
>> +       }
>> +
>> +       if (pdsv->actual_features & VIRTIO_F_RING_PACKED) {
>> +               state->packed.last_avail_idx = le16_to_cpu(comp.avail) & 0x7fff;
>> +               state->packed.last_avail_counter = le16_to_cpu(comp.avail) >> 15;
>> +       } else {
>> +               state->split.avail_index = le16_to_cpu(comp.avail);
>> +               /* state->split does not provide a used_index. */
>> +       }
>> +
>> +       return err;
>> +}
>> +
>> +static struct vdpa_notification_area
>> +pds_vdpa_get_vq_notification(struct vdpa_device *vdpa_dev, u16 qid)
>> +{
>> +       struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
>> +       struct virtio_pci_modern_device *vd_mdev;
>> +       struct vdpa_notification_area area;
>> +
>> +       area.addr = pdsv->vqs[qid].notify_pa;
>> +
>> +       vd_mdev = &pdsv->vdpa_aux->vd_mdev;
>> +       if (!vd_mdev->notify_offset_multiplier)
>> +               area.size = PAGE_SIZE;
> 
> Note that PAGE_SIZE varies among archs; I doubt we should use a fixed size here.

Yeah, good thought, I'll fix that up.
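
Probably something like this, using a fixed device-side notify window
size instead of the arch's PAGE_SIZE (rough sketch; the 4k constant is
just a placeholder for whatever size the device actually guarantees):

           if (!vd_mdev->notify_offset_multiplier)
                   area.size = SZ_4K;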

> 
> Others look good.
> 
> Thanks
> 

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH RFC v2 virtio 7/7] pds_vdpa: pds_vdps.rst and Kconfig
  2023-03-15  7:05     ` Jason Wang
  (?)
@ 2023-03-16  3:25     ` Shannon Nelson
  2023-03-17  3:54         ` Jason Wang
  -1 siblings, 1 reply; 36+ messages in thread
From: Shannon Nelson @ 2023-03-16  3:25 UTC (permalink / raw)
  To: Jason Wang
  Cc: mst, virtualization, brett.creeley, davem, netdev, kuba, drivers

On 3/15/23 12:05 AM, Jason Wang wrote:
> On Thu, Mar 9, 2023 at 9:31 AM Shannon Nelson <shannon.nelson@amd.com> wrote:
>>
>> Add the documentation and Kconfig entry for pds_vdpa driver.
>>
>> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
>> ---
>>   .../ethernet/pensando/pds_vdpa.rst            | 84 +++++++++++++++++++
>>   MAINTAINERS                                   |  4 +
>>   drivers/vdpa/Kconfig                          |  8 ++
>>   3 files changed, 96 insertions(+)
>>   create mode 100644 Documentation/networking/device_drivers/ethernet/pensando/pds_vdpa.rst
>>
>> diff --git a/Documentation/networking/device_drivers/ethernet/pensando/pds_vdpa.rst b/Documentation/networking/device_drivers/ethernet/pensando/pds_vdpa.rst
>> new file mode 100644
>> index 000000000000..d41f6dd66e3e
>> --- /dev/null
>> +++ b/Documentation/networking/device_drivers/ethernet/pensando/pds_vdpa.rst
>> @@ -0,0 +1,84 @@
>> +.. SPDX-License-Identifier: GPL-2.0+
>> +.. note: can be edited and viewed with /usr/bin/formiko-vim
>> +
>> +==========================================================
>> +PCI vDPA driver for the AMD/Pensando(R) DSC adapter family
>> +==========================================================
>> +
>> +AMD/Pensando vDPA VF Device Driver
>> +Copyright(c) 2023 Advanced Micro Devices, Inc
>> +
>> +Overview
>> +========
>> +
>> +The ``pds_vdpa`` driver is an auxiliary bus driver that supplies
>> +a vDPA device for use by the virtio network stack.  It is used with
>> +the Pensando Virtual Function devices that offer vDPA and virtio queue
>> +services.  It depends on the ``pds_core`` driver and hardware for the PF
>> +and VF PCI handling as well as for device configuration services.
>> +
>> +Using the device
>> +================
>> +
>> +The ``pds_vdpa`` device is enabled via multiple configuration steps and
>> +depends on the ``pds_core`` driver to create and enable SR-IOV Virtual
>> +Function devices.
>> +
>> +Shown below are the steps to bind the driver to a VF and also to the
>> +associated auxiliary device created by the ``pds_core`` driver.
>> +
>> +.. code-block:: bash
>> +
>> +  #!/bin/bash
>> +
>> +  modprobe pds_core
>> +  modprobe vdpa
>> +  modprobe pds_vdpa
>> +
>> +  PF_BDF=`grep -H "vDPA.*1" /sys/kernel/debug/pds_core/*/viftypes | head -1 | awk -F / '{print $6}'`
>> +
>> +  # Enable vDPA VF auxiliary device(s) in the PF
>> +  devlink dev param set pci/$PF_BDF name enable_vnet value true cmode runtime
>> +
> 
> Does this mean we can't do per VF configuration for vDPA enablement
> (e.g VF0 for vdpa VF1 to other type)?

For now, yes, a PF only supports one VF type at a time.  We've thought 
about possibilities for some heterogeneous configurations, and tried to 
do some planning for future flexibility, but our current needs don't go 
that far.  If and when we get there, we might look at how Gautam's group
did their VF personalities in their EF100 driver, or some other 
possibilities.

Thanks for looking through these, I appreciate your time and comments.

sln


> 
> Thanks
> 
> 
>> +  # Create a VF for vDPA use
>> +  echo 1 > /sys/bus/pci/drivers/pds_core/$PF_BDF/sriov_numvfs
>> +
>> +  # Find the vDPA services/devices available
>> +  PDS_VDPA_MGMT=`vdpa mgmtdev show | grep vDPA | head -1 | cut -d: -f1`
>> +
>> +  # Create a vDPA device for use in virtio network configurations
>> +  vdpa dev add name vdpa1 mgmtdev $PDS_VDPA_MGMT mac 00:11:22:33:44:55
>> +
>> +  # Set up an ethernet interface on the vdpa device
>> +  modprobe virtio_vdpa
>> +
>> +
>> +
>> +Enabling the driver
>> +===================
>> +
>> +The driver is enabled via the standard kernel configuration system,
>> +using the make command::
>> +
>> +  make oldconfig/menuconfig/etc.
>> +
>> +The driver is located in the menu structure at:
>> +
>> +  -> Device Drivers
>> +    -> Network device support (NETDEVICES [=y])
>> +      -> Ethernet driver support
>> +        -> Pensando devices
>> +          -> Pensando Ethernet PDS_VDPA Support
>> +
>> +Support
>> +=======
>> +
>> +For general Linux networking support, please use the netdev mailing
>> +list, which is monitored by Pensando personnel::
>> +
>> +  netdev@vger.kernel.org
>> +
>> +For more specific support needs, please use the Pensando driver support
>> +email::
>> +
>> +  drivers@pensando.io
>> diff --git a/MAINTAINERS b/MAINTAINERS
>> index cb21dcd3a02a..da981c5bc830 100644
>> --- a/MAINTAINERS
>> +++ b/MAINTAINERS
>> @@ -22120,6 +22120,10 @@ SNET DPU VIRTIO DATA PATH ACCELERATOR
>>   R:     Alvaro Karsz <alvaro.karsz@solid-run.com>
>>   F:     drivers/vdpa/solidrun/
>>
>> +PDS DSC VIRTIO DATA PATH ACCELERATOR
>> +R:     Shannon Nelson <shannon.nelson@amd.com>
>> +F:     drivers/vdpa/pds/
>> +
>>   VIRTIO BALLOON
>>   M:     "Michael S. Tsirkin" <mst@redhat.com>
>>   M:     David Hildenbrand <david@redhat.com>
>> diff --git a/drivers/vdpa/Kconfig b/drivers/vdpa/Kconfig
>> index cd6ad92f3f05..c910cb119c1b 100644
>> --- a/drivers/vdpa/Kconfig
>> +++ b/drivers/vdpa/Kconfig
>> @@ -116,4 +116,12 @@ config ALIBABA_ENI_VDPA
>>            This driver includes a HW monitor device that
>>            reads health values from the DPU.
>>
>> +config PDS_VDPA
>> +       tristate "vDPA driver for AMD/Pensando DSC devices"
>> +       depends on PDS_CORE
>> +       help
>> +         VDPA network driver for AMD/Pensando's PDS Core devices.
>> +         With this driver, the VirtIO dataplane can be
>> +         offloaded to an AMD/Pensando DSC device.
>> +
>>   endif # VDPA
>> --
>> 2.17.1
>>
> 

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH RFC v2 virtio 2/7] pds_vdpa: get vdpa management info
  2023-03-16  3:25     ` Shannon Nelson
@ 2023-03-17  3:33         ` Jason Wang
  0 siblings, 0 replies; 36+ messages in thread
From: Jason Wang @ 2023-03-17  3:33 UTC (permalink / raw)
  To: Shannon Nelson
  Cc: brett.creeley, mst, netdev, virtualization, kuba, drivers, davem

On Thu, Mar 16, 2023 at 11:25 AM Shannon Nelson <shannon.nelson@amd.com> wrote:
>
> On 3/15/23 12:05 AM, Jason Wang wrote:
> > On Thu, Mar 9, 2023 at 9:31 AM Shannon Nelson <shannon.nelson@amd.com> wrote:
> >>
> >> Find the vDPA management information from the DSC in order to
> >> advertise it to the vdpa subsystem.
> >>
> >> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
> >> ---
> >>   drivers/vdpa/pds/Makefile    |   3 +-
> >>   drivers/vdpa/pds/aux_drv.c   |  13 ++++
> >>   drivers/vdpa/pds/aux_drv.h   |   7 +++
> >>   drivers/vdpa/pds/debugfs.c   |   3 +
> >>   drivers/vdpa/pds/vdpa_dev.c  | 113 +++++++++++++++++++++++++++++++++++
> >>   drivers/vdpa/pds/vdpa_dev.h  |  15 +++++
> >>   include/linux/pds/pds_vdpa.h |  92 ++++++++++++++++++++++++++++
> >>   7 files changed, 245 insertions(+), 1 deletion(-)
> >>   create mode 100644 drivers/vdpa/pds/vdpa_dev.c
> >>   create mode 100644 drivers/vdpa/pds/vdpa_dev.h
> >>
> >> diff --git a/drivers/vdpa/pds/Makefile b/drivers/vdpa/pds/Makefile
> >> index a9cd2f450ae1..13b50394ec64 100644
> >> --- a/drivers/vdpa/pds/Makefile
> >> +++ b/drivers/vdpa/pds/Makefile
> >> @@ -3,6 +3,7 @@
> >>
> >>   obj-$(CONFIG_PDS_VDPA) := pds_vdpa.o
> >>
> >> -pds_vdpa-y := aux_drv.o
> >> +pds_vdpa-y := aux_drv.o \
> >> +             vdpa_dev.o
> >>
> >>   pds_vdpa-$(CONFIG_DEBUG_FS) += debugfs.o
> >> diff --git a/drivers/vdpa/pds/aux_drv.c b/drivers/vdpa/pds/aux_drv.c
> >> index b3f36170253c..63e40ae68211 100644
> >> --- a/drivers/vdpa/pds/aux_drv.c
> >> +++ b/drivers/vdpa/pds/aux_drv.c
> >> @@ -2,6 +2,8 @@
> >>   /* Copyright(c) 2023 Advanced Micro Devices, Inc */
> >>
> >>   #include <linux/auxiliary_bus.h>
> >> +#include <linux/pci.h>
> >> +#include <linux/vdpa.h>
> >>
> >>   #include <linux/pds/pds_core.h>
> >>   #include <linux/pds/pds_auxbus.h>
> >> @@ -9,6 +11,7 @@
> >>
> >>   #include "aux_drv.h"
> >>   #include "debugfs.h"
> >> +#include "vdpa_dev.h"
> >>
> >>   static const struct auxiliary_device_id pds_vdpa_id_table[] = {
> >>          { .name = PDS_VDPA_DEV_NAME, },
> >> @@ -30,6 +33,7 @@ static int pds_vdpa_probe(struct auxiliary_device *aux_dev,
> >>                  return -ENOMEM;
> >>
> >>          vdpa_aux->padev = padev;
> >> +       vdpa_aux->vf_id = pci_iov_vf_id(padev->vf->pdev);
> >>          auxiliary_set_drvdata(aux_dev, vdpa_aux);
> >>
> >>          /* Register our PDS client with the pds_core */
> >> @@ -40,8 +44,15 @@ static int pds_vdpa_probe(struct auxiliary_device *aux_dev,
> >>                  goto err_free_mem;
> >>          }
> >>
> >> +       /* Get device ident info and set up the vdpa_mgmt_dev */
> >> +       err = pds_vdpa_get_mgmt_info(vdpa_aux);
> >> +       if (err)
> >> +               goto err_aux_unreg;
> >> +
> >>          return 0;
> >>
> >> +err_aux_unreg:
> >> +       padev->ops->unregister_client(padev);
> >>   err_free_mem:
> >>          kfree(vdpa_aux);
> >>          auxiliary_set_drvdata(aux_dev, NULL);
> >> @@ -54,6 +65,8 @@ static void pds_vdpa_remove(struct auxiliary_device *aux_dev)
> >>          struct pds_vdpa_aux *vdpa_aux = auxiliary_get_drvdata(aux_dev);
> >>          struct device *dev = &aux_dev->dev;
> >>
> >> +       pci_free_irq_vectors(vdpa_aux->padev->vf->pdev);
> >> +
> >>          vdpa_aux->padev->ops->unregister_client(vdpa_aux->padev);
> >>
> >>          kfree(vdpa_aux);
> >> diff --git a/drivers/vdpa/pds/aux_drv.h b/drivers/vdpa/pds/aux_drv.h
> >> index 14e465944dfd..94ba7abcaa43 100644
> >> --- a/drivers/vdpa/pds/aux_drv.h
> >> +++ b/drivers/vdpa/pds/aux_drv.h
> >> @@ -10,6 +10,13 @@
> >>   struct pds_vdpa_aux {
> >>          struct pds_auxiliary_dev *padev;
> >>
> >> +       struct vdpa_mgmt_dev vdpa_mdev;
> >> +
> >> +       struct pds_vdpa_ident ident;
> >> +
> >> +       int vf_id;
> >>          struct dentry *dentry;
> >> +
> >> +       int nintrs;
> >>   };
> >>   #endif /* _AUX_DRV_H_ */
> >> diff --git a/drivers/vdpa/pds/debugfs.c b/drivers/vdpa/pds/debugfs.c
> >> index 3c163dc7b66f..7b7e90fd6578 100644
> >> --- a/drivers/vdpa/pds/debugfs.c
> >> +++ b/drivers/vdpa/pds/debugfs.c
> >> @@ -1,7 +1,10 @@
> >>   // SPDX-License-Identifier: GPL-2.0-only
> >>   /* Copyright(c) 2023 Advanced Micro Devices, Inc */
> >>
> >> +#include <linux/vdpa.h>
> >> +
> >>   #include <linux/pds/pds_core.h>
> >> +#include <linux/pds/pds_vdpa.h>
> >>   #include <linux/pds/pds_auxbus.h>
> >>
> >>   #include "aux_drv.h"
> >> diff --git a/drivers/vdpa/pds/vdpa_dev.c b/drivers/vdpa/pds/vdpa_dev.c
> >> new file mode 100644
> >> index 000000000000..bd840688503c
> >> --- /dev/null
> >> +++ b/drivers/vdpa/pds/vdpa_dev.c
> >> @@ -0,0 +1,113 @@
> >> +// SPDX-License-Identifier: GPL-2.0-only
> >> +/* Copyright(c) 2023 Advanced Micro Devices, Inc */
> >> +
> >> +#include <linux/pci.h>
> >> +#include <linux/vdpa.h>
> >> +#include <uapi/linux/vdpa.h>
> >> +
> >> +#include <linux/pds/pds_core.h>
> >> +#include <linux/pds/pds_adminq.h>
> >> +#include <linux/pds/pds_auxbus.h>
> >> +#include <linux/pds/pds_vdpa.h>
> >> +
> >> +#include "vdpa_dev.h"
> >> +#include "aux_drv.h"
> >> +
> >> +static struct virtio_device_id pds_vdpa_id_table[] = {
> >> +       {VIRTIO_ID_NET, VIRTIO_DEV_ANY_ID},
> >> +       {0},
> >> +};
> >> +
> >> +static int pds_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
> >> +                           const struct vdpa_dev_set_config *add_config)
> >> +{
> >> +       return -EOPNOTSUPP;
> >> +}
> >> +
> >> +static void pds_vdpa_dev_del(struct vdpa_mgmt_dev *mdev,
> >> +                            struct vdpa_device *vdpa_dev)
> >> +{
> >> +}
> >> +
> >> +static const struct vdpa_mgmtdev_ops pds_vdpa_mgmt_dev_ops = {
> >> +       .dev_add = pds_vdpa_dev_add,
> >> +       .dev_del = pds_vdpa_dev_del
> >> +};
> >> +
> >> +int pds_vdpa_get_mgmt_info(struct pds_vdpa_aux *vdpa_aux)
> >> +{
> >> +       struct pds_vdpa_ident_cmd ident_cmd = {
> >> +               .opcode = PDS_VDPA_CMD_IDENT,
> >> +               .vf_id = cpu_to_le16(vdpa_aux->vf_id),
> >> +       };
> >> +       struct pds_vdpa_comp ident_comp = {0};
> >> +       struct vdpa_mgmt_dev *mgmt;
> >> +       struct device *pf_dev;
> >> +       struct pci_dev *pdev;
> >> +       dma_addr_t ident_pa;
> >> +       struct device *dev;
> >> +       u16 max_vqs;
> >> +       int err;
> >> +
> >> +       dev = &vdpa_aux->padev->aux_dev.dev;
> >> +       pdev = vdpa_aux->padev->vf->pdev;
> >> +       mgmt = &vdpa_aux->vdpa_mdev;
> >> +
> >> +       /* Get resource info through the PF's adminq.  It is a block of info,
> >> +        * so we need to map some memory for PF to make available to the
> >> +        * firmware for writing the data.
> >> +        */
> >
> > It looks to me pds_vdpa_ident is not very large:
> >
> > struct pds_vdpa_ident {
> >          __le64 hw_features;
> >          __le16 max_vqs;
> >          __le16 max_qlen;
> >          __le16 min_qlen;
> > };
> >
> > Any reason it is not packed into some type of the comp structure of adminq?
>
> Unfortunately, the completion structs are limited to 16 bytes, with 4 up
> front and 1 at the end already spoken for.  I suppose we could shrink
> max_vqs to a single byte and squeeze this into the comp, but then we'd
> have no ability to add to it if needed.  I'd rather leave it as it is
> for now.

Fine.

Thanks

>
> sln
>
> >
> > Others look good.
> >
> > Thanks
> >
>


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH RFC v2 virtio 2/7] pds_vdpa: get vdpa management info
@ 2023-03-17  3:33         ` Jason Wang
  0 siblings, 0 replies; 36+ messages in thread
From: Jason Wang @ 2023-03-17  3:33 UTC (permalink / raw)
  To: Shannon Nelson
  Cc: mst, virtualization, brett.creeley, davem, netdev, kuba, drivers

On Thu, Mar 16, 2023 at 11:25 AM Shannon Nelson <shannon.nelson@amd.com> wrote:
>
> On 3/15/23 12:05 AM, Jason Wang wrote:
> > On Thu, Mar 9, 2023 at 9:31 AM Shannon Nelson <shannon.nelson@amd.com> wrote:
> >>
> >> Find the vDPA management information from the DSC in order to
> >> advertise it to the vdpa subsystem.
> >>
> >> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
> >> ---
> >>   drivers/vdpa/pds/Makefile    |   3 +-
> >>   drivers/vdpa/pds/aux_drv.c   |  13 ++++
> >>   drivers/vdpa/pds/aux_drv.h   |   7 +++
> >>   drivers/vdpa/pds/debugfs.c   |   3 +
> >>   drivers/vdpa/pds/vdpa_dev.c  | 113 +++++++++++++++++++++++++++++++++++
> >>   drivers/vdpa/pds/vdpa_dev.h  |  15 +++++
> >>   include/linux/pds/pds_vdpa.h |  92 ++++++++++++++++++++++++++++
> >>   7 files changed, 245 insertions(+), 1 deletion(-)
> >>   create mode 100644 drivers/vdpa/pds/vdpa_dev.c
> >>   create mode 100644 drivers/vdpa/pds/vdpa_dev.h
> >>
> >> diff --git a/drivers/vdpa/pds/Makefile b/drivers/vdpa/pds/Makefile
> >> index a9cd2f450ae1..13b50394ec64 100644
> >> --- a/drivers/vdpa/pds/Makefile
> >> +++ b/drivers/vdpa/pds/Makefile
> >> @@ -3,6 +3,7 @@
> >>
> >>   obj-$(CONFIG_PDS_VDPA) := pds_vdpa.o
> >>
> >> -pds_vdpa-y := aux_drv.o
> >> +pds_vdpa-y := aux_drv.o \
> >> +             vdpa_dev.o
> >>
> >>   pds_vdpa-$(CONFIG_DEBUG_FS) += debugfs.o
> >> diff --git a/drivers/vdpa/pds/aux_drv.c b/drivers/vdpa/pds/aux_drv.c
> >> index b3f36170253c..63e40ae68211 100644
> >> --- a/drivers/vdpa/pds/aux_drv.c
> >> +++ b/drivers/vdpa/pds/aux_drv.c
> >> @@ -2,6 +2,8 @@
> >>   /* Copyright(c) 2023 Advanced Micro Devices, Inc */
> >>
> >>   #include <linux/auxiliary_bus.h>
> >> +#include <linux/pci.h>
> >> +#include <linux/vdpa.h>
> >>
> >>   #include <linux/pds/pds_core.h>
> >>   #include <linux/pds/pds_auxbus.h>
> >> @@ -9,6 +11,7 @@
> >>
> >>   #include "aux_drv.h"
> >>   #include "debugfs.h"
> >> +#include "vdpa_dev.h"
> >>
> >>   static const struct auxiliary_device_id pds_vdpa_id_table[] = {
> >>          { .name = PDS_VDPA_DEV_NAME, },
> >> @@ -30,6 +33,7 @@ static int pds_vdpa_probe(struct auxiliary_device *aux_dev,
> >>                  return -ENOMEM;
> >>
> >>          vdpa_aux->padev = padev;
> >> +       vdpa_aux->vf_id = pci_iov_vf_id(padev->vf->pdev);
> >>          auxiliary_set_drvdata(aux_dev, vdpa_aux);
> >>
> >>          /* Register our PDS client with the pds_core */
> >> @@ -40,8 +44,15 @@ static int pds_vdpa_probe(struct auxiliary_device *aux_dev,
> >>                  goto err_free_mem;
> >>          }
> >>
> >> +       /* Get device ident info and set up the vdpa_mgmt_dev */
> >> +       err = pds_vdpa_get_mgmt_info(vdpa_aux);
> >> +       if (err)
> >> +               goto err_aux_unreg;
> >> +
> >>          return 0;
> >>
> >> +err_aux_unreg:
> >> +       padev->ops->unregister_client(padev);
> >>   err_free_mem:
> >>          kfree(vdpa_aux);
> >>          auxiliary_set_drvdata(aux_dev, NULL);
> >> @@ -54,6 +65,8 @@ static void pds_vdpa_remove(struct auxiliary_device *aux_dev)
> >>          struct pds_vdpa_aux *vdpa_aux = auxiliary_get_drvdata(aux_dev);
> >>          struct device *dev = &aux_dev->dev;
> >>
> >> +       pci_free_irq_vectors(vdpa_aux->padev->vf->pdev);
> >> +
> >>          vdpa_aux->padev->ops->unregister_client(vdpa_aux->padev);
> >>
> >>          kfree(vdpa_aux);
> >> diff --git a/drivers/vdpa/pds/aux_drv.h b/drivers/vdpa/pds/aux_drv.h
> >> index 14e465944dfd..94ba7abcaa43 100644
> >> --- a/drivers/vdpa/pds/aux_drv.h
> >> +++ b/drivers/vdpa/pds/aux_drv.h
> >> @@ -10,6 +10,13 @@
> >>   struct pds_vdpa_aux {
> >>          struct pds_auxiliary_dev *padev;
> >>
> >> +       struct vdpa_mgmt_dev vdpa_mdev;
> >> +
> >> +       struct pds_vdpa_ident ident;
> >> +
> >> +       int vf_id;
> >>          struct dentry *dentry;
> >> +
> >> +       int nintrs;
> >>   };
> >>   #endif /* _AUX_DRV_H_ */
> >> diff --git a/drivers/vdpa/pds/debugfs.c b/drivers/vdpa/pds/debugfs.c
> >> index 3c163dc7b66f..7b7e90fd6578 100644
> >> --- a/drivers/vdpa/pds/debugfs.c
> >> +++ b/drivers/vdpa/pds/debugfs.c
> >> @@ -1,7 +1,10 @@
> >>   // SPDX-License-Identifier: GPL-2.0-only
> >>   /* Copyright(c) 2023 Advanced Micro Devices, Inc */
> >>
> >> +#include <linux/vdpa.h>
> >> +
> >>   #include <linux/pds/pds_core.h>
> >> +#include <linux/pds/pds_vdpa.h>
> >>   #include <linux/pds/pds_auxbus.h>
> >>
> >>   #include "aux_drv.h"
> >> diff --git a/drivers/vdpa/pds/vdpa_dev.c b/drivers/vdpa/pds/vdpa_dev.c
> >> new file mode 100644
> >> index 000000000000..bd840688503c
> >> --- /dev/null
> >> +++ b/drivers/vdpa/pds/vdpa_dev.c
> >> @@ -0,0 +1,113 @@
> >> +// SPDX-License-Identifier: GPL-2.0-only
> >> +/* Copyright(c) 2023 Advanced Micro Devices, Inc */
> >> +
> >> +#include <linux/pci.h>
> >> +#include <linux/vdpa.h>
> >> +#include <uapi/linux/vdpa.h>
> >> +
> >> +#include <linux/pds/pds_core.h>
> >> +#include <linux/pds/pds_adminq.h>
> >> +#include <linux/pds/pds_auxbus.h>
> >> +#include <linux/pds/pds_vdpa.h>
> >> +
> >> +#include "vdpa_dev.h"
> >> +#include "aux_drv.h"
> >> +
> >> +static struct virtio_device_id pds_vdpa_id_table[] = {
> >> +       {VIRTIO_ID_NET, VIRTIO_DEV_ANY_ID},
> >> +       {0},
> >> +};
> >> +
> >> +static int pds_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
> >> +                           const struct vdpa_dev_set_config *add_config)
> >> +{
> >> +       return -EOPNOTSUPP;
> >> +}
> >> +
> >> +static void pds_vdpa_dev_del(struct vdpa_mgmt_dev *mdev,
> >> +                            struct vdpa_device *vdpa_dev)
> >> +{
> >> +}
> >> +
> >> +static const struct vdpa_mgmtdev_ops pds_vdpa_mgmt_dev_ops = {
> >> +       .dev_add = pds_vdpa_dev_add,
> >> +       .dev_del = pds_vdpa_dev_del
> >> +};
> >> +
> >> +int pds_vdpa_get_mgmt_info(struct pds_vdpa_aux *vdpa_aux)
> >> +{
> >> +       struct pds_vdpa_ident_cmd ident_cmd = {
> >> +               .opcode = PDS_VDPA_CMD_IDENT,
> >> +               .vf_id = cpu_to_le16(vdpa_aux->vf_id),
> >> +       };
> >> +       struct pds_vdpa_comp ident_comp = {0};
> >> +       struct vdpa_mgmt_dev *mgmt;
> >> +       struct device *pf_dev;
> >> +       struct pci_dev *pdev;
> >> +       dma_addr_t ident_pa;
> >> +       struct device *dev;
> >> +       u16 max_vqs;
> >> +       int err;
> >> +
> >> +       dev = &vdpa_aux->padev->aux_dev.dev;
> >> +       pdev = vdpa_aux->padev->vf->pdev;
> >> +       mgmt = &vdpa_aux->vdpa_mdev;
> >> +
> >> +       /* Get resource info through the PF's adminq.  It is a block of info,
> >> +        * so we need to map some memory for PF to make available to the
> >> +        * firmware for writing the data.
> >> +        */
> >
> > It looks to me pds_vdpa_ident is not very large:
> >
> > struct pds_vdpa_ident {
> >          __le64 hw_features;
> >          __le16 max_vqs;
> >          __le16 max_qlen;
> >          __le16 min_qlen;
> > };
> >
> > Any reason it is not packed into some type of the comp structure of adminq?
>
> Unfortunately, the completion structs are limited to 16 bytes, with 4 up
> front and 1 at the end already spoken for.  I suppose we could shrink
> max_vqs to a single byte and squeeze this into the comp, but then we'd
> have no ability to add to it if needed.  I'd rather leave it as it is
> for now.

Fine.

Thanks

>
> sln
>
> >
> > Others look good.
> >
> > Thanks
> >
>


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH RFC v2 virtio 4/7] pds_vdpa: add vdpa config client commands
  2023-03-16  3:25     ` Shannon Nelson
@ 2023-03-17  3:36         ` Jason Wang
  0 siblings, 0 replies; 36+ messages in thread
From: Jason Wang @ 2023-03-17  3:36 UTC (permalink / raw)
  To: Shannon Nelson
  Cc: brett.creeley, mst, netdev, virtualization, kuba, drivers, davem

On Thu, Mar 16, 2023 at 11:25 AM Shannon Nelson <shannon.nelson@amd.com> wrote:
>
> On 3/15/23 12:05 AM, Jason Wang wrote:
> > On Thu, Mar 9, 2023 at 9:31 AM Shannon Nelson <shannon.nelson@amd.com> wrote:
> >>
> >> These are the adminq commands that will be needed for
> >> setting up and using the vDPA device.
> >
> > It's better to explain in which cases the driver should use the adminq;
> > I see some functions overlap with the common configuration capability.
> > More below.
>
> Yes, I agree this needs to be more clearly stated.  The overlap is
> because the original FW didn't model the virtio device as completely,
> and we had to go through adminq calls to get things done.

Does this mean the device could actually be probed by a virtio-pci driver?

>  Now that we
> have a reasonable virtio emulation and can use the virtio_net_config, we
> have a lot less need for the adminq calls.

Please add those details to the changelog. Btw, the adminq should be more
flexible since it's easier to extend for new features. If there's no plan
to model a virtio-pci driver, we can even avoid mapping the PCI
capabilities, which may simplify the code.
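
To illustrate the two paths being discussed, roughly as they appear in
this series (simplified fragments, not complete code):

   /* direct read through the mapped virtio_net config window */
   struct virtio_net_config vc;

   memcpy_fromio(&vc, pdsv->vdpa_aux->vd_mdev.device, sizeof(vc));

   /* vs. an adminq round trip for anything the config space can't express */
   err = padev->ops->adminq_cmd(padev,
                                (union pds_core_adminq_cmd *)&cmd,
                                sizeof(cmd),
                                (union pds_core_adminq_comp *)&comp,
                                0);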

Thanks


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH RFC v2 virtio 4/7] pds_vdpa: add vdpa config client commands
@ 2023-03-17  3:36         ` Jason Wang
  0 siblings, 0 replies; 36+ messages in thread
From: Jason Wang @ 2023-03-17  3:36 UTC (permalink / raw)
  To: Shannon Nelson
  Cc: mst, virtualization, brett.creeley, davem, netdev, kuba, drivers

On Thu, Mar 16, 2023 at 11:25 AM Shannon Nelson <shannon.nelson@amd.com> wrote:
>
> On 3/15/23 12:05 AM, Jason Wang wrote:
> > On Thu, Mar 9, 2023 at 9:31 AM Shannon Nelson <shannon.nelson@amd.com> wrote:
> >>
> >> These are the adminq commands that will be needed for
> >> setting up and using the vDPA device.
> >
> > It's better to explain in which cases the driver should use the adminq;
> > I see some functions overlap with the common configuration capability.
> > More below.
>
> Yes, I agree this needs to be more clearly stated.  The overlap is
> because the original FW didn't model the virtio device as completely,
> and we had to go through adminq calls to get things done.

Does this mean the device could actually be probed by a virtio-pci driver?

>  Now that we
> have a reasonable virtio emulation and can use the virtio_net_config, we
> have a lot less need for the adminq calls.

Please add those details to the changelog. Btw, the adminq should be more
flexible since it's easier to extend for new features. If there's no plan
to model a virtio-pci driver, we can even avoid mapping the PCI
capabilities, which may simplify the code.

Thanks


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH RFC v2 virtio 3/7] pds_vdpa: virtio bar setup for vdpa
  2023-03-16  3:25     ` Shannon Nelson
@ 2023-03-17  3:37         ` Jason Wang
  0 siblings, 0 replies; 36+ messages in thread
From: Jason Wang @ 2023-03-17  3:37 UTC (permalink / raw)
  To: Shannon Nelson
  Cc: brett.creeley, mst, netdev, virtualization, kuba, drivers, davem

On Thu, Mar 16, 2023 at 11:25 AM Shannon Nelson <shannon.nelson@amd.com> wrote:
>
> On 3/15/23 12:05 AM, Jason Wang wrote:
> > On Thu, Mar 9, 2023 at 9:31 AM Shannon Nelson <shannon.nelson@amd.com> wrote:
> >>
> >> The PDS vDPA device has a virtio BAR for describing itself, and
> >> the pds_vdpa driver needs to access it.  Here we copy liberally
> >> from the existing drivers/virtio/virtio_pci_modern_dev.c as it
> >> has what we need, but we need to modify it so that it can work
> >> with our device id and so we can use our own DMA mask.
> >
> > By passing a pointer to a customized id probing routine to vp_modern_probe()?
>
> The only real differences are that we needed to cut out the device id
> checks to use our vDPA VF device id, and remove
> dma_set_mask_and_coherent() because we need a different DMA_BIT_MASK().
>
> Maybe a function pointer to something that can validate the device id,
> and a bitmask for setting DMA mapping; if they are 0/NULL, use the
> default device id check and DMA mask.
>
> Adding them as extra arguments to the function call seems a bit messy;
> maybe add them to the struct virtio_pci_modern_device so the caller can
> set them as overrides if needed?
>
> struct virtio_pci_modern_device {
>
>         ...
>
>         int (*device_id_check_override)(struct pci_dev *pdev);
>         u64 dma_mask_override;
> }

Looks fine.
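
Something like that should be simple to consume in vp_modern_probe(),
e.g. (rough untested sketch, using the field names from your note; the
default-check helper below is hypothetical shorthand for today's inline
id checks):

   if (mdev->device_id_check_override)
           err = mdev->device_id_check_override(pci_dev);
   else
           err = default_device_id_check(pci_dev); /* hypothetical: the existing checks */
   if (err)
           return err;

   err = dma_set_mask_and_coherent(&pci_dev->dev,
                                   mdev->dma_mask_override ?: DMA_BIT_MASK(64));
   if (err)
           return err;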

Thanks


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH RFC v2 virtio 3/7] pds_vdpa: virtio bar setup for vdpa
@ 2023-03-17  3:37         ` Jason Wang
  0 siblings, 0 replies; 36+ messages in thread
From: Jason Wang @ 2023-03-17  3:37 UTC (permalink / raw)
  To: Shannon Nelson
  Cc: mst, virtualization, brett.creeley, davem, netdev, kuba, drivers

On Thu, Mar 16, 2023 at 11:25 AM Shannon Nelson <shannon.nelson@amd.com> wrote:
>
> On 3/15/23 12:05 AM, Jason Wang wrote:
> > On Thu, Mar 9, 2023 at 9:31 AM Shannon Nelson <shannon.nelson@amd.com> wrote:
> >>
> >> The PDS vDPA device has a virtio BAR for describing itself, and
> >> the pds_vdpa driver needs to access it.  Here we copy liberally
> >> from the existing drivers/virtio/virtio_pci_modern_dev.c as it
> >> has what we need, but we need to modify it so that it can work
> >> with our device id and so we can use our own DMA mask.
> >
> > By passing a pointer to a customized id probing routine to vp_modern_probe()?
>
> The only real differences are that we needed to cut out the device id
> checks to use our vDPA VF device id, and remove
> dma_set_mask_and_coherent() because we need a different DMA_BIT_MASK().
>
> Maybe a function pointer to something that can validate the device id,
> and a bitmask for setting DMA mapping; if they are 0/NULL, use the
> default device id check and DMA mask.
>
> Adding them as extra arguments to the function call seems a bit messy;
> maybe add them to the struct virtio_pci_modern_device so the caller can
> set them as overrides if needed?
>
> struct virtio_pci_modern_device {
>
>         ...
>
>         int (*device_id_check_override)(struct pci_dev *pdev);
>         u64 dma_mask_override;
> }

Looks fine.

Thanks


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH RFC v2 virtio 7/7] pds_vdpa: pds_vdps.rst and Kconfig
  2023-03-16  3:25     ` Shannon Nelson
@ 2023-03-17  3:54         ` Jason Wang
  0 siblings, 0 replies; 36+ messages in thread
From: Jason Wang @ 2023-03-17  3:54 UTC (permalink / raw)
  To: Shannon Nelson
  Cc: brett.creeley, mst, netdev, virtualization, kuba, drivers, davem

On Thu, Mar 16, 2023 at 11:25 AM Shannon Nelson <shannon.nelson@amd.com> wrote:
>
> On 3/15/23 12:05 AM, Jason Wang wrote:
> > On Thu, Mar 9, 2023 at 9:31 AM Shannon Nelson <shannon.nelson@amd.com> wrote:
> >>
> >> Add the documentation and Kconfig entry for pds_vdpa driver.
> >>
> >> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
> >> ---
> >>   .../ethernet/pensando/pds_vdpa.rst            | 84 +++++++++++++++++++
> >>   MAINTAINERS                                   |  4 +
> >>   drivers/vdpa/Kconfig                          |  8 ++
> >>   3 files changed, 96 insertions(+)
> >>   create mode 100644 Documentation/networking/device_drivers/ethernet/pensando/pds_vdpa.rst
> >>
> >> diff --git a/Documentation/networking/device_drivers/ethernet/pensando/pds_vdpa.rst b/Documentation/networking/device_drivers/ethernet/pensando/pds_vdpa.rst
> >> new file mode 100644
> >> index 000000000000..d41f6dd66e3e
> >> --- /dev/null
> >> +++ b/Documentation/networking/device_drivers/ethernet/pensando/pds_vdpa.rst
> >> @@ -0,0 +1,84 @@
> >> +.. SPDX-License-Identifier: GPL-2.0+
> >> +.. note: can be edited and viewed with /usr/bin/formiko-vim
> >> +
> >> +==========================================================
> >> +PCI vDPA driver for the AMD/Pensando(R) DSC adapter family
> >> +==========================================================
> >> +
> >> +AMD/Pensando vDPA VF Device Driver
> >> +Copyright(c) 2023 Advanced Micro Devices, Inc
> >> +
> >> +Overview
> >> +========
> >> +
> >> +The ``pds_vdpa`` driver is an auxiliary bus driver that supplies
> >> +a vDPA device for use by the virtio network stack.  It is used with
> >> +the Pensando Virtual Function devices that offer vDPA and virtio queue
> >> +services.  It depends on the ``pds_core`` driver and hardware for the PF
> >> +and VF PCI handling as well as for device configuration services.
> >> +
> >> +Using the device
> >> +================
> >> +
> >> +The ``pds_vdpa`` device is enabled via multiple configuration steps and
> >> +depends on the ``pds_core`` driver to create and enable SR-IOV Virtual
> >> +Function devices.
> >> +
> >> +Shown below are the steps to bind the driver to a VF and also to the
> >> +associated auxiliary device created by the ``pds_core`` driver.
> >> +
> >> +.. code-block:: bash
> >> +
> >> +  #!/bin/bash
> >> +
> >> +  modprobe pds_core
> >> +  modprobe vdpa
> >> +  modprobe pds_vdpa
> >> +
> >> +  PF_BDF=`grep -H "vDPA.*1" /sys/kernel/debug/pds_core/*/viftypes | head -1 | awk -F / '{print $6}'`
> >> +
> >> +  # Enable vDPA VF auxiliary device(s) in the PF
> >> +  devlink dev param set pci/$PF_BDF name enable_vnet value true cmode runtime
> >> +
> >
> > Does this mean we can't do per VF configuration for vDPA enablement
> > (e.g VF0 for vdpa VF1 to other type)?
>
> For now, yes, a PF only supports one VF type at a time.  We've thought
> about possibilities for some heterogeneous configurations, and tried to
> do some planning for future flexibility, but our current needs don't go
> that far.  If and when we get there, we might look at how Gautam's group
> did their VF personalities in their EF100 driver, or some other
> possibilities.

That's fine.


>
> Thanks for looking through these, I appreciate your time and comments.

You are welcome.

Thanks

>
> sln
>
>
> >
> > Thanks
> >
> >
> >> +  # Create a VF for vDPA use
> >> +  echo 1 > /sys/bus/pci/drivers/pds_core/$PF_BDF/sriov_numvfs
> >> +
> >> +  # Find the vDPA services/devices available
> >> +  PDS_VDPA_MGMT=`vdpa mgmtdev show | grep vDPA | head -1 | cut -d: -f1`
> >> +
> >> +  # Create a vDPA device for use in virtio network configurations
> >> +  vdpa dev add name vdpa1 mgmtdev $PDS_VDPA_MGMT mac 00:11:22:33:44:55
> >> +
> >> +  # Set up an ethernet interface on the vdpa device
> >> +  modprobe virtio_vdpa
> >> +
> >> +
> >> +
> >> +Enabling the driver
> >> +===================
> >> +
> >> +The driver is enabled via the standard kernel configuration system,
> >> +using the make command::
> >> +
> >> +  make oldconfig/menuconfig/etc.
> >> +
> >> +The driver is located in the menu structure at:
> >> +
> >> +  -> Device Drivers
> >> +    -> Network device support (NETDEVICES [=y])
> >> +      -> Ethernet driver support
> >> +        -> Pensando devices
> >> +          -> Pensando Ethernet PDS_VDPA Support
> >> +
> >> +Support
> >> +=======
> >> +
> >> +For general Linux networking support, please use the netdev mailing
> >> +list, which is monitored by Pensando personnel::
> >> +
> >> +  netdev@vger.kernel.org
> >> +
> >> +For more specific support needs, please use the Pensando driver support
> >> +email::
> >> +
> >> +  drivers@pensando.io
> >> diff --git a/MAINTAINERS b/MAINTAINERS
> >> index cb21dcd3a02a..da981c5bc830 100644
> >> --- a/MAINTAINERS
> >> +++ b/MAINTAINERS
> >> @@ -22120,6 +22120,10 @@ SNET DPU VIRTIO DATA PATH ACCELERATOR
> >>   R:     Alvaro Karsz <alvaro.karsz@solid-run.com>
> >>   F:     drivers/vdpa/solidrun/
> >>
> >> +PDS DSC VIRTIO DATA PATH ACCELERATOR
> >> +R:     Shannon Nelson <shannon.nelson@amd.com>
> >> +F:     drivers/vdpa/pds/
> >> +
> >>   VIRTIO BALLOON
> >>   M:     "Michael S. Tsirkin" <mst@redhat.com>
> >>   M:     David Hildenbrand <david@redhat.com>
> >> diff --git a/drivers/vdpa/Kconfig b/drivers/vdpa/Kconfig
> >> index cd6ad92f3f05..c910cb119c1b 100644
> >> --- a/drivers/vdpa/Kconfig
> >> +++ b/drivers/vdpa/Kconfig
> >> @@ -116,4 +116,12 @@ config ALIBABA_ENI_VDPA
> >>            This driver includes a HW monitor device that
> >>            reads health values from the DPU.
> >>
> >> +config PDS_VDPA
> >> +       tristate "vDPA driver for AMD/Pensando DSC devices"
> >> +       depends on PDS_CORE
> >> +       help
> >> +         VDPA network driver for AMD/Pensando's PDS Core devices.
> >> +         With this driver, the VirtIO dataplane can be
> >> +         offloaded to an AMD/Pensando DSC device.
> >> +
> >>   endif # VDPA
> >> --
> >> 2.17.1
> >>
> >
>
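
As a quick follow-up to the usage steps quoted above, an illustrative way to
confirm the vdpa device came up.  The device name and MAC match the example
in the documentation, while the iproute2 vdpa commands and the vdpa bus
sysfs layout are generic facilities rather than anything defined by this
patch, so treat this as a sketch:

  # The device created with 'vdpa dev add' and its config
  vdpa dev show vdpa1
  vdpa dev config show vdpa1

  # Which vdpa bus driver claimed it (virtio_vdpa in the steps above)
  ls -l /sys/bus/vdpa/devices/vdpa1/driver

  # With virtio_vdpa bound, a virtio-net interface appears; find it by MAC
  ip -br link | grep -i "00:11:22:33:44:55"

For a userspace consumer such as QEMU, vhost_vdpa would be loaded instead of
virtio_vdpa; only one vdpa bus driver binds the device at a time, and under
vhost_vdpa it appears as a /dev/vhost-vdpa-N character device.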


^ permalink raw reply	[flat|nested] 36+ messages in thread
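
On the Kconfig hunk quoted in the message above, a sketch of the .config
fragment that goes with these steps; PDS_VDPA and its PDS_CORE dependency
come from the patch itself, the remaining symbols are the usual vdpa/virtio
options, and building everything as modules is an assumption:

  CONFIG_VDPA=m          # vDPA bus, drivers/vdpa/Kconfig
  CONFIG_PDS_CORE=m      # satisfies 'depends on PDS_CORE'
  CONFIG_PDS_VDPA=m      # this driver
  CONFIG_VIRTIO_VDPA=m   # kernel virtio-net consumer used in the usage steps
  CONFIG_VHOST_VDPA=m    # optional, for userspace/QEMU consumers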


end of thread, other threads:[~2023-03-17  3:55 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-03-09  1:30 [PATCH RFC v2 virtio 0/7] pds_vdpa driver Shannon Nelson
2023-03-09  1:30 ` [PATCH RFC v2 virtio 1/7] pds_vdpa: Add new vDPA driver for AMD/Pensando DSC Shannon Nelson
2023-03-12 14:06   ` Simon Horman
2023-03-12 14:35     ` Simon Horman
2023-03-13 16:13       ` Shannon Nelson
2023-03-13 16:26         ` Simon Horman
2023-03-09  1:30 ` [PATCH RFC v2 virtio 2/7] pds_vdpa: get vdpa management info Shannon Nelson
2023-03-15  7:05   ` Jason Wang
2023-03-15  7:05     ` Jason Wang
2023-03-16  3:25     ` Shannon Nelson
2023-03-17  3:33       ` Jason Wang
2023-03-17  3:33         ` Jason Wang
2023-03-09  1:30 ` [PATCH RFC v2 virtio 3/7] pds_vdpa: virtio bar setup for vdpa Shannon Nelson
2023-03-15  7:05   ` Jason Wang
2023-03-15  7:05     ` Jason Wang
2023-03-16  3:25     ` Shannon Nelson
2023-03-17  3:37       ` Jason Wang
2023-03-17  3:37         ` Jason Wang
2023-03-09  1:30 ` [PATCH RFC v2 virtio 4/7] pds_vdpa: add vdpa config client commands Shannon Nelson
2023-03-15  7:05   ` Jason Wang
2023-03-15  7:05     ` Jason Wang
2023-03-16  3:25     ` Shannon Nelson
2023-03-17  3:36       ` Jason Wang
2023-03-17  3:36         ` Jason Wang
2023-03-09  1:30 ` [PATCH RFC v2 virtio 5/7] pds_vdpa: add support for vdpa and vdpamgmt interfaces Shannon Nelson
2023-03-15  7:05   ` Jason Wang
2023-03-15  7:05     ` Jason Wang
2023-03-16  3:25     ` Shannon Nelson
2023-03-09  1:30 ` [PATCH RFC v2 virtio 6/7] pds_vdpa: subscribe to the pds_core events Shannon Nelson
2023-03-09  1:30 ` [PATCH RFC v2 virtio 7/7] pds_vdpa: pds_vdps.rst and Kconfig Shannon Nelson
2023-03-15  7:05   ` Jason Wang
2023-03-15  7:05     ` Jason Wang
2023-03-16  3:25     ` Shannon Nelson
2023-03-17  3:54       ` Jason Wang
2023-03-17  3:54         ` Jason Wang
2023-03-15 18:10   ` kernel test robot
