* [net-next v4 00/12][pull request] 100GbE Intel Wired LAN Driver Updates 2020-05-19
@ 2020-05-20  7:02 Jeff Kirsher
From: Jeff Kirsher @ 2020-05-20  7:02 UTC (permalink / raw)
  To: davem, gregkh
  Cc: Jeff Kirsher, netdev, linux-rdma, nhorman, sassmann, jgg, parav,
	galpress, selvin.xavier, sriharsha.basavapatna, benve, bharat,
	xavier.huwei, yishaih, leonro, mkalderon, aditr,
	ranjani.sridharan, pierre-louis.bossart

This series contains the initial implementation of the Virtual Bus
(virtbus_device and virtbus_driver), along with updates to the 'ice'
and 'i40e' drivers to make use of the new Virtual Bus.

The primary purpose of the Virtual Bus is to host devices and to hook
those devices up to drivers.  This allows drivers, like the RDMA
drivers, to hook up to devices via this Virtual Bus.
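
As a rough sketch of the device side, using the interface added in
patch 01 (the "foo" names below are placeholders, not part of the
series):

	struct foo_object {
		struct virtbus_device vdev;
		/* device data shared with the matched driver */
	};

	static void foo_release(struct virtbus_device *vdev)
	{
		kfree(container_of(vdev, struct foo_object, vdev));
	}

	static int foo_add_device(void)
	{
		struct foo_object *fo = kzalloc(sizeof(*fo), GFP_KERNEL);

		if (!fo)
			return -ENOMEM;
		fo->vdev.match_name = "foo";
		fo->vdev.release = foo_release;
		/* on failure, the bus core calls .release() via put_device() */
		return virtbus_register_device(&fo->vdev);
	}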

The associated irdma driver, which is designed to use this new
interface, is still an RFC and was sent in a separate series.  The
latest RFC series follows this one, named "Intel RDMA Driver
Updates 2020-05-19".

This series currently builds against the net-next tree.

Revision history:
v2: Made changes based on community feedback, such as Pierre-Louis's
    and Jason's comments, to update the virtual bus interface.
v3: Updated the virtual bus interface based on feedback from Jason and
    Greg KH.  Also updated the initial ice driver patch to handle the
    virtual bus changes and changes requested by Jason and Greg KH.
v4: Updated the kernel documentation based on feedback from Greg KH.
    Also added PM interface updates to satisfy the sound driver
    requirements.  Added the sound driver changes that make use of the
    virtual bus.

The following are changes since commit 2de499258659823b3c7207c5bda089c822b67d69:
  Merge branch 's390-next'
and are available in the git repository at:
  git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue 100GbE

Dave Ertman (7):
  Implementation of Virtual Bus
  ice: Create and register virtual bus for RDMA
  ice: Complete RDMA peer registration
  ice: Support resource allocation requests
  ice: Enable event notifications
  ice: Allow reset operations
  ice: Pass through communications to VF

Ranjani Sridharan (3):
  ASoC: SOF: Introduce descriptors for SOF client
  ASoC: SOF: Create client driver for IPC test
  ASoC: SOF: ops: Add new op for client registration

Shiraz Saleem (2):
  i40e: Move client header location
  i40e: Register a virtbus device to provide RDMA

 Documentation/driver-api/index.rst            |    1 +
 Documentation/driver-api/virtual_bus.rst      |   93 ++
 MAINTAINERS                                   |    1 +
 drivers/bus/Kconfig                           |   10 +
 drivers/bus/Makefile                          |    2 +
 drivers/bus/virtual_bus.c                     |  215 +++
 drivers/infiniband/hw/i40iw/Makefile          |    1 -
 drivers/infiniband/hw/i40iw/i40iw.h           |    2 +-
 drivers/net/ethernet/intel/Kconfig            |    2 +
 drivers/net/ethernet/intel/i40e/i40e.h        |    2 +-
 drivers/net/ethernet/intel/i40e/i40e_client.c |  133 +-
 drivers/net/ethernet/intel/ice/Makefile       |    1 +
 drivers/net/ethernet/intel/ice/ice.h          |   14 +
 .../net/ethernet/intel/ice/ice_adminq_cmd.h   |   33 +
 drivers/net/ethernet/intel/ice/ice_common.c   |  206 ++-
 drivers/net/ethernet/intel/ice/ice_common.h   |    9 +
 drivers/net/ethernet/intel/ice/ice_dcb_lib.c  |   68 +
 drivers/net/ethernet/intel/ice/ice_dcb_lib.h  |    3 +
 .../net/ethernet/intel/ice/ice_hw_autogen.h   |    1 +
 drivers/net/ethernet/intel/ice/ice_idc.c      | 1344 +++++++++++++++++
 drivers/net/ethernet/intel/ice/ice_idc_int.h  |  105 ++
 drivers/net/ethernet/intel/ice/ice_lib.c      |   50 +
 drivers/net/ethernet/intel/ice/ice_lib.h      |    4 +
 drivers/net/ethernet/intel/ice/ice_main.c     |  105 +-
 drivers/net/ethernet/intel/ice/ice_sched.c    |   69 +-
 drivers/net/ethernet/intel/ice/ice_switch.c   |   27 +
 drivers/net/ethernet/intel/ice/ice_switch.h   |    4 +
 drivers/net/ethernet/intel/ice/ice_type.h     |    4 +
 .../net/ethernet/intel/ice/ice_virtchnl_pf.c  |   59 +-
 include/linux/mod_devicetable.h               |    8 +
 .../linux/net/intel}/i40e_client.h            |   15 +
 include/linux/net/intel/iidc.h                |  337 +++++
 include/linux/virtual_bus.h                   |   62 +
 scripts/mod/devicetable-offsets.c             |    3 +
 scripts/mod/file2alias.c                      |    7 +
 sound/soc/sof/Kconfig                         |   30 +
 sound/soc/sof/Makefile                        |    6 +-
 sound/soc/sof/core.c                          |   10 +
 sound/soc/sof/intel/Kconfig                   |    1 +
 sound/soc/sof/intel/apl.c                     |   26 +
 sound/soc/sof/intel/bdw.c                     |   25 +
 sound/soc/sof/intel/byt.c                     |   28 +
 sound/soc/sof/intel/cnl.c                     |   26 +
 sound/soc/sof/ops.h                           |   34 +
 sound/soc/sof/sof-client.c                    |   91 ++
 sound/soc/sof/sof-client.h                    |   84 ++
 sound/soc/sof/sof-ipc-test-client.c           |  325 ++++
 sound/soc/sof/sof-priv.h                      |    9 +
 48 files changed, 3630 insertions(+), 65 deletions(-)
 create mode 100644 Documentation/driver-api/virtual_bus.rst
 create mode 100644 drivers/bus/virtual_bus.c
 create mode 100644 drivers/net/ethernet/intel/ice/ice_idc.c
 create mode 100644 drivers/net/ethernet/intel/ice/ice_idc_int.h
 rename {drivers/net/ethernet/intel/i40e => include/linux/net/intel}/i40e_client.h (94%)
 create mode 100644 include/linux/net/intel/iidc.h
 create mode 100644 include/linux/virtual_bus.h
 create mode 100644 sound/soc/sof/sof-client.c
 create mode 100644 sound/soc/sof/sof-client.h
 create mode 100644 sound/soc/sof/sof-ipc-test-client.c

-- 
2.26.2



* [net-next v4 01/12] Implementation of Virtual Bus
From: Jeff Kirsher @ 2020-05-20  7:02 UTC (permalink / raw)
  To: davem, gregkh
  Cc: Dave Ertman, netdev, linux-rdma, nhorman, sassmann, jgg, parav,
	galpress, selvin.xavier, sriharsha.basavapatna, benve, bharat,
	xavier.huwei, yishaih, leonro, mkalderon, aditr,
	ranjani.sridharan, pierre-louis.bossart, Kiran Patil,
	Andrew Bowers, Jeff Kirsher

From: Dave Ertman <david.m.ertman@intel.com>

This is the initial implementation of the Virtual Bus,
virtbus_device and virtbus_driver.  The Virtual Bus is a
software-based bus intended to support the registration of
virtbus_devices and virtbus_drivers, provide matching
between them, and probe the registered drivers.

The bus supports probe/remove, shutdown and
suspend/resume callbacks.
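
For illustration, a minimal driver on this bus would look roughly
like the following sketch (the "foo" names are placeholders):

	static const struct virtbus_dev_id foo_id_table[] = {
		{ .name = "foo" },
		{},
	};

	static struct virtbus_driver foo_vdrv = {
		.probe = foo_probe,
		.remove = foo_remove,
		.shutdown = foo_shutdown,
		.id_table = foo_id_table,
		.driver = {
			.name = "foo-virtbus-drv",
		},
	};

	/* expands to __virtbus_register_driver(&foo_vdrv, THIS_MODULE) */
	virtbus_register_driver(&foo_vdrv);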

Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
Signed-off-by: Kiran Patil <kiran.patil@intel.com>
Reviewed-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
 Documentation/driver-api/index.rst       |   1 +
 Documentation/driver-api/virtual_bus.rst |  93 ++++++++++
 drivers/bus/Kconfig                      |  10 ++
 drivers/bus/Makefile                     |   2 +
 drivers/bus/virtual_bus.c                | 215 +++++++++++++++++++++++
 include/linux/mod_devicetable.h          |   8 +
 include/linux/virtual_bus.h              |  62 +++++++
 scripts/mod/devicetable-offsets.c        |   3 +
 scripts/mod/file2alias.c                 |   7 +
 9 files changed, 401 insertions(+)
 create mode 100644 Documentation/driver-api/virtual_bus.rst
 create mode 100644 drivers/bus/virtual_bus.c
 create mode 100644 include/linux/virtual_bus.h

diff --git a/Documentation/driver-api/index.rst b/Documentation/driver-api/index.rst
index d4e78cb3ef4d..4e628a6b8408 100644
--- a/Documentation/driver-api/index.rst
+++ b/Documentation/driver-api/index.rst
@@ -101,6 +101,7 @@ available subsections can be seen below.
    sync_file
    vfio-mediated-device
    vfio
+   virtual_bus
    xilinx/index
    xillybus
    zorro
diff --git a/Documentation/driver-api/virtual_bus.rst b/Documentation/driver-api/virtual_bus.rst
new file mode 100644
index 000000000000..c01fb2f079d5
--- /dev/null
+++ b/Documentation/driver-api/virtual_bus.rst
@@ -0,0 +1,93 @@
+===============================
+Virtual Bus Devices and Drivers
+===============================
+
+See <linux/virtual_bus.h> for the models for virtbus_device and virtbus_driver.
+
+This bus is meant to be a minimalist software-based bus for
+connecting devices (which may not physically exist) so that they can
+communicate with each other.
+
+
+Memory Allocation Lifespan and Model
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The memory for a virtbus_device or virtbus_driver needs to be
+allocated before registering them on the virtual bus.
+
+The memory for the virtbus_device is expected to remain viable until
+the device's mandatory .release() callback, which is invoked when the
+device is unregistered by calling virtbus_unregister_device().
+
+Memory associated with a virtbus_driver is expected to remain viable
+until the driver's .remove() or .shutdown() callbacks are invoked
+during module insertion or removal.
+
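+For example, a virtbus_device embedded in a containing structure
+would typically be freed from its .release() callback.  A minimal
+sketch (the names here are placeholders, not part of the interface):
+
+.. code-block:: c
+
+        static void my_vdev_release(struct virtbus_device *vdev)
+        {
+                struct my_object *obj;
+
+                obj = container_of(vdev, struct my_object, vdev);
+                kfree(obj);
+        }
+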
+Device Enumeration
+~~~~~~~~~~~~~~~~~~
+
+The virtbus device is enumerated when it is attached to the bus.  The
+device is assigned a unique ID that is appended to its name, making
+it unique.  If two virtbus_devices both named "foo" are registered on
+the bus, they will be given the device names "foo.x" and "foo.y",
+where x and y are unique integers.
+
+Common Usage and Structure Design
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The virtbus_device and virtbus_driver need to have a common header
+file.
+
+In the common header file outside of the virtual_bus infrastructure,
+define struct virtbus_object:
+
+.. code-block:: c
+
+        struct virtbus_object {
+                struct virtbus_device vdev;
+                struct my_private_struct *my_stuff;
+        };
+
+When the virtbus_device vdev is passed to the virtbus_driver's probe
+callback, the driver can use container_of() to access my_stuff.
+
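+For instance, the probe callback can recover the containing object as
+follows (my_probe is an illustrative sketch, not part of the
+interface):
+
+.. code-block:: c
+
+	static int my_probe(struct virtbus_device *vdev)
+	{
+		struct virtbus_object *vo;
+
+		vo = container_of(vdev, struct virtbus_object, vdev);
+		/* vo->my_stuff is now available to the driver */
+		return 0;
+	}
+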
+An example of the driver encapsulation:
+
+.. code-block:: c
+
+	struct custom_driver {
+		struct virtbus_driver virtbus_drv;
+		const struct custom_driver_ops ops;
+	};
+
+An example of this usage would be:
+
+.. code-block:: c
+
+	struct custom_driver custom_drv = {
+		.virtbus_drv = {
+			.driver = {
+				.name = "sof-ipc-test-virtbus-drv",
+			},
+			.id_table = custom_virtbus_id_table,
+			.probe = custom_probe,
+			.remove = custom_remove,
+			.shutdown = custom_shutdown,
+		},
+		.ops = custom_ops,
+	};
+
+Mandatory Elements
+~~~~~~~~~~~~~~~~~~
+
+virtbus_device:
+
+- .release() callback must not be NULL and is expected to perform memory cleanup.
+- .match_name must be populated to be able to match with a driver.
+
+virtbus_driver:
+
+- .probe() callback must not be NULL.
+- .remove() callback must not be NULL.
+- .shutdown() callback must not be NULL.
+- .id_table must not be NULL; it is used to perform matching.
diff --git a/drivers/bus/Kconfig b/drivers/bus/Kconfig
index 6d4e4497b59b..00553c78510c 100644
--- a/drivers/bus/Kconfig
+++ b/drivers/bus/Kconfig
@@ -203,4 +203,14 @@ config DA8XX_MSTPRI
 source "drivers/bus/fsl-mc/Kconfig"
 source "drivers/bus/mhi/Kconfig"
 
+config VIRTUAL_BUS
+       tristate "Software based Virtual Bus"
+       help
+         Provides a software bus on which virtbus_devices can be added
+         and virtbus_drivers can be registered.  It matches a driver
+         and a device based on an ID and calls the driver's probe
+         routine.  One example is the irdma driver, which needs to
+         connect with various PCI LAN drivers to request resources
+         (queues) in order to perform its function.
+
 endmenu
diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
index 05f32cd694a4..d30828a4768c 100644
--- a/drivers/bus/Makefile
+++ b/drivers/bus/Makefile
@@ -37,3 +37,5 @@ obj-$(CONFIG_DA8XX_MSTPRI)	+= da8xx-mstpri.o
 
 # MHI
 obj-$(CONFIG_MHI_BUS)		+= mhi/
+
+obj-$(CONFIG_VIRTUAL_BUS)	+= virtual_bus.o
diff --git a/drivers/bus/virtual_bus.c b/drivers/bus/virtual_bus.c
new file mode 100644
index 000000000000..b70023d5b58a
--- /dev/null
+++ b/drivers/bus/virtual_bus.c
@@ -0,0 +1,215 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * virtual_bus.c - lightweight software based bus for virtual devices
+ *
+ * Copyright (c) 2019-2020 Intel Corporation
+ *
+ * Please see Documentation/driver-api/virtual_bus.rst for
+ * more information
+ */
+
+#include <linux/string.h>
+#include <linux/virtual_bus.h>
+#include <linux/of_irq.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/pm_runtime.h>
+#include <linux/pm_domain.h>
+#include <linux/acpi.h>
+#include <linux/device.h>
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("Virtual Bus");
+MODULE_AUTHOR("David Ertman <david.m.ertman@intel.com>");
+MODULE_AUTHOR("Kiran Patil <kiran.patil@intel.com>");
+
+static DEFINE_IDA(virtbus_dev_ida);
+#define VIRTBUS_INVALID_ID	0xFFFFFFFF
+
+static const
+struct virtbus_dev_id *virtbus_match_id(const struct virtbus_dev_id *id,
+					struct virtbus_device *vdev)
+{
+	while (id->name[0]) {
+		if (!strcmp(vdev->match_name, id->name))
+			return id;
+		id++;
+	}
+	return NULL;
+}
+
+static int virtbus_match(struct device *dev, struct device_driver *drv)
+{
+	struct virtbus_driver *vdrv = to_virtbus_drv(drv);
+	struct virtbus_device *vdev = to_virtbus_dev(dev);
+
+	return virtbus_match_id(vdrv->id_table, vdev) != NULL;
+}
+
+static int virtbus_uevent(struct device *dev, struct kobj_uevent_env *env)
+{
+	struct virtbus_device *vdev = to_virtbus_dev(dev);
+
+	if (add_uevent_var(env, "MODALIAS=%s%s", "virtbus:", vdev->match_name))
+		return -ENOMEM;
+
+	return 0;
+}
+
+static const struct dev_pm_ops virtbus_dev_pm_ops = {
+	SET_RUNTIME_PM_OPS(pm_generic_runtime_suspend,
+			   pm_generic_runtime_resume, NULL)
+#ifdef CONFIG_PM_SLEEP
+	SET_SYSTEM_SLEEP_PM_OPS(pm_generic_suspend, pm_generic_resume)
+#endif
+};
+
+struct bus_type virtual_bus_type = {
+	.name = "virtbus",
+	.match = virtbus_match,
+	.uevent = virtbus_uevent,
+	.pm = &virtbus_dev_pm_ops,
+};
+
+/**
+ * virtbus_release_device - Destroy a virtbus device
+ * @_dev: device to release
+ */
+static void virtbus_release_device(struct device *_dev)
+{
+	struct virtbus_device *vdev = to_virtbus_dev(_dev);
+	u32 ida = vdev->id;
+
+	vdev->release(vdev);
+	if (ida != VIRTBUS_INVALID_ID)
+		ida_simple_remove(&virtbus_dev_ida, ida);
+}
+
+/**
+ * virtbus_register_device - add a virtual bus device
+ * @vdev: virtual bus device to add
+ */
+int virtbus_register_device(struct virtbus_device *vdev)
+{
+	int ret;
+
+	if (WARN_ON(!vdev->release))
+		return -EINVAL;
+
+	/* All error paths out of this function after the device_initialize
+	 * must perform a put_device() so that the .release() callback is
+	 * called for an error condition.
+	 */
+	device_initialize(&vdev->dev);
+
+	vdev->dev.bus = &virtual_bus_type;
+	vdev->dev.release = virtbus_release_device;
+
+	/* All device IDs are automatically allocated */
+	ret = ida_simple_get(&virtbus_dev_ida, 0, 0, GFP_KERNEL);
+	if (ret < 0) {
+		vdev->id = VIRTBUS_INVALID_ID;
+		dev_err(&vdev->dev, "get IDA idx for virtbus device failed!\n");
+		goto device_add_err;
+	}
+
+	vdev->id = ret;
+
+	ret = dev_set_name(&vdev->dev, "%s.%d", vdev->match_name, vdev->id);
+	if (ret) {
+		dev_err(&vdev->dev, "dev_set_name failed for device\n");
+		goto device_add_err;
+	}
+
+	dev_dbg(&vdev->dev, "Registering virtbus device '%s'\n",
+		dev_name(&vdev->dev));
+
+	ret = device_add(&vdev->dev);
+	if (ret)
+		goto device_add_err;
+
+	return 0;
+
+device_add_err:
+	dev_err(&vdev->dev, "Add device to virtbus failed: %d\n", ret);
+	put_device(&vdev->dev);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(virtbus_register_device);
+
+static int virtbus_probe_driver(struct device *_dev)
+{
+	struct virtbus_driver *vdrv = to_virtbus_drv(_dev->driver);
+	struct virtbus_device *vdev = to_virtbus_dev(_dev);
+	int ret;
+
+	ret = dev_pm_domain_attach(_dev, true);
+	if (ret) {
+		dev_warn(_dev, "Failed to attach to PM Domain: %d\n", ret);
+		return ret;
+	}
+
+	ret = vdrv->probe(vdev);
+	if (ret) {
+		dev_err(&vdev->dev, "Probe returned error\n");
+		dev_pm_domain_detach(_dev, true);
+	}
+
+	return ret;
+}
+
+static int virtbus_remove_driver(struct device *_dev)
+{
+	struct virtbus_driver *vdrv = to_virtbus_drv(_dev->driver);
+	struct virtbus_device *vdev = to_virtbus_dev(_dev);
+	int ret;
+
+	ret = vdrv->remove(vdev);
+	dev_pm_domain_detach(_dev, true);
+
+	return ret;
+}
+
+static void virtbus_shutdown_driver(struct device *_dev)
+{
+	struct virtbus_driver *vdrv = to_virtbus_drv(_dev->driver);
+	struct virtbus_device *vdev = to_virtbus_dev(_dev);
+
+	vdrv->shutdown(vdev);
+}
+
+/**
+ * __virtbus_register_driver - register a driver for virtual bus devices
+ * @vdrv: virtbus_driver structure
+ * @owner: owning module/driver
+ */
+int __virtbus_register_driver(struct virtbus_driver *vdrv, struct module *owner)
+{
+	if (!vdrv->probe || !vdrv->remove || !vdrv->shutdown || !vdrv->id_table)
+		return -EINVAL;
+
+	vdrv->driver.owner = owner;
+	vdrv->driver.bus = &virtual_bus_type;
+	vdrv->driver.probe = virtbus_probe_driver;
+	vdrv->driver.remove = virtbus_remove_driver;
+	vdrv->driver.shutdown = virtbus_shutdown_driver;
+
+	return driver_register(&vdrv->driver);
+}
+EXPORT_SYMBOL_GPL(__virtbus_register_driver);
+
+static int __init virtual_bus_init(void)
+{
+	return bus_register(&virtual_bus_type);
+}
+
+static void __exit virtual_bus_exit(void)
+{
+	bus_unregister(&virtual_bus_type);
+	ida_destroy(&virtbus_dev_ida);
+}
+
+module_init(virtual_bus_init);
+module_exit(virtual_bus_exit);
diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h
index 4c2ddd0941a7..60bcfe75fb94 100644
--- a/include/linux/mod_devicetable.h
+++ b/include/linux/mod_devicetable.h
@@ -832,4 +832,12 @@ struct mhi_device_id {
 	kernel_ulong_t driver_data;
 };
 
+#define VIRTBUS_NAME_SIZE 20
+#define VIRTBUS_MODULE_PREFIX "virtbus:"
+
+struct virtbus_dev_id {
+	char name[VIRTBUS_NAME_SIZE];
+	kernel_ulong_t driver_data;
+};
+
 #endif /* LINUX_MOD_DEVICETABLE_H */
diff --git a/include/linux/virtual_bus.h b/include/linux/virtual_bus.h
new file mode 100644
index 000000000000..4872fd5a9218
--- /dev/null
+++ b/include/linux/virtual_bus.h
@@ -0,0 +1,62 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * virtual_bus.h - lightweight software bus
+ *
+ * Copyright (c) 2019-2020 Intel Corporation
+ *
+ * Please see Documentation/driver-api/virtual_bus.rst for more information
+ */
+
+#ifndef _VIRTUAL_BUS_H_
+#define _VIRTUAL_BUS_H_
+
+#include <linux/device.h>
+
+struct virtbus_device {
+	struct device dev;
+	const char *match_name;
+	void (*release)(struct virtbus_device *);
+	u32 id;
+};
+
+struct virtbus_driver {
+	int (*probe)(struct virtbus_device *);
+	int (*remove)(struct virtbus_device *);
+	void (*shutdown)(struct virtbus_device *);
+	int (*suspend)(struct virtbus_device *, pm_message_t);
+	int (*resume)(struct virtbus_device *);
+	struct device_driver driver;
+	const struct virtbus_dev_id *id_table;
+};
+
+static inline
+struct virtbus_device *to_virtbus_dev(struct device *dev)
+{
+	return container_of(dev, struct virtbus_device, dev);
+}
+
+static inline
+struct virtbus_driver *to_virtbus_drv(struct device_driver *drv)
+{
+	return container_of(drv, struct virtbus_driver, driver);
+}
+
+int virtbus_register_device(struct virtbus_device *vdev);
+
+int
+__virtbus_register_driver(struct virtbus_driver *vdrv, struct module *owner);
+
+#define virtbus_register_driver(vdrv) \
+	__virtbus_register_driver(vdrv, THIS_MODULE)
+
+static inline void virtbus_unregister_device(struct virtbus_device *vdev)
+{
+	device_unregister(&vdev->dev);
+}
+
+static inline void virtbus_unregister_driver(struct virtbus_driver *vdrv)
+{
+	driver_unregister(&vdrv->driver);
+}
+
+#endif /* _VIRTUAL_BUS_H_ */
diff --git a/scripts/mod/devicetable-offsets.c b/scripts/mod/devicetable-offsets.c
index 010be8ba2116..0c8e0e3a7c84 100644
--- a/scripts/mod/devicetable-offsets.c
+++ b/scripts/mod/devicetable-offsets.c
@@ -241,5 +241,8 @@ int main(void)
 	DEVID(mhi_device_id);
 	DEVID_FIELD(mhi_device_id, chan);
 
+	DEVID(virtbus_dev_id);
+	DEVID_FIELD(virtbus_dev_id, name);
+
 	return 0;
 }
diff --git a/scripts/mod/file2alias.c b/scripts/mod/file2alias.c
index 02d5d79da284..7d78fa3fba34 100644
--- a/scripts/mod/file2alias.c
+++ b/scripts/mod/file2alias.c
@@ -1358,7 +1358,13 @@ static int do_mhi_entry(const char *filename, void *symval, char *alias)
 {
 	DEF_FIELD_ADDR(symval, mhi_device_id, chan);
 	sprintf(alias, MHI_DEVICE_MODALIAS_FMT, *chan);
+	return 1;
+}
 
+static int do_virtbus_entry(const char *filename, void *symval, char *alias)
+{
+	DEF_FIELD_ADDR(symval, virtbus_dev_id, name);
+	sprintf(alias, VIRTBUS_MODULE_PREFIX "%s", *name);
 	return 1;
 }
 
@@ -1436,6 +1442,7 @@ static const struct devtable devtable[] = {
 	{"tee", SIZE_tee_client_device_id, do_tee_entry},
 	{"wmi", SIZE_wmi_device_id, do_wmi_entry},
 	{"mhi", SIZE_mhi_device_id, do_mhi_entry},
+	{"virtbus", SIZE_virtbus_dev_id, do_virtbus_entry},
 };
 
 /* Create MODULE_ALIAS() statements.
-- 
2.26.2



* [net-next v4 02/12] ice: Create and register virtual bus for RDMA
From: Jeff Kirsher @ 2020-05-20  7:02 UTC (permalink / raw)
  To: davem, gregkh
  Cc: Dave Ertman, netdev, linux-rdma, nhorman, sassmann, jgg,
	ranjani.sridharan, pierre-louis.bossart, Tony Nguyen,
	Andrew Bowers, Jeff Kirsher

From: Dave Ertman <david.m.ertman@intel.com>

The RDMA block does not have its own PCI function; instead it must
utilize the ice driver to gain access to the PCI device. Create a
virtual bus device so that the irdma driver can register a virtual bus
driver to bind to it and receive device data. The device data contains
all of the relevant information that the irdma peer will need to
access this PF's IIDC API callbacks.

Note that the header file iidc.h is located under include/linux/net/intel,
as it is a unified header to be used by all consumers of the IIDC
interface.
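
For context, a bound irdma driver would reach this device data from
its probe callback roughly as follows (irdma_probe here is a sketch,
not the actual irdma code, which is still in RFC):

	static int irdma_probe(struct virtbus_device *vdev)
	{
		struct iidc_virtbus_object *vbo;
		struct iidc_peer_dev *peer_dev;

		vbo = container_of(vdev, struct iidc_virtbus_object, vdev);
		peer_dev = vbo->peer_dev;
		/* peer_dev->msix_entries, ->netdev, ->initial_qos_info,
		 * etc. are now available to the RDMA driver
		 */
		return 0;
	}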

Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
 MAINTAINERS                                   |   1 +
 drivers/net/ethernet/intel/Kconfig            |   1 +
 drivers/net/ethernet/intel/ice/Makefile       |   1 +
 drivers/net/ethernet/intel/ice/ice.h          |  12 +
 .../net/ethernet/intel/ice/ice_adminq_cmd.h   |   1 +
 drivers/net/ethernet/intel/ice/ice_common.c   |  18 +-
 drivers/net/ethernet/intel/ice/ice_dcb_lib.c  |  31 ++
 drivers/net/ethernet/intel/ice/ice_dcb_lib.h  |   3 +
 .../net/ethernet/intel/ice/ice_hw_autogen.h   |   1 +
 drivers/net/ethernet/intel/ice/ice_idc.c      | 417 ++++++++++++++++++
 drivers/net/ethernet/intel/ice/ice_idc_int.h  |  67 +++
 drivers/net/ethernet/intel/ice/ice_lib.c      |  11 +
 drivers/net/ethernet/intel/ice/ice_lib.h      |   2 +
 drivers/net/ethernet/intel/ice/ice_main.c     |  57 ++-
 drivers/net/ethernet/intel/ice/ice_type.h     |   1 +
 include/linux/net/intel/iidc.h                | 337 ++++++++++++++
 16 files changed, 958 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/ethernet/intel/ice/ice_idc.c
 create mode 100644 drivers/net/ethernet/intel/ice/ice_idc_int.h
 create mode 100644 include/linux/net/intel/iidc.h

diff --git a/MAINTAINERS b/MAINTAINERS
index b7844f6cfa4a..853d6bba8d78 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8546,6 +8546,7 @@ F:	Documentation/networking/device_drivers/intel/ixgbevf.rst
 F:	drivers/net/ethernet/intel/
 F:	drivers/net/ethernet/intel/*/
 F:	include/linux/avf/virtchnl.h
+F:	include/linux/net/intel/iidc.h
 
 INTEL FRAMEBUFFER DRIVER (excluding 810 and 815)
 M:	Maik Broemme <mbroemme@libmpq.org>
diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig
index ad34e4335df2..814d6dcf8137 100644
--- a/drivers/net/ethernet/intel/Kconfig
+++ b/drivers/net/ethernet/intel/Kconfig
@@ -295,6 +295,7 @@ config ICE
 	default n
 	depends on PCI_MSI
 	select NET_DEVLINK
+	select VIRTUAL_BUS
 	---help---
 	  This driver supports Intel(R) Ethernet Connection E800 Series of
 	  devices.  For more information on how to identify your adapter, go
diff --git a/drivers/net/ethernet/intel/ice/Makefile b/drivers/net/ethernet/intel/ice/Makefile
index 29c6c6743450..73909045da1c 100644
--- a/drivers/net/ethernet/intel/ice/Makefile
+++ b/drivers/net/ethernet/intel/ice/Makefile
@@ -20,6 +20,7 @@ ice-y := ice_main.o	\
 	 ice_flex_pipe.o \
 	 ice_flow.o	\
 	 ice_devlink.o	\
+	 ice_idc.o	\
 	 ice_ethtool.o
 ice-$(CONFIG_PCI_IOV) += ice_virtchnl_pf.o ice_sriov.o
 ice-$(CONFIG_DCB) += ice_dcb.o ice_dcb_nl.o ice_dcb_lib.o
diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index 5c11448bfbb3..73366009ef03 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -44,6 +44,7 @@
 #include "ice_switch.h"
 #include "ice_common.h"
 #include "ice_sched.h"
+#include "ice_idc_int.h"
 #include "ice_virtchnl_pf.h"
 #include "ice_sriov.h"
 #include "ice_xsk.h"
@@ -72,6 +73,8 @@ extern const char ice_drv_ver[];
 #define ICE_MAX_LG_RSS_QS	256
 #define ICE_RES_VALID_BIT	0x8000
 #define ICE_RES_MISC_VEC_ID	(ICE_RES_VALID_BIT - 1)
+#define ICE_RDMA_NUM_VECS	4
+#define ICE_RES_RDMA_VEC_ID	(ICE_RES_MISC_VEC_ID - 1)
 #define ICE_INVAL_Q_INDEX	0xffff
 #define ICE_INVAL_VFID		256
 
@@ -330,11 +333,13 @@ struct ice_q_vector {
 
 enum ice_pf_flags {
 	ICE_FLAG_FLTR_SYNC,
+	ICE_FLAG_IWARP_ENA,
 	ICE_FLAG_RSS_ENA,
 	ICE_FLAG_SRIOV_ENA,
 	ICE_FLAG_SRIOV_CAPABLE,
 	ICE_FLAG_DCB_CAPABLE,
 	ICE_FLAG_DCB_ENA,
+	ICE_FLAG_PEER_ENA,
 	ICE_FLAG_ADV_FEATURES,
 	ICE_FLAG_LINK_DOWN_ON_CLOSE_ENA,
 	ICE_FLAG_NO_MEDIA,
@@ -384,6 +389,8 @@ struct ice_pf {
 	struct mutex sw_mutex;		/* lock for protecting VSI alloc flow */
 	struct mutex tc_mutex;		/* lock to protect TC changes */
 	u32 msg_enable;
+	u32 num_rdma_msix;	/* Total MSIX vectors for RDMA driver */
+	u32 rdma_base_vector;
 	u32 hw_csum_rx_error;
 	u32 oicr_idx;		/* Other interrupt cause MSIX vector index */
 	u32 num_avail_sw_msix;	/* remaining MSIX SW vectors left unclaimed */
@@ -410,6 +417,7 @@ struct ice_pf {
 	unsigned long tx_timeout_last_recovery;
 	u32 tx_timeout_recovery_level;
 	char int_name[ICE_INT_NAME_STR_LEN];
+	struct ice_peer_dev_int **peers;
 	u32 sw_int_count;
 };
 
@@ -523,6 +531,10 @@ int ice_get_rss(struct ice_vsi *vsi, u8 *seed, u8 *lut, u16 lut_size);
 void ice_fill_rss_lut(u8 *lut, u16 rss_table_size, u16 rss_size);
 int ice_schedule_reset(struct ice_pf *pf, enum ice_reset_req reset);
 void ice_print_link_msg(struct ice_vsi *vsi, bool isup);
+int ice_init_peer_devices(struct ice_pf *pf);
+int
+ice_for_each_peer(struct ice_pf *pf, void *data,
+		  int (*fn)(struct ice_peer_dev_int *, void *));
 int ice_open(struct net_device *netdev);
 int ice_stop(struct net_device *netdev);
 
diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
index 2381b4014ed6..51baab0621a2 100644
--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
@@ -108,6 +108,7 @@ struct ice_aqc_list_caps_elem {
 #define ICE_AQC_CAPS_TXQS				0x0042
 #define ICE_AQC_CAPS_MSIX				0x0043
 #define ICE_AQC_CAPS_MAX_MTU				0x0047
+#define ICE_AQC_CAPS_IWARP				0x0051
 
 	u8 major_ver;
 	u8 minor_ver;
diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index 2c0d8fd3d5cd..2dca49aed5bb 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -825,7 +825,8 @@ enum ice_status ice_check_reset(struct ice_hw *hw)
 				 GLNVM_ULD_POR_DONE_1_M |\
 				 GLNVM_ULD_PCIER_DONE_2_M)
 
-	uld_mask = ICE_RESET_DONE_MASK;
+	uld_mask = ICE_RESET_DONE_MASK | (hw->func_caps.common_cap.iwarp ?
+					  GLNVM_ULD_PE_DONE_M : 0);
 
 	/* Device is Active; check Global Reset processes are done */
 	for (cnt = 0; cnt < ICE_PF_RESET_WAIT_COUNT; cnt++) {
@@ -1678,6 +1679,11 @@ ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
 				  "%s: msix_vector_first_id = %d\n", prefix,
 				  caps->msix_vector_first_id);
 			break;
+		case ICE_AQC_CAPS_IWARP:
+			caps->iwarp = (number == 1);
+			ice_debug(hw, ICE_DBG_INIT,
+				  "%s: iwarp = %d\n", prefix, caps->iwarp);
+			break;
 		case ICE_AQC_CAPS_MAX_MTU:
 			caps->max_mtu = number;
 			ice_debug(hw, ICE_DBG_INIT, "%s: max_mtu = %d\n",
@@ -1701,6 +1707,16 @@ ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
 		ice_debug(hw, ICE_DBG_INIT,
 			  "%s: maxtc = %d (based on #ports)\n", prefix,
 			  caps->maxtc);
+		if (caps->iwarp) {
+			ice_debug(hw, ICE_DBG_INIT, "%s: forcing RDMA off\n",
+				  prefix);
+			caps->iwarp = 0;
+		}
+
+		/* print message only when processing device capabilities */
+		if (dev_p)
+			dev_info(ice_hw_to_dev(hw),
+				 "RDMA functionality is not available with the current device configuration.\n");
 	}
 }
 
diff --git a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
index 7bea09363b42..24c0a60fe172 100644
--- a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
@@ -763,6 +763,37 @@ ice_tx_prepare_vlan_flags_dcb(struct ice_ring *tx_ring,
 	return 0;
 }
 
+/**
+ * ice_setup_dcb_qos_info - Setup DCB QoS information
+ * @pf: ptr to ice_pf
+ * @qos_info: QoS param instance
+ */
+void ice_setup_dcb_qos_info(struct ice_pf *pf, struct iidc_qos_params *qos_info)
+{
+	struct ice_dcbx_cfg *dcbx_cfg;
+	u32 up2tc;
+	int i;
+
+	dcbx_cfg = &pf->hw.port_info->local_dcbx_cfg;
+	up2tc = rd32(&pf->hw, PRTDCB_TUP2TC);
+	qos_info->num_apps = dcbx_cfg->numapps;
+
+	qos_info->num_tc = ice_dcb_get_num_tc(dcbx_cfg);
+
+	for (i = 0; i < IIDC_MAX_USER_PRIORITY; i++)
+		qos_info->up2tc[i] = (up2tc >> (i * 3)) & 0x7;
+
+	for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++)
+		qos_info->tc_info[i].rel_bw =
+			dcbx_cfg->etscfg.tcbwtable[i];
+
+	for (i = 0; i < qos_info->num_apps; i++) {
+		qos_info->apps[i].priority = dcbx_cfg->app[i].priority;
+		qos_info->apps[i].prot_id = dcbx_cfg->app[i].prot_id;
+		qos_info->apps[i].selector = dcbx_cfg->app[i].selector;
+	}
+}
+
 /**
  * ice_dcb_process_lldp_set_mib_change - Process MIB change
  * @pf: ptr to ice_pf
diff --git a/drivers/net/ethernet/intel/ice/ice_dcb_lib.h b/drivers/net/ethernet/intel/ice/ice_dcb_lib.h
index 37680e815b02..11457b6ba145 100644
--- a/drivers/net/ethernet/intel/ice/ice_dcb_lib.h
+++ b/drivers/net/ethernet/intel/ice/ice_dcb_lib.h
@@ -29,6 +29,8 @@ int
 ice_tx_prepare_vlan_flags_dcb(struct ice_ring *tx_ring,
 			      struct ice_tx_buf *first);
 void
+ice_setup_dcb_qos_info(struct ice_pf *pf, struct iidc_qos_params *qos_info);
+void
 ice_dcb_process_lldp_set_mib_change(struct ice_pf *pf,
 				    struct ice_rq_event_info *event);
 void ice_vsi_cfg_netdev_tc(struct ice_vsi *vsi, u8 ena_tc);
@@ -82,6 +84,7 @@ ice_tx_prepare_vlan_flags_dcb(struct ice_ring __always_unused *tx_ring,
 #define ice_update_dcb_stats(pf) do {} while (0)
 #define ice_pf_dcb_recfg(pf) do {} while (0)
 #define ice_vsi_cfg_dcb_rings(vsi) do {} while (0)
+#define ice_setup_dcb_qos_info(pf, qos_info) do {} while (0)
 #define ice_dcb_process_lldp_set_mib_change(pf, event) do {} while (0)
 #define ice_set_cgd_num(tlan_ctx, ring) do {} while (0)
 #define ice_vsi_cfg_netdev_tc(vsi, ena_tc) do {} while (0)
diff --git a/drivers/net/ethernet/intel/ice/ice_hw_autogen.h b/drivers/net/ethernet/intel/ice/ice_hw_autogen.h
index 1d37a9f02c1c..3f40736a8295 100644
--- a/drivers/net/ethernet/intel/ice/ice_hw_autogen.h
+++ b/drivers/net/ethernet/intel/ice/ice_hw_autogen.h
@@ -58,6 +58,7 @@
 #define PRTDCB_GENS				0x00083020
 #define PRTDCB_GENS_DCBX_STATUS_S		0
 #define PRTDCB_GENS_DCBX_STATUS_M		ICE_M(0x7, 0)
+#define PRTDCB_TUP2TC				0x001D26C0
 #define GL_PREEXT_L2_PMASK0(_i)			(0x0020F0FC + ((_i) * 4))
 #define GL_PREEXT_L2_PMASK1(_i)			(0x0020F108 + ((_i) * 4))
 #define GLFLXP_RXDID_FLX_WRD_0(_i)		(0x0045c800 + ((_i) * 4))
diff --git a/drivers/net/ethernet/intel/ice/ice_idc.c b/drivers/net/ethernet/intel/ice/ice_idc.c
new file mode 100644
index 000000000000..68d6b524d6d4
--- /dev/null
+++ b/drivers/net/ethernet/intel/ice/ice_idc.c
@@ -0,0 +1,417 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2019, Intel Corporation. */
+
+/* Inter-Driver Communication */
+#include <linux/virtual_bus.h>
+#include "ice.h"
+#include "ice_lib.h"
+#include "ice_dcb_lib.h"
+
+static struct peer_dev_id ice_peers[] = ASSIGN_PEER_INFO;
+
+/**
+ * ice_peer_state_change - manage state machine for peer
+ * @peer_dev: pointer to peer's configuration
+ * @new_state: the state requested to transition into
+ * @locked: boolean to determine if call made with mutex held
+ *
+ * Any function that calls this is responsible for verifying that
+ * the peer_dev_int struct is valid and capable of handling a
+ * state change
+ *
+ * This function handles all state transitions for peer devices.
+ * The state machine is as follows:
+ *
+ *     +<-----------------------+<-----------------------------+
+ *				|<-------+<----------+	       +
+ *				\/	 +	     +	       +
+ *    INIT  --------------> PROBED --> OPENING	  CLOSED --> REMOVED
+ *					 +           +
+ *				       OPENED --> CLOSING
+ *					 +	     +
+ *				       PREP_RST	     +
+ *					 +	     +
+ *				      PREPPED	     +
+ *					 +---------->+
+ */
+static void
+ice_peer_state_change(struct ice_peer_dev_int *peer_dev, long new_state,
+		      bool locked)
+{
+	struct device *dev = &peer_dev->peer_dev.vdev->dev;
+
+	if (!locked)
+		mutex_lock(&peer_dev->peer_dev_state_mutex);
+
+	switch (new_state) {
+	case ICE_PEER_DEV_STATE_INIT:
+		if (test_and_clear_bit(ICE_PEER_DEV_STATE_REMOVED,
+				       peer_dev->state)) {
+			set_bit(ICE_PEER_DEV_STATE_INIT, peer_dev->state);
+			dev_dbg(dev, "state transition from _REMOVED to _INIT\n");
+		} else {
+			set_bit(ICE_PEER_DEV_STATE_INIT, peer_dev->state);
+			dev_dbg(dev, "state set to _INIT\n");
+		}
+		break;
+	case ICE_PEER_DEV_STATE_PROBED:
+		if (test_and_clear_bit(ICE_PEER_DEV_STATE_INIT,
+				       peer_dev->state)) {
+			set_bit(ICE_PEER_DEV_STATE_PROBED, peer_dev->state);
+			dev_dbg(dev, "state transition from _INIT to _PROBED\n");
+		} else if (test_and_clear_bit(ICE_PEER_DEV_STATE_REMOVED,
+					      peer_dev->state)) {
+			set_bit(ICE_PEER_DEV_STATE_PROBED, peer_dev->state);
+			dev_dbg(dev, "state transition from _REMOVED to _PROBED\n");
+		} else if (test_and_clear_bit(ICE_PEER_DEV_STATE_OPENING,
+					      peer_dev->state)) {
+			set_bit(ICE_PEER_DEV_STATE_PROBED, peer_dev->state);
+			dev_dbg(dev, "state transition from _OPENING to _PROBED\n");
+		}
+		break;
+	case ICE_PEER_DEV_STATE_OPENING:
+		if (test_and_clear_bit(ICE_PEER_DEV_STATE_PROBED,
+				       peer_dev->state)) {
+			set_bit(ICE_PEER_DEV_STATE_OPENING, peer_dev->state);
+			dev_dbg(dev, "state transition from _PROBED to _OPENING\n");
+		} else if (test_and_clear_bit(ICE_PEER_DEV_STATE_CLOSED,
+					      peer_dev->state)) {
+			set_bit(ICE_PEER_DEV_STATE_OPENING, peer_dev->state);
+			dev_dbg(dev, "state transition from _CLOSED to _OPENING\n");
+		}
+		break;
+	case ICE_PEER_DEV_STATE_OPENED:
+		if (test_and_clear_bit(ICE_PEER_DEV_STATE_OPENING,
+				       peer_dev->state)) {
+			set_bit(ICE_PEER_DEV_STATE_OPENED, peer_dev->state);
+			dev_dbg(dev, "state transition from _OPENING to _OPENED\n");
+		}
+		break;
+	case ICE_PEER_DEV_STATE_PREP_RST:
+		if (test_and_clear_bit(ICE_PEER_DEV_STATE_OPENED,
+				       peer_dev->state)) {
+			set_bit(ICE_PEER_DEV_STATE_PREP_RST, peer_dev->state);
+			dev_dbg(dev, "state transition from _OPENED to _PREP_RST\n");
+		}
+		break;
+	case ICE_PEER_DEV_STATE_PREPPED:
+		if (test_and_clear_bit(ICE_PEER_DEV_STATE_PREP_RST,
+				       peer_dev->state)) {
+			set_bit(ICE_PEER_DEV_STATE_PREPPED, peer_dev->state);
+			dev_dbg(dev, "state transition _PREP_RST to _PREPPED\n");
+		}
+		break;
+	case ICE_PEER_DEV_STATE_CLOSING:
+		if (test_and_clear_bit(ICE_PEER_DEV_STATE_OPENED,
+				       peer_dev->state)) {
+			set_bit(ICE_PEER_DEV_STATE_CLOSING, peer_dev->state);
+			dev_dbg(dev, "state transition from _OPENED to _CLOSING\n");
+		}
+		if (test_and_clear_bit(ICE_PEER_DEV_STATE_PREPPED,
+				       peer_dev->state)) {
+			set_bit(ICE_PEER_DEV_STATE_CLOSING, peer_dev->state);
+			dev_dbg(dev, "state transition _PREPPED to _CLOSING\n");
+		}
+		/* NOTE - up to peer to handle this situation correctly */
+		if (test_and_clear_bit(ICE_PEER_DEV_STATE_PREP_RST,
+				       peer_dev->state)) {
+			set_bit(ICE_PEER_DEV_STATE_CLOSING, peer_dev->state);
+			dev_warn(dev, "WARN: Peer state PREP_RST to _CLOSING\n");
+		}
+		break;
+	case ICE_PEER_DEV_STATE_CLOSED:
+		if (test_and_clear_bit(ICE_PEER_DEV_STATE_CLOSING,
+				       peer_dev->state)) {
+			set_bit(ICE_PEER_DEV_STATE_CLOSED, peer_dev->state);
+			dev_dbg(dev, "state transition from _CLOSING to _CLOSED\n");
+		}
+		break;
+	case ICE_PEER_DEV_STATE_REMOVED:
+		if (test_and_clear_bit(ICE_PEER_DEV_STATE_OPENED,
+				       peer_dev->state) ||
+		    test_and_clear_bit(ICE_PEER_DEV_STATE_CLOSED,
+				       peer_dev->state)) {
+			set_bit(ICE_PEER_DEV_STATE_REMOVED, peer_dev->state);
+			dev_dbg(dev, "state from _OPENED/_CLOSED to _REMOVED\n");
+			/* Clear registration for events when peer removed */
+			bitmap_zero(peer_dev->events, ICE_PEER_DEV_STATE_NBITS);
+		}
+		break;
+	default:
+		break;
+	}
+
+	if (!locked)
+		mutex_unlock(&peer_dev->peer_dev_state_mutex);
+}
+
+/**
+ * ice_for_each_peer - iterate across and call function for each peer dev
+ * @pf: pointer to private board struct
+ * @data: data to pass to function on each call
+ * @fn: pointer to function to call for each peer
+ */
+int
+ice_for_each_peer(struct ice_pf *pf, void *data,
+		  int (*fn)(struct ice_peer_dev_int *, void *))
+{
+	unsigned int i;
+
+	if (!pf->peers)
+		return 0;
+
+	for (i = 0; i < ARRAY_SIZE(ice_peers); i++) {
+		struct ice_peer_dev_int *peer_dev_int;
+
+		peer_dev_int = pf->peers[i];
+		if (peer_dev_int) {
+			int ret = fn(peer_dev_int, data);
+
+			if (ret)
+				return ret;
+		}
+	}
+
+	return 0;
+}
+
+/**
+ * ice_unreg_peer_device - unregister specified device
+ * @peer_dev_int: ptr to peer device internal
+ * @data: ptr to opaque data
+ *
+ * This function invokes device unregistration, removes ID associated with
+ * the specified device.
+ */
+int
+ice_unreg_peer_device(struct ice_peer_dev_int *peer_dev_int,
+		      void __always_unused *data)
+{
+	struct ice_peer_drv_int *peer_drv_int;
+
+	if (!peer_dev_int)
+		return 0;
+
+	virtbus_unregister_device(peer_dev_int->peer_dev.vdev);
+
+	peer_drv_int = peer_dev_int->peer_drv_int;
+
+	if (peer_dev_int->ice_peer_wq) {
+		if (peer_dev_int->peer_prep_task.func)
+			cancel_work_sync(&peer_dev_int->peer_prep_task);
+		destroy_workqueue(peer_dev_int->ice_peer_wq);
+	}
+
+	kfree(peer_drv_int);
+
+	kfree(peer_dev_int);
+
+	return 0;
+}
+
+/**
+ * ice_unroll_peer - destroy peers and peer_wq in case of error
+ * @peer_dev_int: ptr to peer device internal struct
+ * @data: ptr to opaque data
+ *
+ * This function releases resources in the event of a failure in creating
+ * peer devices or their individual work_queues. Meant to be called from
+ * a ice_for_each_peer invocation
+ */
+int
+ice_unroll_peer(struct ice_peer_dev_int *peer_dev_int,
+		void __always_unused *data)
+{
+	if (peer_dev_int->ice_peer_wq)
+		destroy_workqueue(peer_dev_int->ice_peer_wq);
+	kfree(peer_dev_int);
+
+	return 0;
+}
+
+/**
+ * ice_reserve_peer_qvector - Reserve vector resources for peer drivers
+ * @pf: board private structure to initialize
+ */
+static int ice_reserve_peer_qvector(struct ice_pf *pf)
+{
+	if (test_bit(ICE_FLAG_IWARP_ENA, pf->flags)) {
+		int index;
+
+		index = ice_get_res(pf, pf->irq_tracker, pf->num_rdma_msix,
+				    ICE_RES_RDMA_VEC_ID);
+		if (index < 0)
+			return index;
+		pf->num_avail_sw_msix -= pf->num_rdma_msix;
+		pf->rdma_base_vector = index;
+	}
+	return 0;
+}
+
+/**
+ * ice_peer_vdev_release - function to map to virtbus_devices release callback
+ * @vdev: pointer to virtbus_device to free
+ */
+static void ice_peer_vdev_release(struct virtbus_device *vdev)
+{
+	struct iidc_virtbus_object *vbo;
+
+	vbo = container_of(vdev, struct iidc_virtbus_object, vdev);
+	kfree(vbo);
+}
+
+/**
+ * ice_init_peer_devices - initializes peer devices
+ * @pf: ptr to ice_pf
+ *
+ * This function initializes peer devices on the virtual bus.
+ */
+int ice_init_peer_devices(struct ice_pf *pf)
+{
+	struct ice_vsi *vsi = pf->vsi[0];
+	struct pci_dev *pdev = pf->pdev;
+	struct device *dev = &pdev->dev;
+	int status = 0;
+	unsigned int i, n;
+
+	/* Reserve vector resources */
+	status = ice_reserve_peer_qvector(pf);
+	if (status < 0) {
+		dev_err(dev, "failed to reserve vectors for peer drivers\n");
+		return status;
+	}
+	for (i = 0; i < ARRAY_SIZE(ice_peers); i++) {
+		struct ice_peer_dev_int *peer_dev_int;
+		struct ice_peer_drv_int *peer_drv_int;
+		struct iidc_qos_params *qos_info;
+		struct iidc_virtbus_object *vbo;
+		struct msix_entry *entry = NULL;
+		struct iidc_peer_dev *peer_dev;
+		struct virtbus_device *vdev;
+		int j;
+
+		/* structure layout needed for container_of's looks like:
+		 * ice_peer_dev_int (internal only ice peer superstruct)
+		 * |--> iidc_peer_dev
+		 * |--> *ice_peer_drv_int
+		 *
+		 * iidc_virtbus_object (container_of parent for vdev)
+		 * |--> virtbus_device
+		 * |--> *iidc_peer_dev (pointer from internal struct)
+		 *
+		 * ice_peer_drv_int (internal only peer_drv struct)
+		 */
+		peer_dev_int = kzalloc(sizeof(*peer_dev_int), GFP_KERNEL);
+		if (!peer_dev_int)
+			goto unroll_prev_peers;
+
+		vbo = kzalloc(sizeof(*vbo), GFP_KERNEL);
+		if (!vbo) {
+			kfree(peer_dev_int);
+			goto unroll_prev_peers;
+		}
+
+		peer_drv_int = kzalloc(sizeof(*peer_drv_int), GFP_KERNEL);
+		if (!peer_drv_int) {
+			kfree(peer_dev_int);
+			kfree(vbo);
+			goto unroll_prev_peers;
+		}
+
+		pf->peers[i] = peer_dev_int;
+		vbo->peer_dev = &peer_dev_int->peer_dev;
+		peer_dev_int->peer_drv_int = peer_drv_int;
+		peer_dev_int->peer_dev.vdev = &vbo->vdev;
+
+		/* Initialize driver values */
+		for (j = 0; j < IIDC_EVENT_NBITS; j++)
+			bitmap_zero(peer_drv_int->current_events[j].type,
+				    IIDC_EVENT_NBITS);
+
+		mutex_init(&peer_dev_int->peer_dev_state_mutex);
+
+		peer_dev = &peer_dev_int->peer_dev;
+		peer_dev->peer_ops = NULL;
+		peer_dev->hw_addr = (u8 __iomem *)pf->hw.hw_addr;
+		peer_dev->peer_dev_id = ice_peers[i].id;
+		peer_dev->pf_vsi_num = vsi->vsi_num;
+		peer_dev->netdev = vsi->netdev;
+
+		peer_dev_int->ice_peer_wq =
+			alloc_ordered_workqueue("ice_peer_wq_%d", WQ_UNBOUND,
+						i);
+		if (!peer_dev_int->ice_peer_wq) {
+			pf->peers[i] = NULL;
+			kfree(peer_dev_int);
+			kfree(peer_drv_int);
+			kfree(vbo);
+			goto unroll_prev_peers;
+		}
+
+		peer_dev->pdev = pdev;
+		qos_info = &peer_dev->initial_qos_info;
+
+		/* setup qos_info fields with defaults */
+		qos_info->num_apps = 0;
+		qos_info->num_tc = 1;
+
+		for (j = 0; j < IIDC_MAX_USER_PRIORITY; j++)
+			qos_info->up2tc[j] = 0;
+
+		qos_info->tc_info[0].rel_bw = 100;
+		for (j = 1; j < IEEE_8021QAZ_MAX_TCS; j++)
+			qos_info->tc_info[j].rel_bw = 0;
+
+		/* for DCB, override the qos_info defaults. */
+		ice_setup_dcb_qos_info(pf, qos_info);
+
+		/* make sure peer specific resources such as msix_count and
+		 * msix_entries are initialized
+		 */
+		switch (ice_peers[i].id) {
+		case IIDC_PEER_RDMA_ID:
+			if (test_bit(ICE_FLAG_IWARP_ENA, pf->flags)) {
+				peer_dev->msix_count = pf->num_rdma_msix;
+				entry = &pf->msix_entries[pf->rdma_base_vector];
+			}
+			break;
+		default:
+			break;
+		}
+
+		peer_dev->msix_entries = entry;
+		ice_peer_state_change(peer_dev_int, ICE_PEER_DEV_STATE_INIT,
+				      false);
+
+		vdev = &vbo->vdev;
+		vdev->match_name = ice_peers[i].name;
+		vdev->release = ice_peer_vdev_release;
+		vdev->dev.parent = &pdev->dev;
+
+		status = virtbus_register_device(vdev);
+		if (status) {
+			/* vbo was already freed via the .release() callback */
+			pf->peers[i] = NULL;
+			destroy_workqueue(peer_dev_int->ice_peer_wq);
+			kfree(peer_dev_int);
+			kfree(peer_drv_int);
+			goto unroll_prev_peers;
+		}
+	}
+
+	return status;
+
+unroll_prev_peers:
+	for (n = 0; n < i; n++) {
+		struct ice_peer_dev_int *prev_peer_dev_int;
+		struct ice_peer_drv_int *prev_peer_drv_int;
+		struct virtbus_device *vdev;
+
+		prev_peer_dev_int = pf->peers[n];
+		prev_peer_drv_int = prev_peer_dev_int->peer_drv_int;
+		vdev = prev_peer_dev_int->peer_dev.vdev;
+
+		virtbus_unregister_device(vdev);
+
+		kfree(prev_peer_dev_int);
+		kfree(prev_peer_drv_int);
+	}
+	return -ENOMEM;
+}
diff --git a/drivers/net/ethernet/intel/ice/ice_idc_int.h b/drivers/net/ethernet/intel/ice/ice_idc_int.h
new file mode 100644
index 000000000000..daac19c45490
--- /dev/null
+++ b/drivers/net/ethernet/intel/ice/ice_idc_int.h
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2019, Intel Corporation. */
+
+#ifndef _ICE_IDC_INT_H_
+#define _ICE_IDC_INT_H_
+
+#include <linux/net/intel/iidc.h>
+#include "ice.h"
+
+enum ice_peer_dev_state {
+	ICE_PEER_DEV_STATE_INIT,
+	ICE_PEER_DEV_STATE_PROBED,
+	ICE_PEER_DEV_STATE_OPENING,
+	ICE_PEER_DEV_STATE_OPENED,
+	ICE_PEER_DEV_STATE_PREP_RST,
+	ICE_PEER_DEV_STATE_PREPPED,
+	ICE_PEER_DEV_STATE_CLOSING,
+	ICE_PEER_DEV_STATE_CLOSED,
+	ICE_PEER_DEV_STATE_REMOVED,
+	ICE_PEER_DEV_STATE_API_RDY,
+	ICE_PEER_DEV_STATE_NBITS,               /* must be last */
+};
+
+enum ice_peer_drv_state {
+	ICE_PEER_DRV_STATE_MBX_RDY,
+	ICE_PEER_DRV_STATE_NBITS,               /* must be last */
+};
+
+struct ice_peer_drv_int {
+	struct iidc_peer_drv *peer_drv;
+
+	/* States associated with peer driver */
+	DECLARE_BITMAP(state, ICE_PEER_DRV_STATE_NBITS);
+
+	/* if this peer_dev is the originator of an event, these are the
+	 * most recent events of each type
+	 */
+	struct iidc_event current_events[IIDC_EVENT_NBITS];
+};
+
+struct ice_peer_dev_int {
+	struct ice_peer_drv_int *peer_drv_int; /* driver private structure */
+	struct iidc_peer_dev peer_dev;
+
+	/* if this peer_dev is the originator of an event, these are the
+	 * most recent events of each type
+	 */
+	struct iidc_event current_events[IIDC_EVENT_NBITS];
+	/* Events a peer has registered to be notified about */
+	DECLARE_BITMAP(events, IIDC_EVENT_NBITS);
+
+	/* States associated with peer device */
+	DECLARE_BITMAP(state, ICE_PEER_DEV_STATE_NBITS);
+	struct mutex peer_dev_state_mutex; /* peer_dev state mutex */
+
+	/* per peer workqueue */
+	struct workqueue_struct *ice_peer_wq;
+
+	struct work_struct peer_prep_task;
+	struct work_struct peer_close_task;
+
+	enum iidc_close_reason rst_type;
+};
+
+int ice_unroll_peer(struct ice_peer_dev_int *peer_dev_int, void *data);
+int ice_unreg_peer_device(struct ice_peer_dev_int *peer_dev_int, void *data);
+#endif /* !_ICE_IDC_INT_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index 2f256bf45efc..205ac5900551 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -504,6 +504,17 @@ bool ice_is_safe_mode(struct ice_pf *pf)
 	return !test_bit(ICE_FLAG_ADV_FEATURES, pf->flags);
 }
 
+/**
+ * ice_is_peer_ena
+ * @pf: pointer to the PF struct
+ *
+ * returns true if peer devices/drivers are supported, false otherwise
+ */
+bool ice_is_peer_ena(struct ice_pf *pf)
+{
+	return test_bit(ICE_FLAG_PEER_ENA, pf->flags);
+}
+
 /**
  * ice_vsi_clean_rss_flow_fld - Delete RSS configuration
  * @vsi: the VSI being cleaned up
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h
index 04ca00799364..db07cc065b10 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.h
+++ b/drivers/net/ethernet/intel/ice/ice_lib.h
@@ -104,6 +104,8 @@ ice_vsi_cfg_mac_fltr(struct ice_vsi *vsi, const u8 *macaddr, bool set);
 
 bool ice_is_safe_mode(struct ice_pf *pf);
 
+bool ice_is_peer_ena(struct ice_pf *pf);
+
 bool ice_is_dflt_vsi_in_use(struct ice_sw *sw);
 
 bool ice_is_vsi_dflt_vsi(struct ice_sw *sw, struct ice_vsi *vsi);
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 5b190c257124..033e463bcdf1 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -5,6 +5,7 @@
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
+#include <linux/virtual_bus.h>
 #include "ice.h"
 #include "ice_base.h"
 #include "ice_lib.h"
@@ -2690,6 +2691,12 @@ static void ice_set_pf_caps(struct ice_pf *pf)
 {
 	struct ice_hw_func_caps *func_caps = &pf->hw.func_caps;
 
+	clear_bit(ICE_FLAG_IWARP_ENA, pf->flags);
+	clear_bit(ICE_FLAG_PEER_ENA, pf->flags);
+	if (func_caps->common_cap.iwarp) {
+		set_bit(ICE_FLAG_IWARP_ENA, pf->flags);
+		set_bit(ICE_FLAG_PEER_ENA, pf->flags);
+	}
 	clear_bit(ICE_FLAG_DCB_CAPABLE, pf->flags);
 	if (func_caps->common_cap.dcb)
 		set_bit(ICE_FLAG_DCB_CAPABLE, pf->flags);
@@ -2769,6 +2776,16 @@ static int ice_ena_msix_range(struct ice_pf *pf)
 	v_budget += needed;
 	v_left -= needed;
 
+	/* reserve vectors for RDMA peer driver */
+	if (test_bit(ICE_FLAG_IWARP_ENA, pf->flags)) {
+		needed = ICE_RDMA_NUM_VECS;
+		if (v_left < needed)
+			goto no_hw_vecs_left_err;
+		pf->num_rdma_msix = needed;
+		v_budget += needed;
+		v_left -= needed;
+	}
+
 	pf->msix_entries = devm_kcalloc(dev, v_budget,
 					sizeof(*pf->msix_entries), GFP_KERNEL);
 
@@ -2793,16 +2810,19 @@ static int ice_ena_msix_range(struct ice_pf *pf)
 	if (v_actual < v_budget) {
 		dev_warn(dev, "not enough OS MSI-X vectors. requested = %d, obtained = %d\n",
 			 v_budget, v_actual);
-/* 2 vectors for LAN (traffic + OICR) */
+/* 2 vectors for LAN and RDMA (traffic + OICR) */
 #define ICE_MIN_LAN_VECS 2
+#define ICE_MIN_RDMA_VECS 2
+#define ICE_MIN_VECS (ICE_MIN_LAN_VECS + ICE_MIN_RDMA_VECS)
 
-		if (v_actual < ICE_MIN_LAN_VECS) {
+		if (v_actual < ICE_MIN_VECS) {
 			/* error if we can't get minimum vectors */
 			pci_disable_msix(pf->pdev);
 			err = -ERANGE;
 			goto msix_err;
 		} else {
 			pf->num_lan_msix = ICE_MIN_LAN_VECS;
+			pf->num_rdma_msix = ICE_MIN_RDMA_VECS;
 		}
 	}
 
@@ -2818,6 +2838,7 @@ static int ice_ena_msix_range(struct ice_pf *pf)
 	err = -ERANGE;
 exit_err:
 	pf->num_lan_msix = 0;
+	pf->num_rdma_msix = 0;
 	return err;
 }
 
@@ -3362,6 +3383,26 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
 
 	/* initialize DDP driven features */
 
+	/* init peers only if supported */
+	if (ice_is_peer_ena(pf)) {
+		pf->peers = devm_kcalloc(dev, IIDC_MAX_NUM_PEERS,
+					 sizeof(*pf->peers), GFP_KERNEL);
+		if (!pf->peers) {
+			err = -ENOMEM;
+			goto err_init_peer_unroll;
+		}
+
+		err = ice_init_peer_devices(pf);
+		if (err) {
+			dev_err(dev, "Failed to initialize peer devices: 0x%x\n",
+				err);
+			err = -EIO;
+			goto err_init_peer_unroll;
+		}
+	} else {
+		dev_warn(dev, "RDMA is not supported on this device\n");
+	}
+
 	/* Note: DCB init failure is non-fatal to load */
 	if (ice_init_pf_dcb(pf, false)) {
 		clear_bit(ICE_FLAG_DCB_CAPABLE, pf->flags);
@@ -3375,6 +3416,14 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
 
 	return 0;
 
+err_init_peer_unroll:
+	if (ice_is_peer_ena(pf)) {
+		ice_for_each_peer(pf, NULL, ice_unroll_peer);
+		if (pf->peers) {
+			devm_kfree(dev, pf->peers);
+			pf->peers = NULL;
+		}
+	}
 err_alloc_sw_unroll:
 	ice_devlink_destroy_port(pf);
 	set_bit(__ICE_SERVICE_DIS, pf->state);
@@ -3423,6 +3472,10 @@ static void ice_remove(struct pci_dev *pdev)
 
 	ice_devlink_destroy_port(pf);
 	ice_vsi_release_all(pf);
+	if (ice_is_peer_ena(pf)) {
+		ice_for_each_peer(pf, NULL, ice_unreg_peer_device);
+		devm_kfree(&pdev->dev, pf->peers);
+	}
 	ice_free_irq_msix_misc(pf);
 	ice_for_each_vsi(pf, i) {
 		if (!pf->vsi[i])
diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h
index 4ce5f92fca4a..42b2d700bc1f 100644
--- a/drivers/net/ethernet/intel/ice/ice_type.h
+++ b/drivers/net/ethernet/intel/ice/ice_type.h
@@ -189,6 +189,7 @@ struct ice_hw_common_caps {
 	u8 rss_table_entry_width;	/* RSS Entry width in bits */
 
 	u8 dcb;
+	u8 iwarp;
 };
 
 /* Function specific capabilities */
diff --git a/include/linux/net/intel/iidc.h b/include/linux/net/intel/iidc.h
new file mode 100644
index 000000000000..8056e6d8c4cc
--- /dev/null
+++ b/include/linux/net/intel/iidc.h
@@ -0,0 +1,337 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2019, Intel Corporation. */
+
+#ifndef _IIDC_H_
+#define _IIDC_H_
+
+#include <linux/dcbnl.h>
+#include <linux/device.h>
+#include <linux/if_ether.h>
+#include <linux/kernel.h>
+#include <linux/netdevice.h>
+#include <linux/virtual_bus.h>
+
+enum iidc_event_type {
+	IIDC_EVENT_LINK_CHANGE,
+	IIDC_EVENT_MTU_CHANGE,
+	IIDC_EVENT_TC_CHANGE,
+	IIDC_EVENT_API_CHANGE,
+	IIDC_EVENT_MBX_CHANGE,
+	IIDC_EVENT_NBITS		/* must be last */
+};
+
+enum iidc_res_type {
+	IIDC_INVAL_RES,
+	IIDC_VSI,
+	IIDC_VEB,
+	IIDC_EVENT_Q,
+	IIDC_EGRESS_CMPL_Q,
+	IIDC_CMPL_EVENT_Q,
+	IIDC_ASYNC_EVENT_Q,
+	IIDC_DOORBELL_Q,
+	IIDC_RDMA_QSETS_TXSCHED,
+};
+
+enum iidc_peer_reset_type {
+	IIDC_PEER_PFR,
+	IIDC_PEER_CORER,
+	IIDC_PEER_CORER_SW_CORE,
+	IIDC_PEER_CORER_SW_FULL,
+	IIDC_PEER_GLOBR,
+};
+
+/* reason notified to peer driver as part of event handling */
+enum iidc_close_reason {
+	IIDC_REASON_INVAL,
+	IIDC_REASON_HW_UNRESPONSIVE,
+	IIDC_REASON_INTERFACE_DOWN, /* Administrative down */
+	IIDC_REASON_PEER_DRV_UNREG, /* peer driver getting unregistered */
+	IIDC_REASON_PEER_DEV_UNINIT,
+	IIDC_REASON_GLOBR_REQ,
+	IIDC_REASON_CORER_REQ,
+	/* Reason #7 reserved */
+	IIDC_REASON_PFR_REQ = 8,
+	IIDC_REASON_HW_RESET_PENDING,
+	IIDC_REASON_RECOVERY_MODE,
+	IIDC_REASON_PARAM_CHANGE,
+};
+
+enum iidc_rdma_filter {
+	IIDC_RDMA_FILTER_INVAL,
+	IIDC_RDMA_FILTER_IWARP,
+	IIDC_RDMA_FILTER_ROCEV2,
+	IIDC_RDMA_FILTER_BOTH,
+};
+
+/* Struct to hold per DCB APP info */
+struct iidc_dcb_app_info {
+	u8  priority;
+	u8  selector;
+	u16 prot_id;
+};
+
+struct iidc_peer_dev;
+
+#define IIDC_MAX_USER_PRIORITY		8
+#define IIDC_MAX_APPS			8
+
+/* Struct to hold per RDMA Qset info */
+struct iidc_rdma_qset_params {
+	u32 teid;	/* qset TEID */
+	u16 qs_handle; /* RDMA driver provides this */
+	u16 vsi_id; /* VSI index */
+	u8 tc; /* TC branch the QSet should belong to */
+	u8 reserved[3];
+};
+
+struct iidc_res_base {
+	/* Union for future provision e.g. other res_type */
+	union {
+		struct iidc_rdma_qset_params qsets;
+	} res;
+};
+
+struct iidc_res {
+	/* Type of resource. Filled by peer driver */
+	enum iidc_res_type res_type;
+	/* Count requested by peer driver */
+	u16 cnt_req;
+
+	/* Number of resources allocated. Filled in by the callee;
+	 * tells the caller how many "res" entries were populated.
+	 */
+	u16 res_allocated;
+
+	/* Unique handle to resources allocated. Zero if call fails.
+	 * Allocated by the callee and for now used by the caller for
+	 * internal tracking purposes.
+	 */
+	u32 res_handle;
+
+	/* The peer driver has to allocate sufficient memory to accommodate
+	 * cnt_req entries before calling this function.  The memory has to
+	 * be zero-initialized; it is an input/output param.  As a result of
+	 * the alloc_res API, these structures will be populated.
+	 */
+	struct iidc_res_base res[1];
+};
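+
+/* Example (illustrative only): a peer requesting 'n' qsets would
+ * allocate room for 'n' entries before calling alloc_res, e.g.:
+ *
+ *	res = kzalloc(struct_size(res, res, n - 1), GFP_KERNEL);
+ *	res->res_type = IIDC_RDMA_QSETS_TXSCHED;
+ *	res->cnt_req = n;
+ */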
+
+struct iidc_qos_info {
+	u64 tc_ctx;
+	u8 rel_bw;
+	u8 prio_type;
+	u8 egress_virt_up;
+	u8 ingress_virt_up;
+};
+
+/* Struct to hold QoS info */
+struct iidc_qos_params {
+	struct iidc_qos_info tc_info[IEEE_8021QAZ_MAX_TCS];
+	u8 up2tc[IIDC_MAX_USER_PRIORITY];
+	u8 vsi_relative_bw;
+	u8 vsi_priority_type;
+	u32 num_apps;
+	struct iidc_dcb_app_info apps[IIDC_MAX_APPS];
+	u8 num_tc;
+};
+
+union iidc_event_info {
+	/* IIDC_EVENT_LINK_CHANGE */
+	struct {
+		struct net_device *lwr_nd;
+		u16 vsi_num; /* HW index of VSI corresponding to lwr ndev */
+		u8 new_link_state;
+		u8 lport;
+	} link_info;
+	/* IIDC_EVENT_MTU_CHANGE */
+	u16 mtu;
+	/* IIDC_EVENT_TC_CHANGE */
+	struct iidc_qos_params port_qos;
+	/* IIDC_EVENT_API_CHANGE */
+	u8 api_rdy;
+	/* IIDC_EVENT_MBX_CHANGE */
+	u8 mbx_rdy;
+};
+
+/* iidc_event elements are to be passed back and forth between the device
+ * owner and the peer drivers. They are to be used to both register/unregister
+ * for event reporting and to report an event (events can be either device
+ * owner generated or peer generated).
+ *
+ * For (un)registering for events, the structure needs to be populated with:
+ *   reporter - pointer to the iidc_peer_dev struct of the peer (un)registering
+ *   type - bitmap with bits set for event types to (un)register for
+ *
+ * For reporting events, the structure needs to be populated with:
+ *   reporter - pointer to peer that generated the event (NULL for ice)
+ *   type - bitmap with single bit set for this event type
+ *   info - union containing data relevant to this event type
+ */
+struct iidc_event {
+	struct iidc_peer_dev *reporter;
+	DECLARE_BITMAP(type, IIDC_EVENT_NBITS);
+	union iidc_event_info info;
+};
+
+/* Following APIs are implemented by device owner and invoked by peer
+ * drivers
+ */
+struct iidc_ops {
+	/* APIs to allocate resources such as VEB, VSI, Doorbell queues,
+	 * completion queues, Tx/Rx queues, etc...
+	 */
+	int (*alloc_res)(struct iidc_peer_dev *peer_dev,
+			 struct iidc_res *res,
+			 int partial_acceptable);
+	int (*free_res)(struct iidc_peer_dev *peer_dev,
+			struct iidc_res *res);
+
+	int (*is_vsi_ready)(struct iidc_peer_dev *peer_dev);
+	int (*peer_register)(struct iidc_peer_dev *peer_dev);
+	int (*peer_unregister)(struct iidc_peer_dev *peer_dev);
+	int (*request_reset)(struct iidc_peer_dev *dev,
+			     enum iidc_peer_reset_type reset_type);
+
+	void (*notify_state_change)(struct iidc_peer_dev *dev,
+				    struct iidc_event *event);
+
+	/* Notification APIs */
+	void (*reg_for_notification)(struct iidc_peer_dev *dev,
+				     struct iidc_event *event);
+	void (*unreg_for_notification)(struct iidc_peer_dev *dev,
+				       struct iidc_event *event);
+	int (*update_vsi_filter)(struct iidc_peer_dev *peer_dev,
+				 enum iidc_rdma_filter filter, bool enable);
+	int (*vc_send)(struct iidc_peer_dev *peer_dev, u32 vf_id, u8 *msg,
+		       u16 len);
+};
+
+/* Following APIs are implemented by peer drivers and invoked by device
+ * owner
+ */
+struct iidc_peer_ops {
+	void (*event_handler)(struct iidc_peer_dev *peer_dev,
+			      struct iidc_event *event);
+
+	/* Why 'open' exists and when it is expected to be called:
+	 * 1. to form a symmetric API with respect to close
+	 * 2. to be invoked from the driver initialization path
+	 *     - peer_driver:open is called once the device owner is
+	 *     fully initialized
+	 * 3. to be invoked upon RESET completion
+	 */
+	int (*open)(struct iidc_peer_dev *peer_dev);
+
+	/* The peer's close function is called when the peer needs to be
+	 * quiesced. This can happen for a variety of reasons (enumerated
+	 * in the iidc_close_reason enum). A call to close will only be
+	 * followed by a call to either remove or open. No IDC calls from
+	 * the peer should be accepted until it is re-opened.
+	 *
+	 * The *reason* parameter identifies why close was called, using
+	 * one of the iidc_close_reason values. It exists primarily for
+	 * the peer's bookkeeping and lets the peer perform any tasks
+	 * dictated by that specific reason.
+	 */
+	void (*close)(struct iidc_peer_dev *peer_dev,
+		      enum iidc_close_reason reason);
+
+	int (*vc_receive)(struct iidc_peer_dev *peer_dev, u32 vf_id, u8 *msg,
+			  u16 len);
+	/* tell RDMA peer to prepare for TC change in a blocking call
+	 * that will directly precede the change event
+	 */
+	void (*prep_tc_change)(struct iidc_peer_dev *peer_dev);
+};
+
+#define IIDC_PEER_RDMA_NAME	"intel,ice,rdma"
+#define IIDC_PEER_RDMA_ID	0x00000010
+#define IIDC_MAX_NUM_PEERS	4
+
+/* The const array of peer_dev_id entries must be initialized in the .c
+ * file with the macro ASSIGN_PEER_INFO.
+ * For example:
+ * static const struct peer_dev_id peer_dev_ids[] = ASSIGN_PEER_INFO;
+ */
+struct peer_dev_id {
+	char *name;
+	int id;
+};
+
+#define ASSIGN_PEER_INFO						\
+{									\
+	{ .name = IIDC_PEER_RDMA_NAME, .id = IIDC_PEER_RDMA_ID },	\
+}
+
+#define iidc_peer_priv(x) ((x)->peer_priv)
+
+/* Structure representing peer-specific information; each peer using
+ * the IIDC interface has a dedicated instance of this struct.
+ */
+struct iidc_peer_dev {
+	struct pci_dev *pdev; /* PCI device of the corresponding main function */
+	struct virtbus_device *vdev; /* virtual device for this peer */
+	/* KVA / Linear address corresponding to BAR0 of underlying
+	 * pci_device.
+	 */
+	u8 __iomem *hw_addr;
+	int peer_dev_id;
+
+	/* Opaque pointer for peer specific data tracking.  This memory is
+	 * allocated and freed by the peer driver and used for private data
+	 * accessible only to the specific peer.  It is stored here so that
+	 * when this struct is passed to the peer via an IDC call, the data
+	 * can be accessed by the peer at that time.
+	 * Peers should only retrieve the pointer via the macro:
+	 *    iidc_peer_priv(struct iidc_peer_dev *)
+	 */
+	void *peer_priv;
+
+	u8 ftype;	/* PF (false) or VF (true) */
+
+	/* Data VSI created by driver */
+	u16 pf_vsi_num;
+
+	struct iidc_qos_params initial_qos_info;
+	struct net_device *netdev;
+
+	/* Based on the peer driver type, this points at the corresponding
+	 * MSI-X entries in pf->msix_entries (which were allocated as part
+	 * of driver initialization), e.g. for the RDMA driver the reserved
+	 * msix_entries will be num_online_cpus + 1.
+	 */
+	u16 msix_count; /* How many vectors are reserved for this device */
+	struct msix_entry *msix_entries;
+
+	/* The following struct contains function pointers initialized by
+	 * the device owner and called by the peer driver
+	 */
+	const struct iidc_ops *ops;
+
+	/* The following struct contains function pointers initialized by
+	 * the peer driver and called by the device owner
+	 */
+	const struct iidc_peer_ops *peer_ops;
+
+	/* Pointer to peer_drv struct to be populated by peer driver */
+	struct iidc_peer_drv *peer_drv;
+};
+
+struct iidc_virtbus_object {
+	struct virtbus_device vdev;
+	struct iidc_peer_dev *peer_dev;
+};
+
+/* Structure representing a peer driver. The peer driver initializes
+ * these fields, and the device owner uses them as part of driver
+ * registration via the bus infrastructure.
+ */
+struct iidc_peer_drv {
+	u16 driver_id;
+#define IIDC_PEER_DEVICE_OWNER		0
+#define IIDC_PEER_RDMA_DRIVER		4
+
+	const char *name;
+};
+#endif /* _IIDC_H_*/
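
To illustrate how these pieces fit together: a peer's virtbus_driver probe
receives the virtbus_device that is embedded in struct iidc_virtbus_object,
recovers the iidc_peer_dev with container_of(), publishes its half of the
contract, and calls ops->peer_register(). A minimal sketch, assuming the
virtbus probe signature from patch 01; the my_* names are hypothetical and
error handling is elided:

#include <linux/net/intel/iidc.h>

/* Hypothetical peer driver identity; driver_id must be nonzero */
static struct iidc_peer_drv my_peer_drv = {
	.driver_id	= IIDC_PEER_RDMA_DRIVER,
	.name		= "my_rdma_peer",
};

static const struct iidc_peer_ops my_peer_ops = {
	/* .open, .close and .event_handler are required in practice */
};

static int my_virtbus_probe(struct virtbus_device *vdev)
{
	struct iidc_virtbus_object *vo;
	struct iidc_peer_dev *peer_dev;

	/* vdev is embedded in the iidc_virtbus_object built by the PF */
	vo = container_of(vdev, struct iidc_virtbus_object, vdev);
	peer_dev = vo->peer_dev;

	/* publish the peer's half of the contract ... */
	peer_dev->peer_drv = &my_peer_drv;
	peer_dev->peer_ops = &my_peer_ops;

	/* ... then open communication with the device owner */
	return peer_dev->ops->peer_register(peer_dev);
}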
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [net-next v4 03/12] ice: Complete RDMA peer registration
  2020-05-20  7:02 [net-next v4 00/12][pull request] 100GbE Intel Wired LAN Driver Updates 2020-05-19 Jeff Kirsher
  2020-05-20  7:02 ` [net-next v4 01/12] Implementation of Virtual Bus Jeff Kirsher
  2020-05-20  7:02 ` [net-next v4 02/12] ice: Create and register virtual bus for RDMA Jeff Kirsher
@ 2020-05-20  7:02 ` Jeff Kirsher
  2020-05-20  7:02 ` [net-next v4 04/12] ice: Support resource allocation requests Jeff Kirsher
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 69+ messages in thread
From: Jeff Kirsher @ 2020-05-20  7:02 UTC (permalink / raw)
  To: davem, gregkh
  Cc: Dave Ertman, netdev, linux-rdma, nhorman, sassmann, jgg,
	ranjani.sridharan, pierre-louis.bossart, Tony Nguyen,
	Andrew Bowers, Jeff Kirsher

From: Dave Ertman <david.m.ertman@intel.com>

Ensure that the peer supports the minimal set of operations required
for operation and, if so, open the connection to the peer.

Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_idc.c      | 288 ++++++++++++++++++
 drivers/net/ethernet/intel/ice/ice_idc_int.h  |  36 +++
 drivers/net/ethernet/intel/ice/ice_lib.c      |  33 ++
 drivers/net/ethernet/intel/ice/ice_lib.h      |   2 +
 drivers/net/ethernet/intel/ice/ice_main.c     |  18 +-
 drivers/net/ethernet/intel/ice/ice_switch.c   |  23 ++
 drivers/net/ethernet/intel/ice/ice_switch.h   |   2 +
 .../net/ethernet/intel/ice/ice_virtchnl_pf.c  |  25 --
 8 files changed, 401 insertions(+), 26 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_idc.c b/drivers/net/ethernet/intel/ice/ice_idc.c
index 68d6b524d6d4..499c1b77dfc9 100644
--- a/drivers/net/ethernet/intel/ice/ice_idc.c
+++ b/drivers/net/ethernet/intel/ice/ice_idc.c
@@ -146,6 +146,78 @@ ice_peer_state_change(struct ice_peer_dev_int *peer_dev, long new_state,
 		mutex_unlock(&peer_dev->peer_dev_state_mutex);
 }
 
+/**
+ * ice_peer_close - close a peer device
+ * @peer_dev_int: device to close
+ * @data: pointer to opaque data
+ *
+ * This function will also set the state bit for the peer to CLOSED. This
+ * function is meant to be called from a ice_for_each_peer().
+ */
+int ice_peer_close(struct ice_peer_dev_int *peer_dev_int, void *data)
+{
+	enum iidc_close_reason reason = *(enum iidc_close_reason *)(data);
+	struct iidc_peer_dev *peer_dev;
+	struct ice_pf *pf;
+	int i;
+
+	peer_dev = &peer_dev_int->peer_dev;
+	/* return 0 so ice_for_each_peer will continue closing other peers */
+	if (!ice_validate_peer_dev(peer_dev))
+		return 0;
+	pf = pci_get_drvdata(peer_dev->pdev);
+
+	if (test_bit(__ICE_DOWN, pf->state) ||
+	    test_bit(__ICE_SUSPENDED, pf->state) ||
+	    test_bit(__ICE_NEEDS_RESTART, pf->state))
+		return 0;
+
+	mutex_lock(&peer_dev_int->peer_dev_state_mutex);
+
+	/* no peer driver, or already closed/closing/opening: nothing to do */
+	if (test_bit(ICE_PEER_DEV_STATE_CLOSED, peer_dev_int->state) ||
+	    test_bit(ICE_PEER_DEV_STATE_CLOSING, peer_dev_int->state) ||
+	    test_bit(ICE_PEER_DEV_STATE_OPENING, peer_dev_int->state) ||
+	    test_bit(ICE_PEER_DEV_STATE_REMOVED, peer_dev_int->state))
+		goto peer_close_out;
+
+	/* Set the peer state to CLOSING */
+	ice_peer_state_change(peer_dev_int, ICE_PEER_DEV_STATE_CLOSING, true);
+
+	for (i = 0; i < IIDC_EVENT_NBITS; i++)
+		bitmap_zero(peer_dev_int->current_events[i].type,
+			    IIDC_EVENT_NBITS);
+
+	if (peer_dev->peer_ops && peer_dev->peer_ops->close)
+		peer_dev->peer_ops->close(peer_dev, reason);
+
+	/* Set the peer state to CLOSED */
+	ice_peer_state_change(peer_dev_int, ICE_PEER_DEV_STATE_CLOSED, true);
+
+peer_close_out:
+	mutex_unlock(&peer_dev_int->peer_dev_state_mutex);
+
+	return 0;
+}
+
+/**
+ * ice_peer_update_vsi - update the pf_vsi info in peer_dev struct
+ * @peer_dev_int: pointer to peer dev internal struct
+ * @data: opaque pointer - VSI to be updated
+ */
+int ice_peer_update_vsi(struct ice_peer_dev_int *peer_dev_int, void *data)
+{
+	struct ice_vsi *vsi = (struct ice_vsi *)data;
+	struct iidc_peer_dev *peer_dev;
+
+	peer_dev = &peer_dev_int->peer_dev;
+	if (!peer_dev)
+		return 0;
+
+	peer_dev->pf_vsi_num = vsi->vsi_num;
+	return 0;
+}
+
 /**
  * ice_for_each_peer - iterate across and call function for each peer dev
  * @pf: pointer to private board struct
@@ -176,6 +248,89 @@ ice_for_each_peer(struct ice_pf *pf, void *data,
 	return 0;
 }
 
+/**
+ * ice_finish_init_peer_device - complete peer device initialization
+ * @peer_dev_int: ptr to peer device internal struct
+ * @data: ptr to opaque data
+ *
+ * This function completes remaining initialization of peer_devices
+ */
+int
+ice_finish_init_peer_device(struct ice_peer_dev_int *peer_dev_int,
+			    void __always_unused *data)
+{
+	struct iidc_peer_dev *peer_dev;
+	struct iidc_peer_drv *peer_drv;
+	struct device *dev;
+	struct ice_pf *pf;
+	int ret = 0;
+
+	peer_dev = &peer_dev_int->peer_dev;
+	/* peer_dev will not always be populated at the time of this check */
+	if (!ice_validate_peer_dev(peer_dev))
+		return ret;
+
+	peer_drv = peer_dev->peer_drv;
+	pf = pci_get_drvdata(peer_dev->pdev);
+	dev = ice_pf_to_dev(pf);
+	/* This chunk of logic assesses the peer_dev's state several
+	 * times. Hold the peer_dev_int's state mutex for the entire
+	 * sequence so that another context cannot change things mid-flow.
+	 */
+	mutex_lock(&peer_dev_int->peer_dev_state_mutex);
+
+	if (!peer_dev->peer_ops) {
+		dev_err(dev, "peer_ops not defined on peer dev\n");
+		goto init_unlock;
+	}
+
+	if (!peer_dev->peer_ops->open) {
+		dev_err(dev, "peer_ops:open not defined on peer dev\n");
+		goto init_unlock;
+	}
+
+	if (!peer_dev->peer_ops->close) {
+		dev_err(dev, "peer_ops:close not defined on peer dev\n");
+		goto init_unlock;
+	}
+
+	/* Peer driver expected to set driver_id during registration */
+	if (!peer_drv->driver_id) {
+		dev_err(dev, "Peer driver did not set driver_id\n");
+		goto init_unlock;
+	}
+
+	if ((test_bit(ICE_PEER_DEV_STATE_CLOSED, peer_dev_int->state) ||
+	     test_bit(ICE_PEER_DEV_STATE_PROBED, peer_dev_int->state)) &&
+	    ice_pf_state_is_nominal(pf)) {
+		/* If the RTNL is locked, we defer opening the peer
+		 * until the next time this function is called by the
+		 * service task.
+		 */
+		if (rtnl_is_locked())
+			goto init_unlock;
+		ice_peer_state_change(peer_dev_int, ICE_PEER_DEV_STATE_OPENING,
+				      true);
+		ret = peer_dev->peer_ops->open(peer_dev);
+		if (ret) {
+			dev_err(dev, "Peer %d failed to open\n",
+				peer_dev->peer_dev_id);
+			ice_peer_state_change(peer_dev_int,
+					      ICE_PEER_DEV_STATE_PROBED, true);
+			goto init_unlock;
+		}
+
+		ice_peer_state_change(peer_dev_int, ICE_PEER_DEV_STATE_OPENED,
+				      true);
+	}
+
+init_unlock:
+	mutex_unlock(&peer_dev_int->peer_dev_state_mutex);
+
+	return ret;
+}
+
 /**
  * ice_unreg_peer_device - unregister specified device
  * @peer_dev_int: ptr to peer device internal
@@ -200,6 +355,9 @@ ice_unreg_peer_device(struct ice_peer_dev_int *peer_dev_int,
 	if (peer_dev_int->ice_peer_wq) {
 		if (peer_dev_int->peer_prep_task.func)
 			cancel_work_sync(&peer_dev_int->peer_prep_task);
+
+		if (peer_dev_int->peer_close_task.func)
+			cancel_work_sync(&peer_dev_int->peer_close_task);
 		destroy_workqueue(peer_dev_int->ice_peer_wq);
 	}
 
@@ -230,6 +388,134 @@ ice_unroll_peer(struct ice_peer_dev_int *peer_dev_int,
 	return 0;
 }
 
+/**
+ * ice_peer_unregister - request to unregister peer
+ * @peer_dev: peer device
+ *
+ * This function triggers close/remove on peer_dev allowing peer
+ * to unregister.
+ */
+static int ice_peer_unregister(struct iidc_peer_dev *peer_dev)
+{
+	enum iidc_close_reason reason = IIDC_REASON_PEER_DEV_UNINIT;
+	struct ice_peer_dev_int *peer_dev_int;
+	struct ice_pf *pf;
+	int ret;
+
+	if (!ice_validate_peer_dev(peer_dev))
+		return -EINVAL;
+
+	pf = pci_get_drvdata(peer_dev->pdev);
+	if (ice_is_reset_in_progress(pf->state))
+		return -EBUSY;
+
+	peer_dev_int = peer_to_ice_dev_int(peer_dev);
+
+	ret = ice_peer_close(peer_dev_int, &reason);
+	if (ret)
+		return ret;
+
+	peer_dev->peer_ops = NULL;
+
+	ice_peer_state_change(peer_dev_int, ICE_PEER_DEV_STATE_REMOVED, false);
+
+	return 0;
+}
+
+/**
+ * ice_peer_register - Called by peer to open communication with LAN
+ * @peer_dev: ptr to peer device
+ *
+ * The registering peer is expected to populate the iidc_peer_drv->name
+ * field before calling this function.
+ */
+static int ice_peer_register(struct iidc_peer_dev *peer_dev)
+{
+	struct ice_peer_drv_int *peer_drv_int;
+	struct ice_peer_dev_int *peer_dev_int;
+	struct iidc_peer_drv *peer_drv;
+
+	if (!peer_dev) {
+		pr_err("Failed to reg peer dev: peer_dev ptr NULL\n");
+		return -EINVAL;
+	}
+
+	if (!peer_dev->pdev) {
+		pr_err("Failed to reg peer dev: peer dev pdev NULL\n");
+		return -EINVAL;
+	}
+
+	if (!peer_dev->peer_ops || !peer_dev->ops) {
+		pr_err("Failed to reg peer dev: peer dev peer_ops/ops NULL\n");
+		return -EINVAL;
+	}
+
+	peer_drv = peer_dev->peer_drv;
+	if (!peer_drv) {
+		pr_err("Failed to reg peer dev: peer drv NULL\n");
+		return -EINVAL;
+	}
+
+	peer_dev_int = peer_to_ice_dev_int(peer_dev);
+	peer_drv_int = peer_dev_int->peer_drv_int;
+	if (!peer_drv_int) {
+		pr_err("Failed to match peer_drv_int to peer_dev\n");
+		return -EINVAL;
+	}
+
+	peer_drv_int->peer_drv = peer_drv;
+
+	ice_peer_state_change(peer_dev_int, ICE_PEER_DEV_STATE_PROBED, false);
+
+	return 0;
+}
+
+/**
+ * ice_peer_update_vsi_filter - update main VSI filters for RDMA
+ * @peer_dev: pointer to RDMA peer device
+ * @filter: selection of filters to enable or disable
+ * @enable: bool whether to enable or disable filters
+ */
+static int
+ice_peer_update_vsi_filter(struct iidc_peer_dev *peer_dev,
+			   enum iidc_rdma_filter __always_unused filter,
+			   bool enable)
+{
+	struct ice_vsi *vsi;
+	struct ice_pf *pf;
+	int ret;
+
+	if (!ice_validate_peer_dev(peer_dev))
+		return -EINVAL;
+
+	pf = pci_get_drvdata(peer_dev->pdev);
+
+	vsi = ice_get_main_vsi(pf);
+	if (!vsi)
+		return -EINVAL;
+
+	ret = ice_cfg_iwarp_fltr(&pf->hw, vsi->idx, enable);
+
+	if (ret) {
+		dev_err(ice_pf_to_dev(pf), "Failed to %sable iWARP filtering\n",
+			enable ? "en" : "dis");
+	} else {
+		if (enable)
+			vsi->info.q_opt_flags |= ICE_AQ_VSI_Q_OPT_PE_FLTR_EN;
+		else
+			vsi->info.q_opt_flags &= ~ICE_AQ_VSI_Q_OPT_PE_FLTR_EN;
+	}
+
+	return ret;
+}
+
+/* Initialize the ice_ops struct, which is used in 'ice_init_peer_devices' */
+static const struct iidc_ops ops = {
+	.peer_register			= ice_peer_register,
+	.peer_unregister		= ice_peer_unregister,
+	.update_vsi_filter		= ice_peer_update_vsi_filter,
+};
+
 /**
  * ice_reserve_peer_qvector - Reserve vector resources for peer drivers
  * @pf: board private structure to initialize
@@ -364,6 +650,8 @@ int ice_init_peer_devices(struct ice_pf *pf)
 
 		/* for DCB, override the qos_info defaults. */
 		ice_setup_dcb_qos_info(pf, qos_info);
+		/* Initialize ice_ops */
+		peer_dev->ops = &ops;
 
 		/* make sure peer specific resources such as msix_count and
 		 * msix_entries are initialized
diff --git a/drivers/net/ethernet/intel/ice/ice_idc_int.h b/drivers/net/ethernet/intel/ice/ice_idc_int.h
index daac19c45490..d22e6f5bb50e 100644
--- a/drivers/net/ethernet/intel/ice/ice_idc_int.h
+++ b/drivers/net/ethernet/intel/ice/ice_idc_int.h
@@ -62,6 +62,42 @@ struct ice_peer_dev_int {
 	enum iidc_close_reason rst_type;
 };
 
+int ice_peer_update_vsi(struct ice_peer_dev_int *peer_dev_int, void *data);
 int ice_unroll_peer(struct ice_peer_dev_int *peer_dev_int, void *data);
 int ice_unreg_peer_device(struct ice_peer_dev_int *peer_dev_int, void *data);
+int ice_peer_close(struct ice_peer_dev_int *peer_dev_int, void *data);
+int
+ice_finish_init_peer_device(struct ice_peer_dev_int *peer_dev_int, void *data);
+
+static inline struct ice_peer_dev_int *
+peer_to_ice_dev_int(struct iidc_peer_dev *peer_dev)
+{
+	return container_of(peer_dev, struct ice_peer_dev_int, peer_dev);
+}
+
+static inline bool ice_validate_peer_dev(struct iidc_peer_dev *peer_dev)
+{
+	struct ice_peer_dev_int *peer_dev_int;
+	struct ice_pf *pf;
+
+	if (!peer_dev || !peer_dev->pdev)
+		return false;
+
+	if (!peer_dev->peer_ops)
+		return false;
+
+	pf = pci_get_drvdata(peer_dev->pdev);
+	if (!pf)
+		return false;
+
+	peer_dev_int = peer_to_ice_dev_int(peer_dev);
+	if (!peer_dev_int)
+		return false;
+
+	if (test_bit(ICE_PEER_DEV_STATE_REMOVED, peer_dev_int->state) ||
+	    test_bit(ICE_PEER_DEV_STATE_INIT, peer_dev_int->state))
+		return false;
+
+	return true;
+}
 #endif /* !_ICE_IDC_INT_H_ */
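
The per-peer callbacks declared above all follow the same contract: they
take the internal peer struct plus an opaque cookie, and return 0 so that
ice_for_each_peer() keeps walking the remaining peers. A hypothetical
callback written against that contract (a sketch, not part of this series,
assuming ice_idc_int.h is included):

static int ice_peer_count_opened(struct ice_peer_dev_int *peer_dev_int,
				 void *data)
{
	int *count = data;

	/* returning nonzero would stop the ice_for_each_peer() walk */
	if (test_bit(ICE_PEER_DEV_STATE_OPENED, peer_dev_int->state))
		(*count)++;

	return 0;
}

It would be invoked as ice_for_each_peer(pf, &count, ice_peer_count_opened).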
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index 205ac5900551..5043d5ed1b2a 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -1379,6 +1379,30 @@ ice_add_mac_to_list(struct ice_vsi *vsi, struct list_head *add_list,
 	return 0;
 }
 
+/**
+ * ice_pf_state_is_nominal - checks the PF for nominal state
+ * @pf: pointer to PF to check
+ *
+ * Check the PF's state for a collection of bits that would indicate
+ * the PF is in a state that would inhibit normal operation for
+ * driver functionality.
+ *
+ * Returns true if PF is in a nominal state, false otherwise
+ */
+bool ice_pf_state_is_nominal(struct ice_pf *pf)
+{
+	DECLARE_BITMAP(check_bits, __ICE_STATE_NBITS) = { 0 };
+
+	if (!pf)
+		return false;
+
+	bitmap_set(check_bits, 0, __ICE_STATE_NOMINAL_CHECK_BITS);
+	if (bitmap_intersects(pf->state, check_bits, __ICE_STATE_NBITS))
+		return false;
+
+	return true;
+}
+
 /**
  * ice_update_eth_stats - Update VSI-specific ethernet statistics counters
  * @vsi: the VSI to be updated
@@ -2390,6 +2414,15 @@ void ice_vsi_free_rx_rings(struct ice_vsi *vsi)
  */
 void ice_vsi_close(struct ice_vsi *vsi)
 {
+	enum iidc_close_reason reason = IIDC_REASON_INTERFACE_DOWN;
+
+	if (!ice_is_safe_mode(vsi->back) && vsi->type == ICE_VSI_PF) {
+		int ret = ice_for_each_peer(vsi->back, &reason, ice_peer_close);
+
+		if (ret)
+			dev_dbg(ice_pf_to_dev(vsi->back), "Peer device did not implement close function\n");
+	}
+
 	if (!test_and_set_bit(__ICE_DOWN, vsi->state))
 		ice_down(vsi);
 
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h
index db07cc065b10..f77ddd6883c3 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.h
+++ b/drivers/net/ethernet/intel/ice/ice_lib.h
@@ -14,6 +14,8 @@ ice_add_mac_to_list(struct ice_vsi *vsi, struct list_head *add_list,
 
 void ice_free_fltr_list(struct device *dev, struct list_head *h);
 
+bool ice_pf_state_is_nominal(struct ice_pf *pf);
+
 void ice_update_eth_stats(struct ice_vsi *vsi);
 
 int ice_vsi_cfg_rxqs(struct ice_vsi *vsi);
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 033e463bcdf1..ac0c6d5b01e4 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -1492,6 +1492,9 @@ static void ice_service_task(struct work_struct *work)
 		return;
 	}
 
+	/* Invoke remaining initialization of peer devices */
+	ice_for_each_peer(pf, NULL, ice_finish_init_peer_device);
+
 	ice_process_vflr_event(pf);
 	ice_clean_mailboxq_subtask(pf);
 
@@ -3451,6 +3454,7 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
 static void ice_remove(struct pci_dev *pdev)
 {
 	struct ice_pf *pf = pci_get_drvdata(pdev);
+	enum iidc_close_reason reason;
 	int i;
 
 	if (!pf)
@@ -3467,8 +3471,12 @@ static void ice_remove(struct pci_dev *pdev)
 		ice_free_vfs(pf);
 	}
 
-	set_bit(__ICE_DOWN, pf->state);
 	ice_service_task_stop(pf);
+	if (ice_is_peer_ena(pf)) {
+		reason = IIDC_REASON_INTERFACE_DOWN;
+		ice_for_each_peer(pf, &reason, ice_peer_close);
+	}
+	set_bit(__ICE_DOWN, pf->state);
 
 	ice_devlink_destroy_port(pf);
 	ice_vsi_release_all(pf);
@@ -4785,7 +4793,15 @@ static void ice_rebuild(struct ice_pf *pf, enum ice_reset_req reset_type)
 		dev_err(dev, "PF VSI rebuild failed: %d\n", err);
 		goto err_vsi_rebuild;
 	}
+	if (ice_is_peer_ena(pf)) {
+		struct ice_vsi *vsi = ice_get_main_vsi(pf);
 
+		if (!vsi) {
+			dev_err(dev, "No PF_VSI to update peer\n");
+			goto err_vsi_rebuild;
+		}
+		ice_for_each_peer(pf, vsi, ice_peer_update_vsi);
+	}
 	if (test_bit(ICE_FLAG_SRIOV_ENA, pf->flags)) {
 		err = ice_vsi_rebuild_by_type(pf, ICE_VSI_VF);
 		if (err) {
diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c
index 51825a203e35..cf8e1553599a 100644
--- a/drivers/net/ethernet/intel/ice/ice_switch.c
+++ b/drivers/net/ethernet/intel/ice/ice_switch.c
@@ -430,6 +430,29 @@ ice_update_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
 	return ice_aq_update_vsi(hw, vsi_ctx, cd);
 }
 
+/**
+ * ice_cfg_iwarp_fltr - enable/disable iWARP filtering on VSI
+ * @hw: pointer to HW struct
+ * @vsi_handle: VSI SW index
+ * @enable: boolean for enable/disable
+ */
+enum ice_status
+ice_cfg_iwarp_fltr(struct ice_hw *hw, u16 vsi_handle, bool enable)
+{
+	struct ice_vsi_ctx *ctx;
+
+	ctx = ice_get_vsi_ctx(hw, vsi_handle);
+	if (!ctx)
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	if (enable)
+		ctx->info.q_opt_flags |= ICE_AQ_VSI_Q_OPT_PE_FLTR_EN;
+	else
+		ctx->info.q_opt_flags &= ~ICE_AQ_VSI_Q_OPT_PE_FLTR_EN;
+
+	return ice_update_vsi(hw, vsi_handle, ctx, NULL);
+}
+
 /**
  * ice_aq_alloc_free_vsi_list
  * @hw: pointer to the HW struct
diff --git a/drivers/net/ethernet/intel/ice/ice_switch.h b/drivers/net/ethernet/intel/ice/ice_switch.h
index fa14b9545dab..96010d3d96fd 100644
--- a/drivers/net/ethernet/intel/ice/ice_switch.h
+++ b/drivers/net/ethernet/intel/ice/ice_switch.h
@@ -220,6 +220,8 @@ void ice_remove_vsi_fltr(struct ice_hw *hw, u16 vsi_handle);
 enum ice_status
 ice_add_vlan(struct ice_hw *hw, struct list_head *m_list);
 enum ice_status ice_remove_vlan(struct ice_hw *hw, struct list_head *v_list);
+enum ice_status
+ice_cfg_iwarp_fltr(struct ice_hw *hw, u16 vsi_handle, bool enable);
 
 /* Promisc/defport setup for VSIs */
 enum ice_status
diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
index 15191a325918..07f3d4b456c7 100644
--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
@@ -1375,31 +1375,6 @@ static int ice_alloc_vfs(struct ice_pf *pf, u16 num_alloc_vfs)
 	return ret;
 }
 
-/**
- * ice_pf_state_is_nominal - checks the PF for nominal state
- * @pf: pointer to PF to check
- *
- * Check the PF's state for a collection of bits that would indicate
- * the PF is in a state that would inhibit normal operation for
- * driver functionality.
- *
- * Returns true if PF is in a nominal state.
- * Returns false otherwise
- */
-static bool ice_pf_state_is_nominal(struct ice_pf *pf)
-{
-	DECLARE_BITMAP(check_bits, __ICE_STATE_NBITS) = { 0 };
-
-	if (!pf)
-		return false;
-
-	bitmap_set(check_bits, 0, __ICE_STATE_NOMINAL_CHECK_BITS);
-	if (bitmap_intersects(pf->state, check_bits, __ICE_STATE_NBITS))
-		return false;
-
-	return true;
-}
-
 /**
  * ice_pci_sriov_ena - Enable or change number of VFs
  * @pf: pointer to the PF structure
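
ice_finish_init_peer_device() above will not open a peer that has failed to
supply open, close, and a nonzero driver_id. A minimal iidc_peer_ops sketch
that satisfies those checks; the my_* names are hypothetical:

#include <linux/net/intel/iidc.h>

static int my_open(struct iidc_peer_dev *peer_dev)
{
	/* e.g. enable iWARP filtering on the PF VSI before first use */
	return peer_dev->ops->update_vsi_filter(peer_dev,
						IIDC_RDMA_FILTER_IWARP,
						true);
}

static void my_close(struct iidc_peer_dev *peer_dev,
		     enum iidc_close_reason reason)
{
	/* quiesce; issue no further IDC calls until re-opened */
}

static const struct iidc_peer_ops my_peer_ops = {
	.open	= my_open,
	.close	= my_close,
};

With these in place, plus a nonzero driver_id in iidc_peer_drv, the service
task's next pass through ice_finish_init_peer_device() would transition the
peer to OPENED.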
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [net-next v4 04/12] ice: Support resource allocation requests
  2020-05-20  7:02 [net-next v4 00/12][pull request] 100GbE Intel Wired LAN Driver Updates 2020-05-19 Jeff Kirsher
                   ` (2 preceding siblings ...)
  2020-05-20  7:02 ` [net-next v4 03/12] ice: Complete RDMA peer registration Jeff Kirsher
@ 2020-05-20  7:02 ` Jeff Kirsher
  2020-05-20  7:02 ` [net-next v4 05/12] ice: Enable event notifications Jeff Kirsher
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 69+ messages in thread
From: Jeff Kirsher @ 2020-05-20  7:02 UTC (permalink / raw)
  To: davem, gregkh
  Cc: Dave Ertman, netdev, linux-rdma, nhorman, sassmann, jgg,
	ranjani.sridharan, pierre-louis.bossart, Tony Nguyen,
	Andrew Bowers, Jeff Kirsher

From: Dave Ertman <david.m.ertman@intel.com>

Enable the peer device to request queue sets from the PF.

Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
 drivers/net/ethernet/intel/ice/ice.h          |   1 +
 .../net/ethernet/intel/ice/ice_adminq_cmd.h   |  32 +++
 drivers/net/ethernet/intel/ice/ice_common.c   | 188 ++++++++++++++
 drivers/net/ethernet/intel/ice/ice_common.h   |   9 +
 drivers/net/ethernet/intel/ice/ice_idc.c      | 244 ++++++++++++++++++
 drivers/net/ethernet/intel/ice/ice_sched.c    |  69 ++++-
 drivers/net/ethernet/intel/ice/ice_switch.c   |   4 +
 drivers/net/ethernet/intel/ice/ice_switch.h   |   2 +
 drivers/net/ethernet/intel/ice/ice_type.h     |   3 +
 9 files changed, 547 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index 73366009ef03..6ad1894eca3f 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -296,6 +296,7 @@ struct ice_vsi {
 	u16 req_rxq;			 /* User requested Rx queues */
 	u16 num_rx_desc;
 	u16 num_tx_desc;
+	u16 qset_handle[ICE_MAX_TRAFFIC_CLASS];
 	struct ice_tc_cfg tc_cfg;
 	struct bpf_prog *xdp_prog;
 	struct ice_ring **xdp_rings;	 /* XDP ring array */
diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
index 51baab0621a2..a1066c4bf40d 100644
--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
@@ -1536,6 +1536,36 @@ struct ice_aqc_dis_txq {
 	struct ice_aqc_dis_txq_item qgrps[1];
 };
 
+/* Add Tx RDMA Queue Set (indirect 0x0C33) */
+struct ice_aqc_add_rdma_qset {
+	u8 num_qset_grps;
+	u8 reserved[7];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+/* This is the descriptor of each qset entry for the Add Tx RDMA Queue Set
+ * command (0x0C33). Only used within struct ice_aqc_add_rdma_qset_data.
+ */
+struct ice_aqc_add_tx_rdma_qset_entry {
+	__le16 tx_qset_id;
+	u8 rsvd[2];
+	__le32 qset_teid;
+	struct ice_aqc_txsched_elem info;
+};
+
+/* The format of the command buffer for Add Tx RDMA Queue Set (0x0C33)
+ * is an array of the following structs. Note that the length of each
+ * struct ice_aqc_add_rdma_qset_data is variable due to the variable
+ * number of qsets in each group!
+ */
+struct ice_aqc_add_rdma_qset_data {
+	__le32 parent_teid;
+	__le16 num_qsets;
+	u8 rsvd[2];
+	struct ice_aqc_add_tx_rdma_qset_entry rdma_qsets[1];
+};
+
 /* Configure Firmware Logging Command (indirect 0xFF09)
  * Logging Information Read Response (indirect 0xFF10)
  * Note: The 0xFF10 command has no input parameters.
@@ -1732,6 +1762,7 @@ struct ice_aq_desc {
 		struct ice_aqc_get_set_rss_key get_set_rss_key;
 		struct ice_aqc_add_txqs add_txqs;
 		struct ice_aqc_dis_txqs dis_txqs;
+		struct ice_aqc_add_rdma_qset add_rdma_qset;
 		struct ice_aqc_add_get_update_free_vsi vsi_cmd;
 		struct ice_aqc_add_update_free_vsi_resp add_update_free_vsi_res;
 		struct ice_aqc_fw_logging fw_logging;
@@ -1867,6 +1898,7 @@ enum ice_adminq_opc {
 	/* Tx queue handling commands/events */
 	ice_aqc_opc_add_txqs				= 0x0C30,
 	ice_aqc_opc_dis_txqs				= 0x0C31,
+	ice_aqc_opc_add_rdma_qset			= 0x0C33,
 
 	/* package commands */
 	ice_aqc_opc_download_pkg			= 0x0C40,
diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index 2dca49aed5bb..c760fae4aed4 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -2917,6 +2917,59 @@ ice_aq_dis_lan_txq(struct ice_hw *hw, u8 num_qgrps,
 	return status;
 }
 
+/**
+ * ice_aq_add_rdma_qsets
+ * @hw: pointer to the hardware structure
+ * @num_qset_grps: Number of RDMA Qset groups
+ * @qset_list: list of qset groups to be added
+ * @buf_size: size of buffer for indirect command
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add Tx RDMA Qsets (0x0C33)
+ */
+static enum ice_status
+ice_aq_add_rdma_qsets(struct ice_hw *hw, u8 num_qset_grps,
+		      struct ice_aqc_add_rdma_qset_data *qset_list,
+		      u16 buf_size, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_add_rdma_qset_data *list;
+	u16 i, sum_header_size, sum_q_size = 0;
+	struct ice_aqc_add_rdma_qset *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.add_rdma_qset;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_add_rdma_qset);
+
+	if (!qset_list)
+		return ICE_ERR_PARAM;
+
+	if (num_qset_grps > ICE_LAN_TXQ_MAX_QGRPS)
+		return ICE_ERR_PARAM;
+
+	sum_header_size = num_qset_grps *
+		(sizeof(*qset_list) - sizeof(*qset_list->rdma_qsets));
+
+	list = qset_list;
+	for (i = 0; i < num_qset_grps; i++) {
+		struct ice_aqc_add_tx_rdma_qset_entry *qset = list->rdma_qsets;
+		u16 num_qsets = le16_to_cpu(list->num_qsets);
+
+		sum_q_size += num_qsets * sizeof(*qset);
+		list = (struct ice_aqc_add_rdma_qset_data *)
+			(qset + num_qsets);
+	}
+
+	if (buf_size != (sum_header_size + sum_q_size))
+		return ICE_ERR_PARAM;
+
+	desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD);
+
+	cmd->num_qset_grps = num_qset_grps;
+
+	return ice_aq_send_cmd(hw, &desc, qset_list, buf_size, cd);
+}
+
 /* End of FW Admin Queue command wrappers */
 
 /**
@@ -3388,6 +3441,141 @@ ice_cfg_vsi_lan(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
 			      ICE_SCHED_NODE_OWNER_LAN);
 }
 
+/**
+ * ice_cfg_vsi_rdma - configure the VSI RDMA queues
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc_bitmap: TC bitmap
+ * @max_rdmaqs: max RDMA queues array per TC
+ *
+ * This function adds/updates the VSI RDMA queues per TC.
+ */
+enum ice_status
+ice_cfg_vsi_rdma(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
+		 u16 *max_rdmaqs)
+{
+	return ice_cfg_vsi_qs(pi, vsi_handle, tc_bitmap, max_rdmaqs,
+			      ICE_SCHED_NODE_OWNER_RDMA);
+}
+
+/**
+ * ice_ena_vsi_rdma_qset
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: TC number
+ * @rdma_qset: pointer to RDMA qset
+ * @num_qsets: number of RDMA qsets
+ * @qset_teid: pointer to qset node teids
+ *
+ * This function adds RDMA qsets
+ */
+enum ice_status
+ice_ena_vsi_rdma_qset(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+		      u16 *rdma_qset, u16 num_qsets, u32 *qset_teid)
+{
+	struct ice_aqc_txsched_elem_data node = { 0 };
+	struct ice_aqc_add_rdma_qset_data *buf;
+	struct ice_sched_node *parent;
+	enum ice_status status;
+	struct ice_hw *hw;
+	u16 i, buf_size;
+
+	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
+		return ICE_ERR_CFG;
+	hw = pi->hw;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	buf_size = struct_size(buf, rdma_qsets, num_qsets - 1);
+	buf = kzalloc(buf_size, GFP_KERNEL);
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+	mutex_lock(&pi->sched_lock);
+
+	parent = ice_sched_get_free_qparent(pi, vsi_handle, tc,
+					    ICE_SCHED_NODE_OWNER_RDMA);
+	if (!parent) {
+		status = ICE_ERR_PARAM;
+		goto rdma_error_exit;
+	}
+	buf->parent_teid = parent->info.node_teid;
+	node.parent_teid = parent->info.node_teid;
+
+	buf->num_qsets = cpu_to_le16(num_qsets);
+	for (i = 0; i < num_qsets; i++) {
+		buf->rdma_qsets[i].tx_qset_id = cpu_to_le16(rdma_qset[i]);
+		buf->rdma_qsets[i].info.valid_sections =
+						ICE_AQC_ELEM_VALID_GENERIC;
+	}
+	status = ice_aq_add_rdma_qsets(hw, 1, buf, buf_size, NULL);
+	if (status) {
+		ice_debug(hw, ICE_DBG_RDMA, "add RDMA qset failed\n");
+		goto rdma_error_exit;
+	}
+	node.data.elem_type = ICE_AQC_ELEM_TYPE_LEAF;
+	for (i = 0; i < num_qsets; i++) {
+		node.node_teid = buf->rdma_qsets[i].qset_teid;
+		status = ice_sched_add_node(pi, hw->num_tx_sched_layers - 1,
+					    &node);
+		if (status)
+			break;
+		qset_teid[i] = le32_to_cpu(node.node_teid);
+	}
+rdma_error_exit:
+	mutex_unlock(&pi->sched_lock);
+	kfree(buf);
+	return status;
+}
+
+/**
+ * ice_dis_vsi_rdma_qset - free RDMA resources
+ * @pi: port_info struct
+ * @count: number of RDMA qsets to free
+ * @qset_teid: TEID of qset node
+ * @q_id: list of queue IDs being disabled
+ */
+enum ice_status
+ice_dis_vsi_rdma_qset(struct ice_port_info *pi, u16 count, u32 *qset_teid,
+		      u16 *q_id)
+{
+	struct ice_aqc_dis_txq_item qg_list;
+	enum ice_status status = 0;
+	u16 qg_size;
+	int i;
+
+	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
+		return ICE_ERR_CFG;
+
+	qg_size = sizeof(qg_list);
+
+	mutex_lock(&pi->sched_lock);
+
+	for (i = 0; i < count; i++) {
+		struct ice_sched_node *node;
+
+		node = ice_sched_find_node_by_teid(pi->root, qset_teid[i]);
+		if (!node)
+			continue;
+
+		qg_list.parent_teid = node->info.parent_teid;
+		qg_list.num_qs = 1;
+		qg_list.q_id[0] =
+			cpu_to_le16(q_id[i] |
+				    ICE_AQC_Q_DIS_BUF_ELEM_TYPE_RDMA_QSET);
+
+		status = ice_aq_dis_lan_txq(pi->hw, 1, &qg_list, qg_size,
+					    ICE_NO_RESET, 0, NULL);
+		if (status)
+			break;
+
+		ice_free_sched_node(pi, node);
+	}
+
+	mutex_unlock(&pi->sched_lock);
+	return status;
+}
+
 /**
  * ice_replay_pre_init - replay pre initialization
  * @hw: pointer to the HW struct
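
The buf_size validation in ice_aq_add_rdma_qsets() above mirrors this
layout: each group contributes its header plus num_qsets entries. For the
common single-group case the sizing collapses to the struct_size() idiom
used by ice_ena_vsi_rdma_qset(); a sketch under that assumption (the
function name is hypothetical):

static u16 my_rdma_qset_buf_size(u16 num_qsets)
{
	struct ice_aqc_add_rdma_qset_data *buf = NULL;

	/* the struct already contains one rdma_qsets[] element, so
	 * struct_size() is asked for num_qsets - 1 additional entries
	 */
	return struct_size(buf, rdma_qsets, num_qsets - 1);
}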
diff --git a/drivers/net/ethernet/intel/ice/ice_common.h b/drivers/net/ethernet/intel/ice/ice_common.h
index 8104f3d64d96..db63fd6b5608 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.h
+++ b/drivers/net/ethernet/intel/ice/ice_common.h
@@ -125,6 +125,15 @@ ice_aq_sff_eeprom(struct ice_hw *hw, u16 lport, u8 bus_addr,
 		  bool write, struct ice_sq_cd *cd);
 
 enum ice_status
+ice_cfg_vsi_rdma(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
+		 u16 *max_rdmaqs);
+enum ice_status
+ice_ena_vsi_rdma_qset(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+		      u16 *rdma_qset, u16 num_qsets, u32 *qset_teid);
+enum ice_status
+ice_dis_vsi_rdma_qset(struct ice_port_info *pi, u16 count, u32 *qset_teid,
+		      u16 *q_id);
+enum ice_status
 ice_dis_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_queues,
 		u16 *q_handle, u16 *q_ids, u32 *q_teids,
 		enum ice_disq_rst_src rst_src, u16 vmvf_num,
diff --git a/drivers/net/ethernet/intel/ice/ice_idc.c b/drivers/net/ethernet/intel/ice/ice_idc.c
index 499c1b77dfc9..05fa5c61e2d3 100644
--- a/drivers/net/ethernet/intel/ice/ice_idc.c
+++ b/drivers/net/ethernet/intel/ice/ice_idc.c
@@ -388,6 +388,248 @@ ice_unroll_peer(struct ice_peer_dev_int *peer_dev_int,
 	return 0;
 }
 
+/**
+ * ice_find_vsi - Find the VSI from VSI ID
+ * @pf: The PF pointer to search in
+ * @vsi_num: The VSI ID to search for
+ */
+static struct ice_vsi *ice_find_vsi(struct ice_pf *pf, u16 vsi_num)
+{
+	int i;
+
+	ice_for_each_vsi(pf, i)
+		if (pf->vsi[i] && pf->vsi[i]->vsi_num == vsi_num)
+			return pf->vsi[i];
+	return NULL;
+}
+
+/**
+ * ice_peer_alloc_rdma_qsets - Allocate Leaf Nodes for RDMA Qset
+ * @peer_dev: peer that is requesting the Leaf Nodes
+ * @res: Resources to be allocated
+ * @partial_acceptable: If partial allocation is acceptable to the peer
+ *
+ * This function allocates Leaf Nodes for given RDMA Qset resources
+ * for the peer device.
+ */
+static int
+ice_peer_alloc_rdma_qsets(struct iidc_peer_dev *peer_dev, struct iidc_res *res,
+			  int __always_unused partial_acceptable)
+{
+	u16 max_rdmaqs[ICE_MAX_TRAFFIC_CLASS];
+	enum ice_status status;
+	struct ice_vsi *vsi;
+	struct device *dev;
+	struct ice_pf *pf;
+	int i, ret = 0;
+	u32 *qset_teid;
+	u16 *qs_handle;
+
+	if (!ice_validate_peer_dev(peer_dev) || !res)
+		return -EINVAL;
+
+	pf = pci_get_drvdata(peer_dev->pdev);
+	dev = ice_pf_to_dev(pf);
+
+	if (res->cnt_req > ICE_MAX_TXQ_PER_TXQG)
+		return -EINVAL;
+
+	qset_teid = kcalloc(res->cnt_req, sizeof(*qset_teid), GFP_KERNEL);
+	if (!qset_teid)
+		return -ENOMEM;
+
+	qs_handle = kcalloc(res->cnt_req, sizeof(*qs_handle), GFP_KERNEL);
+	if (!qs_handle) {
+		kfree(qset_teid);
+		return -ENOMEM;
+	}
+
+	ice_for_each_traffic_class(i)
+		max_rdmaqs[i] = 0;
+
+	for (i = 0; i < res->cnt_req; i++) {
+		struct iidc_rdma_qset_params *qset;
+
+		qset = &res->res[i].res.qsets;
+		if (qset->vsi_id != peer_dev->pf_vsi_num) {
+			dev_err(dev, "RDMA QSet invalid VSI requested\n");
+			ret = -EINVAL;
+			goto out;
+		}
+		max_rdmaqs[qset->tc]++;
+		qs_handle[i] = qset->qs_handle;
+	}
+
+	vsi = ice_find_vsi(pf, peer_dev->pf_vsi_num);
+	if (!vsi) {
+		dev_err(dev, "RDMA QSet invalid VSI\n");
+		ret = -EINVAL;
+		goto out;
+	}
+
+	status = ice_cfg_vsi_rdma(vsi->port_info, vsi->idx, vsi->tc_cfg.ena_tc,
+				  max_rdmaqs);
+	if (status) {
+		dev_err(dev, "Failed VSI RDMA qset config\n");
+		ret = -EINVAL;
+		goto out;
+	}
+
+	for (i = 0; i < res->cnt_req; i++) {
+		struct iidc_rdma_qset_params *qset;
+
+		qset = &res->res[i].res.qsets;
+		status = ice_ena_vsi_rdma_qset(vsi->port_info, vsi->idx,
+					       qset->tc, &qs_handle[i], 1,
+					       &qset_teid[i]);
+		if (status) {
+			dev_err(dev, "Failed VSI RDMA qset enable\n");
+			ret = -EINVAL;
+			goto out;
+		}
+		vsi->qset_handle[qset->tc] = qset->qs_handle;
+		qset->teid = qset_teid[i];
+	}
+
+out:
+	kfree(qset_teid);
+	kfree(qs_handle);
+	return ret;
+}
+
+/**
+ * ice_peer_free_rdma_qsets - Free leaf nodes for RDMA Qset
+ * @peer_dev: peer that requested qsets to be freed
+ * @res: Resource to be freed
+ */
+static int
+ice_peer_free_rdma_qsets(struct iidc_peer_dev *peer_dev, struct iidc_res *res)
+{
+	enum ice_status status;
+	int count, i, ret = 0;
+	struct ice_vsi *vsi;
+	struct device *dev;
+	struct ice_pf *pf;
+	u16 vsi_id;
+	u32 *teid;
+	u16 *q_id;
+
+	if (!ice_validate_peer_dev(peer_dev) || !res)
+		return -EINVAL;
+
+	pf = pci_get_drvdata(peer_dev->pdev);
+	dev = ice_pf_to_dev(pf);
+
+	count = res->res_allocated;
+	if (count > ICE_MAX_TXQ_PER_TXQG)
+		return -EINVAL;
+
+	teid = kcalloc(count, sizeof(*teid), GFP_KERNEL);
+	if (!teid)
+		return -ENOMEM;
+
+	q_id = kcalloc(count, sizeof(*q_id), GFP_KERNEL);
+	if (!q_id) {
+		kfree(teid);
+		return -ENOMEM;
+	}
+
+	vsi_id = res->res[0].res.qsets.vsi_id;
+	vsi = ice_find_vsi(pf, vsi_id);
+	if (!vsi) {
+		dev_err(dev, "RDMA Invalid VSI\n");
+		ret = -EINVAL;
+		goto rdma_free_out;
+	}
+
+	for (i = 0; i < count; i++) {
+		struct iidc_rdma_qset_params *qset;
+
+		qset = &res->res[i].res.qsets;
+		if (qset->vsi_id != vsi_id) {
+			dev_err(dev, "RDMA Invalid VSI ID\n");
+			ret = -EINVAL;
+			goto rdma_free_out;
+		}
+		q_id[i] = qset->qs_handle;
+		teid[i] = qset->teid;
+
+		vsi->qset_handle[qset->tc] = 0;
+	}
+
+	status = ice_dis_vsi_rdma_qset(vsi->port_info, count, teid, q_id);
+	if (status)
+		ret = -EINVAL;
+
+rdma_free_out:
+	kfree(teid);
+	kfree(q_id);
+
+	return ret;
+}
+
+/**
+ * ice_peer_alloc_res - Allocate requested resources for peer device
+ * @peer_dev: peer that is requesting resources
+ * @res: Resources to be allocated
+ * @partial_acceptable: If partial allocation is acceptable to the peer
+ *
+ * This function allocates requested resources for the peer device.
+ */
+static int
+ice_peer_alloc_res(struct iidc_peer_dev *peer_dev, struct iidc_res *res,
+		   int partial_acceptable)
+{
+	struct ice_pf *pf;
+	int ret;
+
+	if (!ice_validate_peer_dev(peer_dev) || !res)
+		return -EINVAL;
+
+	pf = pci_get_drvdata(peer_dev->pdev);
+	if (!ice_pf_state_is_nominal(pf))
+		return -EBUSY;
+
+	switch (res->res_type) {
+	case IIDC_RDMA_QSETS_TXSCHED:
+		ret = ice_peer_alloc_rdma_qsets(peer_dev, res,
+						partial_acceptable);
+		break;
+	default:
+		ret = -EINVAL;
+		break;
+	}
+
+	return ret;
+}
+
+/**
+ * ice_peer_free_res - Free given resources
+ * @peer_dev: peer that is requesting freeing of resources
+ * @res: Resources to be freed
+ *
+ * Free/Release resources allocated to given peer device.
+ */
+static int
+ice_peer_free_res(struct iidc_peer_dev *peer_dev, struct iidc_res *res)
+{
+	int ret;
+
+	if (!ice_validate_peer_dev(peer_dev) || !res)
+		return -EINVAL;
+
+	switch (res->res_type) {
+	case IIDC_RDMA_QSETS_TXSCHED:
+		ret = ice_peer_free_rdma_qsets(peer_dev, res);
+		break;
+	default:
+		ret = -EINVAL;
+		break;
+	}
+
+	return ret;
+}
+
 /**
  * ice_peer_unregister - request to unregister peer
  * @peer_dev: peer device
@@ -511,6 +753,8 @@ ice_peer_update_vsi_filter(struct iidc_peer_dev *peer_dev,
 
 /* Initialize the ice_ops struct, which is used in 'ice_init_peer_devices' */
 static const struct iidc_ops ops = {
+	.alloc_res			= ice_peer_alloc_res,
+	.free_res			= ice_peer_free_res,
 	.peer_register			= ice_peer_register,
 	.peer_unregister		= ice_peer_unregister,
 	.update_vsi_filter		= ice_peer_update_vsi_filter,
diff --git a/drivers/net/ethernet/intel/ice/ice_sched.c b/drivers/net/ethernet/intel/ice/ice_sched.c
index eae707ddf8e8..2f618d051b56 100644
--- a/drivers/net/ethernet/intel/ice/ice_sched.c
+++ b/drivers/net/ethernet/intel/ice/ice_sched.c
@@ -577,6 +577,50 @@ ice_alloc_lan_q_ctx(struct ice_hw *hw, u16 vsi_handle, u8 tc, u16 new_numqs)
 	return 0;
 }
 
+/**
+ * ice_alloc_rdma_q_ctx - allocate RDMA queue contexts for the given VSI and TC
+ * @hw: pointer to the HW struct
+ * @vsi_handle: VSI handle
+ * @tc: TC number
+ * @new_numqs: number of queues
+ */
+static enum ice_status
+ice_alloc_rdma_q_ctx(struct ice_hw *hw, u16 vsi_handle, u8 tc, u16 new_numqs)
+{
+	struct ice_vsi_ctx *vsi_ctx;
+	struct ice_q_ctx *q_ctx;
+
+	vsi_ctx = ice_get_vsi_ctx(hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+	/* allocate RDMA queue contexts */
+	if (!vsi_ctx->rdma_q_ctx[tc]) {
+		vsi_ctx->rdma_q_ctx[tc] = devm_kcalloc(ice_hw_to_dev(hw),
+						       new_numqs,
+						       sizeof(*q_ctx),
+						       GFP_KERNEL);
+		if (!vsi_ctx->rdma_q_ctx[tc])
+			return ICE_ERR_NO_MEMORY;
+		vsi_ctx->num_rdma_q_entries[tc] = new_numqs;
+		return 0;
+	}
+	/* the number of queues has increased; update the queue contexts */
+	if (new_numqs > vsi_ctx->num_rdma_q_entries[tc]) {
+		u16 prev_num = vsi_ctx->num_rdma_q_entries[tc];
+
+		q_ctx = devm_kcalloc(ice_hw_to_dev(hw), new_numqs,
+				     sizeof(*q_ctx), GFP_KERNEL);
+		if (!q_ctx)
+			return ICE_ERR_NO_MEMORY;
+		memcpy(q_ctx, vsi_ctx->rdma_q_ctx[tc],
+		       prev_num * sizeof(*q_ctx));
+		devm_kfree(ice_hw_to_dev(hw), vsi_ctx->rdma_q_ctx[tc]);
+		vsi_ctx->rdma_q_ctx[tc] = q_ctx;
+		vsi_ctx->num_rdma_q_entries[tc] = new_numqs;
+	}
+	return 0;
+}
+
 /**
  * ice_aq_rl_profile - performs a rate limiting task
  * @hw: pointer to the HW struct
@@ -1599,13 +1643,22 @@ ice_sched_update_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle,
 	if (!vsi_ctx)
 		return ICE_ERR_PARAM;
 
-	prev_numqs = vsi_ctx->sched.max_lanq[tc];
+	if (owner == ICE_SCHED_NODE_OWNER_LAN)
+		prev_numqs = vsi_ctx->sched.max_lanq[tc];
+	else
+		prev_numqs = vsi_ctx->sched.max_rdmaq[tc];
 	/* number of queues is unchanged or lower than the previous count */
 	if (new_numqs <= prev_numqs)
 		return status;
-	status = ice_alloc_lan_q_ctx(hw, vsi_handle, tc, new_numqs);
-	if (status)
-		return status;
+	if (owner == ICE_SCHED_NODE_OWNER_LAN) {
+		status = ice_alloc_lan_q_ctx(hw, vsi_handle, tc, new_numqs);
+		if (status)
+			return status;
+	} else {
+		status = ice_alloc_rdma_q_ctx(hw, vsi_handle, tc, new_numqs);
+		if (status)
+			return status;
+	}
 
 	if (new_numqs)
 		ice_sched_calc_vsi_child_nodes(hw, new_numqs, new_num_nodes);
@@ -1620,7 +1673,10 @@ ice_sched_update_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle,
 					       new_num_nodes, owner);
 	if (status)
 		return status;
-	vsi_ctx->sched.max_lanq[tc] = new_numqs;
+	if (owner == ICE_SCHED_NODE_OWNER_LAN)
+		vsi_ctx->sched.max_lanq[tc] = new_numqs;
+	else
+		vsi_ctx->sched.max_rdmaq[tc] = new_numqs;
 
 	return 0;
 }
@@ -1686,6 +1742,7 @@ ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs,
 		 * recreate the child nodes all the time in these cases.
 		 */
 		vsi_ctx->sched.max_lanq[tc] = 0;
+		vsi_ctx->sched.max_rdmaq[tc] = 0;
 	}
 
 	/* update the VSI child nodes */
@@ -1817,6 +1874,8 @@ ice_sched_rm_vsi_cfg(struct ice_port_info *pi, u16 vsi_handle, u8 owner)
 		}
 		if (owner == ICE_SCHED_NODE_OWNER_LAN)
 			vsi_ctx->sched.max_lanq[i] = 0;
+		else
+			vsi_ctx->sched.max_rdmaq[i] = 0;
 	}
 	status = 0;
 
diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c
index cf8e1553599a..eeb1b0e6f716 100644
--- a/drivers/net/ethernet/intel/ice/ice_switch.c
+++ b/drivers/net/ethernet/intel/ice/ice_switch.c
@@ -310,6 +310,10 @@ static void ice_clear_vsi_q_ctx(struct ice_hw *hw, u16 vsi_handle)
 			devm_kfree(ice_hw_to_dev(hw), vsi->lan_q_ctx[i]);
 			vsi->lan_q_ctx[i] = NULL;
 		}
+		if (vsi->rdma_q_ctx[i]) {
+			devm_kfree(ice_hw_to_dev(hw), vsi->rdma_q_ctx[i]);
+			vsi->rdma_q_ctx[i] = NULL;
+		}
 	}
 }
 
diff --git a/drivers/net/ethernet/intel/ice/ice_switch.h b/drivers/net/ethernet/intel/ice/ice_switch.h
index 96010d3d96fd..acd2f150c30b 100644
--- a/drivers/net/ethernet/intel/ice/ice_switch.h
+++ b/drivers/net/ethernet/intel/ice/ice_switch.h
@@ -26,6 +26,8 @@ struct ice_vsi_ctx {
 	u8 vf_num;
 	u16 num_lan_q_entries[ICE_MAX_TRAFFIC_CLASS];
 	struct ice_q_ctx *lan_q_ctx[ICE_MAX_TRAFFIC_CLASS];
+	u16 num_rdma_q_entries[ICE_MAX_TRAFFIC_CLASS];
+	struct ice_q_ctx *rdma_q_ctx[ICE_MAX_TRAFFIC_CLASS];
 };
 
 enum ice_sw_fwd_act_type {
diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h
index 42b2d700bc1f..3ada92536540 100644
--- a/drivers/net/ethernet/intel/ice/ice_type.h
+++ b/drivers/net/ethernet/intel/ice/ice_type.h
@@ -45,6 +45,7 @@ static inline u32 ice_round_to_num(u32 N, u32 R)
 #define ICE_DBG_FLOW		BIT_ULL(9)
 #define ICE_DBG_SW		BIT_ULL(13)
 #define ICE_DBG_SCHED		BIT_ULL(14)
+#define ICE_DBG_RDMA		BIT_ULL(15)
 #define ICE_DBG_PKG		BIT_ULL(16)
 #define ICE_DBG_RES		BIT_ULL(17)
 #define ICE_DBG_AQ_MSG		BIT_ULL(24)
@@ -282,6 +283,7 @@ struct ice_sched_node {
 	u8 tc_num;
 	u8 owner;
 #define ICE_SCHED_NODE_OWNER_LAN	0
+#define ICE_SCHED_NODE_OWNER_RDMA	2
 };
 
 /* Access Macros for Tx Sched Elements data */
@@ -353,6 +355,7 @@ struct ice_sched_vsi_info {
 	struct ice_sched_node *ag_node[ICE_MAX_TRAFFIC_CLASS];
 	struct list_head list_entry;
 	u16 max_lanq[ICE_MAX_TRAFFIC_CLASS];
+	u16 max_rdmaq[ICE_MAX_TRAFFIC_CLASS];
 };
 
 /* driver defines the policy */
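
From the peer's side, the qset machinery above is driven entirely through
ops->alloc_res(). A sketch of that call for a single qset on TC 0, assuming
the iidc_res layout from patch 02; the function name and qs_handle value
are hypothetical:

#include <linux/net/intel/iidc.h>

static int my_request_qset(struct iidc_peer_dev *peer_dev)
{
	struct iidc_res res = {};	/* API requires zeroed memory */

	res.res_type = IIDC_RDMA_QSETS_TXSCHED;
	res.cnt_req = 1;
	res.res[0].res.qsets.qs_handle = 0;	/* peer's own handle */
	res.res[0].res.qsets.vsi_id = peer_dev->pf_vsi_num;
	res.res[0].res.qsets.tc = 0;

	/* on success the PF fills res.res[0].res.qsets.teid */
	return peer_dev->ops->alloc_res(peer_dev, &res, 0);
}

ice_peer_alloc_res() routes this to ice_peer_alloc_rdma_qsets(), which
validates the requested VSI and TC before programming the Tx scheduler.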
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [net-next v4 05/12] ice: Enable event notifications
  2020-05-20  7:02 [net-next v4 00/12][pull request] 100GbE Intel Wired LAN Driver Updates 2020-05-19 Jeff Kirsher
                   ` (3 preceding siblings ...)
  2020-05-20  7:02 ` [net-next v4 04/12] ice: Support resource allocation requests Jeff Kirsher
@ 2020-05-20  7:02 ` Jeff Kirsher
  2020-05-20  7:02 ` [net-next v4 06/12] ice: Allow reset operations Jeff Kirsher
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 69+ messages in thread
From: Jeff Kirsher @ 2020-05-20  7:02 UTC (permalink / raw)
  To: davem, gregkh
  Cc: Dave Ertman, netdev, linux-rdma, nhorman, sassmann, jgg,
	ranjani.sridharan, pierre-louis.bossart, Tony Nguyen,
	Andrew Bowers, Jeff Kirsher

From: Dave Ertman <david.m.ertman@intel.com>

Enable registration of notifications. Peer devices can register to be
notified of certain events as well as notify the driver of its state
changes.

Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_dcb_lib.c |  37 ++++
 drivers/net/ethernet/intel/ice/ice_idc.c     | 221 +++++++++++++++++++
 drivers/net/ethernet/intel/ice/ice_idc_int.h |   1 +
 drivers/net/ethernet/intel/ice/ice_main.c    |  27 ++-
 4 files changed, 280 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
index 24c0a60fe172..c4f8be0c0b24 100644
--- a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
@@ -168,6 +168,30 @@ void ice_vsi_cfg_dcb_rings(struct ice_vsi *vsi)
 	}
 }
 
+/**
+ * ice_peer_prep_tc_change - pre-notify RDMA peer, in a blocking call, of TC change
+ * @peer_dev_int: ptr to peer device internal struct
+ * @data: ptr to opaque data
+ */
+static int
+ice_peer_prep_tc_change(struct ice_peer_dev_int *peer_dev_int,
+			void __always_unused *data)
+{
+	struct iidc_peer_dev *peer_dev;
+
+	peer_dev = &peer_dev_int->peer_dev;
+	if (!ice_validate_peer_dev(peer_dev))
+		return 0;
+
+	if (!test_bit(ICE_PEER_DEV_STATE_OPENED, peer_dev_int->state))
+		return 0;
+
+	if (peer_dev->peer_ops && peer_dev->peer_ops->prep_tc_change)
+		peer_dev->peer_ops->prep_tc_change(peer_dev);
+
+	return 0;
+}
+
 /**
  * ice_dcb_bwchk - check if ETS bandwidth input parameters are correct
  * @pf: pointer to the PF struct
@@ -248,6 +272,9 @@ int ice_pf_dcb_cfg(struct ice_pf *pf, struct ice_dcbx_cfg *new_cfg, bool locked)
 		return -ENOMEM;
 
 	dev_info(dev, "Commit DCB Configuration to the hardware\n");
+	/* Notify capable peers about impending change to TCs */
+	ice_for_each_peer(pf, NULL, ice_peer_prep_tc_change);
+
 	pf_vsi = ice_get_main_vsi(pf);
 	if (!pf_vsi) {
 		dev_dbg(dev, "PF VSI doesn't exist\n");
@@ -580,6 +607,7 @@ static int ice_dcb_noncontig_cfg(struct ice_pf *pf)
 void ice_pf_dcb_recfg(struct ice_pf *pf)
 {
 	struct ice_dcbx_cfg *dcbcfg = &pf->hw.port_info->local_dcbx_cfg;
+	struct iidc_event *event;
 	u8 tc_map = 0;
 	int v, ret;
 
@@ -615,6 +643,15 @@ void ice_pf_dcb_recfg(struct ice_pf *pf)
 		if (vsi->type == ICE_VSI_PF)
 			ice_dcbnl_set_all(vsi);
 	}
+	event = kzalloc(sizeof(*event), GFP_KERNEL);
+	if (!event)
+		return;
+
+	set_bit(IIDC_EVENT_TC_CHANGE, event->type);
+	event->reporter = NULL;
+	ice_setup_dcb_qos_info(pf, &event->info.port_qos);
+	ice_for_each_peer(pf, event, ice_peer_check_for_reg);
+	kfree(event);
 }
 
 /**
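
The TC change broadcast above arrives in each registered peer through its
event_handler callback. A sketch of the peer-side handler, with the helper
name hypothetical:

#include <linux/net/intel/iidc.h>

/* Hypothetical peer-side handler for the TC change notification */
static void my_update_qos(struct iidc_peer_dev *peer_dev,
			  struct iidc_qos_params *qos)
{
	/* recompute the peer's per-TC resources from qos->num_tc etc. */
}

static void my_event_handler(struct iidc_peer_dev *peer_dev,
			     struct iidc_event *event)
{
	if (test_bit(IIDC_EVENT_TC_CHANGE, event->type))
		my_update_qos(peer_dev, &event->info.port_qos);
}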
diff --git a/drivers/net/ethernet/intel/ice/ice_idc.c b/drivers/net/ethernet/intel/ice/ice_idc.c
index 05fa5c61e2d3..0fb1080c19d7 100644
--- a/drivers/net/ethernet/intel/ice/ice_idc.c
+++ b/drivers/net/ethernet/intel/ice/ice_idc.c
@@ -218,6 +218,72 @@ int ice_peer_update_vsi(struct ice_peer_dev_int *peer_dev_int, void *data)
 	return 0;
 }
 
+/**
+ * ice_check_peer_drv_for_events - check peer_drv for events to report
+ * @peer_dev: peer device to report to
+ */
+static void ice_check_peer_drv_for_events(struct iidc_peer_dev *peer_dev)
+{
+	const struct iidc_peer_ops *p_ops = peer_dev->peer_ops;
+	struct ice_peer_dev_int *peer_dev_int;
+	struct ice_peer_drv_int *peer_drv_int;
+	int i;
+
+	peer_dev_int = peer_to_ice_dev_int(peer_dev);
+	if (!peer_dev_int)
+		return;
+	peer_drv_int = peer_dev_int->peer_drv_int;
+
+	for_each_set_bit(i, peer_dev_int->events, IIDC_EVENT_NBITS) {
+		struct iidc_event *curr = &peer_drv_int->current_events[i];
+
+		if (!bitmap_empty(curr->type, IIDC_EVENT_NBITS) &&
+		    p_ops->event_handler)
+			p_ops->event_handler(peer_dev, curr);
+	}
+}
+
+/**
+ * ice_check_peer_for_events - check peer_devs for events new peer reg'd for
+ * @src_peer_int: peer to check for events
+ * @data: ptr to opaque data, to be used for the peer struct that opened
+ *
+ * This function is to be called when a peer device is opened.
+ *
+ * Since a newly opened peer will have missed any events that happened
+ * before its opening, we need to walk the peers and see if any of them
+ * have events that the new peer cares about.
+ *
+ * This function is meant to be called via device_for_each_child().
+ */
+static int
+ice_check_peer_for_events(struct ice_peer_dev_int *src_peer_int, void *data)
+{
+	struct iidc_peer_dev *new_peer = (struct iidc_peer_dev *)data;
+	const struct iidc_peer_ops *p_ops = new_peer->peer_ops;
+	struct ice_peer_dev_int *new_peer_int;
+	struct iidc_peer_dev *src_peer;
+	int i;
+
+	src_peer = &src_peer_int->peer_dev;
+	if (!ice_validate_peer_dev(new_peer) ||
+	    !ice_validate_peer_dev(src_peer))
+		return 0;
+
+	new_peer_int = peer_to_ice_dev_int(new_peer);
+
+	for_each_set_bit(i, new_peer_int->events, IIDC_EVENT_NBITS) {
+		struct iidc_event *curr = &src_peer_int->current_events[i];
+
+		if (!bitmap_empty(curr->type, IIDC_EVENT_NBITS) &&
+		    new_peer->peer_dev_id != src_peer->peer_dev_id &&
+		    p_ops->event_handler)
+			p_ops->event_handler(new_peer, curr);
+	}
+
+	return 0;
+}
+
 /**
  * ice_for_each_peer - iterate across and call function for each peer dev
  * @pf: pointer to private board struct
@@ -323,6 +389,9 @@ ice_finish_init_peer_device(struct ice_peer_dev_int *peer_dev_int,
 
 		ice_peer_state_change(peer_dev_int, ICE_PEER_DEV_STATE_OPENED,
 				      true);
+		ret = ice_for_each_peer(pf, peer_dev,
+					ice_check_peer_for_events);
+		ice_check_peer_drv_for_events(peer_dev);
 	}
 
 init_unlock:
@@ -630,6 +699,155 @@ ice_peer_free_res(struct iidc_peer_dev *peer_dev, struct iidc_res *res)
 	return ret;
 }
 
+/**
+ * ice_peer_reg_for_notif - register a peer to receive specific notifications
+ * @peer_dev: peer that is registering for event notifications
+ * @events: mask of event types peer is registering for
+ */
+static void
+ice_peer_reg_for_notif(struct iidc_peer_dev *peer_dev,
+		       struct iidc_event *events)
+{
+	struct ice_peer_dev_int *peer_dev_int;
+	struct ice_pf *pf;
+
+	if (!ice_validate_peer_dev(peer_dev) || !events)
+		return;
+
+	peer_dev_int = peer_to_ice_dev_int(peer_dev);
+	pf = pci_get_drvdata(peer_dev->pdev);
+
+	bitmap_or(peer_dev_int->events, peer_dev_int->events, events->type,
+		  IIDC_EVENT_NBITS);
+
+	/* Check to see if any events happened prior to the peer registering */
+	ice_for_each_peer(pf, peer_dev, ice_check_peer_for_events);
+	ice_check_peer_drv_for_events(peer_dev);
+}
+
+/**
+ * ice_peer_unreg_for_notif - unreg a peer from receiving certain notifications
+ * @peer_dev: peer that is unregistering from event notifications
+ * @events: mask of event types peer is unregistering for
+ */
+static void
+ice_peer_unreg_for_notif(struct iidc_peer_dev *peer_dev,
+			 struct iidc_event *events)
+{
+	struct ice_peer_dev_int *peer_dev_int;
+
+	if (!ice_validate_peer_dev(peer_dev) || !events)
+		return;
+
+	peer_dev_int = peer_to_ice_dev_int(peer_dev);
+
+	bitmap_andnot(peer_dev_int->events, peer_dev_int->events, events->type,
+		      IIDC_EVENT_NBITS);
+}
+
+/**
+ * ice_peer_check_for_reg - check to see if any peers are reg'd for event
+ * @peer_dev_int: ptr to peer device internal struct
+ * @data: ptr to opaque data, to be used for ice_event to report
+ *
+ * This function is to be called by device_for_each_child to handle an
+ * event reported by a peer or the ice driver.
+ */
+int ice_peer_check_for_reg(struct ice_peer_dev_int *peer_dev_int, void *data)
+{
+	struct iidc_event *event = (struct iidc_event *)data;
+	DECLARE_BITMAP(comp_events, IIDC_EVENT_NBITS);
+	struct iidc_peer_dev *peer_dev;
+	bool check = true;
+
+	peer_dev = &peer_dev_int->peer_dev;
+
+	/* If the dev is invalid, return 0 instead of an error because the
+	 * caller ignores this return value
+	 */
+	if (!ice_validate_peer_dev(peer_dev) || !data)
+		return 0;
+
+	if (event->reporter)
+		check = event->reporter->peer_dev_id != peer_dev->peer_dev_id;
+
+	if (bitmap_and(comp_events, event->type, peer_dev_int->events,
+		       IIDC_EVENT_NBITS) &&
+	    (test_bit(ICE_PEER_DEV_STATE_OPENED, peer_dev_int->state) ||
+	     test_bit(ICE_PEER_DEV_STATE_PREP_RST, peer_dev_int->state) ||
+	     test_bit(ICE_PEER_DEV_STATE_PREPPED, peer_dev_int->state)) &&
+	    check &&
+	    peer_dev->peer_ops->event_handler)
+		peer_dev->peer_ops->event_handler(peer_dev, event);
+
+	return 0;
+}
+
+/**
+ * ice_peer_report_state_change - accept report of a peer state change
+ * @peer_dev: peer that is sending notification about state change
+ * @event: ice_event holding info on what the state change is
+ *
+ * We also need to parse the list of peers to see if anyone is registered
+ * for notifications about this state change event, and if so, notify them.
+ */
+static void
+ice_peer_report_state_change(struct iidc_peer_dev *peer_dev,
+			     struct iidc_event *event)
+{
+	struct ice_peer_dev_int *peer_dev_int;
+	struct ice_peer_drv_int *peer_drv_int;
+	int e_type, drv_event = 0;
+	struct ice_pf *pf;
+
+	if (!ice_validate_peer_dev(peer_dev) || !event)
+		return;
+
+	pf = pci_get_drvdata(peer_dev->pdev);
+	peer_dev_int = peer_to_ice_dev_int(peer_dev);
+	peer_drv_int = peer_dev_int->peer_drv_int;
+
+	e_type = find_first_bit(event->type, IIDC_EVENT_NBITS);
+	if (!e_type)
+		return;
+
+	switch (e_type) {
+	/* Check for peer_drv events */
+	case IIDC_EVENT_MBX_CHANGE:
+		drv_event = 1;
+		if (event->info.mbx_rdy)
+			set_bit(ICE_PEER_DRV_STATE_MBX_RDY,
+				peer_drv_int->state);
+		else
+			clear_bit(ICE_PEER_DRV_STATE_MBX_RDY,
+				  peer_drv_int->state);
+		break;
+
+	/* Check for peer_dev events */
+	case IIDC_EVENT_API_CHANGE:
+		if (event->info.api_rdy)
+			set_bit(ICE_PEER_DEV_STATE_API_RDY,
+				peer_dev_int->state);
+		else
+			clear_bit(ICE_PEER_DEV_STATE_API_RDY,
+				  peer_dev_int->state);
+		break;
+
+	default:
+		return;
+	}
+
+	/* store the event and state to notify any new peers opening */
+	if (drv_event)
+		memcpy(&peer_drv_int->current_events[e_type], event,
+		       sizeof(*event));
+	else
+		memcpy(&peer_dev_int->current_events[e_type], event,
+		       sizeof(*event));
+
+	ice_for_each_peer(pf, event, ice_peer_check_for_reg);
+}
+
 /**
  * ice_peer_unregister - request to unregister peer
  * @peer_dev: peer device
@@ -755,6 +973,9 @@ ice_peer_update_vsi_filter(struct iidc_peer_dev *peer_dev,
 static const struct iidc_ops ops = {
 	.alloc_res			= ice_peer_alloc_res,
 	.free_res			= ice_peer_free_res,
+	.reg_for_notification		= ice_peer_reg_for_notif,
+	.unreg_for_notification		= ice_peer_unreg_for_notif,
+	.notify_state_change		= ice_peer_report_state_change,
 	.peer_register			= ice_peer_register,
 	.peer_unregister		= ice_peer_unregister,
 	.update_vsi_filter		= ice_peer_update_vsi_filter,
diff --git a/drivers/net/ethernet/intel/ice/ice_idc_int.h b/drivers/net/ethernet/intel/ice/ice_idc_int.h
index d22e6f5bb50e..1d3d5cafc977 100644
--- a/drivers/net/ethernet/intel/ice/ice_idc_int.h
+++ b/drivers/net/ethernet/intel/ice/ice_idc_int.h
@@ -66,6 +66,7 @@ int ice_peer_update_vsi(struct ice_peer_dev_int *peer_dev_int, void *data);
 int ice_unroll_peer(struct ice_peer_dev_int *peer_dev_int, void *data);
 int ice_unreg_peer_device(struct ice_peer_dev_int *peer_dev_int, void *data);
 int ice_peer_close(struct ice_peer_dev_int *peer_dev_int, void *data);
+int ice_peer_check_for_reg(struct ice_peer_dev_int *peer_dev_int, void *data);
 int
 ice_finish_init_peer_device(struct ice_peer_dev_int *peer_dev_int, void *data);
 
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index ac0c6d5b01e4..d1a528da9128 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -4862,7 +4862,9 @@ static int ice_change_mtu(struct net_device *netdev, int new_mtu)
 	struct ice_netdev_priv *np = netdev_priv(netdev);
 	struct ice_vsi *vsi = np->vsi;
 	struct ice_pf *pf = vsi->back;
+	struct iidc_event *event;
 	u8 count = 0;
+	int err = 0;
 
 	if (new_mtu == netdev->mtu) {
 		netdev_warn(netdev, "MTU is already %u\n", netdev->mtu);
@@ -4904,27 +4906,40 @@ static int ice_change_mtu(struct net_device *netdev, int new_mtu)
 		return -EBUSY;
 	}
 
+	event = kzalloc(sizeof(*event), GFP_KERNEL);
+	if (!event)
+		return -ENOMEM;
+
 	netdev->mtu = new_mtu;
 
 	/* if VSI is up, bring it down and then back up */
 	if (!test_and_set_bit(__ICE_DOWN, vsi->state)) {
-		int err;
-
 		err = ice_down(vsi);
 		if (err) {
-			netdev_err(netdev, "change MTU if_up err %d\n", err);
-			return err;
+			netdev_err(netdev, "change MTU if_down err %d\n", err);
+			goto free_event;
 		}
 
 		err = ice_up(vsi);
 		if (err) {
 			netdev_err(netdev, "change MTU if_up err %d\n", err);
-			return err;
+			goto free_event;
 		}
 	}
 
+	if (ice_is_safe_mode(pf))
+		goto out;
+
+	set_bit(IIDC_EVENT_MTU_CHANGE, event->type);
+	event->reporter = NULL;
+	event->info.mtu = new_mtu;
+	ice_for_each_peer(pf, event, ice_peer_check_for_reg);
+
+out:
 	netdev_dbg(netdev, "changed MTU to %d\n", new_mtu);
-	return 0;
+free_event:
+	kfree(event);
+	return err;
 }
 
 /**
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [net-next v4 06/12] ice: Allow reset operations
  2020-05-20  7:02 [net-next v4 00/12][pull request] 100GbE Intel Wired LAN Driver Updates 2020-05-19 Jeff Kirsher
                   ` (4 preceding siblings ...)
  2020-05-20  7:02 ` [net-next v4 05/12] ice: Enable event notifications Jeff Kirsher
@ 2020-05-20  7:02 ` Jeff Kirsher
  2020-05-20  7:02 ` [net-next v4 07/12] ice: Pass through communications to VF Jeff Kirsher
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 69+ messages in thread
From: Jeff Kirsher @ 2020-05-20  7:02 UTC (permalink / raw)
  To: davem, gregkh
  Cc: Dave Ertman, netdev, linux-rdma, nhorman, sassmann, jgg,
	ranjani.sridharan, pierre-louis.bossart, Tony Nguyen,
	Andrew Bowers, Jeff Kirsher

From: Dave Ertman <david.m.ertman@intel.com>

Enable the PF to notify peers when it's going to reset so that peer devices
can prepare accordingly. Also enable the peer devices to request the PF to
reset.

Implement ice_peer_is_vsi_ready() so the peer device can determine when the
VSI is ready for operations following a reset.
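
For illustration, a peer driver built on top of these ops could request
a PF reset and wait for the VSI to become ready again roughly as
follows. This is a minimal sketch, assuming the PF's iidc_ops are
reachable from the peer as peer_dev->ops; the helper name and the
polling policy are made up here:

  /* hypothetical peer-side helper, not part of this patch */
  static int peer_reset_and_wait(struct iidc_peer_dev *peer_dev)
  {
  	int i, err;

  	/* ask the PF to perform a PF-level reset */
  	err = peer_dev->ops->request_reset(peer_dev, IIDC_PEER_PFR);
  	if (err)
  		return err;

  	/* poll until the PF reports the VSI is in a nominal state */
  	for (i = 0; i < 100; i++) {
  		if (peer_dev->ops->is_vsi_ready(peer_dev))
  			return 0;
  		msleep(100);
  	}

  	return -ETIMEDOUT;
  }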

Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_idc.c     | 140 +++++++++++++++++++
 drivers/net/ethernet/intel/ice/ice_idc_int.h |   1 +
 drivers/net/ethernet/intel/ice/ice_lib.c     |   6 +
 drivers/net/ethernet/intel/ice/ice_main.c    |   3 +
 4 files changed, 150 insertions(+)

diff --git a/drivers/net/ethernet/intel/ice/ice_idc.c b/drivers/net/ethernet/intel/ice/ice_idc.c
index 0fb1080c19d7..748e9134a113 100644
--- a/drivers/net/ethernet/intel/ice/ice_idc.c
+++ b/drivers/net/ethernet/intel/ice/ice_idc.c
@@ -218,6 +218,40 @@ int ice_peer_update_vsi(struct ice_peer_dev_int *peer_dev_int, void *data)
 	return 0;
 }
 
+/**
+ * ice_close_peer_for_reset - queue work to close peer for reset
+ * @peer_dev_int: pointer to peer dev internal struct
+ * @data: pointer to opaque data used for reset type
+ */
+int ice_close_peer_for_reset(struct ice_peer_dev_int *peer_dev_int, void *data)
+{
+	struct iidc_peer_dev *peer_dev;
+	enum ice_reset_req reset;
+
+	peer_dev = &peer_dev_int->peer_dev;
+	if (!ice_validate_peer_dev(peer_dev))
+		return 0;
+
+	reset = *(enum ice_reset_req *)data;
+
+	switch (reset) {
+	case ICE_RESET_GLOBR:
+		peer_dev_int->rst_type = IIDC_REASON_GLOBR_REQ;
+		break;
+	case ICE_RESET_CORER:
+		peer_dev_int->rst_type = IIDC_REASON_CORER_REQ;
+		break;
+	case ICE_RESET_PFR:
+		peer_dev_int->rst_type = IIDC_REASON_PFR_REQ;
+		break;
+	default:
+		/* reset type is invalid */
+		return 1;
+	}
+	queue_work(peer_dev_int->ice_peer_wq, &peer_dev_int->peer_close_task);
+	return 0;
+}
+
 /**
  * ice_check_peer_drv_for_events - check peer_drv for events to report
  * @peer_dev: peer device to report to
@@ -930,6 +964,74 @@ static int ice_peer_register(struct iidc_peer_dev *peer_dev)
 	return 0;
 }
 
+/**
+ * ice_peer_request_reset - accept request from peer to perform a reset
+ * @peer_dev: peer device that is requesting a reset
+ * @reset_type: type of reset the peer is requesting
+ */
+static int
+ice_peer_request_reset(struct iidc_peer_dev *peer_dev,
+		       enum iidc_peer_reset_type reset_type)
+{
+	enum ice_reset_req reset;
+	struct ice_pf *pf;
+
+	if (!ice_validate_peer_dev(peer_dev))
+		return -EINVAL;
+
+	pf = pci_get_drvdata(peer_dev->pdev);
+
+	switch (reset_type) {
+	case IIDC_PEER_PFR:
+		reset = ICE_RESET_PFR;
+		break;
+	case IIDC_PEER_CORER:
+		reset = ICE_RESET_CORER;
+		break;
+	case IIDC_PEER_GLOBR:
+		reset = ICE_RESET_GLOBR;
+		break;
+	default:
+		dev_err(ice_pf_to_dev(pf), "incorrect reset request from peer\n");
+		return -EINVAL;
+	}
+
+	return ice_schedule_reset(pf, reset);
+}
+
+/**
+ * ice_peer_is_vsi_ready - query if VSI in nominal state
+ * @peer_dev: pointer to iidc_peer_dev struct
+ */
+static int ice_peer_is_vsi_ready(struct iidc_peer_dev *peer_dev)
+{
+	DECLARE_BITMAP(check_bits, __ICE_STATE_NBITS) = { 0 };
+	struct ice_netdev_priv *np;
+	struct ice_vsi *vsi;
+
+	/* If the peer_dev or associated values are not valid, then return
+	 * 0 as there is no ready port associated with the values passed in
+	 * as parameters.
+	 */
+
+	if (!ice_validate_peer_dev(peer_dev))
+		return 0;
+
+	if (!peer_dev->netdev)
+		return 0;
+
+	np = netdev_priv(peer_dev->netdev);
+	vsi = np->vsi;
+	if (!vsi)
+		return 0;
+
+	bitmap_set(check_bits, 0, __ICE_STATE_NOMINAL_CHECK_BITS);
+	if (bitmap_intersects(vsi->state, check_bits, __ICE_STATE_NBITS))
+		return 0;
+
+	return 1;
+}
+
 /**
  * ice_peer_update_vsi_filter - update main VSI filters for RDMA
  * @peer_dev: pointer to RDMA peer device
@@ -973,9 +1075,11 @@ ice_peer_update_vsi_filter(struct iidc_peer_dev *peer_dev,
 static const struct iidc_ops ops = {
 	.alloc_res			= ice_peer_alloc_res,
 	.free_res			= ice_peer_free_res,
+	.is_vsi_ready			= ice_peer_is_vsi_ready,
 	.reg_for_notification		= ice_peer_reg_for_notif,
 	.unreg_for_notification		= ice_peer_unreg_for_notif,
 	.notify_state_change		= ice_peer_report_state_change,
+	.request_reset			= ice_peer_request_reset,
 	.peer_register			= ice_peer_register,
 	.peer_unregister		= ice_peer_unregister,
 	.update_vsi_filter		= ice_peer_update_vsi_filter,
@@ -1000,6 +1104,41 @@ static int ice_reserve_peer_qvector(struct ice_pf *pf)
 	return 0;
 }
 
+/**
+ * ice_peer_close_task - call peer's close asynchronously
+ * @work: pointer to work_struct contained by the peer_dev_int struct
+ *
+ * This method (asynchronous) of calling a peer's close function is
+ * meant to be used in the reset path.
+ */
+static void ice_peer_close_task(struct work_struct *work)
+{
+	struct ice_peer_dev_int *peer_dev_int;
+	struct iidc_peer_dev *peer_dev;
+
+	peer_dev_int = container_of(work, struct ice_peer_dev_int,
+				    peer_close_task);
+
+	peer_dev = &peer_dev_int->peer_dev;
+	if (!peer_dev->peer_ops)
+		return;
+
+	/* If this peer_dev is going to close, we do not want any state changes
+	 * to happen until after we successfully finish or abort the close.
+	 * Grab the peer_dev_state_mutex to protect this flow
+	 */
+	mutex_lock(&peer_dev_int->peer_dev_state_mutex);
+
+	ice_peer_state_change(peer_dev_int, ICE_PEER_DEV_STATE_CLOSING, true);
+
+	if (peer_dev->peer_ops->close)
+		peer_dev->peer_ops->close(peer_dev, peer_dev_int->rst_type);
+
+	ice_peer_state_change(peer_dev_int, ICE_PEER_DEV_STATE_CLOSED, true);
+
+	mutex_unlock(&peer_dev_int->peer_dev_state_mutex);
+}
+
 /**
  * ice_peer_vdev_release - function to map to virtbus_devices release callback
  * @vdev: pointer to virtbus_device to free
@@ -1098,6 +1237,7 @@ int ice_init_peer_devices(struct ice_pf *pf)
 			kfree(vbo);
 			goto unroll_prev_peers;
 		}
+		INIT_WORK(&peer_dev_int->peer_close_task, ice_peer_close_task);
 
 		peer_dev->pdev = pdev;
 		qos_info = &peer_dev->initial_qos_info;
diff --git a/drivers/net/ethernet/intel/ice/ice_idc_int.h b/drivers/net/ethernet/intel/ice/ice_idc_int.h
index 1d3d5cafc977..90e165434aea 100644
--- a/drivers/net/ethernet/intel/ice/ice_idc_int.h
+++ b/drivers/net/ethernet/intel/ice/ice_idc_int.h
@@ -63,6 +63,7 @@ struct ice_peer_dev_int {
 };
 
 int ice_peer_update_vsi(struct ice_peer_dev_int *peer_dev_int, void *data);
+int ice_close_peer_for_reset(struct ice_peer_dev_int *peer_dev_int, void *data);
 int ice_unroll_peer(struct ice_peer_dev_int *peer_dev_int, void *data);
 int ice_unreg_peer_device(struct ice_peer_dev_int *peer_dev_int, void *data);
 int ice_peer_close(struct ice_peer_dev_int *peer_dev_int, void *data);
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index 5043d5ed1b2a..34b41b1039f1 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -2416,6 +2416,12 @@ void ice_vsi_close(struct ice_vsi *vsi)
 {
 	enum iidc_close_reason reason = IIDC_REASON_INTERFACE_DOWN;
 
+	if (test_bit(__ICE_CORER_REQ, vsi->back->state))
+		reason = IIDC_REASON_CORER_REQ;
+	if (test_bit(__ICE_GLOBR_REQ, vsi->back->state))
+		reason = IIDC_REASON_GLOBR_REQ;
+	if (test_bit(__ICE_PFR_REQ, vsi->back->state))
+		reason = IIDC_REASON_PFR_REQ;
 	if (!ice_is_safe_mode(vsi->back) && vsi->type == ICE_VSI_PF) {
 		int ret = ice_for_each_peer(vsi->back, &reason, ice_peer_close);
 
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index d1a528da9128..c7eb51bae33d 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -560,6 +560,9 @@ static void ice_reset_subtask(struct ice_pf *pf)
 		/* return if no valid reset type requested */
 		if (reset_type == ICE_RESET_INVAL)
 			return;
+		if (ice_is_peer_ena(pf))
+			ice_for_each_peer(pf, &reset_type,
+					  ice_close_peer_for_reset);
 		ice_prepare_for_reset(pf);
 
 		/* make sure we are ready to rebuild */
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [net-next v4 07/12] ice: Pass through communications to VF
  2020-05-20  7:02 [net-next v4 00/12][pull request] 100GbE Intel Wired LAN Driver Updates 2020-05-19 Jeff Kirsher
                   ` (5 preceding siblings ...)
  2020-05-20  7:02 ` [net-next v4 06/12] ice: Allow reset operations Jeff Kirsher
@ 2020-05-20  7:02 ` Jeff Kirsher
  2020-05-20  7:02 ` [net-next v4 08/12] i40e: Move client header location Jeff Kirsher
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 69+ messages in thread
From: Jeff Kirsher @ 2020-05-20  7:02 UTC (permalink / raw)
  To: davem, gregkh
  Cc: Dave Ertman, netdev, linux-rdma, nhorman, sassmann, jgg,
	ranjani.sridharan, pierre-louis.bossart, Tony Nguyen,
	Andrew Bowers, Jeff Kirsher

From: Dave Ertman <david.m.ertman@intel.com>

Allow for forwarding of RDMA and VF virt channel messages. The driver will
forward messages from the RDMA driver to the VF via the vc_send operation
and invoke the peer's vc_receive() call when receiving a virt channel
message destined for the peer driver.
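
As a rough sketch of the two directions (assuming, as in the irdma RFC,
that the PF's iidc_ops are reachable from the peer as peer_dev->ops and
that the peer installs its own peer_ops; the handler below and its
exact signature are approximations):

  /* peer -> VF: the PF forwards this via ice_peer_vc_send() */
  err = peer_dev->ops->vc_send(peer_dev, vf_id, msg, len);

  /* VF -> peer: the PF's AQ clean path invokes the peer's handler */
  static int peer_vc_receive(struct iidc_peer_dev *peer_dev, u32 vf_id,
  			   u8 *msg, u16 len)
  {
  	/* parse and handle the virt channel message here */
  	return 0;
  }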

Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
 drivers/net/ethernet/intel/ice/ice.h          |  1 +
 drivers/net/ethernet/intel/ice/ice_idc.c      | 34 +++++++++++++++++++
 .../net/ethernet/intel/ice/ice_virtchnl_pf.c  | 34 +++++++++++++++++++
 3 files changed, 69 insertions(+)

diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index 6ad1894eca3f..0e45e080a41f 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -392,6 +392,7 @@ struct ice_pf {
 	u32 msg_enable;
 	u32 num_rdma_msix;	/* Total MSIX vectors for RDMA driver */
 	u32 rdma_base_vector;
+	struct iidc_peer_dev *rdma_peer;
 	u32 hw_csum_rx_error;
 	u32 oicr_idx;		/* Other interrupt cause MSIX vector index */
 	u32 num_avail_sw_msix;	/* remaining MSIX SW vectors left unclaimed */
diff --git a/drivers/net/ethernet/intel/ice/ice_idc.c b/drivers/net/ethernet/intel/ice/ice_idc.c
index 748e9134a113..d287728b3cc8 100644
--- a/drivers/net/ethernet/intel/ice/ice_idc.c
+++ b/drivers/net/ethernet/intel/ice/ice_idc.c
@@ -1071,6 +1071,38 @@ ice_peer_update_vsi_filter(struct iidc_peer_dev *peer_dev,
 	return ret;
 }
 
+/**
+ * ice_peer_vc_send - send a virt channel message from RDMA peer
+ * @peer_dev: pointer to RDMA peer dev
+ * @vf_id: the absolute VF ID of recipient of message
+ * @msg: pointer to message contents
+ * @len: len of message
+ */
+static int
+ice_peer_vc_send(struct iidc_peer_dev *peer_dev, u32 vf_id, u8 *msg, u16 len)
+{
+	struct ice_pf *pf;
+	int err;
+
+	if (!ice_validate_peer_dev(peer_dev))
+		return -EINVAL;
+	if (!msg || !len)
+		return -EINVAL;
+
+	pf = pci_get_drvdata(peer_dev->pdev);
+	if (vf_id >= pf->num_alloc_vfs || len > ICE_AQ_MAX_BUF_LEN)
+		return -EINVAL;
+
+	/* VIRTCHNL_OP_IWARP is being used for RoCEv2 msg also */
+	err = ice_aq_send_msg_to_vf(&pf->hw, vf_id, VIRTCHNL_OP_IWARP, 0, msg,
+				    len, NULL);
+	if (err)
+		dev_err(ice_pf_to_dev(pf), "Unable to send RDMA msg to VF, error %d\n",
+			err);
+
+	return err;
+}
+
 /* Initialize the ice_ops struct, which is used in 'ice_init_peer_devices' */
 static const struct iidc_ops ops = {
 	.alloc_res			= ice_peer_alloc_res,
@@ -1083,6 +1115,7 @@ static const struct iidc_ops ops = {
 	.peer_register			= ice_peer_register,
 	.peer_unregister		= ice_peer_unregister,
 	.update_vsi_filter		= ice_peer_update_vsi_filter,
+	.vc_send			= ice_peer_vc_send,
 };
 
 /**
@@ -1264,6 +1297,7 @@ int ice_init_peer_devices(struct ice_pf *pf)
 		switch (ice_peers[i].id) {
 		case IIDC_PEER_RDMA_ID:
 			if (test_bit(ICE_FLAG_IWARP_ENA, pf->flags)) {
+				pf->rdma_peer = peer_dev;
 				peer_dev->msix_count = pf->num_rdma_msix;
 				entry = &pf->msix_entries[pf->rdma_base_vector];
 			}
diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
index 07f3d4b456c7..95e39fef0a26 100644
--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
@@ -3170,6 +3170,37 @@ static int ice_vc_dis_vlan_stripping(struct ice_vf *vf)
 				     v_ret, NULL, 0);
 }
 
+/**
+ * ice_vc_rdma_msg - send msg to RDMA PF from VF
+ * @vf: pointer to VF info
+ * @msg: pointer to msg buffer
+ * @len: length of the message
+ *
+ * This function is called indirectly from the AQ clean function.
+ */
+static int ice_vc_rdma_msg(struct ice_vf *vf, u8 *msg, u16 len)
+{
+	struct iidc_peer_dev *rdma_peer;
+	int ret;
+
+	rdma_peer = vf->pf->rdma_peer;
+	if (!rdma_peer) {
+		pr_err("Invalid RDMA peer attempted to send message to peer\n");
+		return -EIO;
+	}
+
+	if (!rdma_peer->peer_ops || !rdma_peer->peer_ops->vc_receive) {
+		pr_err("Incomplete RMDA peer attempting to send msg\n");
+		return -EINVAL;
+	}
+
+	ret = rdma_peer->peer_ops->vc_receive(rdma_peer, vf->vf_id, msg, len);
+	if (ret)
+		pr_err("Failed to send message to RDMA peer, error %d\n", ret);
+
+	return ret;
+}
+
 /**
  * ice_vf_init_vlan_stripping - enable/disable VLAN stripping on initialization
  * @vf: VF to enable/disable VLAN stripping for on initialization
@@ -3304,6 +3335,9 @@ void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event)
 	case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING:
 		err = ice_vc_dis_vlan_stripping(vf);
 		break;
+	case VIRTCHNL_OP_IWARP:
+		err = ice_vc_rdma_msg(vf, msg, msglen);
+		break;
 	case VIRTCHNL_OP_UNKNOWN:
 	default:
 		dev_err(dev, "Unsupported opcode %d from VF %d\n", v_opcode,
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [net-next v4 08/12] i40e: Move client header location
  2020-05-20  7:02 [net-next v4 00/12][pull request] 100GbE Intel Wired LAN Driver Updates 2020-05-19 Jeff Kirsher
                   ` (6 preceding siblings ...)
  2020-05-20  7:02 ` [net-next v4 07/12] ice: Pass through communications to VF Jeff Kirsher
@ 2020-05-20  7:02 ` Jeff Kirsher
  2020-05-20  7:02 ` [net-next v4 09/12] i40e: Register a virtbus device to provide RDMA Jeff Kirsher
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 69+ messages in thread
From: Jeff Kirsher @ 2020-05-20  7:02 UTC (permalink / raw)
  To: davem, gregkh
  Cc: Shiraz Saleem, netdev, linux-rdma, nhorman, sassmann, jgg,
	ranjani.sridharan, pierre-louis.bossart, Andrew Bowers,
	Jeff Kirsher

From: Shiraz Saleem <shiraz.saleem@intel.com>

Move i40e_client.h to include/linux/net/intel/*
since it is shared between i40iw and i40e.

Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
 drivers/infiniband/hw/i40iw/Makefile                            | 1 -
 drivers/infiniband/hw/i40iw/i40iw.h                             | 2 +-
 drivers/net/ethernet/intel/i40e/i40e.h                          | 2 +-
 drivers/net/ethernet/intel/i40e/i40e_client.c                   | 2 +-
 .../intel/i40e => include/linux/net/intel}/i40e_client.h        | 0
 5 files changed, 3 insertions(+), 4 deletions(-)
 rename {drivers/net/ethernet/intel/i40e => include/linux/net/intel}/i40e_client.h (100%)

diff --git a/drivers/infiniband/hw/i40iw/Makefile b/drivers/infiniband/hw/i40iw/Makefile
index 8942f8229945..34da9eba8a7c 100644
--- a/drivers/infiniband/hw/i40iw/Makefile
+++ b/drivers/infiniband/hw/i40iw/Makefile
@@ -1,5 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0
-ccflags-y :=  -I $(srctree)/drivers/net/ethernet/intel/i40e
 
 obj-$(CONFIG_INFINIBAND_I40IW) += i40iw.o
 
diff --git a/drivers/infiniband/hw/i40iw/i40iw.h b/drivers/infiniband/hw/i40iw/i40iw.h
index 3c62c9327a9c..1ba7561f9cbb 100644
--- a/drivers/infiniband/hw/i40iw/i40iw.h
+++ b/drivers/infiniband/hw/i40iw/i40iw.h
@@ -45,6 +45,7 @@
 #include <linux/slab.h>
 #include <linux/io.h>
 #include <linux/crc32c.h>
+#include <linux/net/intel/i40e_client.h>
 #include <rdma/ib_smi.h>
 #include <rdma/ib_verbs.h>
 #include <rdma/ib_pack.h>
@@ -57,7 +58,6 @@
 #include "i40iw_d.h"
 #include "i40iw_hmc.h"
 
-#include <i40e_client.h>
 #include "i40iw_type.h"
 #include "i40iw_p.h"
 #include <rdma/i40iw-abi.h>
diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
index e95b8da45e07..5ff0828a6f50 100644
--- a/drivers/net/ethernet/intel/i40e/i40e.h
+++ b/drivers/net/ethernet/intel/i40e/i40e.h
@@ -38,7 +38,7 @@
 #include <net/xdp_sock.h>
 #include "i40e_type.h"
 #include "i40e_prototype.h"
-#include "i40e_client.h"
+#include <linux/net/intel/i40e_client.h>
 #include <linux/avf/virtchnl.h>
 #include "i40e_virtchnl_pf.h"
 #include "i40e_txrx.h"
diff --git a/drivers/net/ethernet/intel/i40e/i40e_client.c b/drivers/net/ethernet/intel/i40e/i40e_client.c
index e81530ca08d0..befd3018183f 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_client.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_client.c
@@ -3,10 +3,10 @@
 
 #include <linux/list.h>
 #include <linux/errno.h>
+#include <linux/net/intel/i40e_client.h>
 
 #include "i40e.h"
 #include "i40e_prototype.h"
-#include "i40e_client.h"
 
 static const char i40e_client_interface_version_str[] = I40E_CLIENT_VERSION_STR;
 static struct i40e_client *registered_client;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_client.h b/include/linux/net/intel/i40e_client.h
similarity index 100%
rename from drivers/net/ethernet/intel/i40e/i40e_client.h
rename to include/linux/net/intel/i40e_client.h
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [net-next v4 09/12] i40e: Register a virtbus device to provide RDMA
  2020-05-20  7:02 [net-next v4 00/12][pull request] 100GbE Intel Wired LAN Driver Updates 2020-05-19 Jeff Kirsher
                   ` (7 preceding siblings ...)
  2020-05-20  7:02 ` [net-next v4 08/12] i40e: Move client header location Jeff Kirsher
@ 2020-05-20  7:02 ` Jeff Kirsher
  2020-05-20  7:02 ` [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client Jeff Kirsher
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 69+ messages in thread
From: Jeff Kirsher @ 2020-05-20  7:02 UTC (permalink / raw)
  To: davem, gregkh
  Cc: Shiraz Saleem, netdev, linux-rdma, nhorman, sassmann, jgg,
	ranjani.sridharan, pierre-louis.bossart, Mustafa Ismail,
	Andrew Bowers, Jeff Kirsher

From: Shiraz Saleem <shiraz.saleem@intel.com>

Register a client virtbus device on the virtbus for the RDMA
virtbus driver (irdma) to bind to. This makes it possible to
realize a single RDMA driver capable of working with multiple
netdev drivers over multi-generation Intel HW supporting RDMA,
and it removes any load-ordering dependency between i40e and
irdma. A sketch of the expected probe flow follows the summary
below.

Summary of changes:
* Support to add/remove virtbus devices
* Add 2 new client ops.
	* i40e_client_device_register() which is called during RDMA
	  probe() per PF. Validate client drv OPs and schedule service
	  task to call open()
	* i40e_client_device_unregister() called during RDMA remove()
	  per PF. Call client close() and release_qvlist.
* The global register/unregister calls exported for i40iw are retained
  until i40iw is removed from the kernel.
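
A minimal sketch of the consumer side (the function name is
hypothetical; the real probe lives in the separate irdma RFC series):
the irdma virtbus driver matches on I40E_PEER_RDMA_NAME and, from its
probe, recovers the i40e_info and registers itself:

  static int irdma_probe(struct virtbus_device *vdev)
  {
  	struct i40e_virtbus_device *i40e_vdev =
  		container_of(vdev, struct i40e_virtbus_device, vdev);
  	struct i40e_info *ldev = i40e_vdev->ldev;

  	/* fill in ldev->client and its ops before registering, since
  	 * client_device_register() validates both
  	 */
  	return ldev->ops->client_device_register(ldev);
  }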

Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
 drivers/net/ethernet/intel/Kconfig            |   1 +
 drivers/net/ethernet/intel/i40e/i40e_client.c | 131 +++++++++++++++---
 include/linux/net/intel/i40e_client.h         |  15 ++
 3 files changed, 127 insertions(+), 20 deletions(-)

diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig
index 814d6dcf8137..f5d55c3f7f70 100644
--- a/drivers/net/ethernet/intel/Kconfig
+++ b/drivers/net/ethernet/intel/Kconfig
@@ -241,6 +241,7 @@ config I40E
 	tristate "Intel(R) Ethernet Controller XL710 Family support"
 	imply PTP_1588_CLOCK
 	depends on PCI
+	select VIRTUAL_BUS
 	---help---
 	  This driver supports Intel(R) Ethernet Controller XL710 Family of
 	  devices.  For more information on how to identify your adapter, go
diff --git a/drivers/net/ethernet/intel/i40e/i40e_client.c b/drivers/net/ethernet/intel/i40e/i40e_client.c
index befd3018183f..fdce8af3ec4f 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_client.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_client.c
@@ -30,11 +31,17 @@ static int i40e_client_update_vsi_ctxt(struct i40e_info *ldev,
 				       bool is_vf, u32 vf_id,
 				       u32 flag, u32 valid_flag);
 
+static int i40e_client_device_register(struct i40e_info *ldev);
+
+static void i40e_client_device_unregister(struct i40e_info *ldev);
+
 static struct i40e_ops i40e_lan_ops = {
 	.virtchnl_send = i40e_client_virtchnl_send,
 	.setup_qvlist = i40e_client_setup_qvlist,
 	.request_reset = i40e_client_request_reset,
 	.update_vsi_ctxt = i40e_client_update_vsi_ctxt,
+	.client_device_register = i40e_client_device_register,
+	.client_device_unregister = i40e_client_device_unregister,
 };
 
 /**
@@ -275,6 +282,37 @@ void i40e_client_update_msix_info(struct i40e_pf *pf)
 	cdev->lan_info.msix_entries = &pf->msix_entries[pf->iwarp_base_vector];
 }
 
+static void i40e_virtdev_release(struct virtbus_device *vdev)
+{
+	struct i40e_virtbus_device *i40e_vdev =
+			container_of(vdev, struct i40e_virtbus_device, vdev);
+
+	kfree(i40e_vdev);
+}
+
+static int i40e_init_client_virtdev(struct i40e_info *ldev)
+{
+	struct pci_dev *pdev = ldev->pcidev;
+	struct i40e_virtbus_device *i40e_vdev;
+	int ret;
+
+	i40e_vdev = kzalloc(sizeof(*i40e_vdev), GFP_KERNEL);
+	if (!i40e_vdev)
+		return -ENOMEM;
+
+	i40e_vdev->vdev.match_name = I40E_PEER_RDMA_NAME;
+	i40e_vdev->vdev.dev.parent = &pdev->dev;
+	i40e_vdev->vdev.release = i40e_virtdev_release;
+	i40e_vdev->ldev = ldev;
+	ldev->vdev = &i40e_vdev->vdev;
+
+	ret = virtbus_register_device(&i40e_vdev->vdev);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
 /**
  * i40e_client_add_instance - add a client instance struct to the instance list
  * @pf: pointer to the board struct
@@ -288,9 +326,6 @@ static void i40e_client_add_instance(struct i40e_pf *pf)
 	struct netdev_hw_addr *mac = NULL;
 	struct i40e_vsi *vsi = pf->vsi[pf->lan_vsi];
 
-	if (!registered_client || pf->cinst)
-		return;
-
 	cdev = kzalloc(sizeof(*cdev), GFP_KERNEL);
 	if (!cdev)
 		return;
@@ -310,11 +345,8 @@ static void i40e_client_add_instance(struct i40e_pf *pf)
 	cdev->lan_info.fw_build = pf->hw.aq.fw_build;
 	set_bit(__I40E_CLIENT_INSTANCE_NONE, &cdev->state);
 
-	if (i40e_client_get_params(vsi, &cdev->lan_info.params)) {
-		kfree(cdev);
-		cdev = NULL;
-		return;
-	}
+	if (i40e_client_get_params(vsi, &cdev->lan_info.params))
+		goto free_cdev;
 
 	mac = list_first_entry(&cdev->lan_info.netdev->dev_addrs.list,
 			       struct netdev_hw_addr, list);
@@ -326,7 +358,17 @@ static void i40e_client_add_instance(struct i40e_pf *pf)
 	cdev->client = registered_client;
 	pf->cinst = cdev;
 
-	i40e_client_update_msix_info(pf);
+	cdev->lan_info.msix_count = pf->num_iwarp_msix;
+	cdev->lan_info.msix_entries = &pf->msix_entries[pf->iwarp_base_vector];
+
+	if (i40e_init_client_virtdev(&cdev->lan_info))
+		goto free_cdev;
+
+	return;
+
+free_cdev:
+	kfree(cdev);
+	pf->cinst = NULL;
 }
 
 /**
@@ -347,7 +389,7 @@ void i40e_client_del_instance(struct i40e_pf *pf)
  **/
 void i40e_client_subtask(struct i40e_pf *pf)
 {
-	struct i40e_client *client = registered_client;
+	struct i40e_client *client;
 	struct i40e_client_instance *cdev;
 	struct i40e_vsi *vsi = pf->vsi[pf->lan_vsi];
 	int ret = 0;
@@ -361,9 +403,11 @@ void i40e_client_subtask(struct i40e_pf *pf)
 	    test_bit(__I40E_CONFIG_BUSY, pf->state))
 		return;
 
-	if (!client || !cdev)
+	if (!cdev || !cdev->client)
 		return;
 
+	client = cdev->client;
+
 	/* Here we handle client opens. If the client is down, and
 	 * the netdev is registered, then open the client.
 	 */
@@ -424,16 +468,8 @@ int i40e_lan_add_device(struct i40e_pf *pf)
 		 pf->hw.pf_id, pf->hw.bus.bus_id,
 		 pf->hw.bus.device, pf->hw.bus.func);
 
-	/* If a client has already been registered, we need to add an instance
-	 * of it to our new LAN device.
-	 */
-	if (registered_client)
-		i40e_client_add_instance(pf);
+	i40e_client_add_instance(pf);
 
-	/* Since in some cases register may have happened before a device gets
-	 * added, we can schedule a subtask to go initiate the clients if
-	 * they can be launched at probe time.
-	 */
 	set_bit(__I40E_CLIENT_SERVICE_REQUESTED, pf->state);
 	i40e_service_event_schedule(pf);
 
@@ -453,6 +489,9 @@ int i40e_lan_del_device(struct i40e_pf *pf)
 	struct i40e_device *ldev, *tmp;
 	int ret = -ENODEV;
 
+	if (pf->cinst)
+		virtbus_unregister_device(pf->cinst->lan_info.vdev);
+
 	/* First, remove any client instance. */
 	i40e_client_del_instance(pf);
 
@@ -733,6 +771,60 @@ static int i40e_client_update_vsi_ctxt(struct i40e_info *ldev,
 	return err;
 }
 
+static int i40e_client_device_register(struct i40e_info *ldev)
+{
+	struct i40e_client *client;
+	struct i40e_pf *pf;
+
+	if (!ldev) {
+		pr_err("Failed to reg client dev: ldev ptr NULL\n");
+		return -EINVAL;
+	}
+
+	client = ldev->client;
+	pf = ldev->pf;
+	if (!client) {
+		pr_err("Failed to reg client dev: client ptr NULL\n");
+		return -EINVAL;
+	}
+
+	if (!ldev->ops || !client->ops) {
+		pr_err("Failed to reg client dev: client dev peer_ops/ops NULL\n");
+		return -EINVAL;
+	}
+
+	pf->cinst->client = ldev->client;
+	set_bit(__I40E_CLIENT_SERVICE_REQUESTED, pf->state);
+	i40e_service_event_schedule(pf);
+
+	return 0;
+}
+
+static void i40e_client_device_unregister(struct i40e_info *ldev)
+{
+	struct i40e_pf *pf = ldev->pf;
+	struct i40e_client_instance *cdev = pf->cinst;
+
+	while (test_and_set_bit(__I40E_SERVICE_SCHED, pf->state))
+		usleep_range(500, 1000);
+
+	if (!cdev || !cdev->client || !cdev->client->ops ||
+	    !cdev->client->ops->close) {
+		dev_err(&pf->pdev->dev, "Cannot close client device\n");
+		goto out;
+	}
+	cdev->client->ops->close(&cdev->lan_info, cdev->client, false);
+	clear_bit(__I40E_CLIENT_INSTANCE_OPENED, &cdev->state);
+	i40e_client_release_qvlist(&cdev->lan_info);
+	pf->cinst->client = NULL;
+out:
+	clear_bit(__I40E_SERVICE_SCHED, pf->state);
+}
+
+/* Retain legacy global registration/unregistration calls till i40iw is
+ * deprecated from the kernel. The irdma unified driver does not use these
+ * exported symbols.
+ */
 /**
  * i40e_register_client - Register a i40e client driver with the L2 driver
  * @client: pointer to the i40e_client struct
diff --git a/include/linux/net/intel/i40e_client.h b/include/linux/net/intel/i40e_client.h
index 72994baf4941..4a83648cf5fd 100644
--- a/include/linux/net/intel/i40e_client.h
+++ b/include/linux/net/intel/i40e_client.h
@@ -4,6 +4,9 @@
 #ifndef _I40E_CLIENT_H_
 #define _I40E_CLIENT_H_
 
+#include <linux/virtual_bus.h>
+
+#define I40E_PEER_RDMA_NAME	"intel,i40e,rdma"
 #define I40E_CLIENT_STR_LENGTH 10
 
 /* Client interface version should be updated anytime there is a change in the
@@ -84,6 +87,7 @@ struct i40e_info {
 	u8 lanmac[6];
 	struct net_device *netdev;
 	struct pci_dev *pcidev;
+	struct virtbus_device *vdev;
 	u8 __iomem *hw_addr;
 	u8 fid;	/* function id, PF id or VF id */
 #define I40E_CLIENT_FTYPE_PF 0
@@ -97,6 +101,7 @@ struct i40e_info {
 	struct i40e_qvlist_info *qvlist_info;
 	struct i40e_params params;
 	struct i40e_ops *ops;
+	struct i40e_client *client;
 
 	u16 msix_count;	 /* number of msix vectors*/
 	/* Array down below will be dynamically allocated based on msix_count */
@@ -107,6 +112,11 @@ struct i40e_info {
 	u32 fw_build;                   /* firmware build number */
 };
 
+struct i40e_virtbus_device {
+	struct virtbus_device vdev;
+	struct i40e_info *ldev;
+};
+
 #define I40E_CLIENT_RESET_LEVEL_PF   1
 #define I40E_CLIENT_RESET_LEVEL_CORE 2
 #define I40E_CLIENT_VSI_FLAG_TCP_ENABLE  BIT(1)
@@ -132,6 +142,11 @@ struct i40e_ops {
 			       struct i40e_client *client,
 			       bool is_vf, u32 vf_id,
 			       u32 flag, u32 valid_flag);
+
+	int (*client_device_register)(struct i40e_info *ldev);
+
+	void (*client_device_unregister)(struct i40e_info *ldev);
+
 };
 
 struct i40e_client_ops {
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-05-20  7:02 [net-next v4 00/12][pull request] 100GbE Intel Wired LAN Driver Updates 2020-05-19 Jeff Kirsher
                   ` (8 preceding siblings ...)
  2020-05-20  7:02 ` [net-next v4 09/12] i40e: Register a virtbus device to provide RDMA Jeff Kirsher
@ 2020-05-20  7:02 ` Jeff Kirsher
  2020-05-20  7:20   ` Greg KH
                     ` (2 more replies)
  2020-05-20  7:02 ` [net-next v4 11/12] ASoC: SOF: Create client driver for IPC test Jeff Kirsher
                   ` (2 subsequent siblings)
  12 siblings, 3 replies; 69+ messages in thread
From: Jeff Kirsher @ 2020-05-20  7:02 UTC (permalink / raw)
  To: davem, gregkh
  Cc: Ranjani Sridharan, netdev, linux-rdma, nhorman, sassmann, jgg,
	pierre-louis.bossart, Fred Oh, Jeff Kirsher

From: Ranjani Sridharan <ranjani.sridharan@linux.intel.com>

A client in the SOF (Sound Open Firmware) context is a
device that needs to communicate with the DSP via IPC
messages. The SOF core is responsible for serializing the
IPC messages to the DSP from the different clients. One
example of an SOF client would be an IPC test client that
floods the DSP with test IPC messages to validate if the
serialization works as expected. Multi-client support will
also add the ability to split the existing audio cards
into multiple ones, e.g. to deal with HDMI with a
dedicated client instead of adding HDMI to all cards.

This patch introduces descriptors for SOF client driver
and SOF client device along with APIs for registering
and unregistering a SOF client driver, sending IPCs from
a client device and accessing the SOF core debugfs root entry.

Along with this, add a couple of new members to struct
snd_sof_dev that will be used for maintaining the list of
clients.
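
For reference, a minimal client built on these descriptors would look
roughly like the sketch below. The "my-client" match name is made up;
note that the probe must signal probe_complete, since
sof_client_dev_register() waits on it before adding the device to the
client list:

  static int my_client_probe(struct virtbus_device *vdev)
  {
  	struct sof_client_dev *cdev = virtbus_dev_to_sof_client_dev(vdev);

  	/* set up cdev->data, debugfs entries, etc. here */

  	complete(&cdev->probe_complete);
  	return 0;
  }

  static int my_client_remove(struct virtbus_device *vdev)
  {
  	/* tear down whatever probe set up */
  	return 0;
  }

  static const struct virtbus_dev_id my_client_id_table[] = {
  	{"my-client"},
  	{},
  };

  static struct sof_client_drv my_client_drv = {
  	.name = "my-client-drv",
  	.type = SOF_CLIENT_IPC,
  	.virtbus_drv = {
  		.driver = {
  			.name = "my-client-virtbus-drv",
  		},
  		.id_table = my_client_id_table,
  		.probe = my_client_probe,
  		.remove = my_client_remove,
  	},
  };

  module_sof_client_driver(my_client_drv);

The SOF core side would then create the matching device with
sof_client_dev_register(sdev, "my-client").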

Signed-off-by: Ranjani Sridharan <ranjani.sridharan@linux.intel.com>
Signed-off-by: Fred Oh <fred.oh@linux.intel.com>
Reviewed-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
 sound/soc/sof/Kconfig      | 20 +++++++++
 sound/soc/sof/Makefile     |  1 +
 sound/soc/sof/core.c       |  2 +
 sound/soc/sof/sof-client.c | 91 ++++++++++++++++++++++++++++++++++++++
 sound/soc/sof/sof-client.h | 84 +++++++++++++++++++++++++++++++++++
 sound/soc/sof/sof-priv.h   |  6 +++
 6 files changed, 204 insertions(+)
 create mode 100644 sound/soc/sof/sof-client.c
 create mode 100644 sound/soc/sof/sof-client.h

diff --git a/sound/soc/sof/Kconfig b/sound/soc/sof/Kconfig
index 4dda4b62509f..609989daf85b 100644
--- a/sound/soc/sof/Kconfig
+++ b/sound/soc/sof/Kconfig
@@ -50,6 +50,25 @@ config SND_SOC_SOF_DEBUG_PROBES
 	  Say Y if you want to enable probes.
 	  If unsure, select "N".
 
+config SND_SOC_SOF_CLIENT
+	tristate
+	select VIRTUAL_BUS
+	help
+	  This option is not user-selectable but automagically handled by
+	  'select' statements at a higher level
+
+config SND_SOC_SOF_CLIENT_SUPPORT
+	bool "SOF enable clients"
+	depends on SND_SOC_SOF
+	help
+	  This adds client support to Sound Open Firmware.
+	  The SOF driver adds the capability to separate out the debug
+	  functionality for IPC tests, probes etc. into separate client
+	  devices. This option would also allow adding client devices
+	  based on DSP FW capabilities and ACPI/OF device information.
+	  Say Y if you want to enable clients with SOF.
+	  If unsure select "N".
+
 config SND_SOC_SOF_DEVELOPER_SUPPORT
 	bool "SOF developer options support"
 	depends on EXPERT
@@ -186,6 +205,7 @@ endif ## SND_SOC_SOF_DEVELOPER_SUPPORT
 
 config SND_SOC_SOF
 	tristate
+	select SND_SOC_SOF_CLIENT if SND_SOC_SOF_CLIENT_SUPPORT
 	select SND_SOC_TOPOLOGY
 	select SND_SOC_SOF_NOCODEC if SND_SOC_SOF_NOCODEC_SUPPORT
 	help
diff --git a/sound/soc/sof/Makefile b/sound/soc/sof/Makefile
index 8eca2f85c90e..c819124c05bb 100644
--- a/sound/soc/sof/Makefile
+++ b/sound/soc/sof/Makefile
@@ -2,6 +2,7 @@
 
 snd-sof-objs := core.o ops.o loader.o ipc.o pcm.o pm.o debug.o topology.o\
 		control.o trace.o utils.o sof-audio.o
+snd-sof-$(CONFIG_SND_SOC_SOF_CLIENT) += sof-client.o
 snd-sof-$(CONFIG_SND_SOC_SOF_DEBUG_PROBES) += probe.o compress.o
 
 snd-sof-pci-objs := sof-pci-dev.o
diff --git a/sound/soc/sof/core.c b/sound/soc/sof/core.c
index 91acfae7935c..fdfed157e6c0 100644
--- a/sound/soc/sof/core.c
+++ b/sound/soc/sof/core.c
@@ -313,8 +313,10 @@ int snd_sof_device_probe(struct device *dev, struct snd_sof_pdata *plat_data)
 	INIT_LIST_HEAD(&sdev->widget_list);
 	INIT_LIST_HEAD(&sdev->dai_list);
 	INIT_LIST_HEAD(&sdev->route_list);
+	INIT_LIST_HEAD(&sdev->client_list);
 	spin_lock_init(&sdev->ipc_lock);
 	spin_lock_init(&sdev->hw_lock);
+	mutex_init(&sdev->client_mutex);
 
 	if (IS_ENABLED(CONFIG_SND_SOC_SOF_PROBE_WORK_QUEUE))
 		INIT_WORK(&sdev->probe_work, sof_probe_work);
diff --git a/sound/soc/sof/sof-client.c b/sound/soc/sof/sof-client.c
new file mode 100644
index 000000000000..b46080aa062e
--- /dev/null
+++ b/sound/soc/sof/sof-client.c
@@ -0,0 +1,91 @@
+// SPDX-License-Identifier: GPL-2.0-only
+//
+// Copyright(c) 2020 Intel Corporation. All rights reserved.
+//
+// Author: Ranjani Sridharan <ranjani.sridharan@linux.intel.com>
+//
+
+#include <linux/completion.h>
+#include <linux/debugfs.h>
+#include <linux/device.h>
+#include <linux/errno.h>
+#include <linux/jiffies.h>
+#include <linux/list.h>
+#include <linux/mutex.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/virtual_bus.h>
+#include "sof-client.h"
+#include "sof-priv.h"
+
+static void sof_client_virtdev_release(struct virtbus_device *vdev)
+{
+	struct sof_client_dev *cdev = virtbus_dev_to_sof_client_dev(vdev);
+
+	kfree(cdev);
+}
+
+int sof_client_dev_register(struct snd_sof_dev *sdev,
+			    const char *name)
+{
+	struct sof_client_dev *cdev;
+	struct virtbus_device *vdev;
+	unsigned long time, timeout;
+	int ret;
+
+	cdev = kzalloc(sizeof(*cdev), GFP_KERNEL);
+	if (!cdev)
+		return -ENOMEM;
+
+	cdev->sdev = sdev;
+	init_completion(&cdev->probe_complete);
+	vdev = &cdev->vdev;
+	vdev->match_name = name;
+	vdev->dev.parent = sdev->dev;
+	vdev->release = sof_client_virtdev_release;
+
+	/*
+	 * Register virtbus device for the client.
+	 * The error path in virtbus_register_device() calls put_device(),
+	 * which will free cdev in the release callback.
+	 */
+	ret = virtbus_register_device(vdev);
+	if (ret < 0)
+		return ret;
+
+	/* make sure the probe is complete before updating client list */
+	timeout = msecs_to_jiffies(SOF_CLIENT_PROBE_TIMEOUT_MS);
+	time = wait_for_completion_timeout(&cdev->probe_complete, timeout);
+	if (!time) {
+		dev_err(sdev->dev, "error: probe of virtbus dev %s timed out\n",
+			name);
+		virtbus_unregister_device(vdev);
+		return -ETIMEDOUT;
+	}
+
+	/* add to list of SOF client devices */
+	mutex_lock(&sdev->client_mutex);
+	list_add(&cdev->list, &sdev->client_list);
+	mutex_unlock(&sdev->client_mutex);
+
+	return 0;
+}
+EXPORT_SYMBOL_NS_GPL(sof_client_dev_register, SND_SOC_SOF_CLIENT);
+
+int sof_client_ipc_tx_message(struct sof_client_dev *cdev, u32 header,
+			      void *msg_data, size_t msg_bytes,
+			      void *reply_data, size_t reply_bytes)
+{
+	return sof_ipc_tx_message(cdev->sdev->ipc, header, msg_data, msg_bytes,
+				  reply_data, reply_bytes);
+}
+EXPORT_SYMBOL_NS_GPL(sof_client_ipc_tx_message, SND_SOC_SOF_CLIENT);
+
+struct dentry *sof_client_get_debugfs_root(struct sof_client_dev *cdev)
+{
+	return cdev->sdev->debugfs_root;
+}
+EXPORT_SYMBOL_NS_GPL(sof_client_get_debugfs_root, SND_SOC_SOF_CLIENT);
+
+MODULE_AUTHOR("Ranjani Sridharan <ranjani.sridharan@linux.intel.com>");
+MODULE_LICENSE("GPL v2");
diff --git a/sound/soc/sof/sof-client.h b/sound/soc/sof/sof-client.h
new file mode 100644
index 000000000000..fdc4b1511ffc
--- /dev/null
+++ b/sound/soc/sof/sof-client.h
@@ -0,0 +1,84 @@
+/* SPDX-License-Identifier: (GPL-2.0-only) */
+
+#ifndef __SOUND_SOC_SOF_CLIENT_H
+#define __SOUND_SOC_SOF_CLIENT_H
+
+#include <linux/completion.h>
+#include <linux/debugfs.h>
+#include <linux/device.h>
+#include <linux/device/driver.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/virtual_bus.h>
+
+#define SOF_CLIENT_PROBE_TIMEOUT_MS 2000
+
+struct snd_sof_dev;
+
+enum sof_client_type {
+	SOF_CLIENT_AUDIO,
+	SOF_CLIENT_IPC,
+};
+
+/* SOF client device */
+struct sof_client_dev {
+	struct virtbus_device vdev;
+	struct snd_sof_dev *sdev;
+	struct list_head list;	/* item in SOF core client drv list */
+	struct completion probe_complete;
+	void *data;
+};
+
+/* client-specific ops, all optional */
+struct sof_client_ops {
+	int (*client_ipc_rx)(struct sof_client_dev *cdev, u32 msg_cmd);
+};
+
+struct sof_client_drv {
+	const char *name;
+	enum sof_client_type type;
+	const struct sof_client_ops ops;
+	struct virtbus_driver virtbus_drv;
+};
+
+#define virtbus_dev_to_sof_client_dev(virtbus_dev) \
+	container_of(virtbus_dev, struct sof_client_dev, vdev)
+
+static inline int sof_client_drv_register(struct sof_client_drv *drv)
+{
+	return virtbus_register_driver(&drv->virtbus_drv);
+}
+
+static inline void sof_client_drv_unregister(struct sof_client_drv *drv)
+{
+	virtbus_unregister_driver(&drv->virtbus_drv);
+}
+
+int sof_client_dev_register(struct snd_sof_dev *sdev,
+			    const char *name);
+
+static inline void sof_client_dev_unregister(struct sof_client_dev *cdev)
+{
+	virtbus_unregister_device(&cdev->vdev);
+}
+
+int sof_client_ipc_tx_message(struct sof_client_dev *cdev, u32 header,
+			      void *msg_data, size_t msg_bytes,
+			      void *reply_data, size_t reply_bytes);
+
+struct dentry *sof_client_get_debugfs_root(struct sof_client_dev *cdev);
+
+/**
+ * module_sof_client_driver() - Helper macro for registering an SOF Client
+ * driver
+ * @__sof_client_driver: SOF client driver struct
+ *
+ * Helper macro for SOF client drivers which do not do anything special in
+ * module init/exit. This eliminates a lot of boilerplate. Each module may only
+ * use this macro once, and calling it replaces module_init() and module_exit()
+ */
+#define module_sof_client_driver(__sof_client_driver) \
+	module_driver(__sof_client_driver, sof_client_drv_register, \
+			sof_client_drv_unregister)
+
+#endif
diff --git a/sound/soc/sof/sof-priv.h b/sound/soc/sof/sof-priv.h
index a4b297c842df..9da7f6f45362 100644
--- a/sound/soc/sof/sof-priv.h
+++ b/sound/soc/sof/sof-priv.h
@@ -438,6 +438,12 @@ struct snd_sof_dev {
 
 	bool msi_enabled;
 
+	/* list of client devices */
+	struct list_head client_list;
+
+	/* mutex to protect client list */
+	struct mutex client_mutex;
+
 	void *private;			/* core does not touch this */
 };
 
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [net-next v4 11/12] ASoC: SOF: Create client driver for IPC test
  2020-05-20  7:02 [net-next v4 00/12][pull request] 100GbE Intel Wired LAN Driver Updates 2020-05-19 Jeff Kirsher
                   ` (9 preceding siblings ...)
  2020-05-20  7:02 ` [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client Jeff Kirsher
@ 2020-05-20  7:02 ` Jeff Kirsher
  2020-05-20  7:22   ` Greg KH
  2020-05-20 12:56   ` Jason Gunthorpe
  2020-05-20  7:02 ` [net-next v4 12/12] ASoC: SOF: ops: Add new op for client registration Jeff Kirsher
  2020-05-20  7:17 ` [net-next v4 00/12][pull request] 100GbE Intel Wired LAN Driver Updates 2020-05-19 Greg KH
  12 siblings, 2 replies; 69+ messages in thread
From: Jeff Kirsher @ 2020-05-20  7:02 UTC (permalink / raw)
  To: davem, gregkh
  Cc: Ranjani Sridharan, netdev, linux-rdma, nhorman, sassmann, jgg,
	pierre-louis.bossart, Fred Oh, Jeff Kirsher

From: Ranjani Sridharan <ranjani.sridharan@linux.intel.com>

Create an SOF client driver for the IPC flood test. This
driver sets up the debugfs entries and the read/write ops
for initiating the IPC flood test, which measures the
min/max/avg response times for sending IPCs to the DSP.
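
Once the client is bound, the test is driven entirely from debugfs:
writing a duration in milliseconds to the client's
ipc_flood_duration_ms entry (or an iteration count to ipc_flood_count)
under the SOF debugfs root starts a flood run, and reading the same
entry back returns the count and the min/max/avg response times of the
last run.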

Signed-off-by: Ranjani Sridharan <ranjani.sridharan@linux.intel.com>
Signed-off-by: Fred Oh <fred.oh@linux.intel.com>
Reviewed-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
 sound/soc/sof/Kconfig               |  10 +
 sound/soc/sof/Makefile              |   5 +-
 sound/soc/sof/sof-ipc-test-client.c | 325 ++++++++++++++++++++++++++++
 3 files changed, 339 insertions(+), 1 deletion(-)
 create mode 100644 sound/soc/sof/sof-ipc-test-client.c

diff --git a/sound/soc/sof/Kconfig b/sound/soc/sof/Kconfig
index 609989daf85b..54cebe1fb1ec 100644
--- a/sound/soc/sof/Kconfig
+++ b/sound/soc/sof/Kconfig
@@ -191,6 +191,16 @@ config SND_SOC_SOF_DEBUG_IPC_FLOOD_TEST
 	  Say Y if you want to enable IPC flood test.
 	  If unsure, select "N".
 
+config SND_SOC_SOF_DEBUG_IPC_FLOOD_TEST_CLIENT
+	tristate "SOF enable IPC flood test client"
+	depends on SND_SOC_SOF_CLIENT
+	help
+	  This option enables a separate client device for IPC flood test
+	  which can be used to flood the DSP with test IPCs and gather stats
+	  about response times.
+	  Say Y if you want to enable IPC flood test.
+	  If unsure, select "N".
+
 config SND_SOC_SOF_DEBUG_RETAIN_DSP_CONTEXT
 	bool "SOF retain DSP context on any FW exceptions"
 	help
diff --git a/sound/soc/sof/Makefile b/sound/soc/sof/Makefile
index c819124c05bb..635094fce5c1 100644
--- a/sound/soc/sof/Makefile
+++ b/sound/soc/sof/Makefile
@@ -9,16 +9,19 @@ snd-sof-pci-objs := sof-pci-dev.o
 snd-sof-acpi-objs := sof-acpi-dev.o
 snd-sof-of-objs := sof-of-dev.o
 
+snd-sof-ipc-test-objs := sof-ipc-test-client.o
+
 snd-sof-nocodec-objs := nocodec.o
 
 obj-$(CONFIG_SND_SOC_SOF) += snd-sof.o
 obj-$(CONFIG_SND_SOC_SOF_NOCODEC) += snd-sof-nocodec.o
 
-
 obj-$(CONFIG_SND_SOC_SOF_ACPI) += snd-sof-acpi.o
 obj-$(CONFIG_SND_SOC_SOF_OF) += snd-sof-of.o
 obj-$(CONFIG_SND_SOC_SOF_PCI) += snd-sof-pci.o
 
+obj-$(CONFIG_SND_SOC_SOF_DEBUG_IPC_FLOOD_TEST_CLIENT) += snd-sof-ipc-test.o
+
 obj-$(CONFIG_SND_SOC_SOF_INTEL_TOPLEVEL) += intel/
 obj-$(CONFIG_SND_SOC_SOF_IMX_TOPLEVEL) += imx/
 obj-$(CONFIG_SND_SOC_SOF_XTENSA) += xtensa/
diff --git a/sound/soc/sof/sof-ipc-test-client.c b/sound/soc/sof/sof-ipc-test-client.c
new file mode 100644
index 000000000000..548417ebfdf8
--- /dev/null
+++ b/sound/soc/sof/sof-ipc-test-client.c
@@ -0,0 +1,325 @@
+// SPDX-License-Identifier: GPL-2.0-only
+//
+// Copyright(c) 2020 Intel Corporation. All rights reserved.
+//
+// Author: Ranjani Sridharan <ranjani.sridharan@linux.intel.com>
+//
+
+#include <linux/completion.h>
+#include <linux/debugfs.h>
+#include <linux/ktime.h>
+#include <linux/mod_devicetable.h>
+#include <linux/module.h>
+#include <linux/pm_runtime.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+#include <linux/virtual_bus.h>
+#include <sound/sof/header.h>
+#include "sof-client.h"
+
+#define MAX_IPC_FLOOD_DURATION_MS 1000
+#define MAX_IPC_FLOOD_COUNT 10000
+#define IPC_FLOOD_TEST_RESULT_LEN 512
+#define SOF_IPC_CLIENT_SUSPEND_DELAY_MS 3000
+
+struct sof_ipc_client_data {
+	struct dentry *dfs_root;
+	char *buf;
+};
+
+static int sof_debug_ipc_flood_test(struct sof_client_dev *cdev,
+				    bool flood_duration_test,
+				    unsigned long ipc_duration_ms,
+				    unsigned long ipc_count)
+{
+	struct sof_ipc_client_data *ipc_client_data = cdev->data;
+	struct device *dev = &cdev->vdev.dev;
+	struct sof_ipc_cmd_hdr hdr;
+	struct sof_ipc_reply reply;
+	u64 min_response_time = U64_MAX;
+	u64 avg_response_time = 0;
+	u64 max_response_time = 0;
+	ktime_t cur = ktime_get();
+	ktime_t test_end;
+	int i = 0;
+	int ret = 0;
+
+	/* configure test IPC */
+	hdr.cmd = SOF_IPC_GLB_TEST_MSG | SOF_IPC_TEST_IPC_FLOOD;
+	hdr.size = sizeof(hdr);
+
+	/* set test end time for duration flood test */
+	test_end = ktime_get_ns() + ipc_duration_ms * NSEC_PER_MSEC;
+
+	/* send test IPC's */
+	for (i = 0;
+	     flood_duration_test ? ktime_to_ns(cur) < test_end : i < ipc_count;
+	     i++) {
+		ktime_t start;
+		u64 ipc_response_time;
+
+		start = ktime_get();
+		ret = sof_client_ipc_tx_message(cdev, hdr.cmd,
+						&hdr, hdr.size, &reply,
+						sizeof(reply));
+		if (ret < 0)
+			break;
+		cur = ktime_get();
+
+		/* compute min and max response times */
+		ipc_response_time = ktime_to_ns(ktime_sub(cur, start));
+		min_response_time = min(min_response_time, ipc_response_time);
+		max_response_time = max(max_response_time, ipc_response_time);
+
+		/* sum up response times */
+		avg_response_time += ipc_response_time;
+	}
+
+	if (ret < 0)
+		dev_err(dev, "error: ipc flood test failed at %d iterations\n",
+			i);
+
+	/* return if the first IPC fails */
+	if (!i)
+		return ret;
+
+	/* compute average response time */
+	avg_response_time = DIV_ROUND_CLOSEST_ULL(avg_response_time, i);
+
+	/* clear previous test output */
+	memset(ipc_client_data->buf, 0, IPC_FLOOD_TEST_RESULT_LEN);
+
+	if (flood_duration_test) {
+		dev_dbg(dev, "IPC Flood test duration: %lums\n",
+			ipc_duration_ms);
+		snprintf(ipc_client_data->buf, IPC_FLOOD_TEST_RESULT_LEN,
+			 "IPC Flood test duration: %lums\n", ipc_duration_ms);
+	}
+
+	dev_dbg(dev,
+		"IPC Flood count: %d, Avg response time: %lluns\n",
+		i, avg_response_time);
+	dev_dbg(dev, "Max response time: %lluns\n",
+		max_response_time);
+	dev_dbg(dev, "Min response time: %lluns\n",
+		min_response_time);
+
+	/* format output string and save test results */
+	snprintf(ipc_client_data->buf + strlen(ipc_client_data->buf),
+		 IPC_FLOOD_TEST_RESULT_LEN - strlen(ipc_client_data->buf),
+		 "IPC Flood count: %d\nAvg response time: %lluns\n",
+		 i, avg_response_time);
+
+	snprintf(ipc_client_data->buf + strlen(ipc_client_data->buf),
+		 IPC_FLOOD_TEST_RESULT_LEN - strlen(ipc_client_data->buf),
+		 "Max response time: %lluns\nMin response time: %lluns\n",
+		 max_response_time, min_response_time);
+
+	return ret;
+}
+
+static ssize_t sof_ipc_dfsentry_write(struct file *file,
+				      const char __user *buffer,
+				      size_t count, loff_t *ppos)
+{
+	struct dentry *dentry = file->f_path.dentry;
+	struct sof_client_dev *cdev = file->private_data;
+	struct device *dev = &cdev->vdev.dev;
+	unsigned long ipc_duration_ms = 0;
+	bool flood_duration_test;
+	unsigned long ipc_count = 0;
+	char *string;
+	size_t size;
+	int err;
+	int ret;
+
+	string = kzalloc(count + 1, GFP_KERNEL);
+	if (!string)
+		return -ENOMEM;
+
+	size = simple_write_to_buffer(string, count, ppos, buffer, count);
+
+	flood_duration_test = !strcmp(dentry->d_name.name,
+				      "ipc_flood_duration_ms");
+
+	/* set test completion criterion */
+	ret = flood_duration_test ? kstrtoul(string, 0, &ipc_duration_ms) :
+			kstrtoul(string, 0, &ipc_count);
+	if (ret < 0)
+		goto out;
+
+	/* limit max duration/ipc count for flood test */
+	if (flood_duration_test) {
+		if (!ipc_duration_ms) {
+			ret = size;
+			goto out;
+		}
+
+		ipc_duration_ms = min_t(unsigned long, ipc_duration_ms,
+				      MAX_IPC_FLOOD_DURATION_MS);
+	} else {
+		if (!ipc_count) {
+			ret = size;
+			goto out;
+		}
+
+		ipc_count = min_t(unsigned long, ipc_count,
+				  MAX_IPC_FLOOD_COUNT);
+	}
+
+	ret = pm_runtime_get_sync(dev);
+	if (ret < 0) {
+		dev_err_ratelimited(dev,
+				    "error: debugfs write failed to resume %d\n",
+				    ret);
+		pm_runtime_put_noidle(dev);
+		goto out;
+	}
+
+	/* flood test */
+	ret = sof_debug_ipc_flood_test(cdev, flood_duration_test,
+				       ipc_duration_ms, ipc_count);
+
+	pm_runtime_mark_last_busy(dev);
+	err = pm_runtime_put_autosuspend(dev);
+	if (err < 0)
+		dev_err_ratelimited(dev,
+				    "error: debugfs write failed to idle %d\n",
+				    err);
+
+	/* return size if test is successful */
+	if (ret >= 0)
+		ret = size;
+out:
+	kfree(string);
+	return ret;
+}
+
+static ssize_t sof_ipc_dfsentry_read(struct file *file, char __user *buffer,
+				     size_t count, loff_t *ppos)
+{
+	struct sof_client_dev *cdev = file->private_data;
+	struct sof_ipc_client_data *ipc_client_data = cdev->data;
+	size_t size_ret;
+
+	if (*ppos)
+		return 0;
+
+	/* return results of the last IPC test */
+	count = min(count, strlen(ipc_client_data->buf));
+	size_ret = copy_to_user(buffer, ipc_client_data->buf, count);
+	if (size_ret)
+		return -EFAULT;
+
+	*ppos += count;
+	return count;
+}
+
+static const struct file_operations sof_ipc_dfs_fops = {
+	.open = simple_open,
+	.read = sof_ipc_dfsentry_read,
+	.llseek = default_llseek,
+	.write = sof_ipc_dfsentry_write,
+};
+
+static int sof_ipc_test_probe(struct virtbus_device *vdev)
+{
+	struct sof_client_dev *cdev = virtbus_dev_to_sof_client_dev(vdev);
+	struct sof_ipc_client_data *ipc_client_data;
+
+	/*
+	 * The virtbus device has a usage count of 0 even before runtime PM
+	 * is enabled. So, increment the usage count to let the device
+	 * suspend after probe is complete.
+	 */
+	pm_runtime_get_noresume(&vdev->dev);
+
+	/* allocate memory for client data */
+	ipc_client_data = devm_kzalloc(&vdev->dev, sizeof(*ipc_client_data),
+				       GFP_KERNEL);
+	if (!ipc_client_data)
+		return -ENOMEM;
+
+	ipc_client_data->buf = devm_kzalloc(&vdev->dev,
+					    IPC_FLOOD_TEST_RESULT_LEN,
+					    GFP_KERNEL);
+	if (!ipc_client_data->buf)
+		return -ENOMEM;
+
+	cdev->data = ipc_client_data;
+
+	/* create debugfs root folder with device name under parent SOF dir */
+	ipc_client_data->dfs_root =
+		debugfs_create_dir(dev_name(&vdev->dev),
+				   sof_client_get_debugfs_root(cdev));
+
+	/* create read-write ipc_flood_count debugfs entry */
+	debugfs_create_file("ipc_flood_count", 0644, ipc_client_data->dfs_root,
+			    cdev, &sof_ipc_dfs_fops);
+
+	/* create read-write ipc_flood_duration_ms debugfs entry */
+	debugfs_create_file("ipc_flood_duration_ms", 0644,
+			    ipc_client_data->dfs_root,
+			    cdev, &sof_ipc_dfs_fops);
+
+	/* enable runtime PM */
+	pm_runtime_set_autosuspend_delay(&vdev->dev,
+					 SOF_IPC_CLIENT_SUSPEND_DELAY_MS);
+	pm_runtime_use_autosuspend(&vdev->dev);
+	pm_runtime_set_active(&vdev->dev);
+	pm_runtime_enable(&vdev->dev);
+	pm_runtime_mark_last_busy(&vdev->dev);
+	pm_runtime_put_autosuspend(&vdev->dev);
+
+	/* complete client device registration */
+	complete(&cdev->probe_complete);
+
+	return 0;
+}
+
+static int sof_ipc_test_cleanup(struct virtbus_device *vdev)
+{
+	struct sof_client_dev *cdev = virtbus_dev_to_sof_client_dev(vdev);
+	struct sof_ipc_client_data *ipc_client_data = cdev->data;
+
+	pm_runtime_disable(&vdev->dev);
+	debugfs_remove_recursive(ipc_client_data->dfs_root);
+
+	return 0;
+}
+
+static int sof_ipc_test_remove(struct virtbus_device *vdev)
+{
+	return sof_ipc_test_cleanup(vdev);
+}
+
+static void sof_ipc_test_shutdown(struct virtbus_device *vdev)
+{
+	sof_ipc_test_cleanup(vdev);
+}
+
+static const struct virtbus_dev_id sof_ipc_virtbus_id_table[] = {
+	{"sof-ipc-test"},
+	{},
+};
+
+static struct sof_client_drv sof_ipc_test_client_drv = {
+	.name = "sof-ipc-test-client-drv",
+	.type = SOF_CLIENT_IPC,
+	.virtbus_drv = {
+		.driver = {
+			.name = "sof-ipc-test-virtbus-drv",
+		},
+		.id_table = sof_ipc_virtbus_id_table,
+		.probe = sof_ipc_test_probe,
+		.remove = sof_ipc_test_remove,
+		.shutdown = sof_ipc_test_shutdown,
+	},
+};
+
+module_sof_client_driver(sof_ipc_test_client_drv);
+
+MODULE_DESCRIPTION("SOF IPC Test Client Driver");
+MODULE_LICENSE("GPL v2");
+MODULE_IMPORT_NS(SND_SOC_SOF_CLIENT);
+MODULE_ALIAS("virtbus:sof-ipc-test");
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [net-next v4 12/12] ASoC: SOF: ops: Add new op for client registration
  2020-05-20  7:02 [net-next v4 00/12][pull request] 100GbE Intel Wired LAN Driver Updates 2020-05-19 Jeff Kirsher
                   ` (10 preceding siblings ...)
  2020-05-20  7:02 ` [net-next v4 11/12] ASoC: SOF: Create client driver for IPC test Jeff Kirsher
@ 2020-05-20  7:02 ` Jeff Kirsher
  2020-05-20  7:23   ` Greg KH
  2020-05-20  7:17 ` [net-next v4 00/12][pull request] 100GbE Intel Wired LAN Driver Updates 2020-05-19 Greg KH
  12 siblings, 1 reply; 69+ messages in thread
From: Jeff Kirsher @ 2020-05-20  7:02 UTC (permalink / raw)
  To: davem, gregkh
  Cc: Ranjani Sridharan, netdev, linux-rdma, nhorman, sassmann, jgg,
	pierre-louis.bossart, Fred Oh, Jeff Kirsher

From: Ranjani Sridharan <ranjani.sridharan@linux.intel.com>

Add a new op for registering clients. The clients to be
registered depend on the DSP capabilities and the ACPI/DT
information. For now, we only add 2 IPC test clients that
will be used to run tandem IPC flood tests for all Intel
platforms.

For ACPI platforms, change the Kconfig to select
SND_SOC_SOF_PROBE_WORK_QUEUE to allow the virtbus driver
to probe when the client is registered.

Signed-off-by: Ranjani Sridharan <ranjani.sridharan@linux.intel.com>
Signed-off-by: Fred Oh <fred.oh@linux.intel.com>
Reviewed-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
 sound/soc/sof/core.c        |  8 ++++++++
 sound/soc/sof/intel/Kconfig |  1 +
 sound/soc/sof/intel/apl.c   | 26 ++++++++++++++++++++++++++
 sound/soc/sof/intel/bdw.c   | 25 +++++++++++++++++++++++++
 sound/soc/sof/intel/byt.c   | 28 ++++++++++++++++++++++++++++
 sound/soc/sof/intel/cnl.c   | 26 ++++++++++++++++++++++++++
 sound/soc/sof/ops.h         | 34 ++++++++++++++++++++++++++++++++++
 sound/soc/sof/sof-priv.h    |  3 +++
 8 files changed, 151 insertions(+)

diff --git a/sound/soc/sof/core.c b/sound/soc/sof/core.c
index fdfed157e6c0..a0382612b9e7 100644
--- a/sound/soc/sof/core.c
+++ b/sound/soc/sof/core.c
@@ -245,6 +245,12 @@ static int sof_probe_continue(struct snd_sof_dev *sdev)
 	if (plat_data->sof_probe_complete)
 		plat_data->sof_probe_complete(sdev->dev);
 
+	/*
+	 * Register client devices. This can fail but errors cannot be
+	 * propagated.
+	 */
+	snd_sof_register_clients(sdev);
+
 	return 0;
 
 fw_trace_err:
@@ -349,6 +355,7 @@ int snd_sof_device_remove(struct device *dev)
 		cancel_work_sync(&sdev->probe_work);
 
 	if (sdev->fw_state > SOF_FW_BOOT_NOT_STARTED) {
+		snd_sof_unregister_clients(sdev);
 		snd_sof_fw_unload(sdev);
 		snd_sof_ipc_free(sdev);
 		snd_sof_free_debug(sdev);
@@ -382,4 +389,5 @@ EXPORT_SYMBOL(snd_sof_device_remove);
 MODULE_AUTHOR("Liam Girdwood");
 MODULE_DESCRIPTION("Sound Open Firmware (SOF) Core");
 MODULE_LICENSE("Dual BSD/GPL");
+MODULE_IMPORT_NS(SND_SOC_SOF_CLIENT);
 MODULE_ALIAS("platform:sof-audio");
diff --git a/sound/soc/sof/intel/Kconfig b/sound/soc/sof/intel/Kconfig
index c9a2bee4b55c..002fd426ee53 100644
--- a/sound/soc/sof/intel/Kconfig
+++ b/sound/soc/sof/intel/Kconfig
@@ -13,6 +13,7 @@ config SND_SOC_SOF_INTEL_ACPI
 	def_tristate SND_SOC_SOF_ACPI
 	select SND_SOC_SOF_BAYTRAIL  if SND_SOC_SOF_BAYTRAIL_SUPPORT
 	select SND_SOC_SOF_BROADWELL if SND_SOC_SOF_BROADWELL_SUPPORT
+	select SND_SOC_SOF_PROBE_WORK_QUEUE if SND_SOC_SOF_CLIENT
 	help
 	  This option is not user-selectable but automagically handled by
 	  'select' statements at a higher level
diff --git a/sound/soc/sof/intel/apl.c b/sound/soc/sof/intel/apl.c
index 02218d22e51f..547b2b0ccb9a 100644
--- a/sound/soc/sof/intel/apl.c
+++ b/sound/soc/sof/intel/apl.c
@@ -15,9 +15,13 @@
  * Hardware interface for audio DSP on Apollolake and GeminiLake
  */
 
+#include <linux/module.h>
 #include "../sof-priv.h"
 #include "hda.h"
 #include "../sof-audio.h"
+#if IS_ENABLED(CONFIG_SND_SOC_SOF_CLIENT)
+#include "../sof-client.h"
+#endif
 
 static const struct snd_sof_debugfs_map apl_dsp_debugfs[] = {
 	{"hda", HDA_DSP_HDA_BAR, 0, 0x4000, SOF_DEBUGFS_ACCESS_ALWAYS},
@@ -25,6 +29,24 @@ static const struct snd_sof_debugfs_map apl_dsp_debugfs[] = {
 	{"dsp", HDA_DSP_BAR,  0, 0x10000, SOF_DEBUGFS_ACCESS_ALWAYS},
 };
 
+#if IS_ENABLED(CONFIG_SND_SOC_SOF_CLIENT)
+static void apl_register_clients(struct snd_sof_dev *sdev)
+{
+#if IS_ENABLED(CONFIG_SND_SOC_SOF_DEBUG_IPC_FLOOD_TEST_CLIENT)
+	/*
+	 * Register 2 IPC clients to facilitate tandem flood test.
+	 * The device name below is appended with the device ID assigned
+	 * automatically when the virtbus device is registered making
+	 * them unique.
+	 */
+	sof_client_dev_register(sdev, "sof-ipc-test");
+	sof_client_dev_register(sdev, "sof-ipc-test");
+#endif
+}
+#else
+static void apl_register_clients(struct snd_sof_dev *sdev) {}
+#endif
+
 /* apollolake ops */
 const struct snd_sof_dsp_ops sof_apl_ops = {
 	/* probe and remove */
@@ -101,6 +123,9 @@ const struct snd_sof_dsp_ops sof_apl_ops = {
 	.trace_release = hda_dsp_trace_release,
 	.trace_trigger = hda_dsp_trace_trigger,
 
+	/* client register */
+	.register_clients = apl_register_clients,
+
 	/* DAI drivers */
 	.drv		= skl_dai,
 	.num_drv	= SOF_SKL_NUM_DAIS,
@@ -140,3 +165,4 @@ const struct sof_intel_dsp_desc apl_chip_info = {
 	.ssp_base_offset = APL_SSP_BASE_OFFSET,
 };
 EXPORT_SYMBOL_NS(apl_chip_info, SND_SOC_SOF_INTEL_HDA_COMMON);
+MODULE_IMPORT_NS(SND_SOC_SOF_CLIENT);
diff --git a/sound/soc/sof/intel/bdw.c b/sound/soc/sof/intel/bdw.c
index a32a3ef78ec5..62617f3c40f8 100644
--- a/sound/soc/sof/intel/bdw.c
+++ b/sound/soc/sof/intel/bdw.c
@@ -18,6 +18,9 @@
 #include "../ops.h"
 #include "shim.h"
 #include "../sof-audio.h"
+#if IS_ENABLED(CONFIG_SND_SOC_SOF_CLIENT)
+#include "../sof-client.h"
+#endif
 
 /* BARs */
 #define BDW_DSP_BAR 0
@@ -563,6 +566,24 @@ static void bdw_set_mach_params(const struct snd_soc_acpi_mach *mach,
 	mach_params->platform = dev_name(dev);
 }
 
+#if IS_ENABLED(CONFIG_SND_SOC_SOF_CLIENT)
+static void bdw_register_clients(struct snd_sof_dev *sdev)
+{
+#if IS_ENABLED(CONFIG_SND_SOC_SOF_DEBUG_IPC_FLOOD_TEST_CLIENT)
+	/*
+	 * Register 2 IPC clients to facilitate tandem flood test.
+	 * The device name below is appended with the device ID assigned
+	 * automatically when the virtbus device is registered making
+	 * them unique.
+	 */
+	sof_client_dev_register(sdev, "sof-ipc-test");
+	sof_client_dev_register(sdev, "sof-ipc-test");
+#endif
+}
+#else
+static void bdw_register_clients(struct snd_sof_dev *sdev) {}
+#endif
+
 /* Broadwell DAIs */
 static struct snd_soc_dai_driver bdw_dai[] = {
 {
@@ -638,6 +659,9 @@ const struct snd_sof_dsp_ops sof_bdw_ops = {
 	/*Firmware loading */
 	.load_firmware	= snd_sof_load_firmware_memcpy,
 
+	/* client register */
+	.register_clients = bdw_register_clients,
+
 	/* DAI drivers */
 	.drv = bdw_dai,
 	.num_drv = ARRAY_SIZE(bdw_dai),
@@ -662,3 +686,4 @@ EXPORT_SYMBOL_NS(bdw_chip_info, SND_SOC_SOF_BROADWELL);
 MODULE_LICENSE("Dual BSD/GPL");
 MODULE_IMPORT_NS(SND_SOC_SOF_INTEL_HIFI_EP_IPC);
 MODULE_IMPORT_NS(SND_SOC_SOF_XTENSA);
+MODULE_IMPORT_NS(SND_SOC_SOF_CLIENT);
diff --git a/sound/soc/sof/intel/byt.c b/sound/soc/sof/intel/byt.c
index 29fd1d86156c..76263596917f 100644
--- a/sound/soc/sof/intel/byt.c
+++ b/sound/soc/sof/intel/byt.c
@@ -19,6 +19,9 @@
 #include "shim.h"
 #include "../sof-audio.h"
 #include "../../intel/common/soc-intel-quirks.h"
+#if IS_ENABLED(CONFIG_SND_SOC_SOF_CLIENT)
+#include "../sof-client.h"
+#endif
 
 /* DSP memories */
 #define IRAM_OFFSET		0x0C0000
@@ -779,6 +782,24 @@ static int byt_acpi_probe(struct snd_sof_dev *sdev)
 	return ret;
 }
 
+#if IS_ENABLED(CONFIG_SND_SOC_SOF_CLIENT)
+static void byt_register_clients(struct snd_sof_dev *sdev)
+{
+#if IS_ENABLED(CONFIG_SND_SOC_SOF_DEBUG_IPC_FLOOD_TEST_CLIENT)
+	/*
+	 * Register 2 IPC clients to facilitate tandem flood test.
+	 * The device name below is appended with the device ID assigned
+	 * automatically when the virtbus device is registered making
+	 * them unique.
+	 */
+	sof_client_dev_register(sdev, "sof-ipc-test");
+	sof_client_dev_register(sdev, "sof-ipc-test");
+#endif
+}
+#else
+static void byt_register_clients(struct snd_sof_dev *sdev) {}
+#endif
+
 /* baytrail ops */
 const struct snd_sof_dsp_ops sof_byt_ops = {
 	/* device init */
@@ -832,6 +853,9 @@ const struct snd_sof_dsp_ops sof_byt_ops = {
 	/*Firmware loading */
 	.load_firmware	= snd_sof_load_firmware_memcpy,
 
+	/* client register */
+	.register_clients = byt_register_clients,
+
 	/* DAI drivers */
 	.drv = byt_dai,
 	.num_drv = 3, /* we have only 3 SSPs on byt*/
@@ -906,6 +930,9 @@ const struct snd_sof_dsp_ops sof_cht_ops = {
 	/*Firmware loading */
 	.load_firmware	= snd_sof_load_firmware_memcpy,
 
+	/* client register */
+	.register_clients = byt_register_clients,
+
 	/* DAI drivers */
 	.drv = byt_dai,
 	/* all 6 SSPs may be available for cherrytrail */
@@ -933,3 +960,4 @@ EXPORT_SYMBOL_NS(cht_chip_info, SND_SOC_SOF_BAYTRAIL);
 MODULE_LICENSE("Dual BSD/GPL");
 MODULE_IMPORT_NS(SND_SOC_SOF_INTEL_HIFI_EP_IPC);
 MODULE_IMPORT_NS(SND_SOC_SOF_XTENSA);
+MODULE_IMPORT_NS(SND_SOC_SOF_CLIENT);
diff --git a/sound/soc/sof/intel/cnl.c b/sound/soc/sof/intel/cnl.c
index e427d00eca71..0eedb39e1c89 100644
--- a/sound/soc/sof/intel/cnl.c
+++ b/sound/soc/sof/intel/cnl.c
@@ -15,10 +15,14 @@
  * Hardware interface for audio DSP on Cannonlake.
  */
 
+#include <linux/module.h>
 #include "../ops.h"
 #include "hda.h"
 #include "hda-ipc.h"
 #include "../sof-audio.h"
+#if IS_ENABLED(CONFIG_SND_SOC_SOF_CLIENT)
+#include "../sof-client.h"
+#endif
 
 static const struct snd_sof_debugfs_map cnl_dsp_debugfs[] = {
 	{"hda", HDA_DSP_HDA_BAR, 0, 0x4000, SOF_DEBUGFS_ACCESS_ALWAYS},
@@ -231,6 +235,24 @@ static void cnl_ipc_dump(struct snd_sof_dev *sdev)
 		hipcida, hipctdr, hipcctl);
 }
 
+#if IS_ENABLED(CONFIG_SND_SOC_SOF_CLIENT)
+static void cnl_register_clients(struct snd_sof_dev *sdev)
+{
+#if IS_ENABLED(CONFIG_SND_SOC_SOF_DEBUG_IPC_FLOOD_TEST_CLIENT)
+	/*
+	 * Register 2 IPC clients to facilitate tandem flood test.
+	 * The device name below is appended with the device ID assigned
+	 * automatically when the virtbus device is registered making
+	 * them unique.
+	 */
+	sof_client_dev_register(sdev, "sof-ipc-test");
+	sof_client_dev_register(sdev, "sof-ipc-test");
+#endif
+}
+#else
+static void cnl_register_clients(struct snd_sof_dev *sdev) {}
+#endif
+
 /* cannonlake ops */
 const struct snd_sof_dsp_ops sof_cnl_ops = {
 	/* probe and remove */
@@ -307,6 +329,9 @@ const struct snd_sof_dsp_ops sof_cnl_ops = {
 	.trace_release = hda_dsp_trace_release,
 	.trace_trigger = hda_dsp_trace_trigger,
 
+	/* client register */
+	.register_clients = cnl_register_clients,
+
 	/* DAI drivers */
 	.drv		= skl_dai,
 	.num_drv	= SOF_SKL_NUM_DAIS,
@@ -417,3 +442,4 @@ const struct sof_intel_dsp_desc jsl_chip_info = {
 	.ssp_base_offset = CNL_SSP_BASE_OFFSET,
 };
 EXPORT_SYMBOL_NS(jsl_chip_info, SND_SOC_SOF_INTEL_HDA_COMMON);
+MODULE_IMPORT_NS(SND_SOC_SOF_CLIENT);
diff --git a/sound/soc/sof/ops.h b/sound/soc/sof/ops.h
index a771500ac442..36b379078b03 100644
--- a/sound/soc/sof/ops.h
+++ b/sound/soc/sof/ops.h
@@ -14,9 +14,14 @@
 #include <linux/device.h>
 #include <linux/interrupt.h>
 #include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/mutex.h>
 #include <linux/types.h>
 #include <sound/pcm.h>
 #include "sof-priv.h"
+#if IS_ENABLED(CONFIG_SND_SOC_SOF_CLIENT)
+#include "sof-client.h"
+#endif
 
 #define sof_ops(sdev) \
 	((sdev)->pdata->desc->ops)
@@ -470,6 +475,35 @@ snd_sof_set_mach_params(const struct snd_soc_acpi_mach *mach,
 		sof_ops(sdev)->set_mach_params(mach, dev);
 }
 
+#if IS_ENABLED(CONFIG_SND_SOC_SOF_CLIENT)
+static inline void
+snd_sof_register_clients(struct snd_sof_dev *sdev)
+{
+	if (sof_ops(sdev) && sof_ops(sdev)->register_clients)
+		sof_ops(sdev)->register_clients(sdev);
+}
+
+static inline void
+snd_sof_unregister_clients(struct snd_sof_dev *sdev)
+{
+	struct sof_client_dev *cdev, *_cdev;
+
+	/* unregister client devices */
+	mutex_lock(&sdev->client_mutex);
+	list_for_each_entry_safe(cdev, _cdev, &sdev->client_list, list) {
+		sof_client_dev_unregister(cdev);
+		list_del(&cdev->list);
+	}
+	mutex_unlock(&sdev->client_mutex);
+}
+#else
+static inline void
+snd_sof_register_clients(struct snd_sof_dev *sdev) {}
+
+static inline void
+snd_sof_unregister_clients(struct snd_sof_dev *sdev) {}
+#endif
+
 static inline const struct snd_sof_dsp_ops
 *sof_get_ops(const struct sof_dev_desc *d,
 	     const struct sof_ops_table mach_ops[], int asize)
diff --git a/sound/soc/sof/sof-priv.h b/sound/soc/sof/sof-priv.h
index 9da7f6f45362..0fdd9c5d872c 100644
--- a/sound/soc/sof/sof-priv.h
+++ b/sound/soc/sof/sof-priv.h
@@ -249,6 +249,9 @@ struct snd_sof_dsp_ops {
 	void (*set_mach_params)(const struct snd_soc_acpi_mach *mach,
 				struct device *dev); /* optional */
 
+	/* client ops */
+	void (*register_clients)(struct snd_sof_dev *sdev); /* optional */
+
 	/* DAI ops */
 	struct snd_soc_dai_driver *drv;
 	int num_drv;
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* Re: [net-next v4 00/12][pull request] 100GbE Intel Wired LAN Driver Updates 2020-05-19
  2020-05-20  7:02 [net-next v4 00/12][pull request] 100GbE Intel Wired LAN Driver Updates 2020-05-19 Jeff Kirsher
                   ` (11 preceding siblings ...)
  2020-05-20  7:02 ` [net-next v4 12/12] ASoC: SOF: ops: Add new op for client registration Jeff Kirsher
@ 2020-05-20  7:17 ` Greg KH
  2020-05-20  7:25   ` Kirsher, Jeffrey T
  12 siblings, 1 reply; 69+ messages in thread
From: Greg KH @ 2020-05-20  7:17 UTC (permalink / raw)
  To: Jeff Kirsher
  Cc: davem, netdev, linux-rdma, nhorman, sassmann, jgg, parav,
	galpress, selvin.xavier, sriharsha.basavapatna, benve, bharat,
	xavier.huwei, yishaih, leonro, mkalderon, aditr,
	ranjani.sridharan, pierre-louis.bossart

On Wed, May 20, 2020 at 12:02:15AM -0700, Jeff Kirsher wrote:
> This series contains the initial implementation of the Virtual Bus,
> virtbus_device, virtbus_driver, updates to 'ice' and 'i40e' to use the new
> Virtual Bus.
> 
> The primary purpose of the Virtual bus is to put devices on it and hook the
> devices up to drivers.  This will allow drivers, like the RDMA drivers, to
> hook up to devices via this Virtual bus.
> 
> The associated irdma driver designed to use this new interface, is still
> in RFC currently and was sent in a separate series.  The latest RFC
> series follows this series, named "Intel RDMA Driver Updates 2020-05-19".  
> 
> This series currently builds against net-next tree.
> 
> Revision history:
> v2: Made changes based on community feedback, like Pierre-Louis's and
>     Jason's comments to update virtual bus interface.
> v3: Updated the virtual bus interface based on feedback from Jason and
>     Greg KH.  Also updated the initial ice driver patch to handle the
>     virtual bus changes and changes requested by Jason and Greg KH.
> v4: Updated the kernel documentation based on feedback from Greg KH.
>     Also added PM interface updates to satisfy the sound driver
>     requirements.  Added the sound driver changes that makes use of the
>     virtual bus.

Why didn't you change patch 2 like I asked you to?

And I still have no idea why you all are not using the virtual bus in
the "ice" driver implementation.  Why is it even there if you don't need
it?  I thought that was the whole reason you wrote this code, not for
the sound drivers.

How can you get away with just using a virtual device but not the bus?
What does that help out with?  What "bus" do those devices belong to?

Again, please fix up patch 2 to only add virtual device/bus support;
right now it is just too much of a mess with all of the other
functionality you are adding in there to be able to determine if you are
using the new api correctly.

And again, didn't I ask for this last time?

greg k-h

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-05-20  7:02 ` [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client Jeff Kirsher
@ 2020-05-20  7:20   ` Greg KH
  2020-05-20 12:54   ` Jason Gunthorpe
  2020-06-29 17:36   ` Mark Brown
  2 siblings, 0 replies; 69+ messages in thread
From: Greg KH @ 2020-05-20  7:20 UTC (permalink / raw)
  To: Jeff Kirsher
  Cc: davem, Ranjani Sridharan, netdev, linux-rdma, nhorman, sassmann,
	jgg, pierre-louis.bossart, Fred Oh

On Wed, May 20, 2020 at 12:02:25AM -0700, Jeff Kirsher wrote:
> From: Ranjani Sridharan <ranjani.sridharan@linux.intel.com>
> 
> A client in the SOF (Sound Open Firmware) context is a
> device that needs to communicate with the DSP via IPC
> messages. The SOF core is responsible for serializing the
> IPC messages to the DSP from the different clients. One
> example of an SOF client would be an IPC test client that
> floods the DSP with test IPC messages to validate if the
> serialization works as expected. Multi-client support will
> also add the ability to split the existing audio cards
> into multiple ones, so as to, e.g., deal with HDMI with a
> dedicated client instead of adding HDMI to all cards.
> 
> This patch introduces descriptors for SOF client driver
> and SOF client device along with APIs for registering
> and unregistering a SOF client driver, sending IPCs from
> a client device and accessing the SOF core debugfs root entry.
> 
> Along with this, add a couple of new members to struct
> snd_sof_dev that will be used for maintaining the list of
> clients.

Here is where you are first using a virtual bus driver, and yet, no
mention of that at all in the changelog.  Why?

Why are virtual devices/busses even needed here?

greg k-h

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 11/12] ASoC: SOF: Create client driver for IPC test
  2020-05-20  7:02 ` [net-next v4 11/12] ASoC: SOF: Create client driver for IPC test Jeff Kirsher
@ 2020-05-20  7:22   ` Greg KH
  2020-05-20 12:56   ` Jason Gunthorpe
  1 sibling, 0 replies; 69+ messages in thread
From: Greg KH @ 2020-05-20  7:22 UTC (permalink / raw)
  To: Jeff Kirsher
  Cc: davem, Ranjani Sridharan, netdev, linux-rdma, nhorman, sassmann,
	jgg, pierre-louis.bossart, Fred Oh

On Wed, May 20, 2020 at 12:02:26AM -0700, Jeff Kirsher wrote:
> From: Ranjani Sridharan <ranjani.sridharan@linux.intel.com>
> 
> Create an SOF client driver for IPC flood test. This
> driver is used to set up the debugfs entries and the
> read/write ops for initiating the IPC flood test that
> would be used to measure the min/max/avg response times
> for sending IPCs to the DSP.

No form of documentation for what these debugfs files are for?  I know
you don't normally have to do this, but all you are doing here is
creating a "test" driver, with testing interfaces from userspace to the
kernel.  So how is anyone supposed to know how to use them?

These are complex debugfs files you are writing to, so a bit of a hint
as to what they are going to be doing would be nice, don't you think?
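
As far as I can tell from the code itself, the intended flow is
something like this (paths guessed from the patch, relative to the
device's directory under the SOF debugfs root):

  echo 1000 > ipc_flood_count        # flood the DSP with 1000 IPCs
  echo 500 > ipc_flood_duration_ms   # or: flood for 500 ms instead
  cat ipc_flood_count                # read back min/max/avg times

but nothing in the patch or the changelog actually says so.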

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 12/12] ASoC: SOF: ops: Add new op for client registration
  2020-05-20  7:02 ` [net-next v4 12/12] ASoC: SOF: ops: Add new op for client registration Jeff Kirsher
@ 2020-05-20  7:23   ` Greg KH
  0 siblings, 0 replies; 69+ messages in thread
From: Greg KH @ 2020-05-20  7:23 UTC (permalink / raw)
  To: Jeff Kirsher
  Cc: davem, Ranjani Sridharan, netdev, linux-rdma, nhorman, sassmann,
	jgg, pierre-louis.bossart, Fred Oh

On Wed, May 20, 2020 at 12:02:27AM -0700, Jeff Kirsher wrote:
> From: Ranjani Sridharan <ranjani.sridharan@linux.intel.com>
> 
> Add a new op for registering clients. The clients to be
> registered depend on the DSP capabilities and the ACPI/DT
> information. For now, we only add 2 IPC test clients that
> will be used to run tandem IPC flood tests for all Intel
> platforms.
> 
> For ACPI platforms, change the Kconfig to select
> SND_SOC_SOF_PROBE_WORK_QUEUE to allow the virtbus driver
> to probe when the client is registered.
> 
> Signed-off-by: Ranjani Sridharan <ranjani.sridharan@linux.intel.com>
> Signed-off-by: Fred Oh <fred.oh@linux.intel.com>
> Reviewed-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> ---
>  sound/soc/sof/core.c        |  8 ++++++++
>  sound/soc/sof/intel/Kconfig |  1 +
>  sound/soc/sof/intel/apl.c   | 26 ++++++++++++++++++++++++++
>  sound/soc/sof/intel/bdw.c   | 25 +++++++++++++++++++++++++
>  sound/soc/sof/intel/byt.c   | 28 ++++++++++++++++++++++++++++
>  sound/soc/sof/intel/cnl.c   | 26 ++++++++++++++++++++++++++
>  sound/soc/sof/ops.h         | 34 ++++++++++++++++++++++++++++++++++
>  sound/soc/sof/sof-priv.h    |  3 +++
>  8 files changed, 151 insertions(+)
> 
> diff --git a/sound/soc/sof/core.c b/sound/soc/sof/core.c
> index fdfed157e6c0..a0382612b9e7 100644
> --- a/sound/soc/sof/core.c
> +++ b/sound/soc/sof/core.c
> @@ -245,6 +245,12 @@ static int sof_probe_continue(struct snd_sof_dev *sdev)
>  	if (plat_data->sof_probe_complete)
>  		plat_data->sof_probe_complete(sdev->dev);
>  
> +	/*
> +	 * Register client devices. This can fail but errors cannot be
> +	 * propagated.
> +	 */
> +	snd_sof_register_clients(sdev);
> +
>  	return 0;
>  
>  fw_trace_err:
> @@ -349,6 +355,7 @@ int snd_sof_device_remove(struct device *dev)
>  		cancel_work_sync(&sdev->probe_work);
>  
>  	if (sdev->fw_state > SOF_FW_BOOT_NOT_STARTED) {
> +		snd_sof_unregister_clients(sdev);
>  		snd_sof_fw_unload(sdev);
>  		snd_sof_ipc_free(sdev);
>  		snd_sof_free_debug(sdev);
> @@ -382,4 +389,5 @@ EXPORT_SYMBOL(snd_sof_device_remove);
>  MODULE_AUTHOR("Liam Girdwood");
>  MODULE_DESCRIPTION("Sound Open Firmware (SOF) Core");
>  MODULE_LICENSE("Dual BSD/GPL");
> +MODULE_IMPORT_NS(SND_SOC_SOF_CLIENT);
>  MODULE_ALIAS("platform:sof-audio");
> diff --git a/sound/soc/sof/intel/Kconfig b/sound/soc/sof/intel/Kconfig
> index c9a2bee4b55c..002fd426ee53 100644
> --- a/sound/soc/sof/intel/Kconfig
> +++ b/sound/soc/sof/intel/Kconfig
> @@ -13,6 +13,7 @@ config SND_SOC_SOF_INTEL_ACPI
>  	def_tristate SND_SOC_SOF_ACPI
>  	select SND_SOC_SOF_BAYTRAIL  if SND_SOC_SOF_BAYTRAIL_SUPPORT
>  	select SND_SOC_SOF_BROADWELL if SND_SOC_SOF_BROADWELL_SUPPORT
> +	select SND_SOC_SOF_PROBE_WORK_QUEUE if SND_SOC_SOF_CLIENT
>  	help
>  	  This option is not user-selectable but automagically handled by
>  	  'select' statements at a higher level
> diff --git a/sound/soc/sof/intel/apl.c b/sound/soc/sof/intel/apl.c
> index 02218d22e51f..547b2b0ccb9a 100644
> --- a/sound/soc/sof/intel/apl.c
> +++ b/sound/soc/sof/intel/apl.c
> @@ -15,9 +15,13 @@
>   * Hardware interface for audio DSP on Apollolake and GeminiLake
>   */
>  
> +#include <linux/module.h>
>  #include "../sof-priv.h"
>  #include "hda.h"
>  #include "../sof-audio.h"
> +#if IS_ENABLED(CONFIG_SND_SOC_SOF_CLIENT)
> +#include "../sof-client.h"
> +#endif

The amount of #if additions in this patch is crazy.  That should never
be needed for a .h file like this, nor should it be needed for all of
the other times it is used in this patch.  Please fix up your api to not
need that at all, as it's really messy, don't you think?

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 69+ messages in thread

* RE: [net-next v4 00/12][pull request] 100GbE Intel Wired LAN Driver Updates 2020-05-19
  2020-05-20  7:17 ` [net-next v4 00/12][pull request] 100GbE Intel Wired LAN Driver Updates 2020-05-19 Greg KH
@ 2020-05-20  7:25   ` Kirsher, Jeffrey T
  2020-05-20  9:08     ` Greg KH
  0 siblings, 1 reply; 69+ messages in thread
From: Kirsher, Jeffrey T @ 2020-05-20  7:25 UTC (permalink / raw)
  To: Greg KH
  Cc: davem, netdev, linux-rdma, nhorman, sassmann, jgg, parav,
	galpress, selvin.xavier, sriharsha.basavapatna, benve, bharat,
	xavier.huwei, yishaih, leonro, mkalderon, aditr,
	ranjani.sridharan, pierre-louis.bossart

> -----Original Message-----
> From: Greg KH <gregkh@linuxfoundation.org>
> Sent: Wednesday, May 20, 2020 00:17
> To: Kirsher, Jeffrey T <jeffrey.t.kirsher@intel.com>
> Cc: davem@davemloft.net; netdev@vger.kernel.org; linux-
> rdma@vger.kernel.org; nhorman@redhat.com; sassmann@redhat.com;
> jgg@ziepe.ca; parav@mellanox.com; galpress@amazon.com;
> selvin.xavier@broadcom.com; sriharsha.basavapatna@broadcom.com;
> benve@cisco.com; bharat@chelsio.com; xavier.huwei@huawei.com;
> yishaih@mellanox.com; leonro@mellanox.com; mkalderon@marvell.com;
> aditr@vmware.com; ranjani.sridharan@linux.intel.com; pierre-
> louis.bossart@linux.intel.com
> Subject: Re: [net-next v4 00/12][pull request] 100GbE Intel Wired LAN Driver
> Updates 2020-05-19
> 
> On Wed, May 20, 2020 at 12:02:15AM -0700, Jeff Kirsher wrote:
> > This series contains the initial implementation of the Virtual Bus,
> > virtbus_device, virtbus_driver, updates to 'ice' and 'i40e' to use the
> > new Virtual Bus.
> >
> > The primary purpose of the Virtual bus is to put devices on it and
> > hook the devices up to drivers.  This will allow drivers, like the
> > RDMA drivers, to hook up to devices via this Virtual bus.
> >
> > The associated irdma driver designed to use this new interface, is
> > still in RFC currently and was sent in a separate series.  The latest
> > RFC series follows this series, named "Intel RDMA Driver Updates 2020-05-
> 19".
> >
> > This series currently builds against net-next tree.
> >
> > Revision history:
> > v2: Made changes based on community feedback, like Pierre-Louis's and
> >     Jason's comments to update virtual bus interface.
> > v3: Updated the virtual bus interface based on feedback from Jason and
> >     Greg KH.  Also updated the initial ice driver patch to handle the
> >     virtual bus changes and changes requested by Jason and Greg KH.
> > v4: Updated the kernel documentation based on feedback from Greg KH.
> >     Also added PM interface updates to satisfy the sound driver
> >     requirements.  Added the sound driver changes that makes use of the
> >     virtual bus.
> 
> Why didn't you change patch 2 like I asked you to?
> 
> And I still have no idea why you all are not using the virtual bus in the "ice"
> driver implementation.  Why is it even there if you don't need it?  I thought that
> was the whole reason you wrote this code, not for the sound drivers.
> 
> How can you get away with just using a virtual device but not the bus?
> What does that help out with?  What "bus" do those devices belong to?
> 
> Again, please fix up patch 2 to only add virtual device/bus support; right now
> it is just too much of a mess with all of the other functionality you are adding in
> there to be able to determine if you are using the new api correctly.
> 
> And again, didn't I ask for this last time?
[Kirsher, Jeffrey T] 

We apologize, but in the last submission you only commented on the first patch and the documentation.

In v1 & v2, you and Jason made comments on the LAN driver implementation (patch 2); we
addressed all of those comments and did not hear anything to the contrary in v3 until now.  If you
give constructive feedback, we will work to fix any issues you find.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 00/12][pull request] 100GbE Intel Wired LAN Driver Updates 2020-05-19
  2020-05-20  7:25   ` Kirsher, Jeffrey T
@ 2020-05-20  9:08     ` Greg KH
  0 siblings, 0 replies; 69+ messages in thread
From: Greg KH @ 2020-05-20  9:08 UTC (permalink / raw)
  To: Kirsher, Jeffrey T
  Cc: davem, netdev, linux-rdma, nhorman, sassmann, jgg, parav,
	galpress, selvin.xavier, sriharsha.basavapatna, benve, bharat,
	xavier.huwei, yishaih, leonro, mkalderon, aditr,
	ranjani.sridharan, pierre-louis.bossart

On Wed, May 20, 2020 at 07:25:39AM +0000, Kirsher, Jeffrey T wrote:
> > -----Original Message-----
> > From: Greg KH <gregkh@linuxfoundation.org>
> > Sent: Wednesday, May 20, 2020 00:17
> > To: Kirsher, Jeffrey T <jeffrey.t.kirsher@intel.com>
> > Cc: davem@davemloft.net; netdev@vger.kernel.org; linux-
> > rdma@vger.kernel.org; nhorman@redhat.com; sassmann@redhat.com;
> > jgg@ziepe.ca; parav@mellanox.com; galpress@amazon.com;
> > selvin.xavier@broadcom.com; sriharsha.basavapatna@broadcom.com;
> > benve@cisco.com; bharat@chelsio.com; xavier.huwei@huawei.com;
> > yishaih@mellanox.com; leonro@mellanox.com; mkalderon@marvell.com;
> > aditr@vmware.com; ranjani.sridharan@linux.intel.com; pierre-
> > louis.bossart@linux.intel.com
> > Subject: Re: [net-next v4 00/12][pull request] 100GbE Intel Wired LAN Driver
> > Updates 2020-05-19
> > 
> > On Wed, May 20, 2020 at 12:02:15AM -0700, Jeff Kirsher wrote:
> > > This series contains the initial implementation of the Virtual Bus,
> > > virtbus_device, virtbus_driver, updates to 'ice' and 'i40e' to use the
> > > new Virtual Bus.
> > >
> > > The primary purpose of the Virtual bus is to put devices on it and
> > > hook the devices up to drivers.  This will allow drivers, like the
> > > RDMA drivers, to hook up to devices via this Virtual bus.
> > >
> > > The associated irdma driver designed to use this new interface, is
> > > still in RFC currently and was sent in a separate series.  The latest
> > > RFC series follows this series, named "Intel RDMA Driver Updates 2020-05-
> > 19".
> > >
> > > This series currently builds against net-next tree.
> > >
> > > Revision history:
> > > v2: Made changes based on community feedback, like Pierre-Louis's and
> > >     Jason's comments to update virtual bus interface.
> > > v3: Updated the virtual bus interface based on feedback from Jason and
> > >     Greg KH.  Also updated the initial ice driver patch to handle the
> > >     virtual bus changes and changes requested by Jason and Greg KH.
> > > v4: Updated the kernel documentation based on feedback from Greg KH.
> > >     Also added PM interface updates to satisfy the sound driver
> > >     requirements.  Added the sound driver changes that makes use of the
> > >     virtual bus.
> > 
> > Why didn't you change patch 2 like I asked you to?
> > 
> > And I still have no idea why you all are not using the virtual bus in the "ice"
> > driver implementation.  Why is it even there if you don't need it?  I thought that
> > was the whole reason you wrote this code, not for the sound drivers.
> > 
> > How can you get away with just using a virtual device but not the bus?
> > What does that help out with?  What "bus" do those devices belong to?
> > 
> > Again, please fix up patch 2 to only add virtual device/bus support; right now
> > it is just too much of a mess with all of the other functionality you are adding in
> > there to be able to determine if you are using the new api correctly.
> > 
> > And again, didn't I ask for this last time?
> [Kirsher, Jeffrey T] 
> 
> We apologize, but in the last submission you only commented on the first patch and the documentation.

It's as if I am shouting into the wind...

{sigh} : https://lore.kernel.org/linux-rdma/20200507081737.GC1024567@kroah.com/


Ok, as the above text was too kind and nice and not explicit enough, let
me try this again:

  - this patch series makes no sense to me in that you are creating
    a virtual bus, but not using it in your driver at all.  Why create
    it at all then?
  - If a virtual device can be used without a virtual driver, what
    driver binds to that device, and what "bus" does it live on?
  - This patch 2 is a total mess of new functionality and virtual device
    additions, making it impossible to review.  Please split it up into
    tiny, easy to understand and review pieces.
  - Why is there sound driver code being submitted to netdev?  This
    virtual bus code should stand on its own, and if it is not needed,
    then let the code that adds it come in through a patch series that
    actually needs it (i.e. the sound code.)
  - As I can't understand how you are using the virtual bus/dev code,
    since I can't review patch 2, I have no idea if patch 1 is even written
    correctly.

So, your action items now are:
	- make patch 2 sane, in tiny pieces, and use the virtual bus
	  code.
	- review the documentation on patch 1 to see if it actually
	  makes sense (i.e. get a s-o-b from another kernel developer
	  who has never seen it before).
	- if patch 2 does not need the virtual bus code, explain the
	  heck out of it as to why that is so, and where the driver and
	  bus live instead, when you add support for that code to patch
	  2 (as part of the split up patch series).
	- stop sending these patches out as a "pull request" for netdev
	  maintainers to pull from.  Get my ack on them all before you
	  even attempt to get a networking maintainer's review to be
	  included in their tree.  By doing this you are making me jump
	  in order to keep from this code getting merged before it
	  should be, which just makes me grumpy (as you would be if you
	  were in my position here.)
	- send Greg a bottle of good whisky in penance for wasting all
	  of his time with these reviews that seem to be ignored.
	  Expense it to Intel, it's the least they could do.

Sound reasonable?

greg k-h

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-05-20  7:02 ` [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client Jeff Kirsher
  2020-05-20  7:20   ` Greg KH
@ 2020-05-20 12:54   ` Jason Gunthorpe
  2020-05-20 12:57     ` Jason Gunthorpe
  2020-05-21 21:11     ` Ranjani Sridharan
  2020-06-29 17:36   ` Mark Brown
  2 siblings, 2 replies; 69+ messages in thread
From: Jason Gunthorpe @ 2020-05-20 12:54 UTC (permalink / raw)
  To: Jeff Kirsher
  Cc: davem, gregkh, Ranjani Sridharan, netdev, linux-rdma, nhorman,
	sassmann, pierre-louis.bossart, Fred Oh

On Wed, May 20, 2020 at 12:02:25AM -0700, Jeff Kirsher wrote:
> From: Ranjani Sridharan <ranjani.sridharan@linux.intel.com>
> 
> A client in the SOF (Sound Open Firmware) context is a
> device that needs to communicate with the DSP via IPC
> messages. The SOF core is responsible for serializing the
> IPC messages to the DSP from the different clients. One
> example of an SOF client would be an IPC test client that
> floods the DSP with test IPC messages to validate if the
> serialization works as expected. Multi-client support will
> also add the ability to split the existing audio cards
> > into multiple ones, so as to, e.g., deal with HDMI with a
> dedicated client instead of adding HDMI to all cards.
> 
> This patch introduces descriptors for SOF client driver
> and SOF client device along with APIs for registering
> and unregistering a SOF client driver, sending IPCs from
> a client device and accessing the SOF core debugfs root entry.
> 
> Along with this, add a couple of new members to struct
> snd_sof_dev that will be used for maintaining the list of
> clients.

If you want to use sound as the rationale for virtual bus then drop the
networking stuff and present a complete device/driver pairing based on
this sound stuff instead.

> +int sof_client_dev_register(struct snd_sof_dev *sdev,
> +			    const char *name)
> +{
> +	struct sof_client_dev *cdev;
> +	struct virtbus_device *vdev;
> +	unsigned long time, timeout;
> +	int ret;
> +
> +	cdev = kzalloc(sizeof(*cdev), GFP_KERNEL);
> +	if (!cdev)
> +		return -ENOMEM;
> +
> +	cdev->sdev = sdev;
> +	init_completion(&cdev->probe_complete);
> +	vdev = &cdev->vdev;
> +	vdev->match_name = name;
> +	vdev->dev.parent = sdev->dev;
> +	vdev->release = sof_client_virtdev_release;
> +
> +	/*
> +	 * Register virtbus device for the client.
> +	 * The error path in virtbus_register_device() calls put_device(),
> +	 * which will free cdev in the release callback.
> +	 */
> +	ret = virtbus_register_device(vdev);
> +	if (ret < 0)
> +		return ret;
> +
> +	/* make sure the probe is complete before updating client list */
> +	timeout = msecs_to_jiffies(SOF_CLIENT_PROBE_TIMEOUT_MS);
> +	time = wait_for_completion_timeout(&cdev->probe_complete, timeout);

This seems bonkers - the whole point of something like virtual bus is
to avoid madness like this.

> +	if (!time) {
> +		dev_err(sdev->dev, "error: probe of virtbus dev %s timed out\n",
> +			name);
> +		virtbus_unregister_device(vdev);

Unregister does kfree? In general I've found that to be a bad idea;
many drivers need to free up resources after unregistering from their
subsystem.

> +#define virtbus_dev_to_sof_client_dev(virtbus_dev) \
> +	container_of(virtbus_dev, struct sof_client_dev, vdev)

Use static inline
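
i.e. something like this (same struct layout as your patch, untested):

static inline struct sof_client_dev *
virtbus_dev_to_sof_client_dev(struct virtbus_device *vdev)
{
	return container_of(vdev, struct sof_client_dev, vdev);
}

so the compiler actually type checks the argument.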

Jason

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 11/12] ASoC: SOF: Create client driver for IPC test
  2020-05-20  7:02 ` [net-next v4 11/12] ASoC: SOF: Create client driver for IPC test Jeff Kirsher
  2020-05-20  7:22   ` Greg KH
@ 2020-05-20 12:56   ` Jason Gunthorpe
  2020-05-27 20:18     ` Ranjani Sridharan
  1 sibling, 1 reply; 69+ messages in thread
From: Jason Gunthorpe @ 2020-05-20 12:56 UTC (permalink / raw)
  To: Jeff Kirsher
  Cc: davem, gregkh, Ranjani Sridharan, netdev, linux-rdma, nhorman,
	sassmann, pierre-louis.bossart, Fred Oh

On Wed, May 20, 2020 at 12:02:26AM -0700, Jeff Kirsher wrote:
> +static const struct virtbus_dev_id sof_ipc_virtbus_id_table[] = {
> +	{"sof-ipc-test"},
> +	{},
> +};
> +
> +static struct sof_client_drv sof_ipc_test_client_drv = {
> +	.name = "sof-ipc-test-client-drv",
> +	.type = SOF_CLIENT_IPC,
> +	.virtbus_drv = {
> +		.driver = {
> +			.name = "sof-ipc-test-virtbus-drv",
> +		},
> +		.id_table = sof_ipc_virtbus_id_table,
> +		.probe = sof_ipc_test_probe,
> +		.remove = sof_ipc_test_remove,
> +		.shutdown = sof_ipc_test_shutdown,
> +	},
> +};
> +
> +module_sof_client_driver(sof_ipc_test_client_drv);
> +
> +MODULE_DESCRIPTION("SOF IPC Test Client Driver");
> +MODULE_LICENSE("GPL v2");
> +MODULE_IMPORT_NS(SND_SOC_SOF_CLIENT);
> +MODULE_ALIAS("virtbus:sof-ipc-test");

Usually the MODULE_ALIAS happens automatically through the struct
virtbus_dev_id - is something missing in the enabling patches?
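
i.e. with file2alias support wired up for virtbus (which may be the
missing piece), a

  MODULE_DEVICE_TABLE(virtbus, sof_ipc_virtbus_id_table);

on the existing id table should emit "virtbus:sof-ipc-test" on its
own and make the explicit MODULE_ALIAS redundant.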

Jason

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-05-20 12:54   ` Jason Gunthorpe
@ 2020-05-20 12:57     ` Jason Gunthorpe
  2020-05-21 21:11     ` Ranjani Sridharan
  1 sibling, 0 replies; 69+ messages in thread
From: Jason Gunthorpe @ 2020-05-20 12:57 UTC (permalink / raw)
  To: Jeff Kirsher
  Cc: davem, gregkh, Ranjani Sridharan, netdev, linux-rdma, nhorman,
	sassmann, pierre-louis.bossart, Fred Oh

On Wed, May 20, 2020 at 09:54:37AM -0300, Jason Gunthorpe wrote:
> > +	if (!time) {
> > +		dev_err(sdev->dev, "error: probe of virtbus dev %s timed out\n",
> > +			name);
> > +		virtbus_unregister_device(vdev);
> 
> Unregister does kfree? In general I've found that to be a bad idea;
> many drivers need to free up resources after unregistering from their
> subsystem.

oops, never mind, this is the driver side, so it makes some sense - but
I'm not sure you should call it during error unwind anyhow. See above
about the wait being kind of bonkers.

Jason

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 01/12] Implementation of Virtual Bus
  2020-05-20  7:02 ` [net-next v4 01/12] Implementation of Virtual Bus Jeff Kirsher
@ 2020-05-21 14:57   ` Parav Pandit
  2020-05-21 17:43     ` gregkh
  0 siblings, 1 reply; 69+ messages in thread
From: Parav Pandit @ 2020-05-21 14:57 UTC (permalink / raw)
  To: Jeff Kirsher, davem, gregkh
  Cc: Dave Ertman, netdev, linux-rdma, nhorman, sassmann, jgg,
	galpress, selvin.xavier, sriharsha.basavapatna, benve, bharat,
	xavier.huwei, Yishai Hadas, Leon Romanovsky, mkalderon, aditr,
	ranjani.sridharan, pierre-louis.bossart, Kiran Patil,
	Andrew Bowers

Hi Greg, Jason,

On 5/20/2020 12:32 PM, Jeff Kirsher wrote:
> From: Dave Ertman <david.m.ertman@intel.com>
> 

> +static const
> +struct virtbus_dev_id *virtbus_match_id(const struct virtbus_dev_id *id,
> +					struct virtbus_device *vdev)
> +{
> +	while (id->name[0]) {
> +		if (!strcmp(vdev->match_name, id->name))
> +			return id;

Should we have a VID/DID-based approach instead of _any_ string chosen by
vendor drivers?

This would require a central place to define the VID and DID of the vdev in
vdev_ids.h to have unique IDs.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 01/12] Implementation of Virtual Bus
  2020-05-21 14:57   ` Parav Pandit
@ 2020-05-21 17:43     ` gregkh
  2020-05-21 20:10       ` Jason Gunthorpe
  0 siblings, 1 reply; 69+ messages in thread
From: gregkh @ 2020-05-21 17:43 UTC (permalink / raw)
  To: Parav Pandit
  Cc: Jeff Kirsher, davem, Dave Ertman, netdev, linux-rdma, nhorman,
	sassmann, jgg, galpress, selvin.xavier, sriharsha.basavapatna,
	benve, bharat, xavier.huwei, Yishai Hadas, Leon Romanovsky,
	mkalderon, aditr, ranjani.sridharan, pierre-louis.bossart,
	Kiran Patil, Andrew Bowers

On Thu, May 21, 2020 at 02:57:55PM +0000, Parav Pandit wrote:
> Hi Greg, Jason,
> 
> On 5/20/2020 12:32 PM, Jeff Kirsher wrote:
> > From: Dave Ertman <david.m.ertman@intel.com>
> > 
> 
> > +static const
> > +struct virtbus_dev_id *virtbus_match_id(const struct virtbus_dev_id *id,
> > +					struct virtbus_device *vdev)
> > +{
> > +	while (id->name[0]) {
> > +		if (!strcmp(vdev->match_name, id->name))
> > +			return id;
> 
> Should we have a VID/DID-based approach instead of _any_ string chosen by
> vendor drivers?

No, because:

> This would require a central place to define the VID and DID of the vdev in
> vdev_ids.h to have unique IDs.

That's not a good way to run things :)

Have the virtbus core create the "name", as it really doesn't matter
what it is, just that it is unique, right?

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 01/12] Implementation of Virtual Bus
  2020-05-21 17:43     ` gregkh
@ 2020-05-21 20:10       ` Jason Gunthorpe
  0 siblings, 0 replies; 69+ messages in thread
From: Jason Gunthorpe @ 2020-05-21 20:10 UTC (permalink / raw)
  To: gregkh
  Cc: Parav Pandit, Jeff Kirsher, davem, Dave Ertman, netdev,
	linux-rdma, nhorman, sassmann, galpress, selvin.xavier,
	sriharsha.basavapatna, benve, bharat, xavier.huwei, Yishai Hadas,
	Leon Romanovsky, mkalderon, aditr, ranjani.sridharan,
	pierre-louis.bossart, Kiran Patil, Andrew Bowers

On Thu, May 21, 2020 at 07:43:58PM +0200, gregkh@linuxfoundation.org wrote:
> On Thu, May 21, 2020 at 02:57:55PM +0000, Parav Pandit wrote:
> > Hi Greg, Jason,
> > 
> > On 5/20/2020 12:32 PM, Jeff Kirsher wrote:
> > > From: Dave Ertman <david.m.ertman@intel.com>
> > > 
> > 
> > > +static const
> > > +struct virtbus_dev_id *virtbus_match_id(const struct virtbus_dev_id *id,
> > > +					struct virtbus_device *vdev)
> > > +{
> > > +	while (id->name[0]) {
> > > +		if (!strcmp(vdev->match_name, id->name))
> > > +			return id;
> > 
> > Should we have a VID/DID-based approach instead of _any_ string chosen by
> > vendor drivers?
> 
> No, because:
> 
> > This would require a central place to define the VID and DID of the vdev in
> > vdev_ids.h to have unique IDs.
> 
> That's not a good way to run things :)
> 
> Have the virtbus core create the "name", as it really doesn't matter
> what it is, just that it is unique, right?

It is being used like the compatible string in OF. Look at where
"sof-ipc-test" appears in the SOF patches.

So it has to be a compile-time static, and it has to be broadly global
in some fashion since it appears in a mod alias.

I don't think the name "sof-ipc-test" is particularly good by these
metrics.
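
For comparison, an OF compatible string at least carries a vendor
prefix, i.e. something closer to "intel,sof-ipc-test", which makes
collisions in a global namespace much less likely.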

Jason

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-05-20 12:54   ` Jason Gunthorpe
  2020-05-20 12:57     ` Jason Gunthorpe
@ 2020-05-21 21:11     ` Ranjani Sridharan
  2020-05-21 23:34       ` Jason Gunthorpe
  1 sibling, 1 reply; 69+ messages in thread
From: Ranjani Sridharan @ 2020-05-21 21:11 UTC (permalink / raw)
  To: Jason Gunthorpe, Jeff Kirsher
  Cc: davem, gregkh, netdev, linux-rdma, nhorman, sassmann,
	pierre-louis.bossart, Fred Oh

On Wed, 2020-05-20 at 09:54 -0300, Jason Gunthorpe wrote:
> On Wed, May 20, 2020 at 12:02:25AM -0700, Jeff Kirsher wrote:
> > From: Ranjani Sridharan <ranjani.sridharan@linux.intel.com>
> > 
> > A client in the SOF (Sound Open Firmware) context is a
> > device that needs to communicate with the DSP via IPC
> > messages. The SOF core is responsible for serializing the
> > IPC messages to the DSP from the different clients. One
> > example of an SOF client would be an IPC test client that
> > floods the DSP with test IPC messages to validate if the
> > serialization works as expected. Multi-client support will
> > also add the ability to split the existing audio cards
> > into multiple ones, so as to, e.g., deal with HDMI with a
> > dedicated client instead of adding HDMI to all cards.
> > 
> > This patch introduces descriptors for SOF client driver
> > and SOF client device along with APIs for registering
> > and unregistering a SOF client driver, sending IPCs from
> > a client device and accessing the SOF core debugfs root entry.
> > 
> > Along with this, add a couple of new members to struct
> > snd_sof_dev that will be used for maintaining the list of
> > clients.
> 
> If you want to use sound as the rationale for virtual bus then drop
> the
> networking stuff and present a complete device/driver pairing based
> on
> this sound stuff instead.
> 
> > +int sof_client_dev_register(struct snd_sof_dev *sdev,
> > +			    const char *name)
> > +{
> > +	struct sof_client_dev *cdev;
> > +	struct virtbus_device *vdev;
> > +	unsigned long time, timeout;
> > +	int ret;
> > +
> > +	cdev = kzalloc(sizeof(*cdev), GFP_KERNEL);
> > +	if (!cdev)
> > +		return -ENOMEM;
> > +
> > +	cdev->sdev = sdev;
> > +	init_completion(&cdev->probe_complete);
> > +	vdev = &cdev->vdev;
> > +	vdev->match_name = name;
> > +	vdev->dev.parent = sdev->dev;
> > +	vdev->release = sof_client_virtdev_release;
> > +
> > +	/*
> > +	 * Register virtbus device for the client.
> > +	 * The error path in virtbus_register_device() calls
> > put_device(),
> > +	 * which will free cdev in the release callback.
> > +	 */
> > +	ret = virtbus_register_device(vdev);
> > +	if (ret < 0)
> > +		return ret;
> > +
> > +	/* make sure the probe is complete before updating client list
> > */
> > +	timeout = msecs_to_jiffies(SOF_CLIENT_PROBE_TIMEOUT_MS);
> > +	time = wait_for_completion_timeout(&cdev->probe_complete,
> > timeout);
> 
> This seems bonkers - the whole point of something like virtual bus is
> to avoid madness like this.

Thanks for your review, Jason. The idea of the timed wait here is to
make the registration of the virtbus devices synchronous so that the
SOF core device has knowledge of all the clients that have been able to
probe successfully. This part is domain-specific and it works very well
in the audio driver case.

Could you please elaborate on why you think this is a bad idea?

Thanks,
Ranjani


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-05-21 21:11     ` Ranjani Sridharan
@ 2020-05-21 23:34       ` Jason Gunthorpe
  2020-05-22 14:29         ` Pierre-Louis Bossart
  0 siblings, 1 reply; 69+ messages in thread
From: Jason Gunthorpe @ 2020-05-21 23:34 UTC (permalink / raw)
  To: Ranjani Sridharan
  Cc: Jeff Kirsher, davem, gregkh, netdev, linux-rdma, nhorman,
	sassmann, pierre-louis.bossart, Fred Oh

On Thu, May 21, 2020 at 02:11:37PM -0700, Ranjani Sridharan wrote:
> On Wed, 2020-05-20 at 09:54 -0300, Jason Gunthorpe wrote:
> > On Wed, May 20, 2020 at 12:02:25AM -0700, Jeff Kirsher wrote:
> > > From: Ranjani Sridharan <ranjani.sridharan@linux.intel.com>
> > > 
> > > A client in the SOF (Sound Open Firmware) context is a
> > > device that needs to communicate with the DSP via IPC
> > > messages. The SOF core is responsible for serializing the
> > > IPC messages to the DSP from the different clients. One
> > > example of an SOF client would be an IPC test client that
> > > floods the DSP with test IPC messages to validate if the
> > > serialization works as expected. Multi-client support will
> > > also add the ability to split the existing audio cards
> > > into multiple ones, so as to, e.g., deal with HDMI with a
> > > dedicated client instead of adding HDMI to all cards.
> > > 
> > > This patch introduces descriptors for SOF client driver
> > > and SOF client device along with APIs for registering
> > > and unregistering a SOF client driver, sending IPCs from
> > > a client device and accessing the SOF core debugfs root entry.
> > > 
> > > Along with this, add a couple of new members to struct
> > > snd_sof_dev that will be used for maintaining the list of
> > > clients.
> > 
> > If you want to use sound as the rationale for virtual bus then drop
> > the
> > networking stuff and present a complete device/driver pairing based
> > on
> > this sound stuff instead.
> > 
> > > +int sof_client_dev_register(struct snd_sof_dev *sdev,
> > > +			    const char *name)
> > > +{
> > > +	struct sof_client_dev *cdev;
> > > +	struct virtbus_device *vdev;
> > > +	unsigned long time, timeout;
> > > +	int ret;
> > > +
> > > +	cdev = kzalloc(sizeof(*cdev), GFP_KERNEL);
> > > +	if (!cdev)
> > > +		return -ENOMEM;
> > > +
> > > +	cdev->sdev = sdev;
> > > +	init_completion(&cdev->probe_complete);
> > > +	vdev = &cdev->vdev;
> > > +	vdev->match_name = name;
> > > +	vdev->dev.parent = sdev->dev;
> > > +	vdev->release = sof_client_virtdev_release;
> > > +
> > > +	/*
> > > +	 * Register virtbus device for the client.
> > > +	 * The error path in virtbus_register_device() calls
> > > put_device(),
> > > +	 * which will free cdev in the release callback.
> > > +	 */
> > > +	ret = virtbus_register_device(vdev);
> > > +	if (ret < 0)
> > > +		return ret;
> > > +
> > > +	/* make sure the probe is complete before updating client list
> > > */
> > > +	timeout = msecs_to_jiffies(SOF_CLIENT_PROBE_TIMEOUT_MS);
> > > +	time = wait_for_completion_timeout(&cdev->probe_complete,
> > > timeout);
> > 
> > This seems bonkers - the whole point of something like virtual bus is
> > to avoid madness like this.
> 
> Thanks for your review, Jason. The idea of the timed wait here is to
> make the registration of the virtbus devices synchronous so that the
> SOF core device has knowledge of all the clients that have been able to
> probe successfully. This part is domain-specific and it works very well
> in the audio driver case.

This needs to be hot plug safe. What if the module for this driver is
not available until later in boot? What if the user unplugs the
driver? What if the kernel runs probing single threaded?

It is really unlikely you can both have the requirement that things be
synchronous and also be doing all the other lifetime details properly.
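
One hotplug-safe shape would be (sketch, reusing the names from these
patches) to let the client's own probe publish it to the parent
instead of the parent blocking on a completion:

	/* in sof_ipc_test_probe(), once setup has succeeded */
	mutex_lock(&sdev->client_mutex);
	list_add(&cdev->list, &sdev->client_list);
	mutex_unlock(&sdev->client_mutex);

Then the parent never waits: it walks client_list under the mutex
whenever it needs the current set of clients, and a client that
probes late or is unbound simply adds/removes itself.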

Jason

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-05-21 23:34       ` Jason Gunthorpe
@ 2020-05-22 14:29         ` Pierre-Louis Bossart
  2020-05-22 14:55           ` Jason Gunthorpe
  2020-05-23  6:23           ` Greg KH
  0 siblings, 2 replies; 69+ messages in thread
From: Pierre-Louis Bossart @ 2020-05-22 14:29 UTC (permalink / raw)
  To: Jason Gunthorpe, Ranjani Sridharan
  Cc: Jeff Kirsher, davem, gregkh, netdev, linux-rdma, nhorman,
	sassmann, Fred Oh


>>>> +	ret = virtbus_register_device(vdev);
>>>> +	if (ret < 0)
>>>> +		return ret;
>>>> +
>>>> +	/* make sure the probe is complete before updating client list
>>>> */
>>>> +	timeout = msecs_to_jiffies(SOF_CLIENT_PROBE_TIMEOUT_MS);
>>>> +	time = wait_for_completion_timeout(&cdev->probe_complete,
>>>> timeout);
>>>
>>> This seems bonkers - the whole point of something like virtual bus is
>>> to avoid madness like this.
>>
>> Thanks for your review, Jason. The idea of the timed wait here is to
>> make the registration of the virtbus devices synchronous so that the
>> SOF core device has knowledge of all the clients that have been able to
>> probe successfully. This part is domain-specific and it works very well
>> in the audio driver case.
> 
> This needs to be hot plug safe. What if the module for this driver is
> not available until later in boot? What if the user unplugs the
> driver? What if the kernel runs probing single threaded?
> 
> It is really unlikely you can both have the requirement that things be
> synchronous and also be doing all the other lifetime details properly.

Can you suggest an alternate solution then?

The complete/wait_for_completion is a simple mechanism to tell that the 
action requested by the parent is done. Absent that, we can end up in a 
situation where the probe may fail, or the requested module does not 
exist, and the parent knows nothing about the failure - so the system is 
in a zombie state and users are frustrated. It's not great either, is it?

This is not a hypothetical case; we've had this recurring problem when 
a PCI device creates an audio card represented as a platform device. 
When the card registration fails, typically due to configuration issues, 
the PCI probe still completes. That's really confusing and the source of 
lots of support questions. If we use these virtual bus extensions to 
stop abusing platform devices, it'd be really nice to make those 
unreported probe failures go away.


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-05-22 14:29         ` Pierre-Louis Bossart
@ 2020-05-22 14:55           ` Jason Gunthorpe
  2020-05-22 15:33             ` Pierre-Louis Bossart
  2020-05-23  6:23           ` Greg KH
  1 sibling, 1 reply; 69+ messages in thread
From: Jason Gunthorpe @ 2020-05-22 14:55 UTC (permalink / raw)
  To: Pierre-Louis Bossart
  Cc: Ranjani Sridharan, Jeff Kirsher, davem, gregkh, netdev,
	linux-rdma, nhorman, sassmann, Fred Oh

On Fri, May 22, 2020 at 09:29:57AM -0500, Pierre-Louis Bossart wrote:
> 
> > > > > +	ret = virtbus_register_device(vdev);
> > > > > +	if (ret < 0)
> > > > > +		return ret;
> > > > > +
> > > > > +	/* make sure the probe is complete before updating client list
> > > > > */
> > > > > +	timeout = msecs_to_jiffies(SOF_CLIENT_PROBE_TIMEOUT_MS);
> > > > > +	time = wait_for_completion_timeout(&cdev->probe_complete,
> > > > > timeout);
> > > > 
> > > > This seems bonkers - the whole point of something like virtual bus is
> > > > to avoid madness like this.
> > > 
> > Thanks for your review, Jason. The idea of the timed wait here is to
> > > make the registration of the virtbus devices synchronous so that the
> > > SOF core device has knowledge of all the clients that have been able to
> > > probe successfully. This part is domain-specific and it works very well
> > > in the audio driver case.
> > 
> > This needs to be hot-plug safe. What if the module for this driver is
> > not available until later in boot? What if the user unplugs the
> > driver? What if the kernel runs probing single-threaded?
> > 
> > It is really unlikely you can both have the requirement that things be
> > synchronous and also be doing all the other lifetime details properly..
> 
> Can you suggest an alternate solution then?

I don't even know what problem you are trying to solve.

> The complete/wait_for_completion is a simple mechanism to tell that the
> action requested by the parent is done. Absent that, we can end up in a
> situation where the probe may fail, or the requested module does not exist,
> and the parent knows nothing about the failure - so the system is in a
> zombie state and users are frustrated. It's not great either, is it?

Maybe not great, but at least it is consistent with all the lifetime
models and the operation of the driver core.

> This is not a hypothetical case, we've had this recurring problem when a
> PCI device creates an audio card represented as a platform device. When the
> card registration fails, typically due to configuration issues, the PCI
> probe still completes. That's really confusing and the source of lots of
> support questions. If we use these virtual bus extensions to stop abusing
> platform devices, it'd be really nice to make those unreported probe
> failures go away.

I think you need to address this in some other way that is hot plug
safe.

Surely you can make this failure visible to users in some other way?

Jason

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-05-22 14:55           ` Jason Gunthorpe
@ 2020-05-22 15:33             ` Pierre-Louis Bossart
  2020-05-22 17:10               ` Jason Gunthorpe
  2020-06-29 20:59               ` Mark Brown
  0 siblings, 2 replies; 69+ messages in thread
From: Pierre-Louis Bossart @ 2020-05-22 15:33 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Ranjani Sridharan, Jeff Kirsher, davem, gregkh, netdev,
	linux-rdma, nhorman, sassmann, Fred Oh



On 5/22/20 9:55 AM, Jason Gunthorpe wrote:
> On Fri, May 22, 2020 at 09:29:57AM -0500, Pierre-Louis Bossart wrote:
>>
>>>>>> +	ret = virtbus_register_device(vdev);
>>>>>> +	if (ret < 0)
>>>>>> +		return ret;
>>>>>> +
>>>>>> +	/* make sure the probe is complete before updating client list */
>>>>>> +	timeout = msecs_to_jiffies(SOF_CLIENT_PROBE_TIMEOUT_MS);
>>>>>> +	time = wait_for_completion_timeout(&cdev->probe_complete, timeout);
>>>>>
>>>>> This seems bonkers - the whole point of something like virtual bus is
>>>>> to avoid madness like this.
>>>>
>>>> Thanks for your review, Jason. The idea of the timed wait here is to
>>>> make the registration of the virtbus devices synchronous so that the
>>>> SOF core device has knowledge of all the clients that have been able to
>>>> probe successfully. This part is domain-specific and it works very well
>>>> in the audio driver case.
>>>
>>> This needs to be hot-plug safe. What if the module for this driver is
>>> not available until later in boot? What if the user unplugs the
>>> driver? What if the kernel runs probing single-threaded?
>>>
>>> It is really unlikely you can both have the requirement that things be
>>> synchronous and also be doing all the other lifetime details properly..
>>
>> Can you suggest an alternate solution then?
> 
> I don't even know what problem you are trying to solve.
> 
>> The complete/wait_for_completion is a simple mechanism to tell that the
>> action requested by the parent is done. Absent that, we can end up in a
>> situation where the probe may fail, or the requested module does not exist,
>> and the parent knows nothing about the failure - so the system is in a
>> zombie state and users are frustrated. It's not great either, is it?
> 
> Maybe not great, but at least it is consistent with all the lifetime
> models and the operation of the driver core.

I agree your comments are valid ones, I just don't have a solution to be 
fully compliant with these models and report failures of the driver 
probe for a child device due to configuration issues (bad audio 
topology, etc).

My understanding is that errors on probe are explicitly not handled in 
the driver core, see e.g. comments such as:

/*
  * Ignore errors returned by ->probe so that the next driver can try
  * its luck.
  */
https://elixir.bootlin.com/linux/latest/source/drivers/base/dd.c#L636

If somehow we could request the error to be reported then probably we 
wouldn't need this complete/wait_for_completion mechanism as a custom 
notification.

>> This is not a hypothetical case, we've had this recurring problem when a
>> PCI device creates an audio card represented as a platform device. When the
>> card registration fails, typically due to configuration issues, the PCI
>> probe still completes. That's really confusing and the source of lots of
>> support questions. If we use these virtual bus extensions to stop abusing
>> platform devices, it'd be really nice to make those unreported probe
>> failures go away.
> 
> I think you need to address this in some other way that is hot plug
> safe.
> 
> Surely you can make this failure visible to users in some other way?

Not at the moment, no. There are no failures reported in dmesg, and the 
user does not see any card created. This is a silent error.

This is probably domain-specific btw, the use of complete() is only part 
of the SOF core where we extended the virtual bus to support SOF 
clients. This is not a requirement in general for virtual bus users. We 
are not forcing anyone to rely on this complete/wait_for_completion, and 
if someone has a better idea to help us report probe failures we are all 
ears.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-05-22 15:33             ` Pierre-Louis Bossart
@ 2020-05-22 17:10               ` Jason Gunthorpe
  2020-05-22 18:35                 ` Pierre-Louis Bossart
  2020-06-29 20:59               ` Mark Brown
  1 sibling, 1 reply; 69+ messages in thread
From: Jason Gunthorpe @ 2020-05-22 17:10 UTC (permalink / raw)
  To: Pierre-Louis Bossart
  Cc: Ranjani Sridharan, Jeff Kirsher, davem, gregkh, netdev,
	linux-rdma, nhorman, sassmann, Fred Oh

On Fri, May 22, 2020 at 10:33:20AM -0500, Pierre-Louis Bossart wrote:

> > Maybe not great, but at least it is consistent with all the lifetime
> > models and the operation of the driver core.
> 
> I agree your comments are valid ones, I just don't have a solution to be
> fully compliant with these models and report failures of the driver probe
> for a child device due to configuration issues (bad audio topology, etc).


> My understanding is that errors on probe are explicitly not handled in the
> driver core, see e.g. comments such as:

Yes, but that doesn't really apply here...
 
> /*
>  * Ignore errors returned by ->probe so that the next driver can try
>  * its luck.
>  */
> https://elixir.bootlin.com/linux/latest/source/drivers/base/dd.c#L636
> 
> If somehow we could request the error to be reported then probably we
> wouldn't need this complete/wait_for_completion mechanism as a custom
> notification.

That is the same issue as the completion, a driver should not be
making assumptions about ordering like this. For instance what if the
current driver is in the initrd and the 2nd driver is in a module in
the filesystem? It will not probe until the system boots more
completely. 

This is all stuff that is supposed to work properly.

> Not at the moment, no. There are no failures reported in dmesg, and
> the user does not see any card created. This is a silent error.

Creating a partial non-functional card until all the parts are loaded
seems like the right way to surface an error like this.

Or don't break the driver up in this manner if all the parts are really
required just for it to function - quite strange place to get into.

What happens if the user unplugs this sub driver once things start
running?

Jason

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-05-22 17:10               ` Jason Gunthorpe
@ 2020-05-22 18:35                 ` Pierre-Louis Bossart
  2020-05-22 18:40                   ` Jason Gunthorpe
  0 siblings, 1 reply; 69+ messages in thread
From: Pierre-Louis Bossart @ 2020-05-22 18:35 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Ranjani Sridharan, Jeff Kirsher, davem, gregkh, netdev,
	linux-rdma, nhorman, sassmann, Fred Oh



On 5/22/20 12:10 PM, Jason Gunthorpe wrote:
> On Fri, May 22, 2020 at 10:33:20AM -0500, Pierre-Louis Bossart wrote:
> 
>>> Maybe not great, but at least it is consistent with all the lifetime
>>> models and the operation of the driver core.
>>
>> I agree your comments are valid ones, I just don't have a solution to be
>> fully compliant with these models and report failures of the driver probe
>> for a child device due to configuration issues (bad audio topology, etc).
> 
> 
>> My understanding is that errors on probe are explicitly not handled in the
>> driver core, see e.g. comments such as:
> 
> Yes, but that doesn't really apply here...
>   
>> /*
>>   * Ignore errors returned by ->probe so that the next driver can try
>>   * its luck.
>>   */
>> https://elixir.bootlin.com/linux/latest/source/drivers/base/dd.c#L636
>>
>> If somehow we could request the error to be reported then probably we
>> wouldn't need this complete/wait_for_completion mechanism as a custom
>> notification.
> 
> That is the same issue as the completion, a driver should not be
> making assumptions about ordering like this. For instance what if the
> current driver is in the initrd and the 2nd driver is in a module in
> the filesystem? It will not probe until the system boots more
> completely.
> 
> This is all stuff that is supposed to work properly.
> 
> > > Not at the moment, no. There are no failures reported in dmesg, and
>> the user does not see any card created. This is a silent error.
> 
> Creating a partial non-functional card until all the parts are loaded
> seems like the right way to surface an error like this.
> 
> Or don't break the driver up in this manner if all the parts are really
> required just for it to function - quite strange place to get into.

This is not about having all the parts available - that's handled 
already with deferred probe - but about an error happening during card 
registration. In that case the ALSA/ASoC core throws an error and we 
cannot report it back to the parent.

> What happens if the user unplugs this sub driver once things start
> running?

refcounting in the ALSA core prevents that from happening usually.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-05-22 18:35                 ` Pierre-Louis Bossart
@ 2020-05-22 18:40                   ` Jason Gunthorpe
  2020-05-22 18:48                     ` Pierre-Louis Bossart
  0 siblings, 1 reply; 69+ messages in thread
From: Jason Gunthorpe @ 2020-05-22 18:40 UTC (permalink / raw)
  To: Pierre-Louis Bossart
  Cc: Ranjani Sridharan, Jeff Kirsher, davem, gregkh, netdev,
	linux-rdma, nhorman, sassmann, Fred Oh

On Fri, May 22, 2020 at 01:35:54PM -0500, Pierre-Louis Bossart wrote:
> 
> 
> On 5/22/20 12:10 PM, Jason Gunthorpe wrote:
> > On Fri, May 22, 2020 at 10:33:20AM -0500, Pierre-Louis Bossart wrote:
> > 
> > > > Maybe not great, but at least it is consistent with all the lifetime
> > > > models and the operation of the driver core.
> > > 
> > > I agree your comments are valid ones, I just don't have a solution to be
> > > fully compliant with these models and report failures of the driver probe
> > > for a child device due to configuration issues (bad audio topology, etc).
> > 
> > 
> > > My understanding is that errors on probe are explicitly not handled in the
> > > driver core, see e.g. comments such as:
> > 
> > Yes, but that doesn't really apply here...
> > > /*
> > >   * Ignore errors returned by ->probe so that the next driver can try
> > >   * its luck.
> > >   */
> > > https://elixir.bootlin.com/linux/latest/source/drivers/base/dd.c#L636
> > > 
> > > If somehow we could request the error to be reported then probably we
> > > wouldn't need this complete/wait_for_completion mechanism as a custom
> > > notification.
> > 
> > That is the same issue as the completion, a driver should not be
> > making assumptions about ordering like this. For instance what if the
> > current driver is in the initrd and the 2nd driver is in a module in
> > the filesystem? It will not probe until the system boots more
> > completely.
> > 
> > This is all stuff that is supposed to work properly.
> > 
> > > Not at the moment, no. There are no failures reported in dmesg, and
> > > the user does not see any card created. This is a silent error.
> > 
> > Creating a partial non-functional card until all the parts are loaded
> > seems like the right way to surface an error like this.
> > 
> > Or don't break the driver up in this manner if all the parts are really
> > required just for it to function - quite strange place to get into.
> 
> This is not about having all the parts available - that's handled already
> with deferred probe - but about an error happening during card registration. In
> that case the ALSA/ASoC core throws an error and we cannot report it back to
> the parent.

The whole point of the virtual bus stuff was to split up a
multi-functional PCI device into parts. If all the parts are required
to be working to make the device work, why are you using virtual bus
here?

> > What happens if the user unplugs this sub driver once things start
> > running?
> 
> refcounting in the ALSA core prevents that from happening usually.

So user-triggered unplug of the driver that attaches here just hangs
forever? That isn't OK either.

Jason

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-05-22 18:40                   ` Jason Gunthorpe
@ 2020-05-22 18:48                     ` Pierre-Louis Bossart
  2020-05-22 19:44                       ` Jason Gunthorpe
  0 siblings, 1 reply; 69+ messages in thread
From: Pierre-Louis Bossart @ 2020-05-22 18:48 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Ranjani Sridharan, Jeff Kirsher, davem, gregkh, netdev,
	linux-rdma, nhorman, sassmann, Fred Oh



On 5/22/20 1:40 PM, Jason Gunthorpe wrote:
> On Fri, May 22, 2020 at 01:35:54PM -0500, Pierre-Louis Bossart wrote:
>>
>>
>> On 5/22/20 12:10 PM, Jason Gunthorpe wrote:
>>> On Fri, May 22, 2020 at 10:33:20AM -0500, Pierre-Louis Bossart wrote:
>>>
>>>>> Maybe not great, but at least it is consistent with all the lifetime
>>>>> models and the operation of the driver core.
>>>>
>>>> I agree your comments are valid ones, I just don't have a solution to be
>>>> fully compliant with these models and report failures of the driver probe
>>>> for a child device due to configuration issues (bad audio topology, etc).
>>>
>>>
>>>> My understanding is that errors on probe are explicitly not handled in the
>>>> driver core, see e.g. comments such as:
>>>
>>> Yes, but that doesn't really apply here...
>>>> /*
>>>>    * Ignore errors returned by ->probe so that the next driver can try
>>>>    * its luck.
>>>>    */
>>>> https://elixir.bootlin.com/linux/latest/source/drivers/base/dd.c#L636
>>>>
>>>> If somehow we could request the error to be reported then probably we
>>>> wouldn't need this complete/wait_for_completion mechanism as a custom
>>>> notification.
>>>
>>> That is the same issue as the completion, a driver should not be
>>> making assumptions about ordering like this. For instance what if the
>>> current driver is in the initrd and the 2nd driver is in a module in
>>> the filesystem? It will not probe until the system boots more
>>> completely.
>>>
>>> This is all stuff that is supposed to work properly.
>>>
>>>> Not at the moment, no. There are no failures reported in dmesg, and
>>>> the user does not see any card created. This is a silent error.
>>>
>>> Creating a partial non-functional card until all the parts are loaded
>>> seems like the right way to surface an error like this.
>>>
>>> Or don't break the driver up in this manner if all the parts are really
>>> required just for it to function - quite strange place to get into.
>>
>> This is not about having all the parts available - that's handled already
>> with deferred probe - but about an error happening during card registration. In
>> that case the ALSA/ASoC core throws an error and we cannot report it back to
>> the parent.
> 
> The whole point of the virtual bus stuff was to split up a
> multi-functional PCI device into parts. If all the parts are required
> to be working to make the device work, why are you using virtual bus
> here?

It's the other way around: how does the core know that one part isn't 
functional?

There is nothing in what we said that requires that all parts are fully 
functional. All we stated is that when *one* part isn't fully functional 
we know about it.

>>> What happens if the user unplugs this sub driver once things start
>>> running?
>>
>> refcounting in the ALSA core prevents that from happening usually.
> 
> > So user-triggered unplug of the driver that attaches here just hangs
> forever? That isn't OK either.

No, you'd get a 'module in use' error if I am not mistaken.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-05-22 18:48                     ` Pierre-Louis Bossart
@ 2020-05-22 19:44                       ` Jason Gunthorpe
  2020-05-22 21:05                         ` Pierre-Louis Bossart
  0 siblings, 1 reply; 69+ messages in thread
From: Jason Gunthorpe @ 2020-05-22 19:44 UTC (permalink / raw)
  To: Pierre-Louis Bossart
  Cc: Ranjani Sridharan, Jeff Kirsher, davem, gregkh, netdev,
	linux-rdma, nhorman, sassmann, Fred Oh

On Fri, May 22, 2020 at 01:48:00PM -0500, Pierre-Louis Bossart wrote:
> 
> 
> On 5/22/20 1:40 PM, Jason Gunthorpe wrote:
> > On Fri, May 22, 2020 at 01:35:54PM -0500, Pierre-Louis Bossart wrote:
> > > 
> > > 
> > > On 5/22/20 12:10 PM, Jason Gunthorpe wrote:
> > > > On Fri, May 22, 2020 at 10:33:20AM -0500, Pierre-Louis Bossart wrote:
> > > > 
> > > > > > Maybe not great, but at least it is consistent with all the lifetime
> > > > > > models and the operation of the driver core.
> > > > > 
> > > > > I agree your comments are valid ones, I just don't have a solution to be
> > > > > fully compliant with these models and report failures of the driver probe
> > > > > for a child device due to configuration issues (bad audio topology, etc).
> > > > 
> > > > 
> > > > > My understanding is that errors on probe are explicitly not handled in the
> > > > > driver core, see e.g. comments such as:
> > > > 
> > > > Yes, but that doesn't really apply here...
> > > > > /*
> > > > >    * Ignore errors returned by ->probe so that the next driver can try
> > > > >    * its luck.
> > > > >    */
> > > > > https://elixir.bootlin.com/linux/latest/source/drivers/base/dd.c#L636
> > > > > 
> > > > > If somehow we could request the error to be reported then probably we
> > > > > wouldn't need this complete/wait_for_completion mechanism as a custom
> > > > > notification.
> > > > 
> > > > That is the same issue as the completion, a driver should not be
> > > > making assumptions about ordering like this. For instance what if the
> > > > current driver is in the initrd and the 2nd driver is in a module in
> > > > the filesystem? It will not probe until the system boots more
> > > > completely.
> > > > 
> > > > This is all stuff that is supposed to work properly.
> > > > 
> > > > > Not at the moment, no. There are no failures reported in dmesg, and
> > > > > the user does not see any card created. This is a silent error.
> > > > 
> > > > Creating a partial non-functional card until all the parts are loaded
> > > > seems like the right way to surface an error like this.
> > > > 
> > > > Or don't break the driver up in this manner if all the parts are really
> > > > required just for it to function - quite strange place to get into.
> > > 
> > > This is not about having all the parts available - that's handled already
> > > with deferred probe - but about an error happening during card registration. In
> > > that case the ALSA/ASoC core throws an error and we cannot report it back to
> > > the parent.
> > 
> > The whole point of the virtual bus stuff was to split up a
> > multi-functional PCI device into parts. If all the parts are required
> > to be working to make the device work, why are you using virtual bus
> > here?
> 
> It's the other way around: how does the core know that one part isn't
> functional?

> There is nothing in what we said that requires that all parts are fully
> functional. All we stated is that when *one* part isn't fully functional we
> know about it.

Maybe if you can present some diagram or something, because I really
can't understand what asoc is trying to do with virtual bus here.

> > > > What happens if the user unplugs this sub driver once things start
> > > > running?
> > > 
> > > refcounting in the ALSA core prevents that from happening usually.
> > 
> > So user-triggered unplug of the driver that attaches here just hangs
> > forever? That isn't OK either.
> 
> No, you'd get a 'module in use' error if I am not mistaken.

You can disconnect drivers without unloading modules. It is a common
misconception. You should never, ever, rely on module ref counting for
anything more than keeping function pointers in memory.
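
For example (paths are illustrative - the exact names depend on how
virtbus registers in sysfs), any bound device can be detached through
sysfs while the module stays loaded:

  echo sof-ipc-test.0 > /sys/bus/virtbus/drivers/sof-ipc-test-virtbus-drv/unbind

After that the module refcount is untouched, but the device is unbound
and the driver's remove() has already run.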

Jason

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-05-22 19:44                       ` Jason Gunthorpe
@ 2020-05-22 21:05                         ` Pierre-Louis Bossart
  0 siblings, 0 replies; 69+ messages in thread
From: Pierre-Louis Bossart @ 2020-05-22 21:05 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Ranjani Sridharan, Jeff Kirsher, davem, gregkh, netdev,
	linux-rdma, nhorman, sassmann, Fred Oh



> Maybe if you can present some diagram or something, because I really
> can't understand what asoc is trying to do with virtual bus here.

Instead of having a 1:1 mapping between PCI device and a monolithic 
card, we want to split the sound card into multiple orthogonal parts such as:

PCI device
   - local devices (mic/speakers)
   - hdmi devices
   - presence detection/sensing
   - probe/tuning interfaces
   - debug/tests

Initially we wanted to use platform devices but Greg suggested this API 
is abused. We don't have a platform/firmware based enumeration, nor a 
physical bus for each of these subparts, so the virtual bus was suggested.
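
As a rough sketch of what that split could look like (the device names
and the sof_client_dev_register() helper below are hypothetical; only
virtbus_register_device() comes from this series):

/* one PCI parent registering several orthogonal virtbus children,
 * one per function listed above - illustrative only
 */
static const char * const sof_client_names[] = {
	"sof-local-audio",	/* mic/speakers */
	"sof-hdmi",
	"sof-sensing",
	"sof-probes",
	"sof-debug",
};

static int sof_register_clients(struct device *parent)
{
	int i, ret;

	for (i = 0; i < ARRAY_SIZE(sof_client_names); i++) {
		/* assumed to allocate a virtbus_device named after the
		 * client and hand it to virtbus_register_device()
		 */
		ret = sof_client_dev_register(parent, sof_client_names[i]);
		if (ret)
			return ret;
	}
	return 0;
}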

Does this help?

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-05-22 14:29         ` Pierre-Louis Bossart
  2020-05-22 14:55           ` Jason Gunthorpe
@ 2020-05-23  6:23           ` Greg KH
  2020-05-23 19:41             ` Pierre-Louis Bossart
  2020-06-29 20:21             ` Mark Brown
  1 sibling, 2 replies; 69+ messages in thread
From: Greg KH @ 2020-05-23  6:23 UTC (permalink / raw)
  To: Pierre-Louis Bossart
  Cc: Jason Gunthorpe, Ranjani Sridharan, Jeff Kirsher, davem, netdev,
	linux-rdma, nhorman, sassmann, Fred Oh

On Fri, May 22, 2020 at 09:29:57AM -0500, Pierre-Louis Bossart wrote:
> This is not a hypothetical case, we've had this recurring problem when a
> PCI device creates an audio card represented as a platform device. When the
> card registration fails, typically due to configuration issues, the PCI
> probe still completes.

Then fix that problem there.  The audio card should not be being created
as a platform device, as that is not what it is.  And even if it was,
the probe should not complete, it should clean up after itself and error
out.

That's not a driver core issue, sounds like a subsystem error handling
issue that needs to be resolved.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-05-23  6:23           ` Greg KH
@ 2020-05-23 19:41             ` Pierre-Louis Bossart
  2020-05-24  6:35               ` Greg KH
  2020-05-25 16:55               ` Jason Gunthorpe
  2020-06-29 20:21             ` Mark Brown
  1 sibling, 2 replies; 69+ messages in thread
From: Pierre-Louis Bossart @ 2020-05-23 19:41 UTC (permalink / raw)
  To: Greg KH
  Cc: Jason Gunthorpe, Ranjani Sridharan, Jeff Kirsher, davem, netdev,
	linux-rdma, nhorman, sassmann, Fred Oh



On 5/23/20 1:23 AM, Greg KH wrote:
> On Fri, May 22, 2020 at 09:29:57AM -0500, Pierre-Louis Bossart wrote:
>> This is not a hypothetical case, we've had this recurring problem when a
>> PCI device creates an audio card represented as a platform device. When the
>> card registration fails, typically due to configuration issues, the PCI
>> probe still completes.
> 
> Then fix that problem there.  The audio card should not be being created
> as a platform device, as that is not what it is.  And even if it was,
> the probe should not complete, it should clean up after itself and error
> out.

Did you mean 'the PCI probe should not complete and error out'?

If yes, that's yet another problem... During the PCI probe, we start a 
workqueue and return success to avoid blocking everything. And only 
'later' do we actually create the card. So that's two levels of probe 
that cannot report a failure. I didn't come up with this design, IIRC 
this is due to audio-DRM dependencies and it's been used for 10+ years.
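
Schematically the pattern looks like this (a minimal illustrative
sketch with made-up names, not the actual HDA/SOF code):

struct my_chip {
	struct pci_dev *pci;
	struct work_struct probe_work;
	bool disabled;
};

static void my_probe_work(struct work_struct *work)
{
	struct my_chip *chip = container_of(work, struct my_chip, probe_work);

	/* firmware loading, i915 binding and card registration happen
	 * here; a failure can no longer be propagated through the PCI
	 * probe, which already returned success
	 */
	if (my_create_card(chip))	/* assumed card-setup helper */
		chip->disabled = true;
}

static int my_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
{
	struct my_chip *chip;

	chip = devm_kzalloc(&pci->dev, sizeof(*chip), GFP_KERNEL);
	if (!chip)
		return -ENOMEM;
	chip->pci = pci;

	INIT_WORK(&chip->probe_work, my_probe_work);
	schedule_work(&chip->probe_work);
	return 0;	/* "success", before the card even exists */
}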

> 
> That's not a driver core issue, sounds like a subsystem error handling
> issue that needs to be resolved.
> 
> thanks,
> 
> greg k-h
> 

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-05-23 19:41             ` Pierre-Louis Bossart
@ 2020-05-24  6:35               ` Greg KH
  2020-05-26 13:15                 ` Pierre-Louis Bossart
  2020-05-25 16:55               ` Jason Gunthorpe
  1 sibling, 1 reply; 69+ messages in thread
From: Greg KH @ 2020-05-24  6:35 UTC (permalink / raw)
  To: Pierre-Louis Bossart
  Cc: Jason Gunthorpe, Ranjani Sridharan, Jeff Kirsher, davem, netdev,
	linux-rdma, nhorman, sassmann, Fred Oh

On Sat, May 23, 2020 at 02:41:51PM -0500, Pierre-Louis Bossart wrote:
> 
> 
> On 5/23/20 1:23 AM, Greg KH wrote:
> > On Fri, May 22, 2020 at 09:29:57AM -0500, Pierre-Louis Bossart wrote:
> > > This is not a hypothetical case, we've had this recurring problem when a
> > > PCI device creates an audio card represented as a platform device. When the
> > > card registration fails, typically due to configuration issues, the PCI
> > > probe still completes.
> > 
> > Then fix that problem there.  The audio card should not be being created
> > as a platform device, as that is not what it is.  And even if it was,
> > the probe should not complete, it should clean up after itself and error
> > out.
> 
> Did you mean 'the PCI probe should not complete and error out'?

Yes.

> If yes, that's yet another problem... During the PCI probe, we start a
> workqueue and return success to avoid blocking everything.

That's crazy.

> And only 'later' do we actually create the card. So that's two levels
> of probe that cannot report a failure. I didn't come up with this
> design, IIRC this is due to audio-DRM dependencies and it's been used
> for 10+ years.

Then if the probe function fails, it needs to unwind everything itself
and unregister the device with the PCI subsystem so that things work
properly.  If it does not do that today, that's a bug.

What kind of crazy dependencies cause this type of "requirement"?

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-05-23 19:41             ` Pierre-Louis Bossart
  2020-05-24  6:35               ` Greg KH
@ 2020-05-25 16:55               ` Jason Gunthorpe
  1 sibling, 0 replies; 69+ messages in thread
From: Jason Gunthorpe @ 2020-05-25 16:55 UTC (permalink / raw)
  To: Pierre-Louis Bossart
  Cc: Greg KH, Ranjani Sridharan, Jeff Kirsher, davem, netdev,
	linux-rdma, nhorman, sassmann, Fred Oh

On Sat, May 23, 2020 at 02:41:51PM -0500, Pierre-Louis Bossart wrote:

> If yes, that's yet another problem... During the PCI probe, we start a
> workqueue and return success to avoid blocking everything. And only 'later'
> do we actually create the card. So that's two levels of probe that cannot
> report a failure. I didn't come up with this design, IIRC this is due to
> audio-DRM dependencies and it's been used for 10+ years.

I think there are more tools now than 10 years ago, maybe it is time
to revisit designs like this - clearly something is really wrong with
it based on your explanations.

Jason

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-05-24  6:35               ` Greg KH
@ 2020-05-26 13:15                 ` Pierre-Louis Bossart
  2020-05-26 13:37                   ` Takashi Iwai
  0 siblings, 1 reply; 69+ messages in thread
From: Pierre-Louis Bossart @ 2020-05-26 13:15 UTC (permalink / raw)
  To: Greg KH
  Cc: Jason Gunthorpe, Ranjani Sridharan, Jeff Kirsher, davem, netdev,
	linux-rdma, nhorman, sassmann, Fred Oh, Takashi Iwai



On 5/24/20 1:35 AM, Greg KH wrote:
> On Sat, May 23, 2020 at 02:41:51PM -0500, Pierre-Louis Bossart wrote:
>>
>>
>> On 5/23/20 1:23 AM, Greg KH wrote:
>>> On Fri, May 22, 2020 at 09:29:57AM -0500, Pierre-Louis Bossart wrote:
>>>> This is not a hypothetical case, we've had this recurring problem when a
>>>> PCI device creates an audio card represented as a platform device. When the
>>>> card registration fails, typically due to configuration issues, the PCI
>>>> probe still completes.
>>>
>>> Then fix that problem there.  The audio card should not be being created
>>> as a platform device, as that is not what it is.  And even if it was,
>>> the probe should not complete, it should clean up after itself and error
>>> out.
>>
>> Did you mean 'the PCI probe should not complete and error out'?
> 
> Yes.
> 
>> If yes, that's yet another problem... During the PCI probe, we start a
>> workqueue and return success to avoid blocking everything.
> 
> That's crazy.
> 
>> And only 'later' do we actually create the card. So that's two levels
>> of probe that cannot report a failure. I didn't come up with this
>> design, IIRC this is due to audio-DRM dependencies and it's been used
>> for 10+ years.
> 
> Then if the probe function fails, it needs to unwind everything itself
> and unregister the device with the PCI subsystem so that things work
> properly.  If it does not do that today, that's a bug.
> 
> What kind of crazy dependencies cause this type of "requirement"?

I think it is related to the request_module("i915") in 
snd_hdac_i915_init(), and possibly other firmware download.

Adding Takashi for more details.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-05-26 13:15                 ` Pierre-Louis Bossart
@ 2020-05-26 13:37                   ` Takashi Iwai
  2020-05-27  7:17                     ` Greg KH
  0 siblings, 1 reply; 69+ messages in thread
From: Takashi Iwai @ 2020-05-26 13:37 UTC (permalink / raw)
  To: Pierre-Louis Bossart
  Cc: Greg KH, Jason Gunthorpe, Ranjani Sridharan, Jeff Kirsher, davem,
	netdev, linux-rdma, nhorman, sassmann, Fred Oh

On Tue, 26 May 2020 15:15:26 +0200,
Pierre-Louis Bossart wrote:
> 
> 
> 
> On 5/24/20 1:35 AM, Greg KH wrote:
> > On Sat, May 23, 2020 at 02:41:51PM -0500, Pierre-Louis Bossart wrote:
> >>
> >>
> >> On 5/23/20 1:23 AM, Greg KH wrote:
> >>> On Fri, May 22, 2020 at 09:29:57AM -0500, Pierre-Louis Bossart wrote:
> >>>> This is not a hypothetical case, we've had this recurring problem when a
> >>>> PCI device creates an audio card represented as a platform device. When the
> >>>> card registration fails, typically due to configuration issues, the PCI
> >>>> probe still completes.
> >>>
> >>> Then fix that problem there.  The audio card should not be being created
> >>> as a platform device, as that is not what it is.  And even if it was,
> >>> the probe should not complete, it should clean up after itself and error
> >>> out.
> >>
> >> Did you mean 'the PCI probe should not complete and error out'?
> >
> > Yes.
> >
> >> If yes, that's yet another problem... During the PCI probe, we start a
> >> workqueue and return success to avoid blocking everything.
> >
> > That's crazy.
> >
> >> And only 'later' do we actually create the card. So that's two levels
> >> of probe that cannot report a failure. I didn't come up with this
> >> design, IIRC this is due to audio-DRM dependencies and it's been used
> >> for 10+ years.
> >
> > Then if the probe function fails, it needs to unwind everything itself
> > and unregister the device with the PCI subsystem so that things work
> > properly.  If it does not do that today, that's a bug.
> >
> > What kind of crazy dependencies cause this type of "requirement"?
> 
> I think it is related to the request_module("i915") in
> snd_hdac_i915_init(), and possibly other firmware download.
> 
> Adding Takashi for more details.

Right, there are a few levels of complexity there.  The HD-audio
PCI controller driver, for example, is initialized in an async way
with a work.  It loads the firmware files with
request_firmware_nowait() and also binds itself as a component master
with the DRM graphics driver via component framework.

Currently it has no way to unwind the PCI binding itself at the error
path, though.  In theory it should be possible to unregister the PCI
from the driver itself in the work context, but it failed in the
earlier experiments, hence the driver sets itself in a disabled state
instead.  Maybe worth trying again.

But, to be noted, all belonging sub-devices aren't instantiated but
deleted at the error path.  Only the main PCI binding is kept in a
disabled state just as a place holder until it's unbound explicitly.


Takashi

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-05-26 13:37                   ` Takashi Iwai
@ 2020-05-27  7:17                     ` Greg KH
  2020-05-27 14:05                       ` Pierre-Louis Bossart
  2020-06-29 20:33                       ` Mark Brown
  0 siblings, 2 replies; 69+ messages in thread
From: Greg KH @ 2020-05-27  7:17 UTC (permalink / raw)
  To: Takashi Iwai
  Cc: Pierre-Louis Bossart, Jason Gunthorpe, Ranjani Sridharan,
	Jeff Kirsher, davem, netdev, linux-rdma, nhorman, sassmann,
	Fred Oh

On Tue, May 26, 2020 at 03:37:36PM +0200, Takashi Iwai wrote:
> On Tue, 26 May 2020 15:15:26 +0200,
> Pierre-Louis Bossart wrote:
> > 
> > 
> > 
> > On 5/24/20 1:35 AM, Greg KH wrote:
> > > On Sat, May 23, 2020 at 02:41:51PM -0500, Pierre-Louis Bossart wrote:
> > >>
> > >>
> > >> On 5/23/20 1:23 AM, Greg KH wrote:
> > >>> On Fri, May 22, 2020 at 09:29:57AM -0500, Pierre-Louis Bossart wrote:
> > >>>> This is not a hypothetical case, we've had this recurring problem when a
> > >>>> PCI device creates an audio card represented as a platform device. When the
> > >>>> card registration fails, typically due to configuration issues, the PCI
> > >>>> probe still completes.
> > >>>
> > >>> Then fix that problem there.  The audio card should not be being created
> > >>> as a platform device, as that is not what it is.  And even if it was,
> > >>> the probe should not complete, it should clean up after itself and error
> > >>> out.
> > >>
> > >> Did you mean 'the PCI probe should not complete and error out'?
> > >
> > > Yes.
> > >
> > >> If yes, that's yet another problem... During the PCI probe, we start a
> > >> workqueue and return success to avoid blocking everything.
> > >
> > > That's crazy.
> > >
> > >> And only 'later' do we actually create the card. So that's two levels
> > >> of probe that cannot report a failure. I didn't come up with this
> > >> design, IIRC this is due to audio-DRM dependencies and it's been used
> > >> for 10+ years.
> > >
> > > Then if the probe function fails, it needs to unwind everything itself
> > > and unregister the device with the PCI subsystem so that things work
> > > properly.  If it does not do that today, that's a bug.
> > >
> > > What kind of crazy dependencies cause this type of "requirement"?
> > 
> > I think it is related to the request_module("i915") in
> > snd_hdac_i915_init(), and possibly other firmware download.
> > 
> > Adding Takashi for more details.
> 
> Right, there are a few levels of complexity there.  The HD-audio
> PCI controller driver, for example, is initialized in an async way
> with a work.  It loads the firmware files with
> request_firmware_nowait() and also binds itself as a component master
> with the DRM graphics driver via component framework.
> 
> Currently it has no way to unwind the PCI binding itself at the error
> path, though.  In theory it should be possible to unregister the PCI
> from the driver itself in the work context, but it failed in the
> earlier experiments, hence the driver sets itself in a disabled state
> instead.  Maybe worth trying again.
> 
> But, to be noted, all belonging sub-devices aren't instantiated but
> deleted at the error path.  Only the main PCI binding is kept in a
> disabled state just as a place holder until it's unbound explicitly.

Ok, that's good to hear.  But platform devices should never be showing
up as a child of a PCI device.  In the "near future" when we get the
virtual bus code merged, we can convert any existing users like this to
the new code.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-05-27  7:17                     ` Greg KH
@ 2020-05-27 14:05                       ` Pierre-Louis Bossart
  2020-06-29 20:33                       ` Mark Brown
  1 sibling, 0 replies; 69+ messages in thread
From: Pierre-Louis Bossart @ 2020-05-27 14:05 UTC (permalink / raw)
  To: Greg KH, Takashi Iwai
  Cc: Jason Gunthorpe, Ranjani Sridharan, Jeff Kirsher, davem, netdev,
	linux-rdma, nhorman, sassmann, Fred Oh



>>>>> If yes, that's yet another problem... During the PCI probe, we start a
>>>>> workqueue and return success to avoid blocking everything.
>>>>
>>>> That's crazy.
>>>>
>>>>> And only 'later' do we actually create the card. So that's two levels
>>>>> of probe that cannot report a failure. I didn't come up with this
>>>>> design, IIRC this is due to audio-DRM dependencies and it's been used
>>>>> for 10+ years.
>>>>
>>>> Then if the probe function fails, it needs to unwind everything itself
>>>> and unregister the device with the PCI subsystem so that things work
>>>> properly.  If it does not do that today, that's a bug.
>>>>
>>>> What kind of crazy dependencies cause this type of "requirement"?
>>>
>>> I think it is related to the request_module("i915") in
>>> snd_hdac_i915_init(), and possibly other firmware download.
>>>
>>> Adding Takashi for more details.
>>
>> Right, there are a few levels of complexity there.  The HD-audio
>> PCI controller driver, for example, is initialized in an async way
>> with a work.  It loads the firmware files with
>> request_firmware_nowait() and also binds itself as a component master
>> with the DRM graphics driver via component framework.
>>
>> Currently it has no way to unwind the PCI binding itself at the error
>> path, though.  In theory it should be possible to unregister the PCI
>> from the driver itself in the work context, but it failed in the
>> earlier experiments, hence the driver sets itself in a disabled state
>> instead.  Maybe worth trying again.
>>
>> But, to be noted, all belonging sub-devices aren't instantiated but
>> deleted at the error path.  Only the main PCI binding is kept in a
>> disabled state just as a place holder until it's unbound explicitly.
> 
> Ok, that's good to hear.  But platform devices should never be showing
> up as a child of a PCI device.  In the "near future" when we get the
> virtual bus code merged, we can convert any existing users like this to
> the new code.

Yes, that's the plan. It will however be more than a 1:1 replacement, i.e. 
we want to use this opportunity to split existing cards into separate 
ones when it makes sense to do so. There's really no rationale for 
having code to deal with HDMI in each machine driver when we could have 
a single driver for HDMI. That's really what drove us to suggest this 
patchset based on the virtual bus: removal of platform devices + 
repartition.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 11/12] ASoC: SOF: Create client driver for IPC test
  2020-05-20 12:56   ` Jason Gunthorpe
@ 2020-05-27 20:18     ` Ranjani Sridharan
  2020-05-28  0:12       ` Jason Gunthorpe
  0 siblings, 1 reply; 69+ messages in thread
From: Ranjani Sridharan @ 2020-05-27 20:18 UTC (permalink / raw)
  To: Jason Gunthorpe, Jeff Kirsher
  Cc: davem, gregkh, netdev, linux-rdma, nhorman, sassmann,
	pierre-louis.bossart, Fred Oh

On Wed, 2020-05-20 at 09:56 -0300, Jason Gunthorpe wrote:
> On Wed, May 20, 2020 at 12:02:26AM -0700, Jeff Kirsher wrote:
> > +static const struct virtbus_dev_id sof_ipc_virtbus_id_table[] = {
> > +	{"sof-ipc-test"},
> > +	{},
> > +};
> > +
> > +static struct sof_client_drv sof_ipc_test_client_drv = {
> > +	.name = "sof-ipc-test-client-drv",
> > +	.type = SOF_CLIENT_IPC,
> > +	.virtbus_drv = {
> > +		.driver = {
> > +			.name = "sof-ipc-test-virtbus-drv",
> > +		},
> > +		.id_table = sof_ipc_virtbus_id_table,
> > +		.probe = sof_ipc_test_probe,
> > +		.remove = sof_ipc_test_remove,
> > +		.shutdown = sof_ipc_test_shutdown,
> > +	},
> > +};
> > +
> > +module_sof_client_driver(sof_ipc_test_client_drv);
> > +
> > +MODULE_DESCRIPTION("SOF IPC Test Client Driver");
> > +MODULE_LICENSE("GPL v2");
> > +MODULE_IMPORT_NS(SND_SOC_SOF_CLIENT);
> > +MODULE_ALIAS("virtbus:sof-ipc-test");
> 
> Usually the MODULE_ALIAS happens automatically through the struct
> virtbus_dev_id - is something missing in the enabling patches?
Hi Jason,

Without the MODULE_ALIAS, the driver never probes when the virtual bus
device is registered. The MODULE_ALIAS is not different from the ones
we typically have in the platform drivers. Could you please give me
some pointers on what you think might be missing?

Thanks,
Ranjani


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 11/12] ASoC: SOF: Create client driver for IPC test
  2020-05-27 20:18     ` Ranjani Sridharan
@ 2020-05-28  0:12       ` Jason Gunthorpe
  2020-05-28  1:40         ` Ranjani Sridharan
  0 siblings, 1 reply; 69+ messages in thread
From: Jason Gunthorpe @ 2020-05-28  0:12 UTC (permalink / raw)
  To: Ranjani Sridharan
  Cc: Jeff Kirsher, davem, gregkh, netdev, linux-rdma, nhorman,
	sassmann, pierre-louis.bossart, Fred Oh

On Wed, May 27, 2020 at 01:18:35PM -0700, Ranjani Sridharan wrote:
> On Wed, 2020-05-20 at 09:56 -0300, Jason Gunthorpe wrote:
> > On Wed, May 20, 2020 at 12:02:26AM -0700, Jeff Kirsher wrote:
> > > +static const struct virtbus_dev_id sof_ipc_virtbus_id_table[] = {
> > > +	{"sof-ipc-test"},
> > > +	{},
> > > +};
> > > +
> > > +static struct sof_client_drv sof_ipc_test_client_drv = {
> > > +	.name = "sof-ipc-test-client-drv",
> > > +	.type = SOF_CLIENT_IPC,
> > > +	.virtbus_drv = {
> > > +		.driver = {
> > > +			.name = "sof-ipc-test-virtbus-drv",
> > > +		},
> > > +		.id_table = sof_ipc_virtbus_id_table,
> > > +		.probe = sof_ipc_test_probe,
> > > +		.remove = sof_ipc_test_remove,
> > > +		.shutdown = sof_ipc_test_shutdown,
> > > +	},
> > > +};
> > > +
> > > +module_sof_client_driver(sof_ipc_test_client_drv);
> > > +
> > > +MODULE_DESCRIPTION("SOF IPC Test Client Driver");
> > > +MODULE_LICENSE("GPL v2");
> > > +MODULE_IMPORT_NS(SND_SOC_SOF_CLIENT);
> > > +MODULE_ALIAS("virtbus:sof-ipc-test");
> > 
> > Usually the MODULE_ALIAS happens automatically through the struct
> > virtbus_dev_id - is something missing in the enabling patches?
> Hi Jason,
> 
> Without the MODULE_ALIAS, the driver never probes when the virtual bus
> device is registered. The MODULE_ALIAS is not different from the ones
> we typically have in the platform drivers. Could you please give me
> some pointers on what you think might be missing?

Look at how the stuff in include/linux/mod_devicetable.h works and do
the same for virtbus

Looks like you push a MODALIAS= uevent when creating the device and
the generic machinery does the rest based on the matching table, once
mod_devicetable.h and related is updated. But it has been a long time
since I looked at this..
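
Roughly, the two pieces would look like this (a hedged sketch following
the naming in the posted patches; the exact virtbus structures may
differ, and scripts/mod/file2alias.c also needs a matching handler so
that MODULE_DEVICE_TABLE() emits the "virtbus:" aliases):

/* include/linux/mod_devicetable.h */
#define VIRTBUS_NAME_SIZE 20
struct virtbus_dev_id {
	char name[VIRTBUS_NAME_SIZE];
	kernel_ulong_t driver_data;
};

/* drivers/bus/virtual_bus.c: emit the matching MODALIAS uevent so
 * udev/modprobe can autoload the driver when a device is registered
 */
static int virtbus_uevent(struct device *dev, struct kobj_uevent_env *env)
{
	struct virtbus_device *vdev = to_virtbus_dev(dev);

	return add_uevent_var(env, "MODALIAS=virtbus:%s", vdev->name);
}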

Jason

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 11/12] ASoC: SOF: Create client driver for IPC test
  2020-05-28  0:12       ` Jason Gunthorpe
@ 2020-05-28  1:40         ` Ranjani Sridharan
  2020-05-28 10:45           ` Greg KH
  0 siblings, 1 reply; 69+ messages in thread
From: Ranjani Sridharan @ 2020-05-28  1:40 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Jeff Kirsher, davem, gregkh, netdev, linux-rdma, nhorman,
	sassmann, pierre-louis.bossart, Fred Oh

On Wed, 2020-05-27 at 21:12 -0300, Jason Gunthorpe wrote:
> On Wed, May 27, 2020 at 01:18:35PM -0700, Ranjani Sridharan wrote:
> > On Wed, 2020-05-20 at 09:56 -0300, Jason Gunthorpe wrote:
> > > On Wed, May 20, 2020 at 12:02:26AM -0700, Jeff Kirsher wrote:
> > > > +static const struct virtbus_dev_id sof_ipc_virtbus_id_table[] = {
> > > > +	{"sof-ipc-test"},
> > > > +	{},
> > > > +};
> > > > +
> > > > +static struct sof_client_drv sof_ipc_test_client_drv = {
> > > > +	.name = "sof-ipc-test-client-drv",
> > > > +	.type = SOF_CLIENT_IPC,
> > > > +	.virtbus_drv = {
> > > > +		.driver = {
> > > > +			.name = "sof-ipc-test-virtbus-drv",
> > > > +		},
> > > > +		.id_table = sof_ipc_virtbus_id_table,
> > > > +		.probe = sof_ipc_test_probe,
> > > > +		.remove = sof_ipc_test_remove,
> > > > +		.shutdown = sof_ipc_test_shutdown,
> > > > +	},
> > > > +};
> > > > +
> > > > +module_sof_client_driver(sof_ipc_test_client_drv);
> > > > +
> > > > +MODULE_DESCRIPTION("SOF IPC Test Client Driver");
> > > > +MODULE_LICENSE("GPL v2");
> > > > +MODULE_IMPORT_NS(SND_SOC_SOF_CLIENT);
> > > > +MODULE_ALIAS("virtbus:sof-ipc-test");
> > > 
> > > Usually the MODULE_ALIAS happens automatically through the struct
> > > virtbus_dev_id - is something missing in the enabling patches?
> > 
> > Hi Jason,
> > 
> > Without the MODULE_ALIAS, the driver never probes when the virtual bus
> > device is registered. The MODULE_ALIAS is not different from the ones
> > we typically have in the platform drivers. Could you please give me
> > some pointers on what you think might be missing?
> 
> Look at how the stuff in include/linux/mod_devicetable.h works and do
> the same for virtbus
It looks like include/linux/mod_devicetable.h has everything needed for
virtbus already.
> 
> Looks like you push a MODALIAS= uevent when creating the device and
> the generic machinery does the rest based on the matching table, once
> mod_devicetable.h and related is updated. But it has been a long time
> since I looked at this..

This is also done with uevent callback in the bus_type definition for
the virtual_bus.

Is your expectation that with the above changes, we should not be
needing the MODULE_ALIAS() in the driver?

Thanks,
Ranjani


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 11/12] ASoC: SOF: Create client driver for IPC test
  2020-05-28  1:40         ` Ranjani Sridharan
@ 2020-05-28 10:45           ` Greg KH
  2020-06-29 20:37             ` Mark Brown
  0 siblings, 1 reply; 69+ messages in thread
From: Greg KH @ 2020-05-28 10:45 UTC (permalink / raw)
  To: Ranjani Sridharan
  Cc: Jason Gunthorpe, Jeff Kirsher, davem, netdev, linux-rdma,
	nhorman, sassmann, pierre-louis.bossart, Fred Oh

On Wed, May 27, 2020 at 06:40:05PM -0700, Ranjani Sridharan wrote:
> On Wed, 2020-05-27 at 21:12 -0300, Jason Gunthorpe wrote:
> > On Wed, May 27, 2020 at 01:18:35PM -0700, Ranjani Sridharan wrote:
> > > On Wed, 2020-05-20 at 09:56 -0300, Jason Gunthorpe wrote:
> > > > On Wed, May 20, 2020 at 12:02:26AM -0700, Jeff Kirsher wrote:
> > > > > +static const struct virtbus_dev_id sof_ipc_virtbus_id_table[] = {
> > > > > +	{"sof-ipc-test"},
> > > > > +	{},
> > > > > +};
> > > > > +
> > > > > +static struct sof_client_drv sof_ipc_test_client_drv = {
> > > > > +	.name = "sof-ipc-test-client-drv",
> > > > > +	.type = SOF_CLIENT_IPC,
> > > > > +	.virtbus_drv = {
> > > > > +		.driver = {
> > > > > +			.name = "sof-ipc-test-virtbus-drv",
> > > > > +		},
> > > > > +		.id_table = sof_ipc_virtbus_id_table,
> > > > > +		.probe = sof_ipc_test_probe,
> > > > > +		.remove = sof_ipc_test_remove,
> > > > > +		.shutdown = sof_ipc_test_shutdown,
> > > > > +	},
> > > > > +};
> > > > > +
> > > > > +module_sof_client_driver(sof_ipc_test_client_drv);
> > > > > +
> > > > > +MODULE_DESCRIPTION("SOF IPC Test Client Driver");
> > > > > +MODULE_LICENSE("GPL v2");
> > > > > +MODULE_IMPORT_NS(SND_SOC_SOF_CLIENT);
> > > > > +MODULE_ALIAS("virtbus:sof-ipc-test");
> > > > 
> > > > Usually the MODULE_ALIAS happens automatically through the struct
> > > > virtbus_dev_id - is something missing in the enabling patches?
> > > 
> > > Hi Jason,
> > > 
> > > Without the MODULE_ALIAS, the driver never probes when the virtual bus
> > > device is registered. The MODULE_ALIAS is not different from the ones
> > > we typically have in the platform drivers. Could you please give me
> > > some pointers on what you think might be missing?
> > 
> > Look at how the stuff in include/linux/mod_devicetable.h works and do
> > the same for virtbus
> It looks like include/linux/mod_devicetable.h has everything needed for
> virtbus already.
> > 
> > Looks like you push a MODALIAS= uevent when creating the device and
> > the generic machinery does the rest based on the matching table, once
> > mod_devicetable.h and related is updated. But it has been a long time
> > since I looked at this..
> 
> This is also done with uevent callback in the bus_type definition for
> the virtual_bus.
> 
> Is your expectation that with the above changes, we should not be
> needing the MODULE_ALIAS() in the driver?

Yes, it should not be needed if you did everything properly in
mod_devicetable.h

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-05-20  7:02 ` [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client Jeff Kirsher
  2020-05-20  7:20   ` Greg KH
  2020-05-20 12:54   ` Jason Gunthorpe
@ 2020-06-29 17:36   ` Mark Brown
  2 siblings, 0 replies; 69+ messages in thread
From: Mark Brown @ 2020-06-29 17:36 UTC (permalink / raw)
  To: Jeff Kirsher
  Cc: davem, gregkh, Ranjani Sridharan, netdev, linux-rdma, nhorman,
	sassmann, jgg, pierre-louis.bossart, Fred Oh

On Wed, May 20, 2020 at 12:02:25AM -0700, Jeff Kirsher wrote:
> From: Ranjani Sridharan <ranjani.sridharan@linux.intel.com>
> 
> A client in the SOF (Sound Open Firmware) context is a
> device that needs to communicate with the DSP via IPC

As ever, please send patches to the maintainers for the code you would
like to change.  :(

> +config SND_SOC_SOF_CLIENT
> +	tristate
> +	select VIRTUAL_BUS
> +	help
> +	  This option is not user-selectable but automagically handled by
> +	  'select' statements at a higher level

VIRTUAL_BUS is both user visible and selected here (not that the select
will do much since this option is not user visible) - which is it?

> +config SND_SOC_SOF_CLIENT_SUPPORT
> +	bool "SOF enable clients"
> +	depends on SND_SOC_SOF
> +	help
> +	  This adds support for client support with Sound Open Firmware.
> +	  The SOF driver adds the capability to separate out the debug
> +	  functionality for IPC tests, probes etc. into separate client
> +	  devices. This option would also allow adding client devices
> +	  based on DSP FW capabilities and ACPI/OF device information.
> +	  Say Y if you want to enable clients with SOF.
> +	  If unsure select "N".

I'm not sure that's going to mean much to users, nor can I see why you'd
ever have a SOF system without this enabled?

> +	/*
> +	 * Register virtbus device for the client.
> +	 * The error path in virtbus_register_device() calls put_device(),
> +	 * which will free cdev in the release callback.
> +	 */
> +	ret = virtbus_register_device(vdev);
> +	if (ret < 0)
> +		return ret;

> +	/* make sure the probe is complete before updating client list */
> +	timeout = msecs_to_jiffies(SOF_CLIENT_PROBE_TIMEOUT_MS);
> +	time = wait_for_completion_timeout(&cdev->probe_complete, timeout);
> +	if (!time) {
> +		dev_err(sdev->dev, "error: probe of virtbus dev %s timed out\n",
> +			name);
> +		virtbus_unregister_device(vdev);
> +		return -ETIMEDOUT;
> +	}

This seems wrong - what happens if the driver isn't loaded yet for
example?  Why does the device registration care what's going on with
driver binding at all?


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-05-23  6:23           ` Greg KH
  2020-05-23 19:41             ` Pierre-Louis Bossart
@ 2020-06-29 20:21             ` Mark Brown
  1 sibling, 0 replies; 69+ messages in thread
From: Mark Brown @ 2020-06-29 20:21 UTC (permalink / raw)
  To: Greg KH
  Cc: Pierre-Louis Bossart, Jason Gunthorpe, Ranjani Sridharan,
	Jeff Kirsher, davem, netdev, linux-rdma, nhorman, sassmann,
	Fred Oh

On Sat, May 23, 2020 at 08:23:51AM +0200, Greg KH wrote:

> Then fix that problem there.  The audio card should not be being created
> as a platform device, as that is not what it is.  And even if it was,
> the probe should not complete, it should clean up after itself and error
> out.

To be clear ASoC sound cards are physical devices which exist in the
real world.

> That's not a driver core issue, sounds like a subsystem error handling
> issue that needs to be resolved.

It's not a subsystem issue, it's an issue with the half-baked support
for enumerating modern audio hardware on ACPI systems.  Unfortunately
we have to enumerate hardware based on having data tables instantiated
via DMI information for the system, which doesn't work well with a
generic kernel like Linux; on Windows they're per-machine custom
drivers.  There is some effort at putting some of the data into ACPI
tables on newer systems, which is helping a lot, but it's partial.


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-05-27  7:17                     ` Greg KH
  2020-05-27 14:05                       ` Pierre-Louis Bossart
@ 2020-06-29 20:33                       ` Mark Brown
  2020-06-29 22:59                         ` Jason Gunthorpe
  1 sibling, 1 reply; 69+ messages in thread
From: Mark Brown @ 2020-06-29 20:33 UTC (permalink / raw)
  To: Greg KH
  Cc: Takashi Iwai, Pierre-Louis Bossart, Jason Gunthorpe,
	Ranjani Sridharan, Jeff Kirsher, davem, netdev, linux-rdma,
	nhorman, sassmann, Fred Oh

On Wed, May 27, 2020 at 09:17:33AM +0200, Greg KH wrote:

> Ok, that's good to hear.  But platform devices should never be showing
> up as a child of a PCI device.  In the "near future" when we get the
> virtual bus code merged, we can convert any existing users like this to
> the new code.

What are we supposed to do with things like PCI-attached FPGAs and ASICs
in that case?  They can have host visible devices with physical
resources like MMIO ranges and interrupts without those being split up
neatly as PCI subfunctions - the original use case for MFD was such
ASICs, there's a few PCI drivers in there now.  Adding support for those
into virtual bus would make it even more of a cut'n'paste of the
platform bus than it already is.


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [net-next v4 11/12] ASoC: SOF: Create client driver for IPC test
  2020-05-28 10:45           ` Greg KH
@ 2020-06-29 20:37             ` Mark Brown
  0 siblings, 0 replies; 69+ messages in thread
From: Mark Brown @ 2020-06-29 20:37 UTC (permalink / raw)
  To: Greg KH
  Cc: Ranjani Sridharan, Jason Gunthorpe, Jeff Kirsher, davem, netdev,
	linux-rdma, nhorman, sassmann, pierre-louis.bossart, Fred Oh

On Thu, May 28, 2020 at 12:45:45PM +0200, Greg KH wrote:
> On Wed, May 27, 2020 at 06:40:05PM -0700, Ranjani Sridharan wrote:

> > Is your expectation that with the above changes, we should not need
> > the MODULE_ALIAS() in the driver?

> Yes, it should not be needed if you did everything properly in
> mod_devicetable.h

It will also need a MODULE_DEVICE_TABLE() on _virtbus_id_table[];
MODULE_ALIAS() is functioning as a single-entry version of one of those.
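
A sketch of what that looks like, assuming the virtbus_dev_id type and
file2alias support this series adds via mod_devicetable.h; the table
contents and name are hypothetical.

	static const struct virtbus_dev_id sof_ipc_test_virtbus_id_table[] = {
		{ .name = "sof-ipc-test" },
		{ /* sentinel */ },
	};

	/*
	 * Generates the modalias information for every entry in the
	 * table, so module autoloading works without a hand-coded
	 * MODULE_ALIAS().
	 */
	MODULE_DEVICE_TABLE(virtbus, sof_ipc_test_virtbus_id_table);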


* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-05-22 15:33             ` Pierre-Louis Bossart
  2020-05-22 17:10               ` Jason Gunthorpe
@ 2020-06-29 20:59               ` Mark Brown
  1 sibling, 0 replies; 69+ messages in thread
From: Mark Brown @ 2020-06-29 20:59 UTC (permalink / raw)
  To: Pierre-Louis Bossart
  Cc: Jason Gunthorpe, Ranjani Sridharan, Jeff Kirsher, davem, gregkh,
	netdev, linux-rdma, nhorman, sassmann, Fred Oh

On Fri, May 22, 2020 at 10:33:20AM -0500, Pierre-Louis Bossart wrote:
> On 5/22/20 9:55 AM, Jason Gunthorpe wrote:

> > Maybe not great, but at least it is consistent with all the lifetime
> > models and the operation of the driver core.

> I agree your comments are valid ones, I just don't have a solution to be
> fully compliant with these models and report failures of the driver probe
> for a child device due to configuration issues (bad audio topology, etc).

> My understanding is that errors on probe are explicitly not handled in the
> driver core, see e.g. comments such as:

It's just not an error for a child device not to instantiate; we don't
even know if the driver is loaded yet.  The parent really should not
care whether the child is there or not.

> > > PCI device creates an audio card represented as a platform device. When the
> > > card registration fails, typically due to configuration issues, the PCI
> > > probe still completes. That's really confusing and the source of lots of
> > > support questions. If we use these virtual bus extensions to stop abusing
> > > platform devices, it'd be really nice to make those unreported probe
> > > failures go away.

> > I think you need to address this in some other way that is hot plug
> > safe.

> > Surely you can make this failure visible to users in some other way?

> Not at the moment, no. There are no failures reported in dmesg, and the user
> does not see any card created. This is a silent error.

If we're failing to do something, we should report it.  This includes
deferred probe failures.
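
For illustration, a sketch (all names hypothetical) of a child probe
that reports and propagates its failure instead of leaving the card
silently absent.

	#include <linux/device.h>
	#include <linux/virtual_bus.h>	/* header added by this series */

	int sof_ipc_test_parse_topology(struct virtbus_device *vdev);
						/* hypothetical step */

	static int sof_ipc_test_probe(struct virtbus_device *vdev)
	{
		int ret;

		ret = sof_ipc_test_parse_topology(vdev);
		if (ret) {
			/*
			 * The failure is now visible in dmesg, and the
			 * device simply stays unbound; nothing pretends
			 * a card exists.
			 */
			dev_err(&vdev->dev,
				"topology parsing failed: %d\n", ret);
			return ret;
		}

		return 0;
	}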


* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-06-29 20:33                       ` Mark Brown
@ 2020-06-29 22:59                         ` Jason Gunthorpe
  2020-06-29 23:13                           ` Kirsher, Jeffrey T
  2020-06-30 10:31                           ` Mark Brown
  0 siblings, 2 replies; 69+ messages in thread
From: Jason Gunthorpe @ 2020-06-29 22:59 UTC (permalink / raw)
  To: Mark Brown
  Cc: Greg KH, Takashi Iwai, Pierre-Louis Bossart, Ranjani Sridharan,
	Jeff Kirsher, davem, netdev, linux-rdma, nhorman, sassmann,
	Fred Oh

On Mon, Jun 29, 2020 at 09:33:17PM +0100, Mark Brown wrote:
> On Wed, May 27, 2020 at 09:17:33AM +0200, Greg KH wrote:
> 
> > Ok, that's good to hear.  But platform devices should never be showing
> > up as a child of a PCI device.  In the "near future" when we get the
> > virtual bus code merged, we can convert any existing users like this to
> > the new code.
> 
> What are we supposed to do with things like PCI attached FPGAs and ASICs
> in that case?  They can have host visible devices with physical
> resources like MMIO ranges and interrupts without those being split up
> neatly as PCI subfunctions - the original use case for MFD was such
> ASICs, there's a few PCI drivers in there now. 

Greg has been pretty clear that MFD shouldn't have been used on top of
PCI drivers.

In a sense virtual bus is pretty much MFD v2.

Jason


* RE: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-06-29 22:59                         ` Jason Gunthorpe
@ 2020-06-29 23:13                           ` Kirsher, Jeffrey T
  2020-06-30 10:31                           ` Mark Brown
  1 sibling, 0 replies; 69+ messages in thread
From: Kirsher, Jeffrey T @ 2020-06-29 23:13 UTC (permalink / raw)
  To: Jason Gunthorpe, Mark Brown
  Cc: Greg KH, Takashi Iwai, Pierre-Louis Bossart, Ranjani Sridharan,
	davem, netdev, linux-rdma, nhorman, sassmann, Fred Oh

> -----Original Message-----
> From: Jason Gunthorpe <jgg@ziepe.ca>
> Sent: Monday, June 29, 2020 16:00
> To: Mark Brown <broonie@kernel.org>
> Cc: Greg KH <gregkh@linuxfoundation.org>; Takashi Iwai <tiwai@suse.de>;
> Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>; Ranjani Sridharan
> <ranjani.sridharan@linux.intel.com>; Kirsher, Jeffrey T
> <jeffrey.t.kirsher@intel.com>; davem@davemloft.net; netdev@vger.kernel.org;
> linux-rdma@vger.kernel.org; nhorman@redhat.com; sassmann@redhat.com;
> Fred Oh <fred.oh@linux.intel.com>
> Subject: Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF
> client
> 
> On Mon, Jun 29, 2020 at 09:33:17PM +0100, Mark Brown wrote:
> > On Wed, May 27, 2020 at 09:17:33AM +0200, Greg KH wrote:
> >
> > > Ok, that's good to hear.  But platform devices should never be
> > > showing up as a child of a PCI device.  In the "near future" when we
> > > get the virtual bus code merged, we can convert any existing users
> > > like this to the new code.
> >
> > What are we supposed to do with things like PCI attached FPGAs and
> > ASICs in that case?  They can have host visible devices with physical
> > resources like MMIO ranges and interrupts without those being split up
> > neatly as PCI subfunctions - the original use case for MFD was such
> > ASICs, there's a few PCI drivers in there now.
> 
> Greg has been pretty clear that MFD shouldn't have been used on top of PCI
> drivers.
> 
> In a sense virtual bus is pretty much MFD v2.
 
With the big distinction that MFD uses platform bus/devices and virtbus
does not, which is why we could not use MFD as a solution.


* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-06-29 22:59                         ` Jason Gunthorpe
  2020-06-29 23:13                           ` Kirsher, Jeffrey T
@ 2020-06-30 10:31                           ` Mark Brown
  2020-06-30 11:32                             ` Jason Gunthorpe
  1 sibling, 1 reply; 69+ messages in thread
From: Mark Brown @ 2020-06-30 10:31 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Greg KH, Takashi Iwai, Pierre-Louis Bossart, Ranjani Sridharan,
	Jeff Kirsher, davem, netdev, linux-rdma, nhorman, sassmann,
	Fred Oh, lee.jones

On Mon, Jun 29, 2020 at 07:59:59PM -0300, Jason Gunthorpe wrote:
> On Mon, Jun 29, 2020 at 09:33:17PM +0100, Mark Brown wrote:
> > On Wed, May 27, 2020 at 09:17:33AM +0200, Greg KH wrote:

> > > Ok, that's good to hear.  But platform devices should never be showing
> > > up as a child of a PCI device.  In the "near future" when we get the
> > > virtual bus code merged, we can convert any existing users like this to
> > > the new code.

> > What are we supposed to do with things like PCI attached FPGAs and ASICs
> > in that case?  They can have host visible devices with physical
> > resources like MMIO ranges and interrupts without those being split up
> > neatly as PCI subfunctions - the original use case for MFD was such
> > ASICs, there's a few PCI drivers in there now. 

> Greg has been pretty clear that MFD shouldn't have been used on top of
> PCI drivers.

The proposed bus lacks resource handling, an equivalent of
platform_get_resource() and friends for example, which would be needed
for use with physical devices.  Both that and the name suggest that it's
for virtual devices.

> In a sense virtual bus is pretty much MFD v2.

Copying in Lee since I'm not sure he's aware of this; it's quite a
recent thing...  MFD is a layer above, AFAICT: it's not a bus but rather
a combination of helpers for registering subdevices and a place for
drivers for the core functionality of devices which have multiple
features.

The reason the MFDs use platform devices is that they end up having to
have all the features of platform devices - originally people were
making virtual buses for them but the code duplication is real so
everyone (including Greg) decided to just use what was there already.


* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-06-30 10:31                           ` Mark Brown
@ 2020-06-30 11:32                             ` Jason Gunthorpe
  2020-06-30 14:16                               ` Mark Brown
  2020-06-30 17:24                               ` Ranjani Sridharan
  0 siblings, 2 replies; 69+ messages in thread
From: Jason Gunthorpe @ 2020-06-30 11:32 UTC (permalink / raw)
  To: Mark Brown
  Cc: Greg KH, Takashi Iwai, Pierre-Louis Bossart, Ranjani Sridharan,
	Jeff Kirsher, davem, netdev, linux-rdma, nhorman, sassmann,
	Fred Oh, lee.jones

On Tue, Jun 30, 2020 at 11:31:41AM +0100, Mark Brown wrote:
> On Mon, Jun 29, 2020 at 07:59:59PM -0300, Jason Gunthorpe wrote:
> > On Mon, Jun 29, 2020 at 09:33:17PM +0100, Mark Brown wrote:
> > > On Wed, May 27, 2020 at 09:17:33AM +0200, Greg KH wrote:
> 
> > > > Ok, that's good to hear.  But platform devices should never be showing
> > > > up as a child of a PCI device.  In the "near future" when we get the
> > > > virtual bus code merged, we can convert any existing users like this to
> > > > the new code.
> 
> > > What are we supposed to do with things like PCI attached FPGAs and ASICs
> > > in that case?  They can have host visible devices with physical
> > > resources like MMIO ranges and interrupts without those being split up
> > > neatly as PCI subfunctions - the original use case for MFD was such
> > > ASICs, there's a few PCI drivers in there now. 
> 
> > Greg has been pretty clear that MFD shouldn't have been used on top of
> > PCI drivers.
> 
> The proposed bus lacks resource handling, an equivalent of
> platform_get_resource() and friends for example, which would be needed
> for use with physical devices.  Both that and the name suggest that it's
> for virtual devices.

Resource handling is only useful if the HW has a hard distinction
between its functional blocks. This scheme is intended for devices
where that doesn't exist. The driver that attaches to the PCI device
and creates the virtual devices is supposed to provide SW abstractions
for the other drivers to sit on.
 
I'm not sure why we are calling it virtual bus.
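
For illustration, a sketch of that software abstraction, loosely
modelled on the ice/irdma split discussed upthread; every name here is
hypothetical.  The parent embeds the virtbus_device in a wrapper
carrying function pointers, and the peer driver recovers the wrapper
with container_of().

	#include <linux/kernel.h>
	#include <linux/virtual_bus.h>	/* header added by this series */

	struct ice_virtbus_object;

	struct ice_rdma_ops {
		int (*open)(struct ice_virtbus_object *obj);
		void (*close)(struct ice_virtbus_object *obj);
	};

	struct ice_virtbus_object {
		struct virtbus_device vdev;
		const struct ice_rdma_ops *ops;	/* SW interface, no MMIO */
	};

	static int irdma_probe(struct virtbus_device *vdev)
	{
		struct ice_virtbus_object *obj =
			container_of(vdev, struct ice_virtbus_object, vdev);

		/* Nothing to map or claim; all access goes via parent ops */
		return obj->ops->open(obj);
	}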

> The reason the MFDs use platform devices is that they end up having to
> have all the features of platform devices - originally people were
> making virtual buses for them but the code duplication is real so
> everyone (including Greg) decided to just use what was there already.

Maybe Greg will explain why he didn't like the earlier version of that
stuff that used MFD

Jason


* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-06-30 11:32                             ` Jason Gunthorpe
@ 2020-06-30 14:16                               ` Mark Brown
  2020-06-30 17:24                               ` Ranjani Sridharan
  1 sibling, 0 replies; 69+ messages in thread
From: Mark Brown @ 2020-06-30 14:16 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Greg KH, Takashi Iwai, Pierre-Louis Bossart, Ranjani Sridharan,
	Jeff Kirsher, davem, netdev, linux-rdma, nhorman, sassmann,
	Fred Oh, lee.jones

On Tue, Jun 30, 2020 at 08:32:45AM -0300, Jason Gunthorpe wrote:
> On Tue, Jun 30, 2020 at 11:31:41AM +0100, Mark Brown wrote:
> > On Mon, Jun 29, 2020 at 07:59:59PM -0300, Jason Gunthorpe wrote:

> > > > What are we supposed to do with things like PCI attached FPGAs and ASICs
> > > > in that case?  They can have host visible devices with physical
> > > > resources like MMIO ranges and interrupts without those being split up
> > > > neatly as PCI subfunctions - the original use case for MFD was such
> > > > ASICs, there's a few PCI drivers in there now. 

> > > Greg has been pretty clear that MFD shouldn't have been used on top of
> > > PCI drivers.

> > The proposed bus lacks resource handling, an equivalent of
> > platform_get_resource() and friends for example, which would be needed
> > for use with physical devices.  Both that and the name suggest that it's
> > for virtual devices.

> Resource handling is only useful if the HW has a hard distinction
> between its functional blocks. This scheme is intended for devices
> where that doesn't exist. The driver that attaches to the PCI device
> and creates the virtual devices is supposed to provide SW abstractions
> for the other drivers to sit on.

> I'm not sure why we are calling it virtual bus.

The abstraction that the PCI based MFDs (and FPGAs will be similar,
they're just dynamic MFDs to a good approximation) need is to pass
through MMIO regions, interrupts and so on which is exactly what the
platform bus offers.  The hardware is basically someone taking a bunch
of IPs and shoving them behind the MMIO/interrupt regions of a PCI
device.
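
For comparison, the consumer side of that pass-through is the entirely
standard platform idiom; the driver name below is hypothetical, the
helpers are the real platform API.

	#include <linux/platform_device.h>

	static int fpga_uart_probe(struct platform_device *pdev)
	{
		void __iomem *base;
		int irq;

		/* Resolves to the MMIO window the parent carved from its BAR */
		base = devm_platform_ioremap_resource(pdev, 0);
		if (IS_ERR(base))
			return PTR_ERR(base);

		irq = platform_get_irq(pdev, 0);
		if (irq < 0)
			return irq;

		/* ... set up the device using base and irq ... */
		return 0;
	}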

> > The reason the MFDs use platform devices is that they end up having to
> > have all the features of platform devices - originally people were
> > making virtual buses for them but the code duplication is real so
> > everyone (including Greg) decided to just use what was there already.

> Maybe Greg will explain why he didn't like the earlier version of that
> stuff that used MFD

AFAICT Greg is mostly concerned about the MFDs that aren't memory
mapped, though some of them do use the resource API to pass interrupts
through.


* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-06-30 11:32                             ` Jason Gunthorpe
  2020-06-30 14:16                               ` Mark Brown
@ 2020-06-30 17:24                               ` Ranjani Sridharan
  2020-06-30 17:27                                 ` Jason Gunthorpe
  2020-07-01  6:59                                 ` Greg KH
  1 sibling, 2 replies; 69+ messages in thread
From: Ranjani Sridharan @ 2020-06-30 17:24 UTC (permalink / raw)
  To: Jason Gunthorpe, Mark Brown
  Cc: Greg KH, Takashi Iwai, Pierre-Louis Bossart, Jeff Kirsher, davem,
	netdev, linux-rdma, nhorman, sassmann, Fred Oh, lee.jones

On Tue, 2020-06-30 at 08:32 -0300, Jason Gunthorpe wrote:
> On Tue, Jun 30, 2020 at 11:31:41AM +0100, Mark Brown wrote:
> > On Mon, Jun 29, 2020 at 07:59:59PM -0300, Jason Gunthorpe wrote:
> > > On Mon, Jun 29, 2020 at 09:33:17PM +0100, Mark Brown wrote:
> > > > On Wed, May 27, 2020 at 09:17:33AM +0200, Greg KH wrote:
> > > > > Ok, that's good to hear.  But platform devices should never
> > > > > be showing
> > > > > up as a child of a PCI device.  In the "near future" when we
> > > > > get the
> > > > > virtual bus code merged, we can convert any existing users
> > > > > like this to
> > > > > the new code.
> > > > What are we supposed to do with things like PCI attached FPGAs
> > > > and ASICs
> > > > in that case?  They can have host visible devices with physical
> > > > resources like MMIO ranges and interrupts without those being
> > > > split up
> > > > neatly as PCI subfunctions - the original use case for MFD was
> > > > such
> > > > ASICs, there's a few PCI drivers in there now. 
> > > Greg has been pretty clear that MFD shouldn't have been used on
> > > top of
> > > PCI drivers.
> > 
> > The proposed bus lacks resource handling, an equivalent of
> > platform_get_resource() and friends for example, which would be
> > needed
> > for use with physical devices.  Both that and the name suggest that
> > it's
> > for virtual devices.
> 
> Resource handling is only useful if the HW has a hard distinction
> between its functional blocks. This scheme is intended for devices
> where that doesn't exist. The driver that attaches to the PCI device
> and creates the virtual devices is supposed to provide SW
> abstractions
> for the other drivers to sit on.
>  
> I'm not sure why we are calling it virtual bus.

Hi Jason,

We're addressing the naming in the next version as well. We've had
several people reject the name virtual bus and we've narrowed in on
"ancillary bus" for the new name suggesting that we have the core
device that is attached to the primary bus and one or more sub-devices
that are attached to the ancillary bus. Please let us know what you
think of it.

Thanks,
Ranjani



* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-06-30 17:24                               ` Ranjani Sridharan
@ 2020-06-30 17:27                                 ` Jason Gunthorpe
  2020-07-01  9:50                                   ` Mark Brown
  2020-07-01  6:59                                 ` Greg KH
  1 sibling, 1 reply; 69+ messages in thread
From: Jason Gunthorpe @ 2020-06-30 17:27 UTC (permalink / raw)
  To: Ranjani Sridharan
  Cc: Mark Brown, Greg KH, Takashi Iwai, Pierre-Louis Bossart,
	Jeff Kirsher, davem, netdev, linux-rdma, nhorman, sassmann,
	Fred Oh, lee.jones

On Tue, Jun 30, 2020 at 10:24:04AM -0700, Ranjani Sridharan wrote:
> On Tue, 2020-06-30 at 08:32 -0300, Jason Gunthorpe wrote:
> > On Tue, Jun 30, 2020 at 11:31:41AM +0100, Mark Brown wrote:
> > > On Mon, Jun 29, 2020 at 07:59:59PM -0300, Jason Gunthorpe wrote:
> > > > On Mon, Jun 29, 2020 at 09:33:17PM +0100, Mark Brown wrote:
> > > > > On Wed, May 27, 2020 at 09:17:33AM +0200, Greg KH wrote:
> > > > > > Ok, that's good to hear.  But platform devices should never
> > > > > > be showing
> > > > > > up as a child of a PCI device.  In the "near future" when we
> > > > > > get the
> > > > > > virtual bus code merged, we can convert any existing users
> > > > > > like this to
> > > > > > the new code.
> > > > > What are we supposed to do with things like PCI attached FPGAs
> > > > > and ASICs
> > > > > in that case?  They can have host visible devices with physical
> > > > > resources like MMIO ranges and interrupts without those being
> > > > > split up
> > > > > neatly as PCI subfunctions - the original use case for MFD was
> > > > > such
> > > > > ASICs, there's a few PCI drivers in there now. 
> > > > Greg has been pretty clear that MFD shouldn't have been used on
> > > > top of
> > > > PCI drivers.
> > > 
> > > The proposed bus lacks resource handling, an equivalent of
> > > platform_get_resource() and friends for example, which would be
> > > needed
> > > for use with physical devices.  Both that and the name suggest that
> > > it's
> > > for virtual devices.
> > 
> > Resource handling is only useful if the HW has a hard distinction
> > between its functional blocks. This scheme is intended for devices
> > where that doesn't exist. The driver that attaches to the PCI device
> > and creates the virtual devices is supposed to provide SW
> > abstractions
> > for the other drivers to sit on.
> >  
> > I'm not sure why we are calling it virtual bus.
> Hi Jason,
> 
> We're addressing the naming in the next version as well. We've had
> several people reject the name virtual bus and we've narrowed in on
> "ancillary bus" for the new name suggesting that we have the core
> device that is attached to the primary bus and one or more sub-devices
> that are attached to the ancillary bus. Please let us know what you
> think of it.

It is sufficiently vague.

I wonder if SW_MFD might be more apt though?  Based on Mark's remarks,
current MFD is 'hw' MFD, where the created platform_devices expect an
MMIO pass-through, while this is an MFD with a device-specific SW
interfacing layer.

MFD really is the best name for this kind of functionality;
understanding how it is different from the current MFD might also help
justify why it exists and give it a name.

Jason


* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-06-30 17:24                               ` Ranjani Sridharan
  2020-06-30 17:27                                 ` Jason Gunthorpe
@ 2020-07-01  6:59                                 ` Greg KH
  2020-07-02 13:43                                   ` Ranjani Sridharan
  1 sibling, 1 reply; 69+ messages in thread
From: Greg KH @ 2020-07-01  6:59 UTC (permalink / raw)
  To: Ranjani Sridharan
  Cc: Jason Gunthorpe, Mark Brown, Takashi Iwai, Pierre-Louis Bossart,
	Jeff Kirsher, davem, netdev, linux-rdma, nhorman, sassmann,
	Fred Oh, lee.jones

On Tue, Jun 30, 2020 at 10:24:04AM -0700, Ranjani Sridharan wrote:
> On Tue, 2020-06-30 at 08:32 -0300, Jason Gunthorpe wrote:
> > On Tue, Jun 30, 2020 at 11:31:41AM +0100, Mark Brown wrote:
> > > On Mon, Jun 29, 2020 at 07:59:59PM -0300, Jason Gunthorpe wrote:
> > > > On Mon, Jun 29, 2020 at 09:33:17PM +0100, Mark Brown wrote:
> > > > > On Wed, May 27, 2020 at 09:17:33AM +0200, Greg KH wrote:
> > > > > > Ok, that's good to hear.  But platform devices should never
> > > > > > be showing
> > > > > > up as a child of a PCI device.  In the "near future" when we
> > > > > > get the
> > > > > > virtual bus code merged, we can convert any existing users
> > > > > > like this to
> > > > > > the new code.
> > > > > What are we supposed to do with things like PCI attached FPGAs
> > > > > and ASICs
> > > > > in that case?  They can have host visible devices with physical
> > > > > resources like MMIO ranges and interrupts without those being
> > > > > split up
> > > > > neatly as PCI subfunctions - the original use case for MFD was
> > > > > such
> > > > > ASICs, there's a few PCI drivers in there now. 
> > > > Greg has been pretty clear that MFD shouldn't have been used on
> > > > top of
> > > > PCI drivers.
> > > 
> > > The proposed bus lacks resource handling, an equivalent of
> > > platform_get_resource() and friends for example, which would be
> > > needed
> > > for use with physical devices.  Both that and the name suggest that
> > > it's
> > > for virtual devices.
> > 
> > Resource handling is only useful if the HW has a hard distinction
> > between its functional blocks. This scheme is intended for devices
> > where that doesn't exist. The driver that attaches to the PCI device
> > and creates the virtual devices is supposed to provide SW
> > abstractions
> > for the other drivers to sit on.
> >  
> > I'm not sure why we are calling it virtual bus.
> Hi Jason,
> 
> We're addressing the naming in the next version as well. We've had
> several people reject the name virtual bus and we've narrowed in on
> "ancillary bus" for the new name suggesting that we have the core
> device that is attached to the primary bus and one or more sub-devices
> that are attached to the ancillary bus. Please let us know what you
> think of it.

I'm thinking that the primary person who keeps asking you to create this
"virtual bus" was not upset about that name, nor consulted, so why are
you changing this?  :(

Right now this feels like the old technique of "keep throwing crap at a
maintainer until they get so sick of it that they do the work
themselves..."

greg k-h


* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-06-30 17:27                                 ` Jason Gunthorpe
@ 2020-07-01  9:50                                   ` Mark Brown
  2020-07-01 23:32                                     ` Jason Gunthorpe
  0 siblings, 1 reply; 69+ messages in thread
From: Mark Brown @ 2020-07-01  9:50 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Ranjani Sridharan, Greg KH, Takashi Iwai, Pierre-Louis Bossart,
	Jeff Kirsher, davem, netdev, linux-rdma, nhorman, sassmann,
	Fred Oh, lee.jones

On Tue, Jun 30, 2020 at 02:27:10PM -0300, Jason Gunthorpe wrote:

> I wonder if SW_MFD might be more apt though?  Based on Mark's remarks,
> current MFD is 'hw' MFD, where the created platform_devices expect an
> MMIO pass-through, while this is an MFD with a device-specific SW
> interfacing layer.

Another part of this is that there's not a clean cut over between MMIO
and not using any hardware resources at all - for example a device might
be connected over I2C but use resources to distribute interrupts to
subdevices.

> MFD really is the best name for this kind of functionality;
> understanding how it is different from the current MFD might also help
> justify why it exists and give it a name.

Right, it's not clear what we're doing here.


* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-07-01  9:50                                   ` Mark Brown
@ 2020-07-01 23:32                                     ` Jason Gunthorpe
  2020-07-02 11:15                                       ` Mark Brown
  0 siblings, 1 reply; 69+ messages in thread
From: Jason Gunthorpe @ 2020-07-01 23:32 UTC (permalink / raw)
  To: Mark Brown
  Cc: Ranjani Sridharan, Greg KH, Takashi Iwai, Pierre-Louis Bossart,
	Jeff Kirsher, davem, netdev, linux-rdma, nhorman, sassmann,
	Fred Oh, lee.jones

On Wed, Jul 01, 2020 at 10:50:49AM +0100, Mark Brown wrote:
> On Tue, Jun 30, 2020 at 02:27:10PM -0300, Jason Gunthorpe wrote:
> 
> > I wonder if SW_MFD might be more apt though?  Based on Mark's remarks,
> > current MFD is 'hw' MFD, where the created platform_devices expect an
> > MMIO pass-through, while this is an MFD with a device-specific SW
> > interfacing layer.
> 
> Another part of this is that there's not a clean cut over between MMIO
> and not using any hardware resources at all - for example a device might
> be connected over I2C but use resources to distribute interrupts to
> subdevices.

How does the subdevice do anything if it only received an interrupt?

That sounds rather more like virtual bus's use case..

Jason


* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-07-01 23:32                                     ` Jason Gunthorpe
@ 2020-07-02 11:15                                       ` Mark Brown
  2020-07-02 12:11                                         ` Jason Gunthorpe
  0 siblings, 1 reply; 69+ messages in thread
From: Mark Brown @ 2020-07-02 11:15 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Ranjani Sridharan, Greg KH, Takashi Iwai, Pierre-Louis Bossart,
	Jeff Kirsher, davem, netdev, linux-rdma, nhorman, sassmann,
	Fred Oh, lee.jones

On Wed, Jul 01, 2020 at 08:32:50PM -0300, Jason Gunthorpe wrote:
> On Wed, Jul 01, 2020 at 10:50:49AM +0100, Mark Brown wrote:

> > Another part of this is that there's not a clean cut over between MMIO
> > and not using any hardware resources at all - for example a device might
> > be connected over I2C but use resources to distribute interrupts to
> > subdevices.

> How does the subdevice do anything if it only received an interrupt?

Via some bus that isn't memory mapped like I2C or SPI.

> That sounds rather more like virtual bus's use case..

These are very much physical devices often with distinct IPs in distinct
address ranges and so on, it's just that those addresses happen not to
be on buses it is sensible to memory map.
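
A sketch of how such a subdevice typically reaches its registers;
dev_get_regmap() is the real API, the driver itself is hypothetical.
The parent registers a regmap over I2C/SPI and the child borrows it, so
only the interrupt travels through the resource mechanism.

	#include <linux/platform_device.h>
	#include <linux/regmap.h>

	static int codec_irq_probe(struct platform_device *pdev)
	{
		/* Borrow the register map the parent built over I2C/SPI */
		struct regmap *map = dev_get_regmap(pdev->dev.parent, NULL);
		int irq = platform_get_irq(pdev, 0); /* from the parent */
		unsigned int status;

		if (!map || irq < 0)
			return -EINVAL;

		/*
		 * Reads go over the serial bus underneath, not MMIO;
		 * 0x10 is an arbitrary illustrative register offset.
		 */
		return regmap_read(map, 0x10, &status);
	}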


* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-07-02 11:15                                       ` Mark Brown
@ 2020-07-02 12:11                                         ` Jason Gunthorpe
  2020-07-02 12:20                                           ` Mark Brown
  0 siblings, 1 reply; 69+ messages in thread
From: Jason Gunthorpe @ 2020-07-02 12:11 UTC (permalink / raw)
  To: Mark Brown
  Cc: Ranjani Sridharan, Greg KH, Takashi Iwai, Pierre-Louis Bossart,
	Jeff Kirsher, davem, netdev, linux-rdma, nhorman, sassmann,
	Fred Oh, lee.jones

On Thu, Jul 02, 2020 at 12:15:22PM +0100, Mark Brown wrote:
> On Wed, Jul 01, 2020 at 08:32:50PM -0300, Jason Gunthorpe wrote:
> > On Wed, Jul 01, 2020 at 10:50:49AM +0100, Mark Brown wrote:
> 
> > > Another part of this is that there's not a clean cut over between MMIO
> > > and not using any hardware resources at all - for example a device might
> > > be connected over I2C but use resources to distribute interrupts to
> > > subdevices.
> 
> > How does the subdevice do anything if it only received an interrupt?
> 
> Via some bus that isn't memory mapped like I2C or SPI.
> 
> > That sounds rather more like virtual bus's use case..
> 
> These are very much physical devices often with distinct IPs in distinct
> address ranges and so on, it's just that those addresses happen not to
> be on buses it is sensible to memory map.

But platform bus is all about memory mapping, so how does the
subdevice learn the address range and properly share the underlying
transport?

Jason



* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-07-02 12:11                                         ` Jason Gunthorpe
@ 2020-07-02 12:20                                           ` Mark Brown
  0 siblings, 0 replies; 69+ messages in thread
From: Mark Brown @ 2020-07-02 12:20 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Ranjani Sridharan, Greg KH, Takashi Iwai, Pierre-Louis Bossart,
	Jeff Kirsher, davem, netdev, linux-rdma, nhorman, sassmann,
	Fred Oh, lee.jones

On Thu, Jul 02, 2020 at 09:11:47AM -0300, Jason Gunthorpe wrote:
> On Thu, Jul 02, 2020 at 12:15:22PM +0100, Mark Brown wrote:

> > These are very much physical devices often with distinct IPs in distinct
> > address ranges and so on, it's just that those addresses happen not to
> > be on buses it is sensible to memory map.

> But platform bus is all about memory mapping, so how does the
> subdevice learn the address range and properly share the underlying
> transport?

Hard coding, some out of band mechanism, or using an unparented
register region (the resource side of the code is fine; it's not going
to actually look at the registers described).
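
For illustration, a sketch of that last option with hypothetical names:
the cell carries an unparented MEM resource describing the block's
window in the device's own register map, purely as information for the
child; nothing ioremaps it.

	#include <linux/mfd/core.h>

	static const struct resource pmic_rtc_res[] = {
		/* RTC block at 0x40 in the PMIC's map, not CPU-mapped */
		DEFINE_RES_MEM(0x40, 0x20),
	};

	static const struct mfd_cell pmic_cells[] = {
		{
			.name = "pmic-rtc",
			.resources = pmic_rtc_res,
			.num_resources = ARRAY_SIZE(pmic_rtc_res),
		},
	};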


* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-07-01  6:59                                 ` Greg KH
@ 2020-07-02 13:43                                   ` Ranjani Sridharan
  2020-07-06 23:02                                     ` Dan Williams
  0 siblings, 1 reply; 69+ messages in thread
From: Ranjani Sridharan @ 2020-07-02 13:43 UTC (permalink / raw)
  To: Greg KH
  Cc: Jason Gunthorpe, Mark Brown, Takashi Iwai, Pierre-Louis Bossart,
	Jeff Kirsher, davem, netdev, linux-rdma, nhorman, sassmann,
	Fred Oh, lee.jones, Dan J Williams

On Wed, 2020-07-01 at 08:59 +0200, Greg KH wrote:
> On Tue, Jun 30, 2020 at 10:24:04AM -0700, Ranjani Sridharan wrote:
> > On Tue, 2020-06-30 at 08:32 -0300, Jason Gunthorpe wrote:
> > > On Tue, Jun 30, 2020 at 11:31:41AM +0100, Mark Brown wrote:
> > > > On Mon, Jun 29, 2020 at 07:59:59PM -0300, Jason Gunthorpe
> > > > wrote:
> > > > > On Mon, Jun 29, 2020 at 09:33:17PM +0100, Mark Brown wrote:
> > > > > > On Wed, May 27, 2020 at 09:17:33AM +0200, Greg KH wrote:
> > > > > > > Ok, that's good to hear.  But platform devices should
> > > > > > > never
> > > > > > > be showing
> > > > > > > up as a child of a PCI device.  In the "near future" when
> > > > > > > we
> > > > > > > get the
> > > > > > > virtual bus code merged, we can convert any existing
> > > > > > > users
> > > > > > > like this to
> > > > > > > the new code.
> > > > > > What are we supposed to do with things like PCI attached
> > > > > > FPGAs
> > > > > > and ASICs
> > > > > > in that case?  They can have host visible devices with
> > > > > > physical
> > > > > > resources like MMIO ranges and interrupts without those
> > > > > > being
> > > > > > split up
> > > > > > neatly as PCI subfunctions - the original use case for MFD
> > > > > > was
> > > > > > such
> > > > > > ASICs, there's a few PCI drivers in there now. 
> > > > > Greg has been pretty clear that MFD shouldn't have been used
> > > > > on
> > > > > top of
> > > > > PCI drivers.
> > > > 
> > > > The proposed bus lacks resource handling, an equivalent of
> > > > platform_get_resource() and friends for example, which would be
> > > > needed
> > > > for use with physical devices.  Both that and the name suggest
> > > > that
> > > > it's
> > > > for virtual devices.
> > > 
> > > Resource handling is only useful if the HW has a hard distinction
> > > between its functional blocks. This scheme is intended for
> > > devices
> > > where that doesn't exist. The driver that attaches to the PCI
> > > device
> > > and creates the virtual devices is supposed to provide SW
> > > abstractions
> > > for the other drivers to sit on.
> > >  
> > > I'm not sure why we are calling it virtual bus.
> > Hi Jason,
> > 
> > We're addressing the naming in the next version as well. We've had
> > several people reject the name virtual bus and we've narrowed in on
> > "ancillary bus" for the new name suggesting that we have the core
> > device that is attached to the primary bus and one or more sub-
> > devices
> > that are attached to the ancillary bus. Please let us know what you
> > think of it.
> 
> I'm thinking that the primary person who keeps asking you to create
> this
> "virtual bus" was not upset about that name, nor consulted, so why
> are
> you changing this?  :(
> 
> Right now this feels like the old technique of "keep throwing crap at
> a
> maintainer until they get so sick of it that they do the work
> themselves..."

Hi Greg,

It wasn't our intention to frustrate you with the name change, but in
the last exchange you had specifically asked for Signed-off-by tags
from other Intel developers. In that process, one piece of recent
feedback from some of them was that the name is misleading and
confusing.

If you feel strongly about keeping the name "virtual bus", please let
us know and we can circle back with them again.

Thanks,
Ranjani



* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-07-02 13:43                                   ` Ranjani Sridharan
@ 2020-07-06 23:02                                     ` Dan Williams
  2020-07-07 14:16                                       ` Greg KH
  0 siblings, 1 reply; 69+ messages in thread
From: Dan Williams @ 2020-07-06 23:02 UTC (permalink / raw)
  To: Ranjani Sridharan
  Cc: Greg KH, Jason Gunthorpe, Mark Brown, Takashi Iwai,
	Pierre-Louis Bossart, Jeff Kirsher, David Miller, Netdev,
	linux-rdma, nhorman, sassmann, Fred Oh, lee.jones

On Thu, Jul 2, 2020 at 6:44 AM Ranjani Sridharan
<ranjani.sridharan@linux.intel.com> wrote:
[..]
> > > Hi Jason,
> > >
> > > We're addressing the naming in the next version as well. We've had
> > > several people reject the name virtual bus and we've narrowed in on
> > > "ancillary bus" for the new name suggesting that we have the core
> > > device that is attached to the primary bus and one or more sub-
> > > devices
> > > that are attached to the ancillary bus. Please let us know what you
> > > think of it.
> >
> > I'm thinking that the primary person who keeps asking you to create
> > this
> > "virtual bus" was not upset about that name, nor consulted, so why
> > are
> > you changing this?  :(
> >
> > Right now this feels like the old technique of "keep throwing crap at
> > a
> > maintainer until they get so sick of it that they do the work
> > themselves..."
>
> Hi Greg,
>
> It wasn't our intention to frustrate you with the name change, but in
> the last exchange you had specifically asked for Signed-off-by tags
> from other Intel developers. In that process, one piece of recent
> feedback from some of them was that the name is misleading and
> confusing.
>
> If you feel strongly about keeping the name "virtual bus", please let
> us know and we can circle back with them again.

Hey Greg,

Feel free to blame me for the naming thrash; it was part of my internal
review feedback trying to crispen the definition of this facility. I
was expecting the next revision to come with the internal Reviewed-by
tags and an explanation of all the items that were changed during that
review.

Ranjani, is the next rev ready to go out with the review items
identified? Let's just proceed with the current direction of the
review tags that Greg asked for, name changes and all, and iterate the
next details on the list with the new patches in hand.


* Re: [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client
  2020-07-06 23:02                                     ` Dan Williams
@ 2020-07-07 14:16                                       ` Greg KH
  0 siblings, 0 replies; 69+ messages in thread
From: Greg KH @ 2020-07-07 14:16 UTC (permalink / raw)
  To: Dan Williams
  Cc: Ranjani Sridharan, Jason Gunthorpe, Mark Brown, Takashi Iwai,
	Pierre-Louis Bossart, Jeff Kirsher, David Miller, Netdev,
	linux-rdma, nhorman, sassmann, Fred Oh, lee.jones

On Mon, Jul 06, 2020 at 04:02:57PM -0700, Dan Williams wrote:
> On Thu, Jul 2, 2020 at 6:44 AM Ranjani Sridharan
> <ranjani.sridharan@linux.intel.com> wrote:
> [..]
> > > > Hi Jason,
> > > >
> > > > We're addressing the naming in the next version as well. We've had
> > > > several people reject the name virtual bus and we've narrowed in on
> > > > "ancillary bus" for the new name suggesting that we have the core
> > > > device that is attached to the primary bus and one or more sub-
> > > > devices
> > > > that are attached to the ancillary bus. Please let us know what you
> > > > think of it.
> > >
> > > I'm thinking that the primary person who keeps asking you to create
> > > this
> > > "virtual bus" was not upset about that name, nor consulted, so why
> > > are
> > > you changing this?  :(
> > >
> > > Right now this feels like the old technique of "keep throwing crap at
> > > a
> > > maintainer until they get so sick of it that they do the work
> > > themselves..."
> >
> > Hi Greg,
> >
> > It wasn't our intention to frustrate you with the name change, but in
> > the last exchange you had specifically asked for Signed-off-by tags
> > from other Intel developers. In that process, one piece of recent
> > feedback from some of them was that the name is misleading and
> > confusing.
> >
> > If you feel strongly about keeping the name "virtual bus", please let
> > us know and we can circle back with them again.
> 
> Hey Greg,
> 
> Feel free to blame me for the naming thrash; it was part of my internal
> review feedback trying to crispen the definition of this facility. I
> was expecting the next revision to come with the internal Reviewed-by
> tags and an explanation of all the items that were changed during that
> review.

That would have been nice to see, instead of it "leaking" like this :(

thanks,

greg k-h


Thread overview: 69+ messages
2020-05-20  7:02 [net-next v4 00/12][pull request] 100GbE Intel Wired LAN Driver Updates 2020-05-19 Jeff Kirsher
2020-05-20  7:02 ` [net-next v4 01/12] Implementation of Virtual Bus Jeff Kirsher
2020-05-21 14:57   ` Parav Pandit
2020-05-21 17:43     ` gregkh
2020-05-21 20:10       ` Jason Gunthorpe
2020-05-20  7:02 ` [net-next v4 02/12] ice: Create and register virtual bus for RDMA Jeff Kirsher
2020-05-20  7:02 ` [net-next v4 03/12] ice: Complete RDMA peer registration Jeff Kirsher
2020-05-20  7:02 ` [net-next v4 04/12] ice: Support resource allocation requests Jeff Kirsher
2020-05-20  7:02 ` [net-next v4 05/12] ice: Enable event notifications Jeff Kirsher
2020-05-20  7:02 ` [net-next v4 06/12] ice: Allow reset operations Jeff Kirsher
2020-05-20  7:02 ` [net-next v4 07/12] ice: Pass through communications to VF Jeff Kirsher
2020-05-20  7:02 ` [net-next v4 08/12] i40e: Move client header location Jeff Kirsher
2020-05-20  7:02 ` [net-next v4 09/12] i40e: Register a virtbus device to provide RDMA Jeff Kirsher
2020-05-20  7:02 ` [net-next v4 10/12] ASoC: SOF: Introduce descriptors for SOF client Jeff Kirsher
2020-05-20  7:20   ` Greg KH
2020-05-20 12:54   ` Jason Gunthorpe
2020-05-20 12:57     ` Jason Gunthorpe
2020-05-21 21:11     ` Ranjani Sridharan
2020-05-21 23:34       ` Jason Gunthorpe
2020-05-22 14:29         ` Pierre-Louis Bossart
2020-05-22 14:55           ` Jason Gunthorpe
2020-05-22 15:33             ` Pierre-Louis Bossart
2020-05-22 17:10               ` Jason Gunthorpe
2020-05-22 18:35                 ` Pierre-Louis Bossart
2020-05-22 18:40                   ` Jason Gunthorpe
2020-05-22 18:48                     ` Pierre-Louis Bossart
2020-05-22 19:44                       ` Jason Gunthorpe
2020-05-22 21:05                         ` Pierre-Louis Bossart
2020-06-29 20:59               ` Mark Brown
2020-05-23  6:23           ` Greg KH
2020-05-23 19:41             ` Pierre-Louis Bossart
2020-05-24  6:35               ` Greg KH
2020-05-26 13:15                 ` Pierre-Louis Bossart
2020-05-26 13:37                   ` Takashi Iwai
2020-05-27  7:17                     ` Greg KH
2020-05-27 14:05                       ` Pierre-Louis Bossart
2020-06-29 20:33                       ` Mark Brown
2020-06-29 22:59                         ` Jason Gunthorpe
2020-06-29 23:13                           ` Kirsher, Jeffrey T
2020-06-30 10:31                           ` Mark Brown
2020-06-30 11:32                             ` Jason Gunthorpe
2020-06-30 14:16                               ` Mark Brown
2020-06-30 17:24                               ` Ranjani Sridharan
2020-06-30 17:27                                 ` Jason Gunthorpe
2020-07-01  9:50                                   ` Mark Brown
2020-07-01 23:32                                     ` Jason Gunthorpe
2020-07-02 11:15                                       ` Mark Brown
2020-07-02 12:11                                         ` Jason Gunthorpe
2020-07-02 12:20                                           ` Mark Brown
2020-07-01  6:59                                 ` Greg KH
2020-07-02 13:43                                   ` Ranjani Sridharan
2020-07-06 23:02                                     ` Dan Williams
2020-07-07 14:16                                       ` Greg KH
2020-05-25 16:55               ` Jason Gunthorpe
2020-06-29 20:21             ` Mark Brown
2020-06-29 17:36   ` Mark Brown
2020-05-20  7:02 ` [net-next v4 11/12] ASoC: SOF: Create client driver for IPC test Jeff Kirsher
2020-05-20  7:22   ` Greg KH
2020-05-20 12:56   ` Jason Gunthorpe
2020-05-27 20:18     ` Ranjani Sridharan
2020-05-28  0:12       ` Jason Gunthorpe
2020-05-28  1:40         ` Ranjani Sridharan
2020-05-28 10:45           ` Greg KH
2020-06-29 20:37             ` Mark Brown
2020-05-20  7:02 ` [net-next v4 12/12] ASoC: SOF: ops: Add new op for client registration Jeff Kirsher
2020-05-20  7:23   ` Greg KH
2020-05-20  7:17 ` [net-next v4 00/12][pull request] 100GbE Intel Wired LAN Driver Updates 2020-05-19 Greg KH
2020-05-20  7:25   ` Kirsher, Jeffrey T
2020-05-20  9:08     ` Greg KH
