* [PATCH v2 00/10] thunderbolt: Add DMA traffic test driver
From: Mika Westerberg @ 2020-11-10  9:19 UTC (permalink / raw)
  To: linux-usb
  Cc: Michael Jamet, Yehezkel Bernat, Andreas Noever, Isaac Hazan,
	Lukas Wunner, David S . Miller, Greg Kroah-Hartman,
	Mika Westerberg, netdev

Hi all,

This series adds a new Thunderbolt service driver that can be used on the
manufacturing floor to test that each Thunderbolt/USB4 port is functional.
The test can be run either using a special loopback dongle that has the RX
and TX lanes crossed, or by connecting a cable back to the host (for those
who don't have such dongles).

This takes advantage of the existing XDomain protocol and creates XDomain
devices for the loops back to the host, to which the DMA traffic test
driver can then bind.

The DMA traffic test driver creates a tunnel through the fabric and then
sends and receives data frames over the tunnel, checking for different
errors.
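
For illustration (not part of the original posting): a driver like this
binds to a tb_service using the standard service driver registration from
include/linux/thunderbolt.h. The skeleton below is only a sketch; the
protocol key "dma_test" and the callback names are assumptions, not taken
from the actual dma_test.c.

```c
/*
 * Illustrative Thunderbolt service driver skeleton. The protocol key
 * "dma_test" and the function names are assumptions for this sketch.
 */
#include <linux/module.h>
#include <linux/thunderbolt.h>

static int dma_test_probe(struct tb_service *svc, const struct tb_service_id *id)
{
	struct tb_xdomain *xd = tb_service_parent(svc);

	/* Set up NHI rings and enable the DMA paths towards @xd here */
	dev_dbg(&svc->dev, "bound to XDomain %s\n", dev_name(&xd->dev));
	return 0;
}

static void dma_test_remove(struct tb_service *svc)
{
	/* Tear down the rings and disable the DMA paths */
}

static const struct tb_service_id dma_test_ids[] = {
	{ .match_flags = TBSVC_MATCH_PROTOCOL_KEY, .protocol_key = "dma_test" },
	{ },
};
MODULE_DEVICE_TABLE(tbsvc, dma_test_ids);

static struct tb_service_driver dma_test_driver = {
	.driver = {
		.owner = THIS_MODULE,
		.name = "thunderbolt_dma_test",
	},
	.probe = dma_test_probe,
	.remove = dma_test_remove,
	.id_table = dma_test_ids,
};

static int __init dma_test_init(void)
{
	return tb_register_service_driver(&dma_test_driver);
}
module_init(dma_test_init);

static void __exit dma_test_exit(void)
{
	tb_unregister_service_driver(&dma_test_driver);
}
module_exit(dma_test_exit);

MODULE_LICENSE("GPL");
```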

The previous version can be found here:

  https://lore.kernel.org/linux-usb/20201104140030.6853-1-mika.westerberg@linux.intel.com/

Changes from the previous version:

  * Fix resource leak in tb_xdp_handle_request() (patch 2/10)
  * Use debugfs_remove_recursive() in tb_service_debugfs_remove() (patch 6/10)
  * Add tags from Yehezkel

Isaac Hazan (4):
  thunderbolt: Add link_speed and link_width to XDomain
  thunderbolt: Add functions for enabling and disabling lane bonding on XDomain
  thunderbolt: Add DMA traffic test driver
  MAINTAINERS: Add Isaac as maintainer of Thunderbolt DMA traffic test driver

Mika Westerberg (6):
  thunderbolt: Do not clear USB4 router protocol adapter IFC and ISE bits
  thunderbolt: Find XDomain by route instead of UUID
  thunderbolt: Create XDomain devices for loops back to the host
  thunderbolt: Create debugfs directory automatically for services
  thunderbolt: Make it possible to allocate one directional DMA tunnel
  thunderbolt: Add support for end-to-end flow control

 .../ABI/testing/sysfs-bus-thunderbolt         |  28 +
 MAINTAINERS                                   |   6 +
 drivers/net/thunderbolt.c                     |   2 +-
 drivers/thunderbolt/Kconfig                   |  13 +
 drivers/thunderbolt/Makefile                  |   3 +
 drivers/thunderbolt/ctl.c                     |   4 +-
 drivers/thunderbolt/debugfs.c                 |  24 +
 drivers/thunderbolt/dma_test.c                | 736 ++++++++++++++++++
 drivers/thunderbolt/nhi.c                     |  36 +-
 drivers/thunderbolt/path.c                    |  13 +-
 drivers/thunderbolt/switch.c                  |  33 +-
 drivers/thunderbolt/tb.h                      |   8 +
 drivers/thunderbolt/tunnel.c                  |  50 +-
 drivers/thunderbolt/xdomain.c                 | 148 +++-
 include/linux/thunderbolt.h                   |  18 +-
 15 files changed, 1080 insertions(+), 42 deletions(-)
 create mode 100644 drivers/thunderbolt/dma_test.c

-- 
2.28.0



* [PATCH v2 01/10] thunderbolt: Do not clear USB4 router protocol adapter IFC and ISE bits
From: Mika Westerberg @ 2020-11-10  9:19 UTC (permalink / raw)

These fields are marked as vendor defined in the USB4 spec and should
not be modified by the software, so only clear them when we are dealing
with pre-USB4 hardware.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Acked-by: Yehezkel Bernat <YehezkelShB@gmail.com>
---
 drivers/thunderbolt/path.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/thunderbolt/path.c b/drivers/thunderbolt/path.c
index 03e7b714deab..7c2c45d9ba4a 100644
--- a/drivers/thunderbolt/path.c
+++ b/drivers/thunderbolt/path.c
@@ -406,10 +406,17 @@ static int __tb_path_deactivate_hop(struct tb_port *port, int hop_index,
 
 		if (!hop.pending) {
 			if (clear_fc) {
-				/* Clear flow control */
-				hop.ingress_fc = 0;
+				/*
+				 * Clear flow control. Protocol adapters
+				 * IFC and ISE bits are vendor defined
+				 * in the USB4 spec so we clear them
+				 * only for pre-USB4 adapters.
+				 */
+				if (!tb_switch_is_usb4(port->sw)) {
+					hop.ingress_fc = 0;
+					hop.ingress_shared_buffer = 0;
+				}
 				hop.egress_fc = 0;
-				hop.ingress_shared_buffer = 0;
 				hop.egress_shared_buffer = 0;
 
 				return tb_port_write(port, &hop, TB_CFG_HOPS,
-- 
2.28.0



* [PATCH v2 02/10] thunderbolt: Find XDomain by route instead of UUID
From: Mika Westerberg @ 2020-11-10  9:19 UTC (permalink / raw)

We are going to represent loops back to the host as XDomains as well.
Since they all share the same (host) UUID, finding them needs to use the
route string instead. This also requires that we check whether the XDomain
device has been added to the bus before its properties can be updated;
otherwise the remote UUID might not be populated yet.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Acked-by: Yehezkel Bernat <YehezkelShB@gmail.com>
---
 drivers/thunderbolt/xdomain.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/thunderbolt/xdomain.c b/drivers/thunderbolt/xdomain.c
index 48907853732a..e436e9efa7e7 100644
--- a/drivers/thunderbolt/xdomain.c
+++ b/drivers/thunderbolt/xdomain.c
@@ -587,8 +587,6 @@ static void tb_xdp_handle_request(struct work_struct *work)
 		break;
 
 	case PROPERTIES_CHANGED_REQUEST: {
-		const struct tb_xdp_properties_changed *xchg =
-			(const struct tb_xdp_properties_changed *)pkg;
 		struct tb_xdomain *xd;
 
 		ret = tb_xdp_properties_changed_response(ctl, route, sequence);
@@ -598,10 +596,12 @@ static void tb_xdp_handle_request(struct work_struct *work)
 		 * the xdomain related to this connection as well in
 		 * case there is a change in services it offers.
 		 */
-		xd = tb_xdomain_find_by_uuid_locked(tb, &xchg->src_uuid);
+		xd = tb_xdomain_find_by_route_locked(tb, route);
 		if (xd) {
-			queue_delayed_work(tb->wq, &xd->get_properties_work,
-					   msecs_to_jiffies(50));
+			if (device_is_registered(&xd->dev)) {
+				queue_delayed_work(tb->wq, &xd->get_properties_work,
+						   msecs_to_jiffies(50));
+			}
 			tb_xdomain_put(xd);
 		}
 
-- 
2.28.0



* [PATCH v2 03/10] thunderbolt: Create XDomain devices for loops back to the host
From: Mika Westerberg @ 2020-11-10  9:19 UTC (permalink / raw)

It is perfectly possible to have loops back from the routers to the
host, or even from one host port to another. Instead of ignoring these,
we create XDomain devices for each. This allows creating services such
as the DMA traffic test that is used in manufacturing, for example.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Acked-by: Yehezkel Bernat <YehezkelShB@gmail.com>
---
 drivers/thunderbolt/xdomain.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/thunderbolt/xdomain.c b/drivers/thunderbolt/xdomain.c
index e436e9efa7e7..da229ac4e471 100644
--- a/drivers/thunderbolt/xdomain.c
+++ b/drivers/thunderbolt/xdomain.c
@@ -961,10 +961,8 @@ static void tb_xdomain_get_uuid(struct work_struct *work)
 		return;
 	}
 
-	if (uuid_equal(&uuid, xd->local_uuid)) {
+	if (uuid_equal(&uuid, xd->local_uuid))
 		dev_dbg(&xd->dev, "intra-domain loop detected\n");
-		return;
-	}
 
 	/*
 	 * If the UUID is different, there is another domain connected
-- 
2.28.0



* [PATCH v2 04/10] thunderbolt: Add link_speed and link_width to XDomain
From: Mika Westerberg @ 2020-11-10  9:19 UTC (permalink / raw)

From: Isaac Hazan <isaac.hazan@intel.com>

Link speed and link width are needed for checking expected values when
using a loopback service.
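
As an aside not in the original mail: from user space the new attributes
are plain text files under sysfs, so checking the expected values on the
factory floor can be as simple as the sketch below. The attribute path is
supplied by the caller, since the actual XDomain device name depends on
the topology.

```c
/*
 * Sketch: read one of the new XDomain attributes (e.g. rx_speed or
 * rx_lanes) from sysfs. The caller passes the full attribute path,
 * e.g. "/sys/bus/thunderbolt/devices/<xdomain>/rx_speed".
 */
#include <stdio.h>
#include <string.h>

/* Read a single-line sysfs attribute into buf, stripping the newline */
static int read_attr(const char *path, char *buf, size_t len)
{
	FILE *f = fopen(path, "r");

	if (!f)
		return -1;
	if (!fgets(buf, len, f)) {
		fclose(f);
		return -1;
	}
	buf[strcspn(buf, "\n")] = '\0';
	fclose(f);
	return 0;
}
```

A test tool would call read_attr() for rx_speed, tx_speed, rx_lanes and
tx_lanes and compare the strings against the values expected for a bonded
Gen3 link (e.g. "20.0 Gb/s" and "2").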

Signed-off-by: Isaac Hazan <isaac.hazan@intel.com>
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Acked-by: Yehezkel Bernat <YehezkelShB@gmail.com>
---
 .../ABI/testing/sysfs-bus-thunderbolt         | 28 ++++++++
 drivers/thunderbolt/switch.c                  |  9 ++-
 drivers/thunderbolt/tb.h                      |  1 +
 drivers/thunderbolt/xdomain.c                 | 65 +++++++++++++++++++
 include/linux/thunderbolt.h                   |  4 ++
 5 files changed, 106 insertions(+), 1 deletion(-)

diff --git a/Documentation/ABI/testing/sysfs-bus-thunderbolt b/Documentation/ABI/testing/sysfs-bus-thunderbolt
index 0b4ab9e4b8f4..a91b4b24496e 100644
--- a/Documentation/ABI/testing/sysfs-bus-thunderbolt
+++ b/Documentation/ABI/testing/sysfs-bus-thunderbolt
@@ -1,3 +1,31 @@
+What:		/sys/bus/thunderbolt/devices/<xdomain>/rx_speed
+Date:		Feb 2021
+KernelVersion:	5.11
+Contact:	Isaac Hazan <isaac.hazan@intel.com>
+Description:	This attribute reports the XDomain RX speed per lane.
+		All RX lanes run at the same speed.
+
+What:		/sys/bus/thunderbolt/devices/<xdomain>/rx_lanes
+Date:		Feb 2021
+KernelVersion:	5.11
+Contact:	Isaac Hazan <isaac.hazan@intel.com>
+Description:	This attribute reports the number of RX lanes the XDomain
+		is using simultaneously through its upstream port.
+
+What:		/sys/bus/thunderbolt/devices/<xdomain>/tx_speed
+Date:		Feb 2021
+KernelVersion:	5.11
+Contact:	Isaac Hazan <isaac.hazan@intel.com>
+Description:	This attribute reports the XDomain TX speed per lane.
+		All TX lanes run at the same speed.
+
+What:		/sys/bus/thunderbolt/devices/<xdomain>/tx_lanes
+Date:		Feb 2021
+KernelVersion:	5.11
+Contact:	Isaac Hazan <isaac.hazan@intel.com>
+Description:	This attribute reports the number of TX lanes the XDomain
+		is using simultaneously through its upstream port.
+
 What: /sys/bus/thunderbolt/devices/.../domainX/boot_acl
 Date:		Jun 2018
 KernelVersion:	4.17
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index c73bbfe69ba1..05a360901790 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -932,7 +932,14 @@ int tb_port_get_link_speed(struct tb_port *port)
 	return speed == LANE_ADP_CS_1_CURRENT_SPEED_GEN3 ? 20 : 10;
 }
 
-static int tb_port_get_link_width(struct tb_port *port)
+/**
+ * tb_port_get_link_width() - Get current link width
+ * @port: Port to check (USB4 or CIO)
+ *
+ * Returns link width. Return values can be 1 (Single-Lane), 2 (Dual-Lane)
+ * or negative errno in case of failure.
+ */
+int tb_port_get_link_width(struct tb_port *port)
 {
 	u32 val;
 	int ret;
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index a9995e21b916..3a826315049e 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -862,6 +862,7 @@ struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end,
 	     (p) = tb_next_port_on_path((src), (dst), (p)))
 
 int tb_port_get_link_speed(struct tb_port *port);
+int tb_port_get_link_width(struct tb_port *port);
 
 int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec);
 int tb_switch_find_cap(struct tb_switch *sw, enum tb_switch_cap cap);
diff --git a/drivers/thunderbolt/xdomain.c b/drivers/thunderbolt/xdomain.c
index da229ac4e471..83a315f96934 100644
--- a/drivers/thunderbolt/xdomain.c
+++ b/drivers/thunderbolt/xdomain.c
@@ -942,6 +942,43 @@ static void tb_xdomain_restore_paths(struct tb_xdomain *xd)
 	}
 }
 
+static inline struct tb_switch *tb_xdomain_parent(struct tb_xdomain *xd)
+{
+	return tb_to_switch(xd->dev.parent);
+}
+
+static int tb_xdomain_update_link_attributes(struct tb_xdomain *xd)
+{
+	bool change = false;
+	struct tb_port *port;
+	int ret;
+
+	port = tb_port_at(xd->route, tb_xdomain_parent(xd));
+
+	ret = tb_port_get_link_speed(port);
+	if (ret < 0)
+		return ret;
+
+	if (xd->link_speed != ret)
+		change = true;
+
+	xd->link_speed = ret;
+
+	ret = tb_port_get_link_width(port);
+	if (ret < 0)
+		return ret;
+
+	if (xd->link_width != ret)
+		change = true;
+
+	xd->link_width = ret;
+
+	if (change)
+		kobject_uevent(&xd->dev.kobj, KOBJ_CHANGE);
+
+	return 0;
+}
+
 static void tb_xdomain_get_uuid(struct work_struct *work)
 {
 	struct tb_xdomain *xd = container_of(work, typeof(*xd),
@@ -1053,6 +1090,8 @@ static void tb_xdomain_get_properties(struct work_struct *work)
 	xd->properties = dir;
 	xd->property_block_gen = gen;
 
+	tb_xdomain_update_link_attributes(xd);
+
 	tb_xdomain_restore_paths(xd);
 
 	mutex_unlock(&xd->lock);
@@ -1159,9 +1198,35 @@ static ssize_t unique_id_show(struct device *dev, struct device_attribute *attr,
 }
 static DEVICE_ATTR_RO(unique_id);
 
+static ssize_t speed_show(struct device *dev, struct device_attribute *attr,
+			  char *buf)
+{
+	struct tb_xdomain *xd = container_of(dev, struct tb_xdomain, dev);
+
+	return sprintf(buf, "%u.0 Gb/s\n", xd->link_speed);
+}
+
+static DEVICE_ATTR(rx_speed, 0444, speed_show, NULL);
+static DEVICE_ATTR(tx_speed, 0444, speed_show, NULL);
+
+static ssize_t lanes_show(struct device *dev, struct device_attribute *attr,
+			  char *buf)
+{
+	struct tb_xdomain *xd = container_of(dev, struct tb_xdomain, dev);
+
+	return sprintf(buf, "%u\n", xd->link_width);
+}
+
+static DEVICE_ATTR(rx_lanes, 0444, lanes_show, NULL);
+static DEVICE_ATTR(tx_lanes, 0444, lanes_show, NULL);
+
 static struct attribute *xdomain_attrs[] = {
 	&dev_attr_device.attr,
 	&dev_attr_device_name.attr,
+	&dev_attr_rx_lanes.attr,
+	&dev_attr_rx_speed.attr,
+	&dev_attr_tx_lanes.attr,
+	&dev_attr_tx_speed.attr,
 	&dev_attr_unique_id.attr,
 	&dev_attr_vendor.attr,
 	&dev_attr_vendor_name.attr,
diff --git a/include/linux/thunderbolt.h b/include/linux/thunderbolt.h
index 5db2b11ab085..e441af88ed77 100644
--- a/include/linux/thunderbolt.h
+++ b/include/linux/thunderbolt.h
@@ -179,6 +179,8 @@ void tb_unregister_property_dir(const char *key, struct tb_property_dir *dir);
  * @lock: Lock to serialize access to the following fields of this structure
  * @vendor_name: Name of the vendor (or %NULL if not known)
  * @device_name: Name of the device (or %NULL if not known)
+ * @link_speed: Speed of the link in Gb/s
+ * @link_width: Width of the link (1 or 2)
  * @is_unplugged: The XDomain is unplugged
  * @resume: The XDomain is being resumed
  * @needs_uuid: If the XDomain does not have @remote_uuid it will be
@@ -223,6 +225,8 @@ struct tb_xdomain {
 	struct mutex lock;
 	const char *vendor_name;
 	const char *device_name;
+	unsigned int link_speed;
+	unsigned int link_width;
 	bool is_unplugged;
 	bool resume;
 	bool needs_uuid;
-- 
2.28.0



* [PATCH v2 05/10] thunderbolt: Add functions for enabling and disabling lane bonding on XDomain
From: Mika Westerberg @ 2020-11-10  9:19 UTC (permalink / raw)

From: Isaac Hazan <isaac.hazan@intel.com>

These can be used by service drivers to enable and disable lane bonding
as needed.
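
Not from the original mail, a hedged sketch of how a service driver might
use the new pair of functions (the helper name is hypothetical and error
handling is shortened):

```c
/*
 * Illustrative use in a service driver; @xd would typically be obtained
 * via tb_service_parent(svc).
 */
static int run_bonded_test(struct tb_xdomain *xd)
{
	int ret;

	/* Try to bond the lanes; fall back to a single lane on failure */
	ret = tb_xdomain_lane_bonding_enable(xd);
	if (ret)
		dev_warn(&xd->dev, "lane bonding not available: %d\n", ret);

	/* ... run the DMA traffic test over the (possibly bonded) link ... */

	/* Undo bonding only if we enabled it above */
	if (!ret)
		tb_xdomain_lane_bonding_disable(xd);
	return 0;
}
```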

Signed-off-by: Isaac Hazan <isaac.hazan@intel.com>
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Acked-by: Yehezkel Bernat <YehezkelShB@gmail.com>
---
 drivers/thunderbolt/switch.c  | 24 +++++++++++--
 drivers/thunderbolt/tb.h      |  3 ++
 drivers/thunderbolt/xdomain.c | 66 +++++++++++++++++++++++++++++++++++
 include/linux/thunderbolt.h   |  2 ++
 4 files changed, 92 insertions(+), 3 deletions(-)

diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 05a360901790..cdfd8cccfe19 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -503,12 +503,13 @@ static void tb_dump_port(struct tb *tb, struct tb_regs_port_header *port)
 
 /**
  * tb_port_state() - get connectedness state of a port
+ * @port: the port to check
  *
  * The port must have a TB_CAP_PHY (i.e. it should be a real port).
  *
  * Return: Returns an enum tb_port_state on success or an error code on failure.
  */
-static int tb_port_state(struct tb_port *port)
+int tb_port_state(struct tb_port *port)
 {
 	struct tb_cap_phy phy;
 	int res;
@@ -1008,7 +1009,16 @@ static int tb_port_set_link_width(struct tb_port *port, unsigned int width)
 			     port->cap_phy + LANE_ADP_CS_1, 1);
 }
 
-static int tb_port_lane_bonding_enable(struct tb_port *port)
+/**
+ * tb_port_lane_bonding_enable() - Enable bonding on port
+ * @port: port to enable
+ *
+ * Enable bonding by setting the link width of the port and the
+ * other port in case of dual link port.
+ *
+ * Return: %0 in case of success and negative errno in case of error
+ */
+int tb_port_lane_bonding_enable(struct tb_port *port)
 {
 	int ret;
 
@@ -1038,7 +1048,15 @@ static int tb_port_lane_bonding_enable(struct tb_port *port)
 	return 0;
 }
 
-static void tb_port_lane_bonding_disable(struct tb_port *port)
+/**
+ * tb_port_lane_bonding_disable() - Disable bonding on port
+ * @port: port to disable
+ *
+ * Disable bonding by setting the link width of the port and the
+ * other port in case of dual link port.
+ *
+ */
+void tb_port_lane_bonding_disable(struct tb_port *port)
 {
 	port->dual_link_port->bonded = false;
 	port->bonded = false;
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 3a826315049e..e98d3561648d 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -863,6 +863,9 @@ struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end,
 
 int tb_port_get_link_speed(struct tb_port *port);
 int tb_port_get_link_width(struct tb_port *port);
+int tb_port_state(struct tb_port *port);
+int tb_port_lane_bonding_enable(struct tb_port *port);
+void tb_port_lane_bonding_disable(struct tb_port *port);
 
 int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec);
 int tb_switch_find_cap(struct tb_switch *sw, enum tb_switch_cap cap);
diff --git a/drivers/thunderbolt/xdomain.c b/drivers/thunderbolt/xdomain.c
index 83a315f96934..65108216bfe3 100644
--- a/drivers/thunderbolt/xdomain.c
+++ b/drivers/thunderbolt/xdomain.c
@@ -8,6 +8,7 @@
  */
 
 #include <linux/device.h>
+#include <linux/delay.h>
 #include <linux/kmod.h>
 #include <linux/module.h>
 #include <linux/pm_runtime.h>
@@ -21,6 +22,7 @@
 #define XDOMAIN_UUID_RETRIES			10
 #define XDOMAIN_PROPERTIES_RETRIES		60
 #define XDOMAIN_PROPERTIES_CHANGED_RETRIES	10
+#define XDOMAIN_BONDING_WAIT			100  /* ms */
 
 struct xdomain_request_work {
 	struct work_struct work;
@@ -1443,6 +1445,70 @@ void tb_xdomain_remove(struct tb_xdomain *xd)
 		device_unregister(&xd->dev);
 }
 
+/**
+ * tb_xdomain_lane_bonding_enable() - Enable lane bonding on XDomain
+ * @xd: XDomain connection
+ *
+ * Lane bonding is disabled by default for XDomains. This function tries
+ * to enable bonding by first enabling the port and waiting for the CL0
+ * state.
+ *
+ * Return: %0 in case of success and negative errno in case of error.
+ */
+int tb_xdomain_lane_bonding_enable(struct tb_xdomain *xd)
+{
+	struct tb_port *port;
+	int ret;
+
+	port = tb_port_at(xd->route, tb_xdomain_parent(xd));
+	if (!port->dual_link_port)
+		return -ENODEV;
+
+	ret = tb_port_enable(port->dual_link_port);
+	if (ret)
+		return ret;
+
+	ret = tb_wait_for_port(port->dual_link_port, true);
+	if (ret < 0)
+		return ret;
+	if (!ret)
+		return -ENOTCONN;
+
+	ret = tb_port_lane_bonding_enable(port);
+	if (ret) {
+		tb_port_warn(port, "failed to enable lane bonding\n");
+		return ret;
+	}
+
+	tb_xdomain_update_link_attributes(xd);
+
+	dev_dbg(&xd->dev, "lane bonding enabled\n");
+	return 0;
+}
+EXPORT_SYMBOL_GPL(tb_xdomain_lane_bonding_enable);
+
+/**
+ * tb_xdomain_lane_bonding_disable() - Disable lane bonding
+ * @xd: XDomain connection
+ *
+ * Lane bonding is disabled by default for XDomains. If bonding has been
+ * enabled, this function can be used to disable it.
+ */
+void tb_xdomain_lane_bonding_disable(struct tb_xdomain *xd)
+{
+	struct tb_port *port;
+
+	port = tb_port_at(xd->route, tb_xdomain_parent(xd));
+	if (port->dual_link_port) {
+		tb_port_lane_bonding_disable(port);
+		tb_port_disable(port->dual_link_port);
+		tb_xdomain_update_link_attributes(xd);
+
+		dev_dbg(&xd->dev, "lane bonding disabled\n");
+	}
+}
+EXPORT_SYMBOL_GPL(tb_xdomain_lane_bonding_disable);
+
 /**
  * tb_xdomain_enable_paths() - Enable DMA paths for XDomain connection
  * @xd: XDomain connection
diff --git a/include/linux/thunderbolt.h b/include/linux/thunderbolt.h
index e441af88ed77..0a747f92847e 100644
--- a/include/linux/thunderbolt.h
+++ b/include/linux/thunderbolt.h
@@ -247,6 +247,8 @@ struct tb_xdomain {
 	u8 depth;
 };
 
+int tb_xdomain_lane_bonding_enable(struct tb_xdomain *xd);
+void tb_xdomain_lane_bonding_disable(struct tb_xdomain *xd);
 int tb_xdomain_enable_paths(struct tb_xdomain *xd, u16 transmit_path,
 			    u16 transmit_ring, u16 receive_path,
 			    u16 receive_ring);
-- 
2.28.0



* [PATCH v2 06/10] thunderbolt: Create debugfs directory automatically for services
From: Mika Westerberg @ 2020-11-10  9:19 UTC (permalink / raw)

This allows service drivers to use it as the parent directory if they need
to add their own debugfs entries.
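
For illustration (not part of the original posting), a service driver
could hang its own entry under the automatically created directory like
the sketch below; the "status" attribute name and its contents are
hypothetical.

```c
#include <linux/debugfs.h>
#include <linux/seq_file.h>
#include <linux/thunderbolt.h>

static int dma_test_status_show(struct seq_file *s, void *not_used)
{
	seq_puts(s, "idle\n");	/* placeholder status */
	return 0;
}
DEFINE_SHOW_ATTRIBUTE(dma_test_status);

/* Called from the service driver's probe; uses svc->debugfs_dir as parent */
static void dma_test_debugfs_init(struct tb_service *svc)
{
	debugfs_create_file("status", 0400, svc->debugfs_dir, svc,
			    &dma_test_status_fops);
}
```

The directory itself is removed by tb_service_debugfs_remove() when the
service goes away, so the driver only needs to create its files.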

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Acked-by: Yehezkel Bernat <YehezkelShB@gmail.com>
---
 drivers/thunderbolt/debugfs.c | 24 ++++++++++++++++++++++++
 drivers/thunderbolt/tb.h      |  4 ++++
 drivers/thunderbolt/xdomain.c |  3 +++
 include/linux/thunderbolt.h   |  4 ++++
 4 files changed, 35 insertions(+)

diff --git a/drivers/thunderbolt/debugfs.c b/drivers/thunderbolt/debugfs.c
index 3680b2784ea1..e53ca8270acd 100644
--- a/drivers/thunderbolt/debugfs.c
+++ b/drivers/thunderbolt/debugfs.c
@@ -690,6 +690,30 @@ void tb_switch_debugfs_remove(struct tb_switch *sw)
 	debugfs_remove_recursive(sw->debugfs_dir);
 }
 
+/**
+ * tb_service_debugfs_init() - Add debugfs directory for service
+ * @svc: Thunderbolt service pointer
+ *
+ * Adds debugfs directory for service.
+ */
+void tb_service_debugfs_init(struct tb_service *svc)
+{
+	svc->debugfs_dir = debugfs_create_dir(dev_name(&svc->dev),
+					      tb_debugfs_root);
+}
+
+/**
+ * tb_service_debugfs_remove() - Remove service debugfs directory
+ * @svc: Thunderbolt service pointer
+ *
+ * Removes the previously created debugfs directory for @svc.
+ */
+void tb_service_debugfs_remove(struct tb_service *svc)
+{
+	debugfs_remove_recursive(svc->debugfs_dir);
+	svc->debugfs_dir = NULL;
+}
+
 void tb_debugfs_init(void)
 {
 	tb_debugfs_root = debugfs_create_dir("thunderbolt", NULL);
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index e98d3561648d..a21000649009 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -1027,11 +1027,15 @@ void tb_debugfs_init(void);
 void tb_debugfs_exit(void);
 void tb_switch_debugfs_init(struct tb_switch *sw);
 void tb_switch_debugfs_remove(struct tb_switch *sw);
+void tb_service_debugfs_init(struct tb_service *svc);
+void tb_service_debugfs_remove(struct tb_service *svc);
 #else
 static inline void tb_debugfs_init(void) { }
 static inline void tb_debugfs_exit(void) { }
 static inline void tb_switch_debugfs_init(struct tb_switch *sw) { }
 static inline void tb_switch_debugfs_remove(struct tb_switch *sw) { }
+static inline void tb_service_debugfs_init(struct tb_service *svc) { }
+static inline void tb_service_debugfs_remove(struct tb_service *svc) { }
 #endif
 
 #ifdef CONFIG_USB4_KUNIT_TEST
diff --git a/drivers/thunderbolt/xdomain.c b/drivers/thunderbolt/xdomain.c
index 65108216bfe3..1a0491b461fd 100644
--- a/drivers/thunderbolt/xdomain.c
+++ b/drivers/thunderbolt/xdomain.c
@@ -779,6 +779,7 @@ static void tb_service_release(struct device *dev)
 	struct tb_service *svc = container_of(dev, struct tb_service, dev);
 	struct tb_xdomain *xd = tb_service_parent(svc);
 
+	tb_service_debugfs_remove(svc);
 	ida_simple_remove(&xd->service_ids, svc->id);
 	kfree(svc->key);
 	kfree(svc);
@@ -892,6 +893,8 @@ static void enumerate_services(struct tb_xdomain *xd)
 		svc->dev.parent = &xd->dev;
 		dev_set_name(&svc->dev, "%s.%d", dev_name(&xd->dev), svc->id);
 
+		tb_service_debugfs_init(svc);
+
 		if (device_register(&svc->dev)) {
 			put_device(&svc->dev);
 			break;
diff --git a/include/linux/thunderbolt.h b/include/linux/thunderbolt.h
index 0a747f92847e..a844fd5d96ab 100644
--- a/include/linux/thunderbolt.h
+++ b/include/linux/thunderbolt.h
@@ -350,6 +350,9 @@ void tb_unregister_protocol_handler(struct tb_protocol_handler *handler);
  * @prtcvers: Protocol version from the properties directory
  * @prtcrevs: Protocol software revision from the properties directory
  * @prtcstns: Protocol settings mask from the properties directory
+ * @debugfs_dir: Pointer to the service debugfs directory. Always created
+ *		 when debugfs is enabled. Can be used by service drivers to
+ *		 add their own entries under the service.
  *
  * Each domain exposes set of services it supports as collection of
  * properties. For each service there will be one corresponding
@@ -363,6 +366,7 @@ struct tb_service {
 	u32 prtcvers;
 	u32 prtcrevs;
 	u32 prtcstns;
+	struct dentry *debugfs_dir;
 };
 
 static inline struct tb_service *tb_service_get(struct tb_service *svc)
-- 
2.28.0



* [PATCH v2 07/10] thunderbolt: Make it possible to allocate one directional DMA tunnel
From: Mika Westerberg @ 2020-11-10  9:19 UTC (permalink / raw)

With DMA tunnels it is possible that the service using them does not
require bi-directional paths, so make RX and TX optional (but of course
at least one of them needs to be set).
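
Not in the original mail: with this change a caller wanting only an RX
path could, for example, do something like the following (argument order
as in the kerneldoc of tb_tunnel_alloc_dma; variable names are
illustrative):

```c
/* RX-only DMA tunnel: pass 0 for the TX ring and HopID */
tunnel = tb_tunnel_alloc_dma(tb, nhi, dst, 0, 0,
			     receive_ring, receive_path);
if (!tunnel)
	return -ENOMEM;
```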

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Acked-by: Yehezkel Bernat <YehezkelShB@gmail.com>
---
 drivers/thunderbolt/tunnel.c | 50 ++++++++++++++++++++++--------------
 1 file changed, 31 insertions(+), 19 deletions(-)

diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c
index 829b6ccdd5d4..dcdf9c7a9cae 100644
--- a/drivers/thunderbolt/tunnel.c
+++ b/drivers/thunderbolt/tunnel.c
@@ -34,9 +34,6 @@
 #define TB_DP_AUX_PATH_OUT		1
 #define TB_DP_AUX_PATH_IN		2
 
-#define TB_DMA_PATH_OUT			0
-#define TB_DMA_PATH_IN			1
-
 static const char * const tb_tunnel_names[] = { "PCI", "DP", "DMA", "USB3" };
 
 #define __TB_TUNNEL_PRINT(level, tunnel, fmt, arg...)                   \
@@ -829,10 +826,10 @@ static void tb_dma_init_path(struct tb_path *path, unsigned int isb,
  * @nhi: Host controller port
  * @dst: Destination null port which the other domain is connected to
  * @transmit_ring: NHI ring number used to send packets towards the
- *		   other domain
+ *		   other domain. Set to %0 if TX path is not needed.
  * @transmit_path: HopID used for transmitting packets
  * @receive_ring: NHI ring number used to receive packets from the
- *		  other domain
+ *		  other domain. Set to %0 if RX path is not needed.
  * @reveive_path: HopID used for receiving packets
  *
  * Return: Returns a tb_tunnel on success or NULL on failure.
@@ -843,10 +840,19 @@ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
 				      int receive_path)
 {
 	struct tb_tunnel *tunnel;
+	size_t npaths = 0, i = 0;
 	struct tb_path *path;
 	u32 credits;
 
-	tunnel = tb_tunnel_alloc(tb, 2, TB_TUNNEL_DMA);
+	if (receive_ring)
+		npaths++;
+	if (transmit_ring)
+		npaths++;
+
+	if (WARN_ON(!npaths))
+		return NULL;
+
+	tunnel = tb_tunnel_alloc(tb, npaths, TB_TUNNEL_DMA);
 	if (!tunnel)
 		return NULL;
 
@@ -856,22 +862,28 @@ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
 
 	credits = tb_dma_credits(nhi);
 
-	path = tb_path_alloc(tb, dst, receive_path, nhi, receive_ring, 0, "DMA RX");
-	if (!path) {
-		tb_tunnel_free(tunnel);
-		return NULL;
+	if (receive_ring) {
+		path = tb_path_alloc(tb, dst, receive_path, nhi, receive_ring, 0,
+				     "DMA RX");
+		if (!path) {
+			tb_tunnel_free(tunnel);
+			return NULL;
+		}
+		tb_dma_init_path(path, TB_PATH_NONE, TB_PATH_SOURCE | TB_PATH_INTERNAL,
+				 credits);
+		tunnel->paths[i++] = path;
 	}
-	tb_dma_init_path(path, TB_PATH_NONE, TB_PATH_SOURCE | TB_PATH_INTERNAL,
-			 credits);
-	tunnel->paths[TB_DMA_PATH_IN] = path;
 
-	path = tb_path_alloc(tb, nhi, transmit_ring, dst, transmit_path, 0, "DMA TX");
-	if (!path) {
-		tb_tunnel_free(tunnel);
-		return NULL;
+	if (transmit_ring) {
+		path = tb_path_alloc(tb, nhi, transmit_ring, dst, transmit_path, 0,
+				     "DMA TX");
+		if (!path) {
+			tb_tunnel_free(tunnel);
+			return NULL;
+		}
+		tb_dma_init_path(path, TB_PATH_SOURCE, TB_PATH_ALL, credits);
+		tunnel->paths[i++] = path;
 	}
-	tb_dma_init_path(path, TB_PATH_SOURCE, TB_PATH_ALL, credits);
-	tunnel->paths[TB_DMA_PATH_OUT] = path;
 
 	return tunnel;
 }
-- 
2.28.0



* [PATCH v2 08/10] thunderbolt: Add support for end-to-end flow control
  2020-11-10  9:19 [PATCH v2 00/10] thunderbolt: Add DMA traffic test driver Mika Westerberg
                   ` (6 preceding siblings ...)
  2020-11-10  9:19 ` [PATCH v2 07/10] thunderbolt: Make it possible to allocate one directional DMA tunnel Mika Westerberg
@ 2020-11-10  9:19 ` Mika Westerberg
  2020-11-10  9:19 ` [PATCH v2 09/10] thunderbolt: Add DMA traffic test driver Mika Westerberg
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 13+ messages in thread
From: Mika Westerberg @ 2020-11-10  9:19 UTC (permalink / raw)
  To: linux-usb
  Cc: Michael Jamet, Yehezkel Bernat, Andreas Noever, Isaac Hazan,
	Lukas Wunner, David S . Miller, Greg Kroah-Hartman,
	Mika Westerberg, netdev

The USB4 spec defines end-to-end (E2E) flow control that can be used
between hosts to prevent overflow of an RX ring. We previously had this
partially implemented, but that code was removed with commit 53f13319d131
("thunderbolt: Get rid of E2E workaround") with the idea that we add it
back properly if there is ever a need. Now that we are adding a DMA
traffic test driver (in subsequent patches), this becomes useful.

For this reason we modify tb_ring_alloc_rx/tx() so that they accept
RING_FLAG_E2E and configure the hardware ring accordingly. The RX side
also requires passing the TX HopID (e2e_tx_hop) used in the credit grant
packets.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Acked-by: Yehezkel Bernat <YehezkelShB@gmail.com>
---
 drivers/net/thunderbolt.c   |  2 +-
 drivers/thunderbolt/ctl.c   |  4 ++--
 drivers/thunderbolt/nhi.c   | 36 ++++++++++++++++++++++++++++++++----
 include/linux/thunderbolt.h |  8 +++++++-
 4 files changed, 42 insertions(+), 8 deletions(-)

diff --git a/drivers/net/thunderbolt.c b/drivers/net/thunderbolt.c
index 3160443ef3b9..d7b5f87eaa15 100644
--- a/drivers/net/thunderbolt.c
+++ b/drivers/net/thunderbolt.c
@@ -866,7 +866,7 @@ static int tbnet_open(struct net_device *dev)
 	eof_mask = BIT(TBIP_PDF_FRAME_END);
 
 	ring = tb_ring_alloc_rx(xd->tb->nhi, -1, TBNET_RING_SIZE,
-				RING_FLAG_FRAME, sof_mask, eof_mask,
+				RING_FLAG_FRAME, 0, sof_mask, eof_mask,
 				tbnet_start_poll, net);
 	if (!ring) {
 		netdev_err(dev, "failed to allocate Rx ring\n");
diff --git a/drivers/thunderbolt/ctl.c b/drivers/thunderbolt/ctl.c
index 9894b8f63064..1d86e27a0ef3 100644
--- a/drivers/thunderbolt/ctl.c
+++ b/drivers/thunderbolt/ctl.c
@@ -628,8 +628,8 @@ struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, event_cb cb, void *cb_data)
 	if (!ctl->tx)
 		goto err;
 
-	ctl->rx = tb_ring_alloc_rx(nhi, 0, 10, RING_FLAG_NO_SUSPEND, 0xffff,
-				0xffff, NULL, NULL);
+	ctl->rx = tb_ring_alloc_rx(nhi, 0, 10, RING_FLAG_NO_SUSPEND, 0, 0xffff,
+				   0xffff, NULL, NULL);
 	if (!ctl->rx)
 		goto err;
 
diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
index 3f79baa54829..a69bc6b49405 100644
--- a/drivers/thunderbolt/nhi.c
+++ b/drivers/thunderbolt/nhi.c
@@ -483,7 +483,7 @@ static int nhi_alloc_hop(struct tb_nhi *nhi, struct tb_ring *ring)
 
 static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
 				     bool transmit, unsigned int flags,
-				     u16 sof_mask, u16 eof_mask,
+				     int e2e_tx_hop, u16 sof_mask, u16 eof_mask,
 				     void (*start_poll)(void *),
 				     void *poll_data)
 {
@@ -506,6 +506,7 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
 	ring->is_tx = transmit;
 	ring->size = size;
 	ring->flags = flags;
+	ring->e2e_tx_hop = e2e_tx_hop;
 	ring->sof_mask = sof_mask;
 	ring->eof_mask = eof_mask;
 	ring->head = 0;
@@ -550,7 +551,7 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
 struct tb_ring *tb_ring_alloc_tx(struct tb_nhi *nhi, int hop, int size,
 				 unsigned int flags)
 {
-	return tb_ring_alloc(nhi, hop, size, true, flags, 0, 0, NULL, NULL);
+	return tb_ring_alloc(nhi, hop, size, true, flags, 0, 0, 0, NULL, NULL);
 }
 EXPORT_SYMBOL_GPL(tb_ring_alloc_tx);
 
@@ -560,6 +561,7 @@ EXPORT_SYMBOL_GPL(tb_ring_alloc_tx);
  * @hop: HopID (ring) to allocate. Pass %-1 for automatic allocation.
  * @size: Number of entries in the ring
  * @flags: Flags for the ring
+ * @e2e_tx_hop: Transmit HopID when E2E is enabled in @flags
  * @sof_mask: Mask of PDF values that start a frame
  * @eof_mask: Mask of PDF values that end a frame
  * @start_poll: If not %NULL the ring will call this function when an
@@ -568,10 +570,11 @@ EXPORT_SYMBOL_GPL(tb_ring_alloc_tx);
  * @poll_data: Optional data passed to @start_poll
  */
 struct tb_ring *tb_ring_alloc_rx(struct tb_nhi *nhi, int hop, int size,
-				 unsigned int flags, u16 sof_mask, u16 eof_mask,
+				 unsigned int flags, int e2e_tx_hop,
+				 u16 sof_mask, u16 eof_mask,
 				 void (*start_poll)(void *), void *poll_data)
 {
-	return tb_ring_alloc(nhi, hop, size, false, flags, sof_mask, eof_mask,
+	return tb_ring_alloc(nhi, hop, size, false, flags, e2e_tx_hop, sof_mask, eof_mask,
 			     start_poll, poll_data);
 }
 EXPORT_SYMBOL_GPL(tb_ring_alloc_rx);
@@ -618,6 +621,31 @@ void tb_ring_start(struct tb_ring *ring)
 		ring_iowrite32options(ring, sof_eof_mask, 4);
 		ring_iowrite32options(ring, flags, 0);
 	}
+
+	/*
+	 * Now that the ring valid bit is set we can configure E2E if
+	 * enabled for the ring.
+	 */
+	if (ring->flags & RING_FLAG_E2E) {
+		if (!ring->is_tx) {
+			u32 hop;
+
+			hop = ring->e2e_tx_hop << REG_RX_OPTIONS_E2E_HOP_SHIFT;
+			hop &= REG_RX_OPTIONS_E2E_HOP_MASK;
+			flags |= hop;
+
+			dev_dbg(&ring->nhi->pdev->dev,
+				"enabling E2E for %s %d with TX HopID %d\n",
+				RING_TYPE(ring), ring->hop, ring->e2e_tx_hop);
+		} else {
+			dev_dbg(&ring->nhi->pdev->dev, "enabling E2E for %s %d\n",
+				RING_TYPE(ring), ring->hop);
+		}
+
+		flags |= RING_FLAG_E2E_FLOW_CONTROL;
+		ring_iowrite32options(ring, flags, 0);
+	}
+
 	ring_interrupt_active(ring, true);
 	ring->running = true;
 err:
diff --git a/include/linux/thunderbolt.h b/include/linux/thunderbolt.h
index a844fd5d96ab..034dccf93955 100644
--- a/include/linux/thunderbolt.h
+++ b/include/linux/thunderbolt.h
@@ -481,6 +481,8 @@ struct tb_nhi {
  * @irq: MSI-X irq number if the ring uses MSI-X. %0 otherwise.
  * @vector: MSI-X vector number the ring uses (only set if @irq is > 0)
  * @flags: Ring specific flags
+ * @e2e_tx_hop: Transmit HopID when E2E is enabled. Only applicable to
+ *		RX ring. For TX ring this should be set to %0.
  * @sof_mask: Bit mask used to detect start of frame PDF
  * @eof_mask: Bit mask used to detect end of frame PDF
  * @start_poll: Called when ring interrupt is triggered to start
@@ -504,6 +506,7 @@ struct tb_ring {
 	int irq;
 	u8 vector;
 	unsigned int flags;
+	int e2e_tx_hop;
 	u16 sof_mask;
 	u16 eof_mask;
 	void (*start_poll)(void *data);
@@ -514,6 +517,8 @@ struct tb_ring {
 #define RING_FLAG_NO_SUSPEND	BIT(0)
 /* Configure the ring to be in frame mode */
 #define RING_FLAG_FRAME		BIT(1)
+/* Enable end-to-end flow control */
+#define RING_FLAG_E2E		BIT(2)
 
 struct ring_frame;
 typedef void (*ring_cb)(struct tb_ring *, struct ring_frame *, bool canceled);
@@ -562,7 +567,8 @@ struct ring_frame {
 struct tb_ring *tb_ring_alloc_tx(struct tb_nhi *nhi, int hop, int size,
 				 unsigned int flags);
 struct tb_ring *tb_ring_alloc_rx(struct tb_nhi *nhi, int hop, int size,
-				 unsigned int flags, u16 sof_mask, u16 eof_mask,
+				 unsigned int flags, int e2e_tx_hop,
+				 u16 sof_mask, u16 eof_mask,
 				 void (*start_poll)(void *), void *poll_data);
 void tb_ring_start(struct tb_ring *ring);
 void tb_ring_stop(struct tb_ring *ring);
-- 
2.28.0



* [PATCH v2 09/10] thunderbolt: Add DMA traffic test driver
  2020-11-10  9:19 [PATCH v2 00/10] thunderbolt: Add DMA traffic test driver Mika Westerberg
                   ` (7 preceding siblings ...)
  2020-11-10  9:19 ` [PATCH v2 08/10] thunderbolt: Add support for end-to-end flow control Mika Westerberg
@ 2020-11-10  9:19 ` Mika Westerberg
  2020-11-10  9:19 ` [PATCH v2 10/10] MAINTAINERS: Add Isaac as maintainer of Thunderbolt " Mika Westerberg
  2020-11-10  9:30 ` [PATCH v2 00/10] thunderbolt: Add " Greg Kroah-Hartman
  10 siblings, 0 replies; 13+ messages in thread
From: Mika Westerberg @ 2020-11-10  9:19 UTC (permalink / raw)
  To: linux-usb
  Cc: Michael Jamet, Yehezkel Bernat, Andreas Noever, Isaac Hazan,
	Lukas Wunner, David S . Miller, Greg Kroah-Hartman,
	Mika Westerberg, netdev

From: Isaac Hazan <isaac.hazan@intel.com>

This driver allows sending DMA traffic over an XDomain connection,
specifically over a loopback connection using either a Thunderbolt/USB4
cable that is connected back to a host router port, or a special
loopback dongle that has its RX and TX lines crossed. This can be useful
on the manufacturing floor to check whether Thunderbolt/USB4 ports are
functional.

The driver exposes a debugfs directory under the XDomain service that
can be used to configure the driver, start the test, and check the results.

If a loopback dongle is used, sending and receiving 1000 packets looks
like this:

  # modprobe thunderbolt_dma_test
  # echo 1000 > /sys/kernel/debug/thunderbolt/<service_id>/dma_test/packets_to_receive
  # echo 1000 > /sys/kernel/debug/thunderbolt/<service_id>/dma_test/packets_to_send
  # echo 1 > /sys/kernel/debug/thunderbolt/<service_id>/dma_test/test
  # cat /sys/kernel/debug/thunderbolt/<service_id>/dma_test/status

When a cable is connected back to the host, there are two Thunderbolt
services; one is configured for receiving (it does not matter which one):

  # modprobe thunderbolt_dma_test
  # echo 1000 > /sys/kernel/debug/thunderbolt/<service_a>/dma_test/packets_to_receive
  # echo 1 > /sys/kernel/debug/thunderbolt/<service_a>/dma_test/test

The other one is configured for sending:

  # echo 1000 > /sys/kernel/debug/thunderbolt/<service_b>/dma_test/packets_to_send
  # echo 1 > /sys/kernel/debug/thunderbolt/<service_b>/dma_test/test

Results can be read from both services' status attributes.
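The debugfs steps above can be wrapped in a small helper. This is an illustrative sketch, not part of the driver: the base directory is a parameter (so the helper can be exercised against a plain directory instead of a live debugfs mount), and the attribute names match the commit message.

```shell
#!/bin/sh
# Configure the dma_test attributes under the given service directory.
# $1 = dma_test debugfs directory for the service
# $2 = packets to send (0 leaves the TX direction disabled)
# $3 = packets to receive (0 leaves the RX direction disabled)
dma_test_configure() {
	dir="$1"
	if [ "$2" -gt 0 ]; then
		echo "$2" > "$dir/packets_to_send"
	fi
	if [ "$3" -gt 0 ]; then
		echo "$3" > "$dir/packets_to_receive"
	fi
}

# Start the test and print the result.
dma_test_run() {
	dir="$1"
	echo 1 > "$dir/test"
	cat "$dir/status"
}
```

For a loopback dongle this would be called once with equal send/receive counts, e.g. `dma_test_configure "$d" 1000 1000; dma_test_run "$d"`; for a cable looped back to the host, once per service with only one direction set.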

Signed-off-by: Isaac Hazan <isaac.hazan@intel.com>
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Acked-by: Yehezkel Bernat <YehezkelShB@gmail.com>
---
 drivers/thunderbolt/Kconfig    |  13 +
 drivers/thunderbolt/Makefile   |   3 +
 drivers/thunderbolt/dma_test.c | 736 +++++++++++++++++++++++++++++++++
 3 files changed, 752 insertions(+)
 create mode 100644 drivers/thunderbolt/dma_test.c

diff --git a/drivers/thunderbolt/Kconfig b/drivers/thunderbolt/Kconfig
index 7fc058f81d00..4bfec8a28064 100644
--- a/drivers/thunderbolt/Kconfig
+++ b/drivers/thunderbolt/Kconfig
@@ -31,4 +31,17 @@ config USB4_KUNIT_TEST
 	bool "KUnit tests"
 	depends on KUNIT=y
 
+config USB4_DMA_TEST
+	tristate "DMA traffic test driver"
+	depends on DEBUG_FS
+	help
+	  This allows sending and receiving DMA traffic through loopback
+	  connection. Loopback connection can be done by either special
+	  dongle that has TX/RX lines crossed, or by simply connecting a
+	  cable back to the host. Only enable this if you know what you
+	  are doing. Normal users and distro kernels should say N here.
+
+	  To compile this driver as a module, choose M here. The module will be
+	  called thunderbolt_dma_test.
+
 endif # USB4
diff --git a/drivers/thunderbolt/Makefile b/drivers/thunderbolt/Makefile
index 571537371072..7aa48f6c41d9 100644
--- a/drivers/thunderbolt/Makefile
+++ b/drivers/thunderbolt/Makefile
@@ -7,3 +7,6 @@ thunderbolt-objs += nvm.o retimer.o quirks.o
 thunderbolt-${CONFIG_ACPI} += acpi.o
 thunderbolt-$(CONFIG_DEBUG_FS) += debugfs.o
 thunderbolt-${CONFIG_USB4_KUNIT_TEST} += test.o
+
+thunderbolt_dma_test-${CONFIG_USB4_DMA_TEST} += dma_test.o
+obj-$(CONFIG_USB4_DMA_TEST) += thunderbolt_dma_test.o
diff --git a/drivers/thunderbolt/dma_test.c b/drivers/thunderbolt/dma_test.c
new file mode 100644
index 000000000000..f924423fa180
--- /dev/null
+++ b/drivers/thunderbolt/dma_test.c
@@ -0,0 +1,736 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DMA traffic test driver
+ *
+ * Copyright (C) 2020, Intel Corporation
+ * Authors: Isaac Hazan <isaac.hazan@intel.com>
+ *	    Mika Westerberg <mika.westerberg@linux.intel.com>
+ */
+
+#include <linux/acpi.h>
+#include <linux/completion.h>
+#include <linux/debugfs.h>
+#include <linux/module.h>
+#include <linux/sizes.h>
+#include <linux/thunderbolt.h>
+
+#define DMA_TEST_HOPID			8
+#define DMA_TEST_TX_RING_SIZE		64
+#define DMA_TEST_RX_RING_SIZE		256
+#define DMA_TEST_FRAME_SIZE		SZ_4K
+#define DMA_TEST_DATA_PATTERN		0x0123456789abcdefLL
+#define DMA_TEST_MAX_PACKETS		1000
+
+enum dma_test_frame_pdf {
+	DMA_TEST_PDF_FRAME_START = 1,
+	DMA_TEST_PDF_FRAME_END,
+};
+
+struct dma_test_frame {
+	struct dma_test *dma_test;
+	void *data;
+	struct ring_frame frame;
+};
+
+enum dma_test_test_error {
+	DMA_TEST_NO_ERROR,
+	DMA_TEST_INTERRUPTED,
+	DMA_TEST_BUFFER_ERROR,
+	DMA_TEST_DMA_ERROR,
+	DMA_TEST_CONFIG_ERROR,
+	DMA_TEST_SPEED_ERROR,
+	DMA_TEST_WIDTH_ERROR,
+	DMA_TEST_BONDING_ERROR,
+	DMA_TEST_PACKET_ERROR,
+};
+
+static const char * const dma_test_error_names[] = {
+	[DMA_TEST_NO_ERROR] = "no errors",
+	[DMA_TEST_INTERRUPTED] = "interrupted by signal",
+	[DMA_TEST_BUFFER_ERROR] = "no memory for packet buffers",
+	[DMA_TEST_DMA_ERROR] = "DMA ring setup failed",
+	[DMA_TEST_CONFIG_ERROR] = "configuration is not valid",
+	[DMA_TEST_SPEED_ERROR] = "unexpected link speed",
+	[DMA_TEST_WIDTH_ERROR] = "unexpected link width",
+	[DMA_TEST_BONDING_ERROR] = "lane bonding configuration error",
+	[DMA_TEST_PACKET_ERROR] = "packet check failed",
+};
+
+enum dma_test_result {
+	DMA_TEST_NOT_RUN,
+	DMA_TEST_SUCCESS,
+	DMA_TEST_FAIL,
+};
+
+static const char * const dma_test_result_names[] = {
+	[DMA_TEST_NOT_RUN] = "not run",
+	[DMA_TEST_SUCCESS] = "success",
+	[DMA_TEST_FAIL] = "failed",
+};
+
+/**
+ * struct dma_test - DMA test device driver private data
+ * @svc: XDomain service the driver is bound to
+ * @xd: XDomain the service belongs to
+ * @rx_ring: Software ring holding RX frames
+ * @tx_ring: Software ring holding TX frames
+ * @packets_to_send: Number of packets to send
+ * @packets_to_receive: Number of packets to receive
+ * @packets_sent: Actual number of packets sent
+ * @packets_received: Actual number of packets received
+ * @link_speed: Expected link speed (Gb/s), %0 to use whatever is negotiated
+ * @link_width: Expected link width (lanes), %0 to use whatever is negotiated
+ * @crc_errors: Number of CRC errors during the test run
+ * @buffer_overflow_errors: Number of buffer overflow errors during the test
+ *			    run
+ * @result: Result of the last run
+ * @error_code: Error code of the last run
+ * @complete: Used to wait for the Rx to complete
+ * @lock: Lock serializing access to this structure
+ * @debugfs_dir: dentry of this dma_test
+ */
+struct dma_test {
+	const struct tb_service *svc;
+	struct tb_xdomain *xd;
+	struct tb_ring *rx_ring;
+	struct tb_ring *tx_ring;
+	unsigned int packets_to_send;
+	unsigned int packets_to_receive;
+	unsigned int packets_sent;
+	unsigned int packets_received;
+	unsigned int link_speed;
+	unsigned int link_width;
+	unsigned int crc_errors;
+	unsigned int buffer_overflow_errors;
+	enum dma_test_result result;
+	enum dma_test_test_error error_code;
+	struct completion complete;
+	struct mutex lock;
+	struct dentry *debugfs_dir;
+};
+
+/* DMA test property directory UUID: 3188cd10-6523-4a5a-a682-fdca07a248d8 */
+static const uuid_t dma_test_dir_uuid =
+	UUID_INIT(0x3188cd10, 0x6523, 0x4a5a,
+		  0xa6, 0x82, 0xfd, 0xca, 0x07, 0xa2, 0x48, 0xd8);
+
+static struct tb_property_dir *dma_test_dir;
+static void *dma_test_pattern;
+
+static void dma_test_free_rings(struct dma_test *dt)
+{
+	if (dt->rx_ring) {
+		tb_ring_free(dt->rx_ring);
+		dt->rx_ring = NULL;
+	}
+	if (dt->tx_ring) {
+		tb_ring_free(dt->tx_ring);
+		dt->tx_ring = NULL;
+	}
+}
+
+static int dma_test_start_rings(struct dma_test *dt)
+{
+	unsigned int flags = RING_FLAG_FRAME;
+	struct tb_xdomain *xd = dt->xd;
+	int ret, e2e_tx_hop = 0;
+	struct tb_ring *ring;
+
+	/*
+	 * If we are both sender and receiver (traffic goes over a
+	 * special loopback dongle) enable E2E flow control. This avoids
+	 * losing packets.
+	 */
+	if (dt->packets_to_send && dt->packets_to_receive)
+		flags |= RING_FLAG_E2E;
+
+	if (dt->packets_to_send) {
+		ring = tb_ring_alloc_tx(xd->tb->nhi, -1, DMA_TEST_TX_RING_SIZE,
+					flags);
+		if (!ring)
+			return -ENOMEM;
+
+		dt->tx_ring = ring;
+		e2e_tx_hop = ring->hop;
+	}
+
+	if (dt->packets_to_receive) {
+		u16 sof_mask, eof_mask;
+
+		sof_mask = BIT(DMA_TEST_PDF_FRAME_START);
+		eof_mask = BIT(DMA_TEST_PDF_FRAME_END);
+
+		ring = tb_ring_alloc_rx(xd->tb->nhi, -1, DMA_TEST_RX_RING_SIZE,
+					flags, e2e_tx_hop, sof_mask, eof_mask,
+					NULL, NULL);
+		if (!ring) {
+			dma_test_free_rings(dt);
+			return -ENOMEM;
+		}
+
+		dt->rx_ring = ring;
+	}
+
+	ret = tb_xdomain_enable_paths(dt->xd, DMA_TEST_HOPID,
+				      dt->tx_ring ? dt->tx_ring->hop : 0,
+				      DMA_TEST_HOPID,
+				      dt->rx_ring ? dt->rx_ring->hop : 0);
+	if (ret) {
+		dma_test_free_rings(dt);
+		return ret;
+	}
+
+	if (dt->tx_ring)
+		tb_ring_start(dt->tx_ring);
+	if (dt->rx_ring)
+		tb_ring_start(dt->rx_ring);
+
+	return 0;
+}
+
+static void dma_test_stop_rings(struct dma_test *dt)
+{
+	if (dt->rx_ring)
+		tb_ring_stop(dt->rx_ring);
+	if (dt->tx_ring)
+		tb_ring_stop(dt->tx_ring);
+
+	if (tb_xdomain_disable_paths(dt->xd))
+		dev_warn(&dt->svc->dev, "failed to disable DMA paths\n");
+
+	dma_test_free_rings(dt);
+}
+
+static void dma_test_rx_callback(struct tb_ring *ring, struct ring_frame *frame,
+				 bool canceled)
+{
+	struct dma_test_frame *tf = container_of(frame, typeof(*tf), frame);
+	struct dma_test *dt = tf->dma_test;
+	struct device *dma_dev = tb_ring_dma_device(dt->rx_ring);
+
+	dma_unmap_single(dma_dev, tf->frame.buffer_phy, DMA_TEST_FRAME_SIZE,
+			 DMA_FROM_DEVICE);
+	kfree(tf->data);
+
+	if (canceled) {
+		kfree(tf);
+		return;
+	}
+
+	dt->packets_received++;
+	dev_dbg(&dt->svc->dev, "packet %u/%u received\n", dt->packets_received,
+		dt->packets_to_receive);
+
+	if (tf->frame.flags & RING_DESC_CRC_ERROR)
+		dt->crc_errors++;
+	if (tf->frame.flags & RING_DESC_BUFFER_OVERRUN)
+		dt->buffer_overflow_errors++;
+
+	kfree(tf);
+
+	if (dt->packets_received == dt->packets_to_receive)
+		complete(&dt->complete);
+}
+
+static int dma_test_submit_rx(struct dma_test *dt, size_t npackets)
+{
+	struct device *dma_dev = tb_ring_dma_device(dt->rx_ring);
+	int i;
+
+	for (i = 0; i < npackets; i++) {
+		struct dma_test_frame *tf;
+		dma_addr_t dma_addr;
+
+		tf = kzalloc(sizeof(*tf), GFP_KERNEL);
+		if (!tf)
+			return -ENOMEM;
+
+		tf->data = kzalloc(DMA_TEST_FRAME_SIZE, GFP_KERNEL);
+		if (!tf->data) {
+			kfree(tf);
+			return -ENOMEM;
+		}
+
+		dma_addr = dma_map_single(dma_dev, tf->data, DMA_TEST_FRAME_SIZE,
+					  DMA_FROM_DEVICE);
+		if (dma_mapping_error(dma_dev, dma_addr)) {
+			kfree(tf->data);
+			kfree(tf);
+			return -ENOMEM;
+		}
+
+		tf->frame.buffer_phy = dma_addr;
+		tf->frame.callback = dma_test_rx_callback;
+		tf->dma_test = dt;
+		INIT_LIST_HEAD(&tf->frame.list);
+
+		tb_ring_rx(dt->rx_ring, &tf->frame);
+	}
+
+	return 0;
+}
+
+static void dma_test_tx_callback(struct tb_ring *ring, struct ring_frame *frame,
+				 bool canceled)
+{
+	struct dma_test_frame *tf = container_of(frame, typeof(*tf), frame);
+	struct dma_test *dt = tf->dma_test;
+	struct device *dma_dev = tb_ring_dma_device(dt->tx_ring);
+
+	dma_unmap_single(dma_dev, tf->frame.buffer_phy, DMA_TEST_FRAME_SIZE,
+			 DMA_TO_DEVICE);
+	kfree(tf->data);
+	kfree(tf);
+}
+
+static int dma_test_submit_tx(struct dma_test *dt, size_t npackets)
+{
+	struct device *dma_dev = tb_ring_dma_device(dt->tx_ring);
+	int i;
+
+	for (i = 0; i < npackets; i++) {
+		struct dma_test_frame *tf;
+		dma_addr_t dma_addr;
+
+		tf = kzalloc(sizeof(*tf), GFP_KERNEL);
+		if (!tf)
+			return -ENOMEM;
+
+		tf->frame.size = 0; /* means 4096 */
+		tf->dma_test = dt;
+
+		tf->data = kzalloc(DMA_TEST_FRAME_SIZE, GFP_KERNEL);
+		if (!tf->data) {
+			kfree(tf);
+			return -ENOMEM;
+		}
+
+		memcpy(tf->data, dma_test_pattern, DMA_TEST_FRAME_SIZE);
+
+		dma_addr = dma_map_single(dma_dev, tf->data, DMA_TEST_FRAME_SIZE,
+					  DMA_TO_DEVICE);
+		if (dma_mapping_error(dma_dev, dma_addr)) {
+			kfree(tf->data);
+			kfree(tf);
+			return -ENOMEM;
+		}
+
+		tf->frame.buffer_phy = dma_addr;
+		tf->frame.callback = dma_test_tx_callback;
+		tf->frame.sof = DMA_TEST_PDF_FRAME_START;
+		tf->frame.eof = DMA_TEST_PDF_FRAME_END;
+		INIT_LIST_HEAD(&tf->frame.list);
+
+		dt->packets_sent++;
+		dev_dbg(&dt->svc->dev, "packet %u/%u sent\n", dt->packets_sent,
+			dt->packets_to_send);
+
+		tb_ring_tx(dt->tx_ring, &tf->frame);
+	}
+
+	return 0;
+}
+
+#define DMA_TEST_DEBUGFS_ATTR(__fops, __get, __validate, __set)	\
+static int __fops ## _show(void *data, u64 *val)		\
+{								\
+	struct tb_service *svc = data;				\
+	struct dma_test *dt = tb_service_get_drvdata(svc);	\
+	int ret;						\
+								\
+	ret = mutex_lock_interruptible(&dt->lock);		\
+	if (ret)						\
+		return ret;					\
+	__get(dt, val);						\
+	mutex_unlock(&dt->lock);				\
+	return 0;						\
+}								\
+static int __fops ## _store(void *data, u64 val)		\
+{								\
+	struct tb_service *svc = data;				\
+	struct dma_test *dt = tb_service_get_drvdata(svc);	\
+	int ret;						\
+								\
+	ret = __validate(val);					\
+	if (ret)						\
+		return ret;					\
+	ret = mutex_lock_interruptible(&dt->lock);		\
+	if (ret)						\
+		return ret;					\
+	__set(dt, val);						\
+	mutex_unlock(&dt->lock);				\
+	return 0;						\
+}								\
+DEFINE_DEBUGFS_ATTRIBUTE(__fops ## _fops, __fops ## _show,	\
+			 __fops ## _store, "%llu\n")
+
+static void lanes_get(const struct dma_test *dt, u64 *val)
+{
+	*val = dt->link_width;
+}
+
+static int lanes_validate(u64 val)
+{
+	return val > 2 ? -EINVAL : 0;
+}
+
+static void lanes_set(struct dma_test *dt, u64 val)
+{
+	dt->link_width = val;
+}
+DMA_TEST_DEBUGFS_ATTR(lanes, lanes_get, lanes_validate, lanes_set);
+
+static void speed_get(const struct dma_test *dt, u64 *val)
+{
+	*val = dt->link_speed;
+}
+
+static int speed_validate(u64 val)
+{
+	switch (val) {
+	case 20:
+	case 10:
+	case 0:
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
+
+static void speed_set(struct dma_test *dt, u64 val)
+{
+	dt->link_speed = val;
+}
+DMA_TEST_DEBUGFS_ATTR(speed, speed_get, speed_validate, speed_set);
+
+static void packets_to_receive_get(const struct dma_test *dt, u64 *val)
+{
+	*val = dt->packets_to_receive;
+}
+
+static int packets_to_receive_validate(u64 val)
+{
+	return val > DMA_TEST_MAX_PACKETS ? -EINVAL : 0;
+}
+
+static void packets_to_receive_set(struct dma_test *dt, u64 val)
+{
+	dt->packets_to_receive = val;
+}
+DMA_TEST_DEBUGFS_ATTR(packets_to_receive, packets_to_receive_get,
+		      packets_to_receive_validate, packets_to_receive_set);
+
+static void packets_to_send_get(const struct dma_test *dt, u64 *val)
+{
+	*val = dt->packets_to_send;
+}
+
+static int packets_to_send_validate(u64 val)
+{
+	return val > DMA_TEST_MAX_PACKETS ? -EINVAL : 0;
+}
+
+static void packets_to_send_set(struct dma_test *dt, u64 val)
+{
+	dt->packets_to_send = val;
+}
+DMA_TEST_DEBUGFS_ATTR(packets_to_send, packets_to_send_get,
+		      packets_to_send_validate, packets_to_send_set);
+
+static int dma_test_set_bonding(struct dma_test *dt)
+{
+	switch (dt->link_width) {
+	case 2:
+		return tb_xdomain_lane_bonding_enable(dt->xd);
+	case 1:
+		tb_xdomain_lane_bonding_disable(dt->xd);
+		fallthrough;
+	default:
+		return 0;
+	}
+}
+
+static bool dma_test_validate_config(struct dma_test *dt)
+{
+	if (!dt->packets_to_send && !dt->packets_to_receive)
+		return false;
+	if (dt->packets_to_send && dt->packets_to_receive &&
+	    dt->packets_to_send != dt->packets_to_receive)
+		return false;
+	return true;
+}
+
+static void dma_test_check_errors(struct dma_test *dt, int ret)
+{
+	if (!dt->error_code) {
+		if (dt->link_speed && dt->xd->link_speed != dt->link_speed) {
+			dt->error_code = DMA_TEST_SPEED_ERROR;
+		} else if (dt->link_width &&
+			   dt->xd->link_width != dt->link_width) {
+			dt->error_code = DMA_TEST_WIDTH_ERROR;
+		} else if (dt->packets_to_send != dt->packets_sent ||
+			 dt->packets_to_receive != dt->packets_received ||
+			 dt->crc_errors || dt->buffer_overflow_errors) {
+			dt->error_code = DMA_TEST_PACKET_ERROR;
+		} else {
+			return;
+		}
+	}
+
+	dt->result = DMA_TEST_FAIL;
+}
+
+static int test_store(void *data, u64 val)
+{
+	struct tb_service *svc = data;
+	struct dma_test *dt = tb_service_get_drvdata(svc);
+	int ret;
+
+	if (val != 1)
+		return -EINVAL;
+
+	ret = mutex_lock_interruptible(&dt->lock);
+	if (ret)
+		return ret;
+
+	dt->packets_sent = 0;
+	dt->packets_received = 0;
+	dt->crc_errors = 0;
+	dt->buffer_overflow_errors = 0;
+	dt->result = DMA_TEST_SUCCESS;
+	dt->error_code = DMA_TEST_NO_ERROR;
+
+	dev_dbg(&svc->dev, "DMA test starting\n");
+	if (dt->link_speed)
+		dev_dbg(&svc->dev, "link_speed: %u Gb/s\n", dt->link_speed);
+	if (dt->link_width)
+		dev_dbg(&svc->dev, "link_width: %u\n", dt->link_width);
+	dev_dbg(&svc->dev, "packets_to_send: %u\n", dt->packets_to_send);
+	dev_dbg(&svc->dev, "packets_to_receive: %u\n", dt->packets_to_receive);
+
+	if (!dma_test_validate_config(dt)) {
+		dev_err(&svc->dev, "invalid test configuration\n");
+		dt->error_code = DMA_TEST_CONFIG_ERROR;
+		goto out_unlock;
+	}
+
+	ret = dma_test_set_bonding(dt);
+	if (ret) {
+		dev_err(&svc->dev, "failed to set lanes\n");
+		dt->error_code = DMA_TEST_BONDING_ERROR;
+		goto out_unlock;
+	}
+
+	ret = dma_test_start_rings(dt);
+	if (ret) {
+		dev_err(&svc->dev, "failed to enable DMA rings\n");
+		dt->error_code = DMA_TEST_DMA_ERROR;
+		goto out_unlock;
+	}
+
+	if (dt->packets_to_receive) {
+		reinit_completion(&dt->complete);
+		ret = dma_test_submit_rx(dt, dt->packets_to_receive);
+		if (ret) {
+			dev_err(&svc->dev, "failed to submit receive buffers\n");
+			dt->error_code = DMA_TEST_BUFFER_ERROR;
+			goto out_stop;
+		}
+	}
+
+	if (dt->packets_to_send) {
+		ret = dma_test_submit_tx(dt, dt->packets_to_send);
+		if (ret) {
+			dev_err(&svc->dev, "failed to submit transmit buffers\n");
+			dt->error_code = DMA_TEST_BUFFER_ERROR;
+			goto out_stop;
+		}
+	}
+
+	if (dt->packets_to_receive) {
+		ret = wait_for_completion_interruptible(&dt->complete);
+		if (ret) {
+			dt->error_code = DMA_TEST_INTERRUPTED;
+			goto out_stop;
+		}
+	}
+
+out_stop:
+	dma_test_stop_rings(dt);
+out_unlock:
+	dma_test_check_errors(dt, ret);
+	mutex_unlock(&dt->lock);
+
+	dev_dbg(&svc->dev, "DMA test %s\n", dma_test_result_names[dt->result]);
+	return ret;
+}
+DEFINE_DEBUGFS_ATTRIBUTE(test_fops, NULL, test_store, "%llu\n");
+
+static int status_show(struct seq_file *s, void *not_used)
+{
+	struct tb_service *svc = s->private;
+	struct dma_test *dt = tb_service_get_drvdata(svc);
+	int ret;
+
+	ret = mutex_lock_interruptible(&dt->lock);
+	if (ret)
+		return ret;
+
+	seq_printf(s, "result: %s\n", dma_test_result_names[dt->result]);
+	if (dt->result == DMA_TEST_NOT_RUN)
+		goto out_unlock;
+
+	seq_printf(s, "packets received: %u\n", dt->packets_received);
+	seq_printf(s, "packets sent: %u\n", dt->packets_sent);
+	seq_printf(s, "CRC errors: %u\n", dt->crc_errors);
+	seq_printf(s, "buffer overflow errors: %u\n",
+		   dt->buffer_overflow_errors);
+	seq_printf(s, "error: %s\n", dma_test_error_names[dt->error_code]);
+
+out_unlock:
+	mutex_unlock(&dt->lock);
+	return 0;
+}
+DEFINE_SHOW_ATTRIBUTE(status);
+
+static void dma_test_debugfs_init(struct tb_service *svc)
+{
+	struct dma_test *dt = tb_service_get_drvdata(svc);
+
+	dt->debugfs_dir = debugfs_create_dir("dma_test", svc->debugfs_dir);
+
+	debugfs_create_file("lanes", 0600, dt->debugfs_dir, svc, &lanes_fops);
+	debugfs_create_file("speed", 0600, dt->debugfs_dir, svc, &speed_fops);
+	debugfs_create_file("packets_to_receive", 0600, dt->debugfs_dir, svc,
+			    &packets_to_receive_fops);
+	debugfs_create_file("packets_to_send", 0600, dt->debugfs_dir, svc,
+			    &packets_to_send_fops);
+	debugfs_create_file("status", 0400, dt->debugfs_dir, svc, &status_fops);
+	debugfs_create_file("test", 0200, dt->debugfs_dir, svc, &test_fops);
+}
+
+static int dma_test_probe(struct tb_service *svc, const struct tb_service_id *id)
+{
+	struct tb_xdomain *xd = tb_service_parent(svc);
+	struct dma_test *dt;
+
+	dt = devm_kzalloc(&svc->dev, sizeof(*dt), GFP_KERNEL);
+	if (!dt)
+		return -ENOMEM;
+
+	dt->svc = svc;
+	dt->xd = xd;
+	mutex_init(&dt->lock);
+	init_completion(&dt->complete);
+
+	tb_service_set_drvdata(svc, dt);
+	dma_test_debugfs_init(svc);
+
+	return 0;
+}
+
+static void dma_test_remove(struct tb_service *svc)
+{
+	struct dma_test *dt = tb_service_get_drvdata(svc);
+
+	mutex_lock(&dt->lock);
+	debugfs_remove_recursive(dt->debugfs_dir);
+	mutex_unlock(&dt->lock);
+}
+
+static int __maybe_unused dma_test_suspend(struct device *dev)
+{
+	/*
+	 * No need to do anything special here. If userspace is writing
+	 * to the test attribute when suspend started, it comes out from
+	 * wait_for_completion_interruptible() with -ERESTARTSYS and the
+	 * DMA test fails tearing down the rings. Once userspace is
+	 * thawed the kernel restarts the write syscall effectively
+	 * re-running the test.
+	 */
+	return 0;
+}
+
+static int __maybe_unused dma_test_resume(struct device *dev)
+{
+	return 0;
+}
+
+static const struct dev_pm_ops dma_test_pm_ops = {
+	SET_SYSTEM_SLEEP_PM_OPS(dma_test_suspend, dma_test_resume)
+};
+
+static const struct tb_service_id dma_test_ids[] = {
+	{ TB_SERVICE("dma_test", 1) },
+	{ },
+};
+MODULE_DEVICE_TABLE(tbsvc, dma_test_ids);
+
+static struct tb_service_driver dma_test_driver = {
+	.driver = {
+		.owner = THIS_MODULE,
+		.name = "thunderbolt_dma_test",
+		.pm = &dma_test_pm_ops,
+	},
+	.probe = dma_test_probe,
+	.remove = dma_test_remove,
+	.id_table = dma_test_ids,
+};
+
+static int __init dma_test_init(void)
+{
+	u64 data_value = DMA_TEST_DATA_PATTERN;
+	int i, ret;
+
+	dma_test_pattern = kmalloc(DMA_TEST_FRAME_SIZE, GFP_KERNEL);
+	if (!dma_test_pattern)
+		return -ENOMEM;
+
+	for (i = 0; i <	DMA_TEST_FRAME_SIZE / sizeof(data_value); i++)
+		((u32 *)dma_test_pattern)[i] = data_value++;
+
+	dma_test_dir = tb_property_create_dir(&dma_test_dir_uuid);
+	if (!dma_test_dir) {
+		ret = -ENOMEM;
+		goto err_free_pattern;
+	}
+
+	tb_property_add_immediate(dma_test_dir, "prtcid", 1);
+	tb_property_add_immediate(dma_test_dir, "prtcvers", 1);
+	tb_property_add_immediate(dma_test_dir, "prtcrevs", 0);
+	tb_property_add_immediate(dma_test_dir, "prtcstns", 0);
+
+	ret = tb_register_property_dir("dma_test", dma_test_dir);
+	if (ret)
+		goto err_free_dir;
+
+	ret = tb_register_service_driver(&dma_test_driver);
+	if (ret)
+		goto err_unregister_dir;
+
+	return 0;
+
+err_unregister_dir:
+	tb_unregister_property_dir("dma_test", dma_test_dir);
+err_free_dir:
+	tb_property_free_dir(dma_test_dir);
+err_free_pattern:
+	kfree(dma_test_pattern);
+
+	return ret;
+}
+module_init(dma_test_init);
+
+static void __exit dma_test_exit(void)
+{
+	tb_unregister_service_driver(&dma_test_driver);
+	tb_unregister_property_dir("dma_test", dma_test_dir);
+	tb_property_free_dir(dma_test_dir);
+	kfree(dma_test_pattern);
+}
+module_exit(dma_test_exit);
+
+MODULE_AUTHOR("Isaac Hazan <isaac.hazan@intel.com>");
+MODULE_AUTHOR("Mika Westerberg <mika.westerberg@linux.intel.com>");
+MODULE_DESCRIPTION("DMA traffic test driver");
+MODULE_LICENSE("GPL v2");
-- 
2.28.0
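As an aside on the init path above: the pattern-fill loop in dma_test_init() can be mirrored in a few lines of Python. This is a sketch only; FRAME_SIZE and SEED below are stand-ins, not the driver's actual DMA_TEST_FRAME_SIZE and DMA_TEST_DATA_PATTERN values. Note that because the loop count is derived from sizeof(u64) while the stores are 32-bit, only the first half of the buffer receives the incrementing pattern.

```python
import struct

# Stand-in values -- the real DMA_TEST_FRAME_SIZE / DMA_TEST_DATA_PATTERN
# are driver-internal constants.
FRAME_SIZE = 4096
SEED = 0x0123456789ABCDEF

def build_pattern(frame_size=FRAME_SIZE, seed=SEED):
    """Mirror the fill loop in dma_test_init().

    The driver iterates frame_size / sizeof(u64) times (data_value is a
    u64) but stores 32-bit words, so the incrementing pattern covers only
    the first half of the buffer.
    """
    words = []
    value = seed
    for _ in range(frame_size // 8):      # sizeof(data_value) == 8
        words.append(value & 0xFFFFFFFF)  # truncated to u32 on store
        value += 1
    return struct.pack("<%uI" % len(words), *words)

pattern = build_pattern()
print(len(pattern))                                # 2048 -- half of FRAME_SIZE
print(hex(struct.unpack_from("<I", pattern)[0]))   # 0x89abcdef
```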


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v2 10/10] MAINTAINERS: Add Isaac as maintainer of Thunderbolt DMA traffic test driver
  2020-11-10  9:19 [PATCH v2 00/10] thunderbolt: Add DMA traffic test driver Mika Westerberg
                   ` (8 preceding siblings ...)
  2020-11-10  9:19 ` [PATCH v2 09/10] thunderbolt: Add DMA traffic test driver Mika Westerberg
@ 2020-11-10  9:19 ` Mika Westerberg
  2020-11-10  9:30 ` [PATCH v2 00/10] thunderbolt: Add " Greg Kroah-Hartman
  10 siblings, 0 replies; 13+ messages in thread
From: Mika Westerberg @ 2020-11-10  9:19 UTC (permalink / raw)
  To: linux-usb
  Cc: Michael Jamet, Yehezkel Bernat, Andreas Noever, Isaac Hazan,
	Lukas Wunner, David S . Miller, Greg Kroah-Hartman,
	Mika Westerberg, netdev

From: Isaac Hazan <isaac.hazan@intel.com>

I will be maintaining the Thunderbolt DMA traffic test driver.

Signed-off-by: Isaac Hazan <isaac.hazan@intel.com>
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Acked-by: Yehezkel Bernat <YehezkelShB@gmail.com>
---
 MAINTAINERS | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 3da6d8c154e4..83c4c66f8188 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -17378,6 +17378,12 @@ W:	http://thinkwiki.org/wiki/Ibm-acpi
 T:	git git://repo.or.cz/linux-2.6/linux-acpi-2.6/ibm-acpi-2.6.git
 F:	drivers/platform/x86/thinkpad_acpi.c
 
+THUNDERBOLT DMA TRAFFIC TEST DRIVER
+M:	Isaac Hazan <isaac.hazan@intel.com>
+L:	linux-usb@vger.kernel.org
+S:	Maintained
+F:	drivers/thunderbolt/dma_test.c
+
 THUNDERBOLT DRIVER
 M:	Andreas Noever <andreas.noever@gmail.com>
 M:	Michael Jamet <michael.jamet@intel.com>
-- 
2.28.0



* Re: [PATCH v2 00/10] thunderbolt: Add DMA traffic test driver
  2020-11-10  9:19 [PATCH v2 00/10] thunderbolt: Add DMA traffic test driver Mika Westerberg
                   ` (9 preceding siblings ...)
  2020-11-10  9:19 ` [PATCH v2 10/10] MAINTAINERS: Add Isaac as maintainer of Thunderbolt " Mika Westerberg
@ 2020-11-10  9:30 ` Greg Kroah-Hartman
  2020-11-11  7:23   ` Mika Westerberg
  10 siblings, 1 reply; 13+ messages in thread
From: Greg Kroah-Hartman @ 2020-11-10  9:30 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: linux-usb, Michael Jamet, Yehezkel Bernat, Andreas Noever,
	Isaac Hazan, Lukas Wunner, David S . Miller, netdev

On Tue, Nov 10, 2020 at 12:19:47PM +0300, Mika Westerberg wrote:
> Hi all,
> 
> This series adds a new Thunderbolt service driver that can be used on
> manufacturing floor to test that each Thunderbolt/USB4 port is functional.
> It can be done either using a special loopback dongle that has RX and TX
> lanes crossed, or by connecting a cable back to the host (for those who
> don't have these dongles).
> 
> This takes advantage of the existing XDomain protocol and creates XDomain
> devices for the loops back to the host where the DMA traffic test driver
> can bind to.
> 
> The DMA traffic test driver creates a tunnel through the fabric and then
> sends and receives data frames over the tunnel checking for different
> errors.
> 
> The previous version can be found here:
> 
>   https://lore.kernel.org/linux-usb/20201104140030.6853-1-mika.westerberg@linux.intel.com/
> 
> Changes from the previous version:
> 
>   * Fix resource leak in tb_xdp_handle_request() (patch 2/10)
>   * Use debugfs_remove_recursive() in tb_service_debugfs_remove() (patch 6/10)
>   * Add tags from Yehezkel

Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>


* Re: [PATCH v2 00/10] thunderbolt: Add DMA traffic test driver
  2020-11-10  9:30 ` [PATCH v2 00/10] thunderbolt: Add " Greg Kroah-Hartman
@ 2020-11-11  7:23   ` Mika Westerberg
  0 siblings, 0 replies; 13+ messages in thread
From: Mika Westerberg @ 2020-11-11  7:23 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: linux-usb, Michael Jamet, Yehezkel Bernat, Andreas Noever,
	Isaac Hazan, Lukas Wunner, David S . Miller, netdev

On Tue, Nov 10, 2020 at 10:30:45AM +0100, Greg Kroah-Hartman wrote:
> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Thanks!

Applied the series to thunderbolt.git/next.
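For anyone who wants to try the applied driver, the test is driven through the debugfs attributes the service exposes. A minimal sketch follows; the service path (`0-3`) and the attribute names (`packets_to_send`, `packets_to_receive`, `test`, `status`) are assumptions here, so check the driver's debugfs directory on your kernel before relying on them.

```shell
#!/bin/sh
# Hypothetical service path -- substitute the XDomain service on your system.
DBG=/sys/kernel/debug/thunderbolt/0-3/dma_test

if [ -d "$DBG" ]; then
    # Configure how many frames to send and receive, then start the test
    # and read back the result.
    echo 1000 > "$DBG/packets_to_send"
    echo 1000 > "$DBG/packets_to_receive"
    echo 1 > "$DBG/test"
    cat "$DBG/status"
else
    echo "dma_test service not found at $DBG; skipping"
fi
```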


Thread overview: 13+ messages
2020-11-10  9:19 [PATCH v2 00/10] thunderbolt: Add DMA traffic test driver Mika Westerberg
2020-11-10  9:19 ` [PATCH v2 01/10] thunderbolt: Do not clear USB4 router protocol adapter IFC and ISE bits Mika Westerberg
2020-11-10  9:19 ` [PATCH v2 02/10] thunderbolt: Find XDomain by route instead of UUID Mika Westerberg
2020-11-10  9:19 ` [PATCH v2 03/10] thunderbolt: Create XDomain devices for loops back to the host Mika Westerberg
2020-11-10  9:19 ` [PATCH v2 04/10] thunderbolt: Add link_speed and link_width to XDomain Mika Westerberg
2020-11-10  9:19 ` [PATCH v2 05/10] thunderbolt: Add functions for enabling and disabling lane bonding on XDomain Mika Westerberg
2020-11-10  9:19 ` [PATCH v2 06/10] thunderbolt: Create debugfs directory automatically for services Mika Westerberg
2020-11-10  9:19 ` [PATCH v2 07/10] thunderbolt: Make it possible to allocate one directional DMA tunnel Mika Westerberg
2020-11-10  9:19 ` [PATCH v2 08/10] thunderbolt: Add support for end-to-end flow control Mika Westerberg
2020-11-10  9:19 ` [PATCH v2 09/10] thunderbolt: Add DMA traffic test driver Mika Westerberg
2020-11-10  9:19 ` [PATCH v2 10/10] MAINTAINERS: Add Isaac as maintainer of Thunderbolt " Mika Westerberg
2020-11-10  9:30 ` [PATCH v2 00/10] thunderbolt: Add " Greg Kroah-Hartman
2020-11-11  7:23   ` Mika Westerberg
