* [PATCH 00/17] thunderbolt: Tunneling improvements
@ 2020-06-15 14:26 Mika Westerberg
  2020-06-15 14:26 ` [PATCH 01/17] thunderbolt: Fix path indices used in USB3 tunnel discovery Mika Westerberg
                   ` (17 more replies)
  0 siblings, 18 replies; 23+ messages in thread
From: Mika Westerberg @ 2020-06-15 14:26 UTC (permalink / raw)
  To: linux-usb
  Cc: Andreas Noever, Michael Jamet, Mika Westerberg, Yehezkel Bernat,
	Greg Kroah-Hartman, Rajmohan Mani, Lukas Wunner

Hi all,

This series improves the Thunderbolt/USB4 driver to support the tree
topologies that are now possible with USB4 devices (they are possible with
TBT devices as well, but no such devices with more than two ports are
available on the market).

We also take advantage of KUnit and add unit tests for path walking and
tunneling (for the cases where no hardware is needed). In addition, we add
initial support for USB3 tunnel bandwidth management so that the driver can
share isochronous bandwidth between USB3 and DisplayPort.

Mika Westerberg (17):
  thunderbolt: Fix path indices used in USB3 tunnel discovery
  thunderbolt: Make tb_next_port_on_path() work with tree topologies
  thunderbolt: Make tb_path_alloc() work with tree topologies
  thunderbolt: Check that both ports are reachable when allocating path
  thunderbolt: Handle incomplete PCIe/USB3 paths correctly in discovery
  thunderbolt: Increase path length in discovery
  thunderbolt: Add KUnit tests for path walking
  thunderbolt: Add DP IN resources for all routers
  thunderbolt: Do not tunnel USB3 if link is not USB4
  thunderbolt: Make usb4_switch_map_usb3_down() also return enabled ports
  thunderbolt: Make usb4_switch_map_pcie_down() also return enabled ports
  thunderbolt: Report consumed bandwidth in both directions
  thunderbolt: Increase DP DPRX wait timeout
  thunderbolt: Implement USB3 bandwidth negotiation routines
  thunderbolt: Make tb_port_get_link_speed() available to other files
  thunderbolt: Add USB3 bandwidth management
  thunderbolt: Add KUnit tests for tunneling

 drivers/thunderbolt/Kconfig   |    5 +
 drivers/thunderbolt/Makefile  |    2 +
 drivers/thunderbolt/path.c    |   38 +-
 drivers/thunderbolt/switch.c  |   25 +-
 drivers/thunderbolt/tb.c      |  378 ++++++--
 drivers/thunderbolt/tb.h      |   35 +-
 drivers/thunderbolt/tb_regs.h |   20 +
 drivers/thunderbolt/test.c    | 1626 +++++++++++++++++++++++++++++++++
 drivers/thunderbolt/tunnel.c  |  326 ++++++-
 drivers/thunderbolt/tunnel.h  |   37 +-
 drivers/thunderbolt/usb4.c    |  369 +++++++-
 11 files changed, 2709 insertions(+), 152 deletions(-)
 create mode 100644 drivers/thunderbolt/test.c

-- 
2.27.0.rc2


^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH 01/17] thunderbolt: Fix path indices used in USB3 tunnel discovery
  2020-06-15 14:26 [PATCH 00/17] thunderbolt: Tunneling improvements Mika Westerberg
@ 2020-06-15 14:26 ` Mika Westerberg
  2020-06-25 12:51   ` Mika Westerberg
  2020-06-15 14:26 ` [PATCH 02/17] thunderbolt: Make tb_next_port_on_path() work with tree topologies Mika Westerberg
                   ` (16 subsequent siblings)
  17 siblings, 1 reply; 23+ messages in thread
From: Mika Westerberg @ 2020-06-15 14:26 UTC (permalink / raw)
  To: linux-usb
  Cc: Andreas Noever, Michael Jamet, Mika Westerberg, Yehezkel Bernat,
	Greg Kroah-Hartman, Rajmohan Mani, Lukas Wunner

The USB3 discovery used the wrong indices when a tunnel is discovered. It
should use TB_USB3_PATH_DOWN for the path that flows downstream and
TB_USB3_PATH_UP for the one that flows upstream. This should not affect
functionality, but it is better to fix it.

Fixes: e6f818585713 ("thunderbolt: Add support for USB 3.x tunnels")
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Cc: stable@vger.kernel.org # v5.6+
---
 drivers/thunderbolt/tunnel.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c
index dbe90bcf4ad4..c144ca9b032c 100644
--- a/drivers/thunderbolt/tunnel.c
+++ b/drivers/thunderbolt/tunnel.c
@@ -913,21 +913,21 @@ struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down)
 	 * case.
 	 */
 	path = tb_path_discover(down, TB_USB3_HOPID, NULL, -1,
-				&tunnel->dst_port, "USB3 Up");
+				&tunnel->dst_port, "USB3 Down");
 	if (!path) {
 		/* Just disable the downstream port */
 		tb_usb3_port_enable(down, false);
 		goto err_free;
 	}
-	tunnel->paths[TB_USB3_PATH_UP] = path;
-	tb_usb3_init_path(tunnel->paths[TB_USB3_PATH_UP]);
+	tunnel->paths[TB_USB3_PATH_DOWN] = path;
+	tb_usb3_init_path(tunnel->paths[TB_USB3_PATH_DOWN]);
 
 	path = tb_path_discover(tunnel->dst_port, -1, down, TB_USB3_HOPID, NULL,
-				"USB3 Down");
+				"USB3 Up");
 	if (!path)
 		goto err_deactivate;
-	tunnel->paths[TB_USB3_PATH_DOWN] = path;
-	tb_usb3_init_path(tunnel->paths[TB_USB3_PATH_DOWN]);
+	tunnel->paths[TB_USB3_PATH_UP] = path;
+	tb_usb3_init_path(tunnel->paths[TB_USB3_PATH_UP]);
 
 	/* Validate that the tunnel is complete */
 	if (!tb_port_is_usb3_up(tunnel->dst_port)) {
-- 
2.27.0.rc2


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 02/17] thunderbolt: Make tb_next_port_on_path() work with tree topologies
  2020-06-15 14:26 [PATCH 00/17] thunderbolt: Tunneling improvements Mika Westerberg
  2020-06-15 14:26 ` [PATCH 01/17] thunderbolt: Fix path indices used in USB3 tunnel discovery Mika Westerberg
@ 2020-06-15 14:26 ` Mika Westerberg
  2020-06-15 14:26 ` [PATCH 03/17] thunderbolt: Make tb_path_alloc() " Mika Westerberg
                   ` (15 subsequent siblings)
  17 siblings, 0 replies; 23+ messages in thread
From: Mika Westerberg @ 2020-06-15 14:26 UTC (permalink / raw)
  To: linux-usb
  Cc: Andreas Noever, Michael Jamet, Mika Westerberg, Yehezkel Bernat,
	Greg Kroah-Hartman, Rajmohan Mani, Lukas Wunner

USB4 makes it possible to have a tree topology of devices, connected in the
same way as USB3. This was actually possible with Thunderbolt 1, 2 and 3 as
well, but all the available devices only had two ports, which allowed
building only daisy-chains of devices.

With USB4 it is possible, for example, that there is a DP IN adapter as
part of an eGPU device router whose stream should be tunneled over the tree
topology to a DP OUT adapter. This updates tb_next_port_on_path() to
support such topologies.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/switch.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 95b75a712ade..29db484d2c74 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -850,6 +850,13 @@ void tb_port_release_out_hopid(struct tb_port *port, int hopid)
 	ida_simple_remove(&port->out_hopids, hopid);
 }
 
+static inline bool tb_switch_is_reachable(const struct tb_switch *parent,
+					  const struct tb_switch *sw)
+{
+	u64 mask = (1ULL << parent->config.depth * 8) - 1;
+	return (tb_route(parent) & mask) == (tb_route(sw) & mask);
+}
+
 /**
  * tb_next_port_on_path() - Return next port for given port on a path
  * @start: Start port of the walk
@@ -879,12 +886,12 @@ struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end,
 		return end;
 	}
 
-	if (start->sw->config.depth < end->sw->config.depth) {
+	if (tb_switch_is_reachable(prev->sw, end->sw)) {
+		next = tb_port_at(tb_route(end->sw), prev->sw);
+		/* Walk down the topology if next == prev */
 		if (prev->remote &&
-		    prev->remote->sw->config.depth > prev->sw->config.depth)
+		    (next == prev || next->dual_link_port == prev))
 			next = prev->remote;
-		else
-			next = tb_port_at(tb_route(end->sw), prev->sw);
 	} else {
 		if (tb_is_upstream_port(prev)) {
 			next = prev->remote;
@@ -901,7 +908,7 @@ struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end,
 		}
 	}
 
-	return next;
+	return next != prev ? next : NULL;
 }
 
 static int tb_port_get_link_speed(struct tb_port *port)
-- 
2.27.0.rc2


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 03/17] thunderbolt: Make tb_path_alloc() work with tree topologies
  2020-06-15 14:26 [PATCH 00/17] thunderbolt: Tunneling improvements Mika Westerberg
  2020-06-15 14:26 ` [PATCH 01/17] thunderbolt: Fix path indices used in USB3 tunnel discovery Mika Westerberg
  2020-06-15 14:26 ` [PATCH 02/17] thunderbolt: Make tb_next_port_on_path() work with tree topologies Mika Westerberg
@ 2020-06-15 14:26 ` Mika Westerberg
  2020-06-15 14:26 ` [PATCH 04/17] thunderbolt: Check that both ports are reachable when allocating path Mika Westerberg
                   ` (14 subsequent siblings)
  17 siblings, 0 replies; 23+ messages in thread
From: Mika Westerberg @ 2020-06-15 14:26 UTC (permalink / raw)
  To: linux-usb
  Cc: Andreas Noever, Michael Jamet, Mika Westerberg, Yehezkel Bernat,
	Greg Kroah-Hartman, Rajmohan Mani, Lukas Wunner

With USB4, topologies are not limited to daisy-chains anymore, so when
calculating how many hops there are between two ports we need to walk the
whole path instead.

Add a helper macro tb_for_each_port_on_path() that can be used to walk
over each port on a path, and make tb_path_alloc() use it.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/path.c | 12 ++++++------
 drivers/thunderbolt/tb.h   | 12 ++++++++++++
 2 files changed, 18 insertions(+), 6 deletions(-)

diff --git a/drivers/thunderbolt/path.c b/drivers/thunderbolt/path.c
index ad58559ea88e..77abb1fa80c0 100644
--- a/drivers/thunderbolt/path.c
+++ b/drivers/thunderbolt/path.c
@@ -239,12 +239,12 @@ struct tb_path *tb_path_alloc(struct tb *tb, struct tb_port *src, int src_hopid,
 	if (!path)
 		return NULL;
 
-	/*
-	 * Number of hops on a path is the distance between the two
-	 * switches plus the source adapter port.
-	 */
-	num_hops = abs(tb_route_length(tb_route(src->sw)) -
-		       tb_route_length(tb_route(dst->sw))) + 1;
+	i = 0;
+	tb_for_each_port_on_path(src, dst, in_port)
+		i++;
+
+	/* Each hop takes two ports */
+	num_hops = i / 2;
 
 	path->hops = kcalloc(num_hops, sizeof(*path->hops), GFP_KERNEL);
 	if (!path->hops) {
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 2eb2bcd3cca3..6916168e2c76 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -741,6 +741,18 @@ void tb_port_release_out_hopid(struct tb_port *port, int hopid);
 struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end,
 				     struct tb_port *prev);
 
+/**
+ * tb_for_each_port_on_path() - Iterate over each port on path
+ * @src: Source port
+ * @dst: Destination port
+ * @p: Port used as iterator
+ *
+ * Walks over each port on path from @src to @dst.
+ */
+#define tb_for_each_port_on_path(src, dst, p)				\
+	for ((p) = tb_next_port_on_path((src), (dst), NULL); (p);	\
+	     (p) = tb_next_port_on_path((src), (dst), (p)))
+
 int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec);
 int tb_switch_find_cap(struct tb_switch *sw, enum tb_switch_cap cap);
 int tb_port_find_cap(struct tb_port *port, enum tb_port_cap cap);
-- 
2.27.0.rc2


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 04/17] thunderbolt: Check that both ports are reachable when allocating path
  2020-06-15 14:26 [PATCH 00/17] thunderbolt: Tunneling improvements Mika Westerberg
                   ` (2 preceding siblings ...)
  2020-06-15 14:26 ` [PATCH 03/17] thunderbolt: Make tb_path_alloc() " Mika Westerberg
@ 2020-06-15 14:26 ` Mika Westerberg
  2020-06-15 14:26 ` [PATCH 05/17] thunderbolt: Handle incomplete PCIe/USB3 paths correctly in discovery Mika Westerberg
                   ` (13 subsequent siblings)
  17 siblings, 0 replies; 23+ messages in thread
From: Mika Westerberg @ 2020-06-15 14:26 UTC (permalink / raw)
  To: linux-usb
  Cc: Andreas Noever, Michael Jamet, Mika Westerberg, Yehezkel Bernat,
	Greg Kroah-Hartman, Rajmohan Mani, Lukas Wunner

Add a sanity check that the given src and dst ports are reachable through
path walking before allocating a path. If they are not, bail out early.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/path.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/drivers/thunderbolt/path.c b/drivers/thunderbolt/path.c
index 77abb1fa80c0..854ff3412161 100644
--- a/drivers/thunderbolt/path.c
+++ b/drivers/thunderbolt/path.c
@@ -229,7 +229,7 @@ struct tb_path *tb_path_alloc(struct tb *tb, struct tb_port *src, int src_hopid,
 			      struct tb_port *dst, int dst_hopid, int link_nr,
 			      const char *name)
 {
-	struct tb_port *in_port, *out_port;
+	struct tb_port *in_port, *out_port, *first_port, *last_port;
 	int in_hopid, out_hopid;
 	struct tb_path *path;
 	size_t num_hops;
@@ -239,9 +239,20 @@ struct tb_path *tb_path_alloc(struct tb *tb, struct tb_port *src, int src_hopid,
 	if (!path)
 		return NULL;
 
+	first_port = last_port = NULL;
 	i = 0;
-	tb_for_each_port_on_path(src, dst, in_port)
+	tb_for_each_port_on_path(src, dst, in_port) {
+		if (!first_port)
+			first_port = in_port;
+		last_port = in_port;
 		i++;
+	}
+
+	/* Check that src and dst are reachable */
+	if (first_port != src || last_port != dst) {
+		kfree(path);
+		return NULL;
+	}
 
 	/* Each hop takes two ports */
 	num_hops = i / 2;
-- 
2.27.0.rc2


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 05/17] thunderbolt: Handle incomplete PCIe/USB3 paths correctly in discovery
  2020-06-15 14:26 [PATCH 00/17] thunderbolt: Tunneling improvements Mika Westerberg
                   ` (3 preceding siblings ...)
  2020-06-15 14:26 ` [PATCH 04/17] thunderbolt: Check that both ports are reachable when allocating path Mika Westerberg
@ 2020-06-15 14:26 ` Mika Westerberg
  2020-06-15 14:26 ` [PATCH 06/17] thunderbolt: Increase path length " Mika Westerberg
                   ` (12 subsequent siblings)
  17 siblings, 0 replies; 23+ messages in thread
From: Mika Westerberg @ 2020-06-15 14:26 UTC (permalink / raw)
  To: linux-usb
  Cc: Andreas Noever, Michael Jamet, Mika Westerberg, Yehezkel Bernat,
	Greg Kroah-Hartman, Rajmohan Mani, Lukas Wunner

If the path is not complete when we do discovery, the number of hops may be
less than in the full path. This can happen, for example, when the user
unloads the driver, disconnects part of the topology, and then loads the
driver back: any PCIe or USB3 tunnel involved is then left incomplete.

Take this into account in tb_pcie_init_path() and tb_usb3_init_path() and
prevent potential accesses beyond the array limits.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/tunnel.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c
index c144ca9b032c..5bdb8b11345e 100644
--- a/drivers/thunderbolt/tunnel.c
+++ b/drivers/thunderbolt/tunnel.c
@@ -124,8 +124,9 @@ static void tb_pci_init_path(struct tb_path *path)
 	path->drop_packages = 0;
 	path->nfc_credits = 0;
 	path->hops[0].initial_credits = 7;
-	path->hops[1].initial_credits =
-		tb_initial_credits(path->hops[1].in_port->sw);
+	if (path->path_length > 1)
+		path->hops[1].initial_credits =
+			tb_initial_credits(path->hops[1].in_port->sw);
 }
 
 /**
@@ -879,8 +880,9 @@ static void tb_usb3_init_path(struct tb_path *path)
 	path->drop_packages = 0;
 	path->nfc_credits = 0;
 	path->hops[0].initial_credits = 7;
-	path->hops[1].initial_credits =
-		tb_initial_credits(path->hops[1].in_port->sw);
+	if (path->path_length > 1)
+		path->hops[1].initial_credits =
+			tb_initial_credits(path->hops[1].in_port->sw);
 }
 
 /**
-- 
2.27.0.rc2


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 06/17] thunderbolt: Increase path length in discovery
  2020-06-15 14:26 [PATCH 00/17] thunderbolt: Tunneling improvements Mika Westerberg
                   ` (4 preceding siblings ...)
  2020-06-15 14:26 ` [PATCH 05/17] thunderbolt: Handle incomplete PCIe/USB3 paths correctly in discovery Mika Westerberg
@ 2020-06-15 14:26 ` Mika Westerberg
  2020-06-15 14:26 ` [PATCH 07/17] thunderbolt: Add KUnit tests for path walking Mika Westerberg
                   ` (11 subsequent siblings)
  17 siblings, 0 replies; 23+ messages in thread
From: Mika Westerberg @ 2020-06-15 14:26 UTC (permalink / raw)
  To: linux-usb
  Cc: Andreas Noever, Michael Jamet, Mika Westerberg, Yehezkel Bernat,
	Greg Kroah-Hartman, Rajmohan Mani, Lukas Wunner

Currently we have only supported paths that follow the daisy-chain
topology, but USB4 also allows building trees of devices. For this reason,
increase the maximum path length we use for discovery so that it covers a
path from the lowest level up to the host router and back down to the same
level.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/tb.h | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 6916168e2c76..b53ef5be7263 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -286,7 +286,11 @@ struct tb_path {
 
 /* HopIDs 0-7 are reserved by the Thunderbolt protocol */
 #define TB_PATH_MIN_HOPID	8
-#define TB_PATH_MAX_HOPS	7
+/*
+ * Support paths from the farthest (depth 6) router to the host and back
+ * to the same level (not necessarily to the same router).
+ */
+#define TB_PATH_MAX_HOPS	(7 * 2)
 
 /**
  * struct tb_cm_ops - Connection manager specific operations vector
-- 
2.27.0.rc2


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 07/17] thunderbolt: Add KUnit tests for path walking
  2020-06-15 14:26 [PATCH 00/17] thunderbolt: Tunneling improvements Mika Westerberg
                   ` (5 preceding siblings ...)
  2020-06-15 14:26 ` [PATCH 06/17] thunderbolt: Increase path length " Mika Westerberg
@ 2020-06-15 14:26 ` Mika Westerberg
  2020-06-15 14:26 ` [PATCH 08/17] thunderbolt: Add DP IN resources for all routers Mika Westerberg
                   ` (10 subsequent siblings)
  17 siblings, 0 replies; 23+ messages in thread
From: Mika Westerberg @ 2020-06-15 14:26 UTC (permalink / raw)
  To: linux-usb
  Cc: Andreas Noever, Michael Jamet, Mika Westerberg, Yehezkel Bernat,
	Greg Kroah-Hartman, Rajmohan Mani, Lukas Wunner

This adds KUnit tests for path walking, which depends only on software
structures, so no hardware is needed to run them.

We make the tests available only when both KUnit and the driver itself are
built into the kernel image. The reason for this is that KUnit adds its own
module_init() call in kunit_test_suite(), which generates a linker error
because the driver does the same in nhi.c. This should be fine for now
because these tests are only meant to be run by developers anyway.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/Kconfig  |    5 +
 drivers/thunderbolt/Makefile |    2 +
 drivers/thunderbolt/test.c   | 1228 ++++++++++++++++++++++++++++++++++
 3 files changed, 1235 insertions(+)
 create mode 100644 drivers/thunderbolt/test.c

diff --git a/drivers/thunderbolt/Kconfig b/drivers/thunderbolt/Kconfig
index daa9bb52fc77..354e61c0f2e5 100644
--- a/drivers/thunderbolt/Kconfig
+++ b/drivers/thunderbolt/Kconfig
@@ -15,3 +15,8 @@ menuconfig USB4
 
 	  To compile this driver a module, choose M here. The module will be
 	  called thunderbolt.
+
+config USB4_KUNIT_TEST
+	bool "KUnit tests"
+	depends on KUNIT=y
+	depends on USB4=y
diff --git a/drivers/thunderbolt/Makefile b/drivers/thunderbolt/Makefile
index eae28dd45250..68f7a19690d8 100644
--- a/drivers/thunderbolt/Makefile
+++ b/drivers/thunderbolt/Makefile
@@ -2,3 +2,5 @@
 obj-${CONFIG_USB4} := thunderbolt.o
 thunderbolt-objs := nhi.o nhi_ops.o ctl.o tb.o switch.o cap.o path.o tunnel.o eeprom.o
 thunderbolt-objs += domain.o dma_port.o icm.o property.o xdomain.o lc.o tmu.o usb4.o
+
+obj-${CONFIG_USB4_KUNIT_TEST} += test.o
diff --git a/drivers/thunderbolt/test.c b/drivers/thunderbolt/test.c
new file mode 100644
index 000000000000..9e60bab46d34
--- /dev/null
+++ b/drivers/thunderbolt/test.c
@@ -0,0 +1,1228 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KUnit tests
+ *
+ * Copyright (C) 2020, Intel Corporation
+ * Author: Mika Westerberg <mika.westerberg@linux.intel.com>
+ */
+
+#include <kunit/test.h>
+#include <linux/idr.h>
+
+#include "tb.h"
+
+static int __ida_init(struct kunit_resource *res, void *context)
+{
+	struct ida *ida = context;
+
+	ida_init(ida);
+	res->allocation = ida;
+	return 0;
+}
+
+static void __ida_destroy(struct kunit_resource *res)
+{
+	struct ida *ida = res->allocation;
+
+	ida_destroy(ida);
+}
+
+static void kunit_ida_init(struct kunit *test, struct ida *ida)
+{
+	kunit_alloc_resource(test, __ida_init, __ida_destroy, GFP_KERNEL, ida);
+}
+
+static struct tb_switch *alloc_switch(struct kunit *test, u64 route,
+				      u8 upstream_port, u8 max_port_number)
+{
+	struct tb_switch *sw;
+	size_t size;
+	int i;
+
+	sw = kunit_kzalloc(test, sizeof(*sw), GFP_KERNEL);
+	if (!sw)
+		return NULL;
+
+	sw->config.upstream_port_number = upstream_port;
+	sw->config.depth = tb_route_length(route);
+	sw->config.route_hi = upper_32_bits(route);
+	sw->config.route_lo = lower_32_bits(route);
+	sw->config.enabled = 0;
+	sw->config.max_port_number = max_port_number;
+
+	size = (sw->config.max_port_number + 1) * sizeof(*sw->ports);
+	sw->ports = kunit_kzalloc(test, size, GFP_KERNEL);
+	if (!sw->ports)
+		return NULL;
+
+	for (i = 0; i <= sw->config.max_port_number; i++) {
+		sw->ports[i].sw = sw;
+		sw->ports[i].port = i;
+		sw->ports[i].config.port_number = i;
+		if (i) {
+			kunit_ida_init(test, &sw->ports[i].in_hopids);
+			kunit_ida_init(test, &sw->ports[i].out_hopids);
+		}
+	}
+
+	return sw;
+}
+
+static struct tb_switch *alloc_host(struct kunit *test)
+{
+	struct tb_switch *sw;
+
+	sw = alloc_switch(test, 0, 7, 13);
+	if (!sw)
+		return NULL;
+
+	sw->config.vendor_id = 0x8086;
+	sw->config.device_id = 0x9a1b;
+
+	sw->ports[0].config.type = TB_TYPE_PORT;
+	sw->ports[0].config.max_in_hop_id = 7;
+	sw->ports[0].config.max_out_hop_id = 7;
+
+	sw->ports[1].config.type = TB_TYPE_PORT;
+	sw->ports[1].config.max_in_hop_id = 19;
+	sw->ports[1].config.max_out_hop_id = 19;
+	sw->ports[1].dual_link_port = &sw->ports[2];
+
+	sw->ports[2].config.type = TB_TYPE_PORT;
+	sw->ports[2].config.max_in_hop_id = 19;
+	sw->ports[2].config.max_out_hop_id = 19;
+	sw->ports[2].dual_link_port = &sw->ports[1];
+	sw->ports[2].link_nr = 1;
+
+	sw->ports[3].config.type = TB_TYPE_PORT;
+	sw->ports[3].config.max_in_hop_id = 19;
+	sw->ports[3].config.max_out_hop_id = 19;
+	sw->ports[3].dual_link_port = &sw->ports[4];
+
+	sw->ports[4].config.type = TB_TYPE_PORT;
+	sw->ports[4].config.max_in_hop_id = 19;
+	sw->ports[4].config.max_out_hop_id = 19;
+	sw->ports[4].dual_link_port = &sw->ports[3];
+	sw->ports[4].link_nr = 1;
+
+	sw->ports[5].config.type = TB_TYPE_DP_HDMI_IN;
+	sw->ports[5].config.max_in_hop_id = 9;
+	sw->ports[5].config.max_out_hop_id = 9;
+	sw->ports[5].cap_adap = -1;
+
+	sw->ports[6].config.type = TB_TYPE_DP_HDMI_IN;
+	sw->ports[6].config.max_in_hop_id = 9;
+	sw->ports[6].config.max_out_hop_id = 9;
+	sw->ports[6].cap_adap = -1;
+
+	sw->ports[7].config.type = TB_TYPE_NHI;
+	sw->ports[7].config.max_in_hop_id = 11;
+	sw->ports[7].config.max_out_hop_id = 11;
+
+	sw->ports[8].config.type = TB_TYPE_PCIE_DOWN;
+	sw->ports[8].config.max_in_hop_id = 8;
+	sw->ports[8].config.max_out_hop_id = 8;
+
+	sw->ports[9].config.type = TB_TYPE_PCIE_DOWN;
+	sw->ports[9].config.max_in_hop_id = 8;
+	sw->ports[9].config.max_out_hop_id = 8;
+
+	sw->ports[10].disabled = true;
+	sw->ports[11].disabled = true;
+
+	sw->ports[12].config.type = TB_TYPE_USB3_DOWN;
+	sw->ports[12].config.max_in_hop_id = 8;
+	sw->ports[12].config.max_out_hop_id = 8;
+
+	sw->ports[13].config.type = TB_TYPE_USB3_DOWN;
+	sw->ports[13].config.max_in_hop_id = 8;
+	sw->ports[13].config.max_out_hop_id = 8;
+
+	return sw;
+}
+
+static struct tb_switch *alloc_dev_default(struct kunit *test,
+					   struct tb_switch *parent,
+					   u64 route, bool bonded)
+{
+	struct tb_port *port, *upstream_port;
+	struct tb_switch *sw;
+
+	sw = alloc_switch(test, route, 1, 19);
+	if (!sw)
+		return NULL;
+
+	sw->config.vendor_id = 0x8086;
+	sw->config.device_id = 0x15ef;
+
+	sw->ports[0].config.type = TB_TYPE_PORT;
+	sw->ports[0].config.max_in_hop_id = 8;
+	sw->ports[0].config.max_out_hop_id = 8;
+
+	sw->ports[1].config.type = TB_TYPE_PORT;
+	sw->ports[1].config.max_in_hop_id = 19;
+	sw->ports[1].config.max_out_hop_id = 19;
+	sw->ports[1].dual_link_port = &sw->ports[2];
+
+	sw->ports[2].config.type = TB_TYPE_PORT;
+	sw->ports[2].config.max_in_hop_id = 19;
+	sw->ports[2].config.max_out_hop_id = 19;
+	sw->ports[2].dual_link_port = &sw->ports[1];
+	sw->ports[2].link_nr = 1;
+
+	sw->ports[3].config.type = TB_TYPE_PORT;
+	sw->ports[3].config.max_in_hop_id = 19;
+	sw->ports[3].config.max_out_hop_id = 19;
+	sw->ports[3].dual_link_port = &sw->ports[4];
+
+	sw->ports[4].config.type = TB_TYPE_PORT;
+	sw->ports[4].config.max_in_hop_id = 19;
+	sw->ports[4].config.max_out_hop_id = 19;
+	sw->ports[4].dual_link_port = &sw->ports[3];
+	sw->ports[4].link_nr = 1;
+
+	sw->ports[5].config.type = TB_TYPE_PORT;
+	sw->ports[5].config.max_in_hop_id = 19;
+	sw->ports[5].config.max_out_hop_id = 19;
+	sw->ports[5].dual_link_port = &sw->ports[6];
+
+	sw->ports[6].config.type = TB_TYPE_PORT;
+	sw->ports[6].config.max_in_hop_id = 19;
+	sw->ports[6].config.max_out_hop_id = 19;
+	sw->ports[6].dual_link_port = &sw->ports[5];
+	sw->ports[6].link_nr = 1;
+
+	sw->ports[7].config.type = TB_TYPE_PORT;
+	sw->ports[7].config.max_in_hop_id = 19;
+	sw->ports[7].config.max_out_hop_id = 19;
+	sw->ports[7].dual_link_port = &sw->ports[8];
+
+	sw->ports[8].config.type = TB_TYPE_PORT;
+	sw->ports[8].config.max_in_hop_id = 19;
+	sw->ports[8].config.max_out_hop_id = 19;
+	sw->ports[8].dual_link_port = &sw->ports[7];
+	sw->ports[8].link_nr = 1;
+
+	sw->ports[9].config.type = TB_TYPE_PCIE_UP;
+	sw->ports[9].config.max_in_hop_id = 8;
+	sw->ports[9].config.max_out_hop_id = 8;
+
+	sw->ports[10].config.type = TB_TYPE_PCIE_DOWN;
+	sw->ports[10].config.max_in_hop_id = 8;
+	sw->ports[10].config.max_out_hop_id = 8;
+
+	sw->ports[11].config.type = TB_TYPE_PCIE_DOWN;
+	sw->ports[11].config.max_in_hop_id = 8;
+	sw->ports[11].config.max_out_hop_id = 8;
+
+	sw->ports[12].config.type = TB_TYPE_PCIE_DOWN;
+	sw->ports[12].config.max_in_hop_id = 8;
+	sw->ports[12].config.max_out_hop_id = 8;
+
+	sw->ports[13].config.type = TB_TYPE_DP_HDMI_OUT;
+	sw->ports[13].config.max_in_hop_id = 9;
+	sw->ports[13].config.max_out_hop_id = 9;
+	sw->ports[13].cap_adap = -1;
+
+	sw->ports[14].config.type = TB_TYPE_DP_HDMI_OUT;
+	sw->ports[14].config.max_in_hop_id = 9;
+	sw->ports[14].config.max_out_hop_id = 9;
+	sw->ports[14].cap_adap = -1;
+
+	sw->ports[15].disabled = true;
+
+	sw->ports[16].config.type = TB_TYPE_USB3_UP;
+	sw->ports[16].config.max_in_hop_id = 8;
+	sw->ports[16].config.max_out_hop_id = 8;
+
+	sw->ports[17].config.type = TB_TYPE_USB3_DOWN;
+	sw->ports[17].config.max_in_hop_id = 8;
+	sw->ports[17].config.max_out_hop_id = 8;
+
+	sw->ports[18].config.type = TB_TYPE_USB3_DOWN;
+	sw->ports[18].config.max_in_hop_id = 8;
+	sw->ports[18].config.max_out_hop_id = 8;
+
+	sw->ports[19].config.type = TB_TYPE_USB3_DOWN;
+	sw->ports[19].config.max_in_hop_id = 8;
+	sw->ports[19].config.max_out_hop_id = 8;
+
+	if (!parent)
+		return sw;
+
+	/* Link them */
+	upstream_port = tb_upstream_port(sw);
+	port = tb_port_at(route, parent);
+	port->remote = upstream_port;
+	upstream_port->remote = port;
+	if (port->dual_link_port && upstream_port->dual_link_port) {
+		port->dual_link_port->remote = upstream_port->dual_link_port;
+		upstream_port->dual_link_port->remote = port->dual_link_port;
+	}
+
+	if (bonded) {
+		/* Bonding is used */
+		port->bonded = true;
+		port->dual_link_port->bonded = true;
+		upstream_port->bonded = true;
+		upstream_port->dual_link_port->bonded = true;
+	}
+
+	return sw;
+}
+
+static struct tb_switch *alloc_dev_with_dpin(struct kunit *test,
+					     struct tb_switch *parent,
+					     u64 route, bool bonded)
+{
+	struct tb_switch *sw;
+
+	sw = alloc_dev_default(test, parent, route, bonded);
+	if (!sw)
+		return NULL;
+
+	sw->ports[13].config.type = TB_TYPE_DP_HDMI_IN;
+	sw->ports[13].config.max_in_hop_id = 9;
+	sw->ports[13].config.max_out_hop_id = 9;
+
+	sw->ports[14].config.type = TB_TYPE_DP_HDMI_IN;
+	sw->ports[14].config.max_in_hop_id = 9;
+	sw->ports[14].config.max_out_hop_id = 9;
+
+	return sw;
+}
+
+static void tb_test_path_basic(struct kunit *test)
+{
+	struct tb_port *src_port, *dst_port, *p;
+	struct tb_switch *host;
+
+	host = alloc_host(test);
+
+	src_port = &host->ports[5];
+	dst_port = src_port;
+
+	p = tb_next_port_on_path(src_port, dst_port, NULL);
+	KUNIT_EXPECT_PTR_EQ(test, p, dst_port);
+
+	p = tb_next_port_on_path(src_port, dst_port, p);
+	KUNIT_EXPECT_TRUE(test, !p);
+}
+
+static void tb_test_path_not_connected_walk(struct kunit *test)
+{
+	struct tb_port *src_port, *dst_port, *p;
+	struct tb_switch *host, *dev;
+
+	host = alloc_host(test);
+	/* No connection between host and dev */
+	dev = alloc_dev_default(test, NULL, 3, true);
+
+	src_port = &host->ports[12];
+	dst_port = &dev->ports[16];
+
+	p = tb_next_port_on_path(src_port, dst_port, NULL);
+	KUNIT_EXPECT_PTR_EQ(test, p, src_port);
+
+	p = tb_next_port_on_path(src_port, dst_port, p);
+	KUNIT_EXPECT_PTR_EQ(test, p, &host->ports[3]);
+
+	p = tb_next_port_on_path(src_port, dst_port, p);
+	KUNIT_EXPECT_TRUE(test, !p);
+
+	/* Other direction */
+
+	p = tb_next_port_on_path(dst_port, src_port, NULL);
+	KUNIT_EXPECT_PTR_EQ(test, p, dst_port);
+
+	p = tb_next_port_on_path(dst_port, src_port, p);
+	KUNIT_EXPECT_PTR_EQ(test, p, &dev->ports[1]);
+
+	p = tb_next_port_on_path(dst_port, src_port, p);
+	KUNIT_EXPECT_TRUE(test, !p);
+}
+
+struct port_expectation {
+	u64 route;
+	u8 port;
+	enum tb_port_type type;
+};
+
+static void tb_test_path_single_hop_walk(struct kunit *test)
+{
+	/*
+	 * Walks from Host PCIe downstream port to Device #1 PCIe
+	 * upstream port.
+	 *
+	 *   [Host]
+	 *   1 |
+	 *   1 |
+	 *  [Device]
+	 */
+	static const struct port_expectation test_data[] = {
+		{ .route = 0x0, .port = 8, .type = TB_TYPE_PCIE_DOWN },
+		{ .route = 0x0, .port = 1, .type = TB_TYPE_PORT },
+		{ .route = 0x1, .port = 1, .type = TB_TYPE_PORT },
+		{ .route = 0x1, .port = 9, .type = TB_TYPE_PCIE_UP },
+	};
+	struct tb_port *src_port, *dst_port, *p;
+	struct tb_switch *host, *dev;
+	int i;
+
+	host = alloc_host(test);
+	dev = alloc_dev_default(test, host, 1, true);
+
+	src_port = &host->ports[8];
+	dst_port = &dev->ports[9];
+
+	/* Walk both directions */
+
+	i = 0;
+	tb_for_each_port_on_path(src_port, dst_port, p) {
+		KUNIT_EXPECT_TRUE(test, i < ARRAY_SIZE(test_data));
+		KUNIT_EXPECT_EQ(test, tb_route(p->sw), test_data[i].route);
+		KUNIT_EXPECT_EQ(test, p->port, test_data[i].port);
+		KUNIT_EXPECT_EQ(test, (enum tb_port_type)p->config.type,
+				test_data[i].type);
+		i++;
+	}
+
+	KUNIT_EXPECT_EQ(test, i, (int)ARRAY_SIZE(test_data));
+
+	i = ARRAY_SIZE(test_data) - 1;
+	tb_for_each_port_on_path(dst_port, src_port, p) {
+		KUNIT_EXPECT_TRUE(test, i < ARRAY_SIZE(test_data));
+		KUNIT_EXPECT_EQ(test, tb_route(p->sw), test_data[i].route);
+		KUNIT_EXPECT_EQ(test, p->port, test_data[i].port);
+		KUNIT_EXPECT_EQ(test, (enum tb_port_type)p->config.type,
+				test_data[i].type);
+		i--;
+	}
+
+	KUNIT_EXPECT_EQ(test, i, -1);
+}
+
+static void tb_test_path_daisy_chain_walk(struct kunit *test)
+{
+	/*
+	 * Walks from Host DP IN to Device #2 DP OUT.
+	 *
+	 *           [Host]
+	 *            1 |
+	 *            1 |
+	 *         [Device #1]
+	 *       3 /
+	 *      1 /
+	 * [Device #2]
+	 */
+	static const struct port_expectation test_data[] = {
+		{ .route = 0x0, .port = 5, .type = TB_TYPE_DP_HDMI_IN },
+		{ .route = 0x0, .port = 1, .type = TB_TYPE_PORT },
+		{ .route = 0x1, .port = 1, .type = TB_TYPE_PORT },
+		{ .route = 0x1, .port = 3, .type = TB_TYPE_PORT },
+		{ .route = 0x301, .port = 1, .type = TB_TYPE_PORT },
+		{ .route = 0x301, .port = 13, .type = TB_TYPE_DP_HDMI_OUT },
+	};
+	struct tb_port *src_port, *dst_port, *p;
+	struct tb_switch *host, *dev1, *dev2;
+	int i;
+
+	host = alloc_host(test);
+	dev1 = alloc_dev_default(test, host, 0x1, true);
+	dev2 = alloc_dev_default(test, dev1, 0x301, true);
+
+	src_port = &host->ports[5];
+	dst_port = &dev2->ports[13];
+
+	/* Walk both directions */
+
+	i = 0;
+	tb_for_each_port_on_path(src_port, dst_port, p) {
+		KUNIT_EXPECT_TRUE(test, i < ARRAY_SIZE(test_data));
+		KUNIT_EXPECT_EQ(test, tb_route(p->sw), test_data[i].route);
+		KUNIT_EXPECT_EQ(test, p->port, test_data[i].port);
+		KUNIT_EXPECT_EQ(test, (enum tb_port_type)p->config.type,
+				test_data[i].type);
+		i++;
+	}
+
+	KUNIT_EXPECT_EQ(test, i, (int)ARRAY_SIZE(test_data));
+
+	i = ARRAY_SIZE(test_data) - 1;
+	tb_for_each_port_on_path(dst_port, src_port, p) {
+		KUNIT_EXPECT_TRUE(test, i < ARRAY_SIZE(test_data));
+		KUNIT_EXPECT_EQ(test, tb_route(p->sw), test_data[i].route);
+		KUNIT_EXPECT_EQ(test, p->port, test_data[i].port);
+		KUNIT_EXPECT_EQ(test, (enum tb_port_type)p->config.type,
+				test_data[i].type);
+		i--;
+	}
+
+	KUNIT_EXPECT_EQ(test, i, -1);
+}
+
+static void tb_test_path_simple_tree_walk(struct kunit *test)
+{
+	/*
+	 * Walks from Host DP IN to Device #3 DP OUT.
+	 *
+	 *           [Host]
+	 *            1 |
+	 *            1 |
+	 *         [Device #1]
+	 *       3 /   | 5  \ 7
+	 *      1 /    |     \ 1
+	 * [Device #2] |    [Device #4]
+	 *             | 1
+	 *         [Device #3]
+	 */
+	static const struct port_expectation test_data[] = {
+		{ .route = 0x0, .port = 5, .type = TB_TYPE_DP_HDMI_IN },
+		{ .route = 0x0, .port = 1, .type = TB_TYPE_PORT },
+		{ .route = 0x1, .port = 1, .type = TB_TYPE_PORT },
+		{ .route = 0x1, .port = 5, .type = TB_TYPE_PORT },
+		{ .route = 0x501, .port = 1, .type = TB_TYPE_PORT },
+		{ .route = 0x501, .port = 13, .type = TB_TYPE_DP_HDMI_OUT },
+	};
+	struct tb_port *src_port, *dst_port, *p;
+	struct tb_switch *host, *dev1, *dev3;
+	int i;
+
+	host = alloc_host(test);
+	dev1 = alloc_dev_default(test, host, 0x1, true);
+	alloc_dev_default(test, dev1, 0x301, true);
+	dev3 = alloc_dev_default(test, dev1, 0x501, true);
+	alloc_dev_default(test, dev1, 0x701, true);
+
+	src_port = &host->ports[5];
+	dst_port = &dev3->ports[13];
+
+	/* Walk both directions */
+
+	i = 0;
+	tb_for_each_port_on_path(src_port, dst_port, p) {
+		KUNIT_EXPECT_TRUE(test, i < ARRAY_SIZE(test_data));
+		KUNIT_EXPECT_EQ(test, tb_route(p->sw), test_data[i].route);
+		KUNIT_EXPECT_EQ(test, p->port, test_data[i].port);
+		KUNIT_EXPECT_EQ(test, (enum tb_port_type)p->config.type,
+				test_data[i].type);
+		i++;
+	}
+
+	KUNIT_EXPECT_EQ(test, i, (int)ARRAY_SIZE(test_data));
+
+	i = ARRAY_SIZE(test_data) - 1;
+	tb_for_each_port_on_path(dst_port, src_port, p) {
+		KUNIT_EXPECT_TRUE(test, i < ARRAY_SIZE(test_data));
+		KUNIT_EXPECT_EQ(test, tb_route(p->sw), test_data[i].route);
+		KUNIT_EXPECT_EQ(test, p->port, test_data[i].port);
+		KUNIT_EXPECT_EQ(test, (enum tb_port_type)p->config.type,
+				test_data[i].type);
+		i--;
+	}
+
+	KUNIT_EXPECT_EQ(test, i, -1);
+}
+
+static void tb_test_path_complex_tree_walk(struct kunit *test)
+{
+	/*
+	 * Walks from Device #3 DP IN to Device #9 DP OUT.
+	 *
+	 *           [Host]
+	 *            1 |
+	 *            1 |
+	 *         [Device #1]
+	 *       3 /   | 5  \ 7
+	 *      1 /    |     \ 1
+	 * [Device #2] |    [Device #5]
+	 *    5 |      | 1         \ 7
+	 *    1 |  [Device #4]      \ 1
+	 * [Device #3]             [Device #6]
+	 *                       3 /
+	 *                      1 /
+	 *                    [Device #7]
+	 *                  3 /      | 5
+	 *                 1 /       |
+	 *               [Device #8] | 1
+	 *                       [Device #9]
+	 */
+	static const struct port_expectation test_data[] = {
+		{ .route = 0x50301, .port = 13, .type = TB_TYPE_DP_HDMI_IN },
+		{ .route = 0x50301, .port = 1, .type = TB_TYPE_PORT },
+		{ .route = 0x301, .port = 5, .type = TB_TYPE_PORT },
+		{ .route = 0x301, .port = 1, .type = TB_TYPE_PORT },
+		{ .route = 0x1, .port = 3, .type = TB_TYPE_PORT },
+		{ .route = 0x1, .port = 7, .type = TB_TYPE_PORT },
+		{ .route = 0x701, .port = 1, .type = TB_TYPE_PORT },
+		{ .route = 0x701, .port = 7, .type = TB_TYPE_PORT },
+		{ .route = 0x70701, .port = 1, .type = TB_TYPE_PORT },
+		{ .route = 0x70701, .port = 3, .type = TB_TYPE_PORT },
+		{ .route = 0x3070701, .port = 1, .type = TB_TYPE_PORT },
+		{ .route = 0x3070701, .port = 5, .type = TB_TYPE_PORT },
+		{ .route = 0x503070701, .port = 1, .type = TB_TYPE_PORT },
+		{ .route = 0x503070701, .port = 14, .type = TB_TYPE_DP_HDMI_OUT },
+	};
+	struct tb_switch *host, *dev1, *dev2, *dev3, *dev5, *dev6, *dev7, *dev9;
+	struct tb_port *src_port, *dst_port, *p;
+	int i;
+
+	host = alloc_host(test);
+	dev1 = alloc_dev_default(test, host, 0x1, true);
+	dev2 = alloc_dev_default(test, dev1, 0x301, true);
+	dev3 = alloc_dev_with_dpin(test, dev2, 0x50301, true);
+	alloc_dev_default(test, dev1, 0x501, true);
+	dev5 = alloc_dev_default(test, dev1, 0x701, true);
+	dev6 = alloc_dev_default(test, dev5, 0x70701, true);
+	dev7 = alloc_dev_default(test, dev6, 0x3070701, true);
+	alloc_dev_default(test, dev7, 0x303070701, true);
+	dev9 = alloc_dev_default(test, dev7, 0x503070701, true);
+
+	src_port = &dev3->ports[13];
+	dst_port = &dev9->ports[14];
+
+	/* Walk both directions */
+
+	i = 0;
+	tb_for_each_port_on_path(src_port, dst_port, p) {
+		KUNIT_EXPECT_TRUE(test, i < ARRAY_SIZE(test_data));
+		KUNIT_EXPECT_EQ(test, tb_route(p->sw), test_data[i].route);
+		KUNIT_EXPECT_EQ(test, p->port, test_data[i].port);
+		KUNIT_EXPECT_EQ(test, (enum tb_port_type)p->config.type,
+				test_data[i].type);
+		i++;
+	}
+
+	KUNIT_EXPECT_EQ(test, i, (int)ARRAY_SIZE(test_data));
+
+	i = ARRAY_SIZE(test_data) - 1;
+	tb_for_each_port_on_path(dst_port, src_port, p) {
+		KUNIT_EXPECT_TRUE(test, i < ARRAY_SIZE(test_data));
+		KUNIT_EXPECT_EQ(test, tb_route(p->sw), test_data[i].route);
+		KUNIT_EXPECT_EQ(test, p->port, test_data[i].port);
+		KUNIT_EXPECT_EQ(test, (enum tb_port_type)p->config.type,
+				test_data[i].type);
+		i--;
+	}
+
+	KUNIT_EXPECT_EQ(test, i, -1);
+}
+
+static void tb_test_path_max_length_walk(struct kunit *test)
+{
+	struct tb_switch *host, *dev1, *dev2, *dev3, *dev4, *dev5, *dev6;
+	struct tb_switch *dev7, *dev8, *dev9, *dev10, *dev11, *dev12;
+	struct tb_port *src_port, *dst_port, *p;
+	int i;
+
+	/*
+	 * Walks from Device #6 DP IN to Device #12 DP OUT.
+	 *
+	 *          [Host]
+	 *         1 /  \ 3
+	 *        1 /    \ 1
+	 * [Device #1]   [Device #7]
+	 *     3 |           | 3
+	 *     1 |           | 1
+	 * [Device #2]   [Device #8]
+	 *     3 |           | 3
+	 *     1 |           | 1
+	 * [Device #3]   [Device #9]
+	 *     3 |           | 3
+	 *     1 |           | 1
+	 * [Device #4]   [Device #10]
+	 *     3 |           | 3
+	 *     1 |           | 1
+	 * [Device #5]   [Device #11]
+	 *     3 |           | 3
+	 *     1 |           | 1
+	 * [Device #6]   [Device #12]
+	 */
+	static const struct port_expectation test_data[] = {
+		{ .route = 0x30303030301, .port = 13, .type = TB_TYPE_DP_HDMI_IN },
+		{ .route = 0x30303030301, .port = 1, .type = TB_TYPE_PORT },
+		{ .route = 0x303030301, .port = 3, .type = TB_TYPE_PORT },
+		{ .route = 0x303030301, .port = 1, .type = TB_TYPE_PORT },
+		{ .route = 0x3030301, .port = 3, .type = TB_TYPE_PORT },
+		{ .route = 0x3030301, .port = 1, .type = TB_TYPE_PORT },
+		{ .route = 0x30301, .port = 3, .type = TB_TYPE_PORT },
+		{ .route = 0x30301, .port = 1, .type = TB_TYPE_PORT },
+		{ .route = 0x301, .port = 3, .type = TB_TYPE_PORT },
+		{ .route = 0x301, .port = 1, .type = TB_TYPE_PORT },
+		{ .route = 0x1, .port = 3, .type = TB_TYPE_PORT },
+		{ .route = 0x1, .port = 1, .type = TB_TYPE_PORT },
+		{ .route = 0x0, .port = 1, .type = TB_TYPE_PORT },
+		{ .route = 0x0, .port = 3, .type = TB_TYPE_PORT },
+		{ .route = 0x3, .port = 1, .type = TB_TYPE_PORT },
+		{ .route = 0x3, .port = 3, .type = TB_TYPE_PORT },
+		{ .route = 0x303, .port = 1, .type = TB_TYPE_PORT },
+		{ .route = 0x303, .port = 3, .type = TB_TYPE_PORT },
+		{ .route = 0x30303, .port = 1, .type = TB_TYPE_PORT },
+		{ .route = 0x30303, .port = 3, .type = TB_TYPE_PORT },
+		{ .route = 0x3030303, .port = 1, .type = TB_TYPE_PORT },
+		{ .route = 0x3030303, .port = 3, .type = TB_TYPE_PORT },
+		{ .route = 0x303030303, .port = 1, .type = TB_TYPE_PORT },
+		{ .route = 0x303030303, .port = 3, .type = TB_TYPE_PORT },
+		{ .route = 0x30303030303, .port = 1, .type = TB_TYPE_PORT },
+		{ .route = 0x30303030303, .port = 13, .type = TB_TYPE_DP_HDMI_OUT },
+	};
+
+	host = alloc_host(test);
+	dev1 = alloc_dev_default(test, host, 0x1, true);
+	dev2 = alloc_dev_default(test, dev1, 0x301, true);
+	dev3 = alloc_dev_default(test, dev2, 0x30301, true);
+	dev4 = alloc_dev_default(test, dev3, 0x3030301, true);
+	dev5 = alloc_dev_default(test, dev4, 0x303030301, true);
+	dev6 = alloc_dev_with_dpin(test, dev5, 0x30303030301, true);
+	dev7 = alloc_dev_default(test, host, 0x3, true);
+	dev8 = alloc_dev_default(test, dev7, 0x303, true);
+	dev9 = alloc_dev_default(test, dev8, 0x30303, true);
+	dev10 = alloc_dev_default(test, dev9, 0x3030303, true);
+	dev11 = alloc_dev_default(test, dev10, 0x303030303, true);
+	dev12 = alloc_dev_default(test, dev11, 0x30303030303, true);
+
+	src_port = &dev6->ports[13];
+	dst_port = &dev12->ports[13];
+
+	/* Walk both directions */
+
+	i = 0;
+	tb_for_each_port_on_path(src_port, dst_port, p) {
+		KUNIT_EXPECT_TRUE(test, i < ARRAY_SIZE(test_data));
+		KUNIT_EXPECT_EQ(test, tb_route(p->sw), test_data[i].route);
+		KUNIT_EXPECT_EQ(test, p->port, test_data[i].port);
+		KUNIT_EXPECT_EQ(test, (enum tb_port_type)p->config.type,
+				test_data[i].type);
+		i++;
+	}
+
+	KUNIT_EXPECT_EQ(test, i, (int)ARRAY_SIZE(test_data));
+
+	i = ARRAY_SIZE(test_data) - 1;
+	tb_for_each_port_on_path(dst_port, src_port, p) {
+		KUNIT_EXPECT_TRUE(test, i < ARRAY_SIZE(test_data));
+		KUNIT_EXPECT_EQ(test, tb_route(p->sw), test_data[i].route);
+		KUNIT_EXPECT_EQ(test, p->port, test_data[i].port);
+		KUNIT_EXPECT_EQ(test, (enum tb_port_type)p->config.type,
+				test_data[i].type);
+		i--;
+	}
+
+	KUNIT_EXPECT_EQ(test, i, -1);
+}
+
+static void tb_test_path_not_connected(struct kunit *test)
+{
+	struct tb_switch *host, *dev1, *dev2;
+	struct tb_port *down, *up;
+	struct tb_path *path;
+
+	host = alloc_host(test);
+	dev1 = alloc_dev_default(test, host, 0x3, false);
+	/* Not connected to anything */
+	dev2 = alloc_dev_default(test, NULL, 0x303, false);
+
+	down = &dev1->ports[10];
+	up = &dev2->ports[9];
+
+	path = tb_path_alloc(NULL, down, 8, up, 8, 0, "PCIe Down");
+	KUNIT_ASSERT_TRUE(test, path == NULL);
+	path = tb_path_alloc(NULL, down, 8, up, 8, 1, "PCIe Down");
+	KUNIT_ASSERT_TRUE(test, path == NULL);
+}
+
+struct hop_expectation {
+	u64 route;
+	u8 in_port;
+	enum tb_port_type in_type;
+	u8 out_port;
+	enum tb_port_type out_type;
+};
+
+static void tb_test_path_not_bonded_lane0(struct kunit *test)
+{
+	/*
+	 * PCIe path from host to device using lane 0.
+	 *
+	 *   [Host]
+	 *   3 |: 4
+	 *   1 |: 2
+	 *  [Device]
+	 */
+	static const struct hop_expectation test_data[] = {
+		{
+			.route = 0x0,
+			.in_port = 9,
+			.in_type = TB_TYPE_PCIE_DOWN,
+			.out_port = 3,
+			.out_type = TB_TYPE_PORT,
+		},
+		{
+			.route = 0x3,
+			.in_port = 1,
+			.in_type = TB_TYPE_PORT,
+			.out_port = 9,
+			.out_type = TB_TYPE_PCIE_UP,
+		},
+	};
+	struct tb_switch *host, *dev;
+	struct tb_port *down, *up;
+	struct tb_path *path;
+	int i;
+
+	host = alloc_host(test);
+	dev = alloc_dev_default(test, host, 0x3, false);
+
+	down = &host->ports[9];
+	up = &dev->ports[9];
+
+	path = tb_path_alloc(NULL, down, 8, up, 8, 0, "PCIe Down");
+	KUNIT_ASSERT_TRUE(test, path != NULL);
+	KUNIT_ASSERT_EQ(test, path->path_length, (int)ARRAY_SIZE(test_data));
+	for (i = 0; i < ARRAY_SIZE(test_data); i++) {
+		const struct tb_port *in_port, *out_port;
+
+		in_port = path->hops[i].in_port;
+		out_port = path->hops[i].out_port;
+
+		KUNIT_EXPECT_EQ(test, tb_route(in_port->sw), test_data[i].route);
+		KUNIT_EXPECT_EQ(test, in_port->port, test_data[i].in_port);
+		KUNIT_EXPECT_EQ(test, (enum tb_port_type)in_port->config.type,
+				test_data[i].in_type);
+		KUNIT_EXPECT_EQ(test, tb_route(out_port->sw), test_data[i].route);
+		KUNIT_EXPECT_EQ(test, out_port->port, test_data[i].out_port);
+		KUNIT_EXPECT_EQ(test, (enum tb_port_type)out_port->config.type,
+				test_data[i].out_type);
+	}
+	tb_path_free(path);
+}
+
+static void tb_test_path_not_bonded_lane1(struct kunit *test)
+{
+	/*
+	 * DP Video path from host to device using lane 1. Paths like
+	 * these are only used with Thunderbolt 1 devices where lane
+	 * bonding is not possible. USB4 specifically does not allow
+	 * paths like this (you either use lane 0 where lane 1 is
+	 * disabled or both lanes are bonded).
+	 *
+	 *   [Host]
+	 *   1 :| 2
+	 *   1 :| 2
+	 *  [Device]
+	 */
+	static const struct hop_expectation test_data[] = {
+		{
+			.route = 0x0,
+			.in_port = 5,
+			.in_type = TB_TYPE_DP_HDMI_IN,
+			.out_port = 2,
+			.out_type = TB_TYPE_PORT,
+		},
+		{
+			.route = 0x1,
+			.in_port = 2,
+			.in_type = TB_TYPE_PORT,
+			.out_port = 13,
+			.out_type = TB_TYPE_DP_HDMI_OUT,
+		},
+	};
+	struct tb_switch *host, *dev;
+	struct tb_port *in, *out;
+	struct tb_path *path;
+	int i;
+
+	host = alloc_host(test);
+	dev = alloc_dev_default(test, host, 0x1, false);
+
+	in = &host->ports[5];
+	out = &dev->ports[13];
+
+	path = tb_path_alloc(NULL, in, 9, out, 9, 1, "Video");
+	KUNIT_ASSERT_TRUE(test, path != NULL);
+	KUNIT_ASSERT_EQ(test, path->path_length, (int)ARRAY_SIZE(test_data));
+	for (i = 0; i < ARRAY_SIZE(test_data); i++) {
+		const struct tb_port *in_port, *out_port;
+
+		in_port = path->hops[i].in_port;
+		out_port = path->hops[i].out_port;
+
+		KUNIT_EXPECT_EQ(test, tb_route(in_port->sw), test_data[i].route);
+		KUNIT_EXPECT_EQ(test, in_port->port, test_data[i].in_port);
+		KUNIT_EXPECT_EQ(test, (enum tb_port_type)in_port->config.type,
+				test_data[i].in_type);
+		KUNIT_EXPECT_EQ(test, tb_route(out_port->sw), test_data[i].route);
+		KUNIT_EXPECT_EQ(test, out_port->port, test_data[i].out_port);
+		KUNIT_EXPECT_EQ(test, (enum tb_port_type)out_port->config.type,
+				test_data[i].out_type);
+	}
+	tb_path_free(path);
+}
+
+static void tb_test_path_not_bonded_lane1_chain(struct kunit *test)
+{
+	/*
+	 * DP Video path from host to device 3 using lane 1.
+	 *
+	 *    [Host]
+	 *    1 :| 2
+	 *    1 :| 2
+	 *  [Device #1]
+	 *    7 :| 8
+	 *    1 :| 2
+	 *  [Device #2]
+	 *    5 :| 6
+	 *    1 :| 2
+	 *  [Device #3]
+	 */
+	static const struct hop_expectation test_data[] = {
+		{
+			.route = 0x0,
+			.in_port = 5,
+			.in_type = TB_TYPE_DP_HDMI_IN,
+			.out_port = 2,
+			.out_type = TB_TYPE_PORT,
+		},
+		{
+			.route = 0x1,
+			.in_port = 2,
+			.in_type = TB_TYPE_PORT,
+			.out_port = 8,
+			.out_type = TB_TYPE_PORT,
+		},
+		{
+			.route = 0x701,
+			.in_port = 2,
+			.in_type = TB_TYPE_PORT,
+			.out_port = 6,
+			.out_type = TB_TYPE_PORT,
+		},
+		{
+			.route = 0x50701,
+			.in_port = 2,
+			.in_type = TB_TYPE_PORT,
+			.out_port = 13,
+			.out_type = TB_TYPE_DP_HDMI_OUT,
+		},
+	};
+	struct tb_switch *host, *dev1, *dev2, *dev3;
+	struct tb_port *in, *out;
+	struct tb_path *path;
+	int i;
+
+	host = alloc_host(test);
+	dev1 = alloc_dev_default(test, host, 0x1, false);
+	dev2 = alloc_dev_default(test, dev1, 0x701, false);
+	dev3 = alloc_dev_default(test, dev2, 0x50701, false);
+
+	in = &host->ports[5];
+	out = &dev3->ports[13];
+
+	path = tb_path_alloc(NULL, in, 9, out, 9, 1, "Video");
+	KUNIT_ASSERT_TRUE(test, path != NULL);
+	KUNIT_ASSERT_EQ(test, path->path_length, (int)ARRAY_SIZE(test_data));
+	for (i = 0; i < ARRAY_SIZE(test_data); i++) {
+		const struct tb_port *in_port, *out_port;
+
+		in_port = path->hops[i].in_port;
+		out_port = path->hops[i].out_port;
+
+		KUNIT_EXPECT_EQ(test, tb_route(in_port->sw), test_data[i].route);
+		KUNIT_EXPECT_EQ(test, in_port->port, test_data[i].in_port);
+		KUNIT_EXPECT_EQ(test, (enum tb_port_type)in_port->config.type,
+				test_data[i].in_type);
+		KUNIT_EXPECT_EQ(test, tb_route(out_port->sw), test_data[i].route);
+		KUNIT_EXPECT_EQ(test, out_port->port, test_data[i].out_port);
+		KUNIT_EXPECT_EQ(test, (enum tb_port_type)out_port->config.type,
+				test_data[i].out_type);
+	}
+	tb_path_free(path);
+}
+
+static void tb_test_path_not_bonded_lane1_chain_reverse(struct kunit *test)
+{
+	/*
+	 * DP Video path from device 3 to host using lane 1.
+	 *
+	 *    [Host]
+	 *    1 :| 2
+	 *    1 :| 2
+	 *  [Device #1]
+	 *    7 :| 8
+	 *    1 :| 2
+	 *  [Device #2]
+	 *    5 :| 6
+	 *    1 :| 2
+	 *  [Device #3]
+	 */
+	static const struct hop_expectation test_data[] = {
+		{
+			.route = 0x50701,
+			.in_port = 13,
+			.in_type = TB_TYPE_DP_HDMI_IN,
+			.out_port = 2,
+			.out_type = TB_TYPE_PORT,
+		},
+		{
+			.route = 0x701,
+			.in_port = 6,
+			.in_type = TB_TYPE_PORT,
+			.out_port = 2,
+			.out_type = TB_TYPE_PORT,
+		},
+		{
+			.route = 0x1,
+			.in_port = 8,
+			.in_type = TB_TYPE_PORT,
+			.out_port = 2,
+			.out_type = TB_TYPE_PORT,
+		},
+		{
+			.route = 0x0,
+			.in_port = 2,
+			.in_type = TB_TYPE_PORT,
+			.out_port = 5,
+			.out_type = TB_TYPE_DP_HDMI_IN,
+		},
+	};
+	struct tb_switch *host, *dev1, *dev2, *dev3;
+	struct tb_port *in, *out;
+	struct tb_path *path;
+	int i;
+
+	host = alloc_host(test);
+	dev1 = alloc_dev_default(test, host, 0x1, false);
+	dev2 = alloc_dev_default(test, dev1, 0x701, false);
+	dev3 = alloc_dev_with_dpin(test, dev2, 0x50701, false);
+
+	in = &dev3->ports[13];
+	out = &host->ports[5];
+
+	path = tb_path_alloc(NULL, in, 9, out, 9, 1, "Video");
+	KUNIT_ASSERT_TRUE(test, path != NULL);
+	KUNIT_ASSERT_EQ(test, path->path_length, (int)ARRAY_SIZE(test_data));
+	for (i = 0; i < ARRAY_SIZE(test_data); i++) {
+		const struct tb_port *in_port, *out_port;
+
+		in_port = path->hops[i].in_port;
+		out_port = path->hops[i].out_port;
+
+		KUNIT_EXPECT_EQ(test, tb_route(in_port->sw), test_data[i].route);
+		KUNIT_EXPECT_EQ(test, in_port->port, test_data[i].in_port);
+		KUNIT_EXPECT_EQ(test, (enum tb_port_type)in_port->config.type,
+				test_data[i].in_type);
+		KUNIT_EXPECT_EQ(test, tb_route(out_port->sw), test_data[i].route);
+		KUNIT_EXPECT_EQ(test, out_port->port, test_data[i].out_port);
+		KUNIT_EXPECT_EQ(test, (enum tb_port_type)out_port->config.type,
+				test_data[i].out_type);
+	}
+	tb_path_free(path);
+}
+
+static void tb_test_path_mixed_chain(struct kunit *test)
+{
+	/*
+	 * DP Video path from host to device 4 where first and last link
+	 * is bonded.
+	 *
+	 *    [Host]
+	 *    1 |
+	 *    1 |
+	 *  [Device #1]
+	 *    7 :| 8
+	 *    1 :| 2
+	 *  [Device #2]
+	 *    5 :| 6
+	 *    1 :| 2
+	 *  [Device #3]
+	 *    3 |
+	 *    1 |
+	 *  [Device #4]
+	 */
+	static const struct hop_expectation test_data[] = {
+		{
+			.route = 0x0,
+			.in_port = 5,
+			.in_type = TB_TYPE_DP_HDMI_IN,
+			.out_port = 1,
+			.out_type = TB_TYPE_PORT,
+		},
+		{
+			.route = 0x1,
+			.in_port = 1,
+			.in_type = TB_TYPE_PORT,
+			.out_port = 8,
+			.out_type = TB_TYPE_PORT,
+		},
+		{
+			.route = 0x701,
+			.in_port = 2,
+			.in_type = TB_TYPE_PORT,
+			.out_port = 6,
+			.out_type = TB_TYPE_PORT,
+		},
+		{
+			.route = 0x50701,
+			.in_port = 2,
+			.in_type = TB_TYPE_PORT,
+			.out_port = 3,
+			.out_type = TB_TYPE_PORT,
+		},
+		{
+			.route = 0x3050701,
+			.in_port = 1,
+			.in_type = TB_TYPE_PORT,
+			.out_port = 13,
+			.out_type = TB_TYPE_DP_HDMI_OUT,
+		},
+	};
+	struct tb_switch *host, *dev1, *dev2, *dev3, *dev4;
+	struct tb_port *in, *out;
+	struct tb_path *path;
+	int i;
+
+	host = alloc_host(test);
+	dev1 = alloc_dev_default(test, host, 0x1, true);
+	dev2 = alloc_dev_default(test, dev1, 0x701, false);
+	dev3 = alloc_dev_default(test, dev2, 0x50701, false);
+	dev4 = alloc_dev_default(test, dev3, 0x3050701, true);
+
+	in = &host->ports[5];
+	out = &dev4->ports[13];
+
+	path = tb_path_alloc(NULL, in, 9, out, 9, 1, "Video");
+	KUNIT_ASSERT_TRUE(test, path != NULL);
+	KUNIT_ASSERT_EQ(test, path->path_length, (int)ARRAY_SIZE(test_data));
+	for (i = 0; i < ARRAY_SIZE(test_data); i++) {
+		const struct tb_port *in_port, *out_port;
+
+		in_port = path->hops[i].in_port;
+		out_port = path->hops[i].out_port;
+
+		KUNIT_EXPECT_EQ(test, tb_route(in_port->sw), test_data[i].route);
+		KUNIT_EXPECT_EQ(test, in_port->port, test_data[i].in_port);
+		KUNIT_EXPECT_EQ(test, (enum tb_port_type)in_port->config.type,
+				test_data[i].in_type);
+		KUNIT_EXPECT_EQ(test, tb_route(out_port->sw), test_data[i].route);
+		KUNIT_EXPECT_EQ(test, out_port->port, test_data[i].out_port);
+		KUNIT_EXPECT_EQ(test, (enum tb_port_type)out_port->config.type,
+				test_data[i].out_type);
+	}
+	tb_path_free(path);
+}
+
+static void tb_test_path_mixed_chain_reverse(struct kunit *test)
+{
+	/*
+	 * DP Video path from device 4 to host where first and last link
+	 * is bonded.
+	 *
+	 *    [Host]
+	 *    1 |
+	 *    1 |
+	 *  [Device #1]
+	 *    7 :| 8
+	 *    1 :| 2
+	 *  [Device #2]
+	 *    5 :| 6
+	 *    1 :| 2
+	 *  [Device #3]
+	 *    3 |
+	 *    1 |
+	 *  [Device #4]
+	 */
+	static const struct hop_expectation test_data[] = {
+		{
+			.route = 0x3050701,
+			.in_port = 13,
+			.in_type = TB_TYPE_DP_HDMI_OUT,
+			.out_port = 1,
+			.out_type = TB_TYPE_PORT,
+		},
+		{
+			.route = 0x50701,
+			.in_port = 3,
+			.in_type = TB_TYPE_PORT,
+			.out_port = 2,
+			.out_type = TB_TYPE_PORT,
+		},
+		{
+			.route = 0x701,
+			.in_port = 6,
+			.in_type = TB_TYPE_PORT,
+			.out_port = 2,
+			.out_type = TB_TYPE_PORT,
+		},
+		{
+			.route = 0x1,
+			.in_port = 8,
+			.in_type = TB_TYPE_PORT,
+			.out_port = 1,
+			.out_type = TB_TYPE_PORT,
+		},
+		{
+			.route = 0x0,
+			.in_port = 1,
+			.in_type = TB_TYPE_PORT,
+			.out_port = 5,
+			.out_type = TB_TYPE_DP_HDMI_IN,
+		},
+	};
+	struct tb_switch *host, *dev1, *dev2, *dev3, *dev4;
+	struct tb_port *in, *out;
+	struct tb_path *path;
+	int i;
+
+	host = alloc_host(test);
+	dev1 = alloc_dev_default(test, host, 0x1, true);
+	dev2 = alloc_dev_default(test, dev1, 0x701, false);
+	dev3 = alloc_dev_default(test, dev2, 0x50701, false);
+	dev4 = alloc_dev_default(test, dev3, 0x3050701, true);
+
+	in = &dev4->ports[13];
+	out = &host->ports[5];
+
+	path = tb_path_alloc(NULL, in, 9, out, 9, 1, "Video");
+	KUNIT_ASSERT_TRUE(test, path != NULL);
+	KUNIT_ASSERT_EQ(test, path->path_length, (int)ARRAY_SIZE(test_data));
+	for (i = 0; i < ARRAY_SIZE(test_data); i++) {
+		const struct tb_port *in_port, *out_port;
+
+		in_port = path->hops[i].in_port;
+		out_port = path->hops[i].out_port;
+
+		KUNIT_EXPECT_EQ(test, tb_route(in_port->sw), test_data[i].route);
+		KUNIT_EXPECT_EQ(test, in_port->port, test_data[i].in_port);
+		KUNIT_EXPECT_EQ(test, (enum tb_port_type)in_port->config.type,
+				test_data[i].in_type);
+		KUNIT_EXPECT_EQ(test, tb_route(out_port->sw), test_data[i].route);
+		KUNIT_EXPECT_EQ(test, out_port->port, test_data[i].out_port);
+		KUNIT_EXPECT_EQ(test, (enum tb_port_type)out_port->config.type,
+				test_data[i].out_type);
+	}
+	tb_path_free(path);
+}
+
+static struct kunit_case tb_test_cases[] = {
+	KUNIT_CASE(tb_test_path_basic),
+	KUNIT_CASE(tb_test_path_not_connected_walk),
+	KUNIT_CASE(tb_test_path_single_hop_walk),
+	KUNIT_CASE(tb_test_path_daisy_chain_walk),
+	KUNIT_CASE(tb_test_path_simple_tree_walk),
+	KUNIT_CASE(tb_test_path_complex_tree_walk),
+	KUNIT_CASE(tb_test_path_max_length_walk),
+	KUNIT_CASE(tb_test_path_not_connected),
+	KUNIT_CASE(tb_test_path_not_bonded_lane0),
+	KUNIT_CASE(tb_test_path_not_bonded_lane1),
+	KUNIT_CASE(tb_test_path_not_bonded_lane1_chain),
+	KUNIT_CASE(tb_test_path_not_bonded_lane1_chain_reverse),
+	KUNIT_CASE(tb_test_path_mixed_chain),
+	KUNIT_CASE(tb_test_path_mixed_chain_reverse),
+	{ }
+};
+
+static struct kunit_suite tb_test_suite = {
+	.name = "thunderbolt",
+	.test_cases = tb_test_cases,
+};
+kunit_test_suite(tb_test_suite);
-- 
2.27.0.rc2



* [PATCH 08/17] thunderbolt: Add DP IN resources for all routers
  2020-06-15 14:26 [PATCH 00/17] thunderbolt: Tunneling improvements Mika Westerberg
                   ` (6 preceding siblings ...)
  2020-06-15 14:26 ` [PATCH 07/17] thunderbolt: Add KUnit tests for path walking Mika Westerberg
@ 2020-06-15 14:26 ` Mika Westerberg
  2020-06-15 14:26 ` [PATCH 09/17] thunderbolt: Do not tunnel USB3 if link is not USB4 Mika Westerberg
                   ` (9 subsequent siblings)
  17 siblings, 0 replies; 23+ messages in thread
From: Mika Westerberg @ 2020-06-15 14:26 UTC (permalink / raw)
  To: linux-usb
  Cc: Andreas Noever, Michael Jamet, Mika Westerberg, Yehezkel Bernat,
	Greg Kroah-Hartman, Rajmohan Mani, Lukas Wunner

The USB4 spec allows DP tunneling from any router that has a DP IN
adapter, not just from the host router. The driver currently adds DP IN
resources only for the host router because Thunderbolt 1, 2 and 3
devices do not have DP IN adapters. However, USB4 allows device routers
to have DP IN adapters as well, so update the driver to add DP IN
resources for each device router that has one. One example would be an
eGPU enclosure where the eGPU output is forwarded to a DP IN port and
then tunneled over the USB4 fabric.

The only limitation we add for now is that the DP IN and DP OUT
adapters that get paired for tunnel creation must both be under the
same topology starting from the host router downstream port. In other
words, we do not create DP tunnels across the host router at this
time. That would be possible as well, but it complicates bandwidth
management and there is no real use case for it anyway.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/tb.c | 50 ++++++++++++++++++++++++++++++++++++----
 1 file changed, 46 insertions(+), 4 deletions(-)

diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 107cd232f486..55daa7f1a87d 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -404,6 +404,7 @@ static void tb_scan_port(struct tb_port *port)
 	if (tcm->hotplug_active && tb_tunnel_usb3(sw->tb, sw))
 		tb_sw_warn(sw, "USB3 tunnel creation failed\n");
 
+	tb_add_dp_resources(sw);
 	tb_scan_switch(sw);
 }
 
@@ -573,6 +574,43 @@ static int tb_available_bw(struct tb_cm *tcm, struct tb_port *in,
 	return available_bw;
 }
 
+static struct tb_port *tb_find_dp_out(struct tb *tb, struct tb_port *in)
+{
+	struct tb_port *host_port, *port;
+	struct tb_cm *tcm = tb_priv(tb);
+
+	host_port = tb_route(in->sw) ?
+		tb_port_at(tb_route(in->sw), tb->root_switch) : NULL;
+
+	list_for_each_entry(port, &tcm->dp_resources, list) {
+		if (!tb_port_is_dpout(port))
+			continue;
+
+		if (tb_port_is_enabled(port)) {
+			tb_port_dbg(port, "in use\n");
+			continue;
+		}
+
+		tb_port_dbg(port, "DP OUT available\n");
+
+		/*
+		 * Keep the DP tunnel under the topology starting from
+		 * the same host router downstream port.
+		 */
+		if (host_port && tb_route(port->sw)) {
+			struct tb_port *p;
+
+			p = tb_port_at(tb_route(port->sw), tb->root_switch);
+			if (p != host_port)
+				continue;
+		}
+
+		return port;
+	}
+
+	return NULL;
+}
+
 static void tb_tunnel_dp(struct tb *tb)
 {
 	struct tb_cm *tcm = tb_priv(tb);
@@ -589,17 +627,21 @@ static void tb_tunnel_dp(struct tb *tb)
 	in = NULL;
 	out = NULL;
 	list_for_each_entry(port, &tcm->dp_resources, list) {
+		if (!tb_port_is_dpin(port))
+			continue;
+
 		if (tb_port_is_enabled(port)) {
 			tb_port_dbg(port, "in use\n");
 			continue;
 		}
 
-		tb_port_dbg(port, "available\n");
+		tb_port_dbg(port, "DP IN available\n");
 
-		if (!in && tb_port_is_dpin(port))
+		out = tb_find_dp_out(tb, port);
+		if (out) {
 			in = port;
-		else if (!out && tb_port_is_dpout(port))
-			out = port;
+			break;
+		}
 	}
 
 	if (!in) {
-- 
2.27.0.rc2



* [PATCH 09/17] thunderbolt: Do not tunnel USB3 if link is not USB4
  2020-06-15 14:26 [PATCH 00/17] thunderbolt: Tunneling improvements Mika Westerberg
                   ` (7 preceding siblings ...)
  2020-06-15 14:26 ` [PATCH 08/17] thunderbolt: Add DP IN resources for all routers Mika Westerberg
@ 2020-06-15 14:26 ` Mika Westerberg
  2020-07-17  6:16   ` Prashant Malani
  2020-06-15 14:26 ` [PATCH 10/17] thunderbolt: Make usb4_switch_map_usb3_down() also return enabled ports Mika Westerberg
                   ` (8 subsequent siblings)
  17 siblings, 1 reply; 23+ messages in thread
From: Mika Westerberg @ 2020-06-15 14:26 UTC (permalink / raw)
  To: linux-usb
  Cc: Andreas Noever, Michael Jamet, Mika Westerberg, Yehezkel Bernat,
	Greg Kroah-Hartman, Rajmohan Mani, Lukas Wunner

USB3 tunneling is possible only over a USB4 link, so don't create USB3
tunnels if the link is not USB4.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/tb.c      |  3 +++
 drivers/thunderbolt/tb.h      |  2 ++
 drivers/thunderbolt/tb_regs.h |  1 +
 drivers/thunderbolt/usb4.c    | 24 +++++++++++++++++++++---
 4 files changed, 27 insertions(+), 3 deletions(-)

diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 55daa7f1a87d..2da82259e77c 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -235,6 +235,9 @@ static int tb_tunnel_usb3(struct tb *tb, struct tb_switch *sw)
 	if (!up)
 		return 0;
 
+	if (!sw->link_usb4)
+		return 0;
+
 	/*
 	 * Look up available down port. Since we are chaining it should
 	 * be found right above this switch.
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index b53ef5be7263..de8124949eaf 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -97,6 +97,7 @@ struct tb_switch_tmu {
  * @device_name: Name of the device (or %NULL if not known)
  * @link_speed: Speed of the link in Gb/s
  * @link_width: Width of the link (1 or 2)
+ * @link_usb4: Upstream link is USB4
  * @generation: Switch Thunderbolt generation
  * @cap_plug_events: Offset to the plug events capability (%0 if not found)
  * @cap_lc: Offset to the link controller capability (%0 if not found)
@@ -136,6 +137,7 @@ struct tb_switch {
 	const char *device_name;
 	unsigned int link_speed;
 	unsigned int link_width;
+	bool link_usb4;
 	unsigned int generation;
 	int cap_plug_events;
 	int cap_lc;
diff --git a/drivers/thunderbolt/tb_regs.h b/drivers/thunderbolt/tb_regs.h
index c29c5075525a..77d4b8598835 100644
--- a/drivers/thunderbolt/tb_regs.h
+++ b/drivers/thunderbolt/tb_regs.h
@@ -290,6 +290,7 @@ struct tb_regs_port_header {
 /* USB4 port registers */
 #define PORT_CS_18				0x12
 #define PORT_CS_18_BE				BIT(8)
+#define PORT_CS_18_TCM				BIT(9)
 #define PORT_CS_19				0x13
 #define PORT_CS_19_PC				BIT(3)
 
diff --git a/drivers/thunderbolt/usb4.c b/drivers/thunderbolt/usb4.c
index 50c7534ba31e..393771d50962 100644
--- a/drivers/thunderbolt/usb4.c
+++ b/drivers/thunderbolt/usb4.c
@@ -192,6 +192,20 @@ static int usb4_switch_op(struct tb_switch *sw, u16 opcode, u8 *status)
 	return 0;
 }
 
+static bool link_is_usb4(struct tb_port *port)
+{
+	u32 val;
+
+	if (!port->cap_usb4)
+		return false;
+
+	if (tb_port_read(port, &val, TB_CFG_PORT,
+			 port->cap_usb4 + PORT_CS_18, 1))
+		return false;
+
+	return !(val & PORT_CS_18_TCM);
+}
+
 /**
  * usb4_switch_setup() - Additional setup for USB4 device
  * @sw: USB4 router to setup
@@ -205,6 +219,7 @@ static int usb4_switch_op(struct tb_switch *sw, u16 opcode, u8 *status)
  */
 int usb4_switch_setup(struct tb_switch *sw)
 {
+	struct tb_port *downstream_port;
 	struct tb_switch *parent;
 	bool tbt3, xhci;
 	u32 val = 0;
@@ -217,6 +232,11 @@ int usb4_switch_setup(struct tb_switch *sw)
 	if (ret)
 		return ret;
 
+	parent = tb_switch_parent(sw);
+	downstream_port = tb_port_at(tb_route(sw), parent);
+	sw->link_usb4 = link_is_usb4(downstream_port);
+	tb_sw_dbg(sw, "link: %s\n", sw->link_usb4 ? "USB4" : "TBT3");
+
 	xhci = val & ROUTER_CS_6_HCI;
 	tbt3 = !(val & ROUTER_CS_6_TNS);
 
@@ -227,9 +247,7 @@ int usb4_switch_setup(struct tb_switch *sw)
 	if (ret)
 		return ret;
 
-	parent = tb_switch_parent(sw);
-
-	if (tb_switch_find_port(parent, TB_TYPE_USB3_DOWN)) {
+	if (sw->link_usb4 && tb_switch_find_port(parent, TB_TYPE_USB3_DOWN)) {
 		val |= ROUTER_CS_5_UTO;
 		xhci = false;
 	}
-- 
2.27.0.rc2


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 10/17] thunderbolt: Make usb4_switch_map_usb3_down() also return enabled ports
  2020-06-15 14:26 [PATCH 00/17] thunderbolt: Tunneling improvements Mika Westerberg
                   ` (8 preceding siblings ...)
  2020-06-15 14:26 ` [PATCH 09/17] thunderbolt: Do not tunnel USB3 if link is not USB4 Mika Westerberg
@ 2020-06-15 14:26 ` Mika Westerberg
  2020-06-15 14:26 ` [PATCH 11/17] thunderbolt: Make usb4_switch_map_pcie_down() " Mika Westerberg
                   ` (7 subsequent siblings)
  17 siblings, 0 replies; 23+ messages in thread
From: Mika Westerberg @ 2020-06-15 14:26 UTC (permalink / raw)
  To: linux-usb
  Cc: Andreas Noever, Michael Jamet, Mika Westerberg, Yehezkel Bernat,
	Greg Kroah-Hartman, Rajmohan Mani, Lukas Wunner

We need to call this also for enabled ports in order to find the mapping
from a host router USB4 port to a USB 3.x downstream adapter, so make the
function return enabled ports as well.

While at it, fix parameter alignment in tb_find_usb3_down().

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/tb.c   | 14 +++-----------
 drivers/thunderbolt/usb4.c |  2 +-
 2 files changed, 4 insertions(+), 12 deletions(-)

diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 2da82259e77c..82f62a023a4b 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -206,22 +206,14 @@ static struct tb_port *tb_find_unused_port(struct tb_switch *sw,
 }
 
 static struct tb_port *tb_find_usb3_down(struct tb_switch *sw,
-					const struct tb_port *port)
+					 const struct tb_port *port)
 {
 	struct tb_port *down;
 
 	down = usb4_switch_map_usb3_down(sw, port);
-	if (down) {
-		if (WARN_ON(!tb_port_is_usb3_down(down)))
-			goto out;
-		if (WARN_ON(tb_usb3_port_is_enabled(down)))
-			goto out;
-
+	if (down && !tb_usb3_port_is_enabled(down))
 		return down;
-	}
-
-out:
-	return tb_find_unused_port(sw, TB_TYPE_USB3_DOWN);
+	return NULL;
 }
 
 static int tb_tunnel_usb3(struct tb *tb, struct tb_switch *sw)
diff --git a/drivers/thunderbolt/usb4.c b/drivers/thunderbolt/usb4.c
index 393771d50962..375a8c459201 100644
--- a/drivers/thunderbolt/usb4.c
+++ b/drivers/thunderbolt/usb4.c
@@ -759,7 +759,7 @@ struct tb_port *usb4_switch_map_usb3_down(struct tb_switch *sw,
 		if (!tb_port_is_usb3_down(p))
 			continue;
 
-		if (usb_idx == usb4_idx && !tb_usb3_port_is_enabled(p))
+		if (usb_idx == usb4_idx)
 			return p;
 
 		usb_idx++;
-- 
2.27.0.rc2



* [PATCH 11/17] thunderbolt: Make usb4_switch_map_pcie_down() also return enabled ports
  2020-06-15 14:26 [PATCH 00/17] thunderbolt: Tunneling improvements Mika Westerberg
                   ` (9 preceding siblings ...)
  2020-06-15 14:26 ` [PATCH 10/17] thunderbolt: Make usb4_switch_map_usb3_down() also return enabled ports Mika Westerberg
@ 2020-06-15 14:26 ` Mika Westerberg
  2020-06-15 14:26 ` [PATCH 12/17] thunderbolt: Report consumed bandwidth in both directions Mika Westerberg
                   ` (6 subsequent siblings)
  17 siblings, 0 replies; 23+ messages in thread
From: Mika Westerberg @ 2020-06-15 14:26 UTC (permalink / raw)
  To: linux-usb
  Cc: Andreas Noever, Michael Jamet, Mika Westerberg, Yehezkel Bernat,
	Greg Kroah-Hartman, Rajmohan Mani, Lukas Wunner

For symmetry with usb4_switch_map_usb3_down(), make this function also
return ports that are already enabled.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/tb.c   | 2 +-
 drivers/thunderbolt/usb4.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 82f62a023a4b..9dbdb11685fa 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -520,7 +520,7 @@ static struct tb_port *tb_find_pcie_down(struct tb_switch *sw,
 	if (down) {
 		if (WARN_ON(!tb_port_is_pcie_down(down)))
 			goto out;
-		if (WARN_ON(tb_pci_port_is_enabled(down)))
+		if (tb_pci_port_is_enabled(down))
 			goto out;
 
 		return down;
diff --git a/drivers/thunderbolt/usb4.c b/drivers/thunderbolt/usb4.c
index 375a8c459201..dd1c0498a8ee 100644
--- a/drivers/thunderbolt/usb4.c
+++ b/drivers/thunderbolt/usb4.c
@@ -728,7 +728,7 @@ struct tb_port *usb4_switch_map_pcie_down(struct tb_switch *sw,
 		if (!tb_port_is_pcie_down(p))
 			continue;
 
-		if (pcie_idx == usb4_idx && !tb_pci_port_is_enabled(p))
+		if (pcie_idx == usb4_idx)
 			return p;
 
 		pcie_idx++;
-- 
2.27.0.rc2



* [PATCH 12/17] thunderbolt: Report consumed bandwidth in both directions
  2020-06-15 14:26 [PATCH 00/17] thunderbolt: Tunneling improvements Mika Westerberg
                   ` (10 preceding siblings ...)
  2020-06-15 14:26 ` [PATCH 11/17] thunderbolt: Make usb4_switch_map_pcie_down() " Mika Westerberg
@ 2020-06-15 14:26 ` Mika Westerberg
  2020-06-15 14:26 ` [PATCH 13/17] thunderbolt: Increase DP DPRX wait timeout Mika Westerberg
                   ` (5 subsequent siblings)
  17 siblings, 0 replies; 23+ messages in thread
From: Mika Westerberg @ 2020-06-15 14:26 UTC (permalink / raw)
  To: linux-usb
  Cc: Andreas Noever, Michael Jamet, Mika Westerberg, Yehezkel Bernat,
	Greg Kroah-Hartman, Rajmohan Mani, Lukas Wunner

Whereas DisplayPort bandwidth is consumed only in one direction (from the
DP IN adapter to the DP OUT adapter), USB3 consumes separate bandwidth in
both the upstream and downstream directions.

For this reason, extend the tunnel consumed bandwidth routines to support
both directions, and implement this for DP.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/tb.c     |  9 ++++---
 drivers/thunderbolt/tunnel.c | 47 +++++++++++++++++++++++++++++-------
 drivers/thunderbolt/tunnel.h |  6 +++--
 3 files changed, 47 insertions(+), 15 deletions(-)

diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 9dbdb11685fa..53f9673c1395 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -535,7 +535,7 @@ static int tb_available_bw(struct tb_cm *tcm, struct tb_port *in,
 {
 	struct tb_switch *sw = out->sw;
 	struct tb_tunnel *tunnel;
-	int bw, available_bw = 40000;
+	int ret, bw, available_bw = 40000;
 
 	while (sw && sw != in->sw) {
 		bw = sw->link_speed * sw->link_width * 1000; /* Mb/s */
@@ -553,9 +553,10 @@ static int tb_available_bw(struct tb_cm *tcm, struct tb_port *in,
 			if (!tb_tunnel_switch_on_path(tunnel, sw))
 				continue;
 
-			consumed_bw = tb_tunnel_consumed_bandwidth(tunnel);
-			if (consumed_bw < 0)
-				return consumed_bw;
+			ret = tb_tunnel_consumed_bandwidth(tunnel, NULL,
+							   &consumed_bw);
+			if (ret)
+				return ret;
 
 			bw -= consumed_bw;
 		}
diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c
index 5bdb8b11345e..45f7a50a48ff 100644
--- a/drivers/thunderbolt/tunnel.c
+++ b/drivers/thunderbolt/tunnel.c
@@ -536,7 +536,8 @@ static int tb_dp_activate(struct tb_tunnel *tunnel, bool active)
 	return 0;
 }
 
-static int tb_dp_consumed_bandwidth(struct tb_tunnel *tunnel)
+static int tb_dp_consumed_bandwidth(struct tb_tunnel *tunnel, int *consumed_up,
+				    int *consumed_down)
 {
 	struct tb_port *in = tunnel->src_port;
 	const struct tb_switch *sw = in->sw;
@@ -580,10 +581,20 @@ static int tb_dp_consumed_bandwidth(struct tb_tunnel *tunnel)
 		lanes = tb_dp_cap_get_lanes(val);
 	} else {
 		/* No bandwidth management for legacy devices  */
+		*consumed_up = 0;
+		*consumed_down = 0;
 		return 0;
 	}
 
-	return tb_dp_bandwidth(rate, lanes);
+	if (in->sw->config.depth < tunnel->dst_port->sw->config.depth) {
+		*consumed_up = 0;
+		*consumed_down = tb_dp_bandwidth(rate, lanes);
+	} else {
+		*consumed_up = tb_dp_bandwidth(rate, lanes);
+		*consumed_down = 0;
+	}
+
+	return 0;
 }
 
 static void tb_dp_init_aux_path(struct tb_path *path)
@@ -1174,21 +1185,39 @@ static bool tb_tunnel_is_active(const struct tb_tunnel *tunnel)
 /**
  * tb_tunnel_consumed_bandwidth() - Return bandwidth consumed by the tunnel
  * @tunnel: Tunnel to check
+ * @consumed_up: Consumed bandwidth in Mb/s from @dst_port to @src_port.
+ *		 Can be %NULL.
+ * @consumed_down: Consumed bandwidth in Mb/s from @src_port to @dst_port.
+ *		   Can be %NULL.
  *
- * Returns bandwidth currently consumed by @tunnel and %0 if the @tunnel
- * is not active or does consume bandwidth.
+ * Stores the amount of isochronous bandwidth @tunnel consumes in
+ * @consumed_up and @consumed_down. In case of success returns %0,
+ * negative errno otherwise.
  */
-int tb_tunnel_consumed_bandwidth(struct tb_tunnel *tunnel)
+int tb_tunnel_consumed_bandwidth(struct tb_tunnel *tunnel, int *consumed_up,
+				 int *consumed_down)
 {
+	int up_bw = 0, down_bw = 0;
+
 	if (!tb_tunnel_is_active(tunnel))
-		return 0;
+		goto out;
 
 	if (tunnel->consumed_bandwidth) {
-		int ret = tunnel->consumed_bandwidth(tunnel);
+		int ret;
 
-		tb_tunnel_dbg(tunnel, "consumed bandwidth %d Mb/s\n", ret);
-		return ret;
+		ret = tunnel->consumed_bandwidth(tunnel, &up_bw, &down_bw);
+		if (ret)
+			return ret;
+
+		tb_tunnel_dbg(tunnel, "consumed bandwidth %d/%d Mb/s\n", up_bw,
+			      down_bw);
 	}
 
+out:
+	if (consumed_up)
+		*consumed_up = up_bw;
+	if (consumed_down)
+		*consumed_down = down_bw;
+
 	return 0;
 }
diff --git a/drivers/thunderbolt/tunnel.h b/drivers/thunderbolt/tunnel.h
index 3f5ba93225e7..cc952b2be792 100644
--- a/drivers/thunderbolt/tunnel.h
+++ b/drivers/thunderbolt/tunnel.h
@@ -42,7 +42,8 @@ struct tb_tunnel {
 	size_t npaths;
 	int (*init)(struct tb_tunnel *tunnel);
 	int (*activate)(struct tb_tunnel *tunnel, bool activate);
-	int (*consumed_bandwidth)(struct tb_tunnel *tunnel);
+	int (*consumed_bandwidth)(struct tb_tunnel *tunnel, int *consumed_up,
+				  int *consumed_down);
 	struct list_head list;
 	enum tb_tunnel_type type;
 	unsigned int max_bw;
@@ -69,7 +70,8 @@ void tb_tunnel_deactivate(struct tb_tunnel *tunnel);
 bool tb_tunnel_is_invalid(struct tb_tunnel *tunnel);
 bool tb_tunnel_switch_on_path(const struct tb_tunnel *tunnel,
 			      const struct tb_switch *sw);
-int tb_tunnel_consumed_bandwidth(struct tb_tunnel *tunnel);
+int tb_tunnel_consumed_bandwidth(struct tb_tunnel *tunnel, int *consumed_up,
+				 int *consumed_down);
 
 static inline bool tb_tunnel_is_pci(const struct tb_tunnel *tunnel)
 {
-- 
2.27.0.rc2



* [PATCH 13/17] thunderbolt: Increase DP DPRX wait timeout
  2020-06-15 14:26 [PATCH 00/17] thunderbolt: Tunneling improvements Mika Westerberg
                   ` (11 preceding siblings ...)
  2020-06-15 14:26 ` [PATCH 12/17] thunderbolt: Report consumed bandwidth in both directions Mika Westerberg
@ 2020-06-15 14:26 ` Mika Westerberg
  2020-06-15 14:26 ` [PATCH 14/17] thunderbolt: Implement USB3 bandwidth negotiation routines Mika Westerberg
                   ` (4 subsequent siblings)
  17 siblings, 0 replies; 23+ messages in thread
From: Mika Westerberg @ 2020-06-15 14:26 UTC (permalink / raw)
  To: linux-usb
  Cc: Andreas Noever, Michael Jamet, Mika Westerberg, Yehezkel Bernat,
	Greg Kroah-Hartman, Rajmohan Mani, Lukas Wunner

Sometimes it takes longer for the DPRX done bit to be set, so increase
the timeout to cope with this.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/tunnel.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c
index 45f7a50a48ff..7896f8b7a69c 100644
--- a/drivers/thunderbolt/tunnel.c
+++ b/drivers/thunderbolt/tunnel.c
@@ -545,7 +545,7 @@ static int tb_dp_consumed_bandwidth(struct tb_tunnel *tunnel, int *consumed_up,
 	int ret;
 
 	if (tb_dp_is_usb4(sw)) {
-		int timeout = 10;
+		int timeout = 20;
 
 		/*
 		 * Wait for DPRX done. Normally it should be already set
-- 
2.27.0.rc2



* [PATCH 14/17] thunderbolt: Implement USB3 bandwidth negotiation routines
  2020-06-15 14:26 [PATCH 00/17] thunderbolt: Tunneling improvements Mika Westerberg
                   ` (12 preceding siblings ...)
  2020-06-15 14:26 ` [PATCH 13/17] thunderbolt: Increase DP DPRX wait timeout Mika Westerberg
@ 2020-06-15 14:26 ` Mika Westerberg
  2020-06-15 14:26 ` [PATCH 15/17] thunderbolt: Make tb_port_get_link_speed() available to other files Mika Westerberg
                   ` (3 subsequent siblings)
  17 siblings, 0 replies; 23+ messages in thread
From: Mika Westerberg @ 2020-06-15 14:26 UTC (permalink / raw)
  To: linux-usb
  Cc: Andreas Noever, Michael Jamet, Mika Westerberg, Yehezkel Bernat,
	Greg Kroah-Hartman, Rajmohan Mani, Lukas Wunner

Each host router USB3 downstream adapter has a set of registers that are
used to negotiate bandwidth between the connection manager and the
internal xHCI controller. These registers allow dynamic bandwidth
management for USB3 isochronous traffic based on what is actually
consumed vs. allocated at any given time.

Implement these USB3 bandwidth negotiation routines so that the software
connection manager can take advantage of them.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/tb.h      |   9 +
 drivers/thunderbolt/tb_regs.h |  19 ++
 drivers/thunderbolt/usb4.c    | 341 ++++++++++++++++++++++++++++++++++
 3 files changed, 369 insertions(+)

diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index de8124949eaf..cb53a94fe4f8 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -853,4 +853,13 @@ struct tb_port *usb4_switch_map_usb3_down(struct tb_switch *sw,
 					  const struct tb_port *port);
 
 int usb4_port_unlock(struct tb_port *port);
+
+int usb4_usb3_port_max_link_rate(struct tb_port *port);
+int usb4_usb3_port_actual_link_rate(struct tb_port *port);
+int usb4_usb3_port_allocated_bandwidth(struct tb_port *port, int *upstream_bw,
+				       int *downstream_bw);
+int usb4_usb3_port_allocate_bandwidth(struct tb_port *port, int *upstream_bw,
+				      int *downstream_bw);
+int usb4_usb3_port_release_bandwidth(struct tb_port *port, int *upstream_bw,
+				     int *downstream_bw);
 #endif
diff --git a/drivers/thunderbolt/tb_regs.h b/drivers/thunderbolt/tb_regs.h
index 77d4b8598835..4fc561347b7c 100644
--- a/drivers/thunderbolt/tb_regs.h
+++ b/drivers/thunderbolt/tb_regs.h
@@ -338,6 +338,25 @@ struct tb_regs_port_header {
 #define ADP_USB3_CS_0				0x00
 #define ADP_USB3_CS_0_V				BIT(30)
 #define ADP_USB3_CS_0_PE			BIT(31)
+#define ADP_USB3_CS_1				0x01
+#define ADP_USB3_CS_1_CUBW_MASK			GENMASK(11, 0)
+#define ADP_USB3_CS_1_CDBW_MASK			GENMASK(23, 12)
+#define ADP_USB3_CS_1_CDBW_SHIFT		12
+#define ADP_USB3_CS_1_HCA			BIT(31)
+#define ADP_USB3_CS_2				0x02
+#define ADP_USB3_CS_2_AUBW_MASK			GENMASK(11, 0)
+#define ADP_USB3_CS_2_ADBW_MASK			GENMASK(23, 12)
+#define ADP_USB3_CS_2_ADBW_SHIFT		12
+#define ADP_USB3_CS_2_CMR			BIT(31)
+#define ADP_USB3_CS_3				0x03
+#define ADP_USB3_CS_3_SCALE_MASK		GENMASK(5, 0)
+#define ADP_USB3_CS_4				0x04
+#define ADP_USB3_CS_4_ALR_MASK			GENMASK(6, 0)
+#define ADP_USB3_CS_4_ALR_20G			0x1
+#define ADP_USB3_CS_4_ULV			BIT(7)
+#define ADP_USB3_CS_4_MSLR_MASK			GENMASK(18, 12)
+#define ADP_USB3_CS_4_MSLR_SHIFT		12
+#define ADP_USB3_CS_4_MSLR_20G			0x1
 
 /* Hop register from TB_CFG_HOPS. 8 byte per entry. */
 struct tb_regs_hop {
diff --git a/drivers/thunderbolt/usb4.c b/drivers/thunderbolt/usb4.c
index dd1c0498a8ee..d1a554fd09ae 100644
--- a/drivers/thunderbolt/usb4.c
+++ b/drivers/thunderbolt/usb4.c
@@ -787,3 +787,344 @@ int usb4_port_unlock(struct tb_port *port)
 	val &= ~ADP_CS_4_LCK;
 	return tb_port_write(port, &val, TB_CFG_PORT, ADP_CS_4, 1);
 }
+
+static int usb4_port_wait_for_bit(struct tb_port *port, u32 offset, u32 bit,
+				  u32 value, int timeout_msec)
+{
+	ktime_t timeout = ktime_add_ms(ktime_get(), timeout_msec);
+
+	do {
+		u32 val;
+		int ret;
+
+		ret = tb_port_read(port, &val, TB_CFG_PORT, offset, 1);
+		if (ret)
+			return ret;
+
+		if ((val & bit) == value)
+			return 0;
+
+		usleep_range(50, 100);
+	} while (ktime_before(ktime_get(), timeout));
+
+	return -ETIMEDOUT;
+}
+
+/**
+ * usb4_usb3_port_max_link_rate() - Maximum supported USB3 link rate
+ * @port: USB3 adapter port
+ *
+ * Returns the maximum supported link rate of a USB3 adapter in Mb/s,
+ * or negative errno in case of error.
+ */
+int usb4_usb3_port_max_link_rate(struct tb_port *port)
+{
+	int ret, lr;
+	u32 val;
+
+	if (!tb_port_is_usb3_down(port) && !tb_port_is_usb3_up(port))
+		return -EINVAL;
+
+	ret = tb_port_read(port, &val, TB_CFG_PORT,
+			   port->cap_adap + ADP_USB3_CS_4, 1);
+	if (ret)
+		return ret;
+
+	lr = (val & ADP_USB3_CS_4_MSLR_MASK) >> ADP_USB3_CS_4_MSLR_SHIFT;
+	return lr == ADP_USB3_CS_4_MSLR_20G ? 20000 : 10000;
+}
+
+/**
+ * usb4_usb3_port_actual_link_rate() - Established USB3 link rate
+ * @port: USB3 adapter port
+ *
+ * Returns the actual established link rate of a USB3 adapter in Mb/s.
+ * If the link is not up, returns %0. Negative errno in case of failure.
+ */
+int usb4_usb3_port_actual_link_rate(struct tb_port *port)
+{
+	int ret, lr;
+	u32 val;
+
+	if (!tb_port_is_usb3_down(port) && !tb_port_is_usb3_up(port))
+		return -EINVAL;
+
+	ret = tb_port_read(port, &val, TB_CFG_PORT,
+			   port->cap_adap + ADP_USB3_CS_4, 1);
+	if (ret)
+		return ret;
+
+	if (!(val & ADP_USB3_CS_4_ULV))
+		return 0;
+
+	lr = val & ADP_USB3_CS_4_ALR_MASK;
+	return lr == ADP_USB3_CS_4_ALR_20G ? 20000 : 10000;
+}
+
+static int usb4_usb3_port_cm_request(struct tb_port *port, bool request)
+{
+	int ret;
+	u32 val;
+
+	if (!tb_port_is_usb3_down(port))
+		return -EINVAL;
+	if (tb_route(port->sw))
+		return -EINVAL;
+
+	ret = tb_port_read(port, &val, TB_CFG_PORT,
+			   port->cap_adap + ADP_USB3_CS_2, 1);
+	if (ret)
+		return ret;
+
+	if (request)
+		val |= ADP_USB3_CS_2_CMR;
+	else
+		val &= ~ADP_USB3_CS_2_CMR;
+
+	ret = tb_port_write(port, &val, TB_CFG_PORT,
+			    port->cap_adap + ADP_USB3_CS_2, 1);
+	if (ret)
+		return ret;
+
+	/*
+	 * We can use val here directly as the CMR bit is in the same place
+	 * as HCA. Just mask out the others.
+	 */
+	val &= ADP_USB3_CS_2_CMR;
+	return usb4_port_wait_for_bit(port, port->cap_adap + ADP_USB3_CS_1,
+				      ADP_USB3_CS_1_HCA, val, 1500);
+}
+
+static inline int usb4_usb3_port_set_cm_request(struct tb_port *port)
+{
+	return usb4_usb3_port_cm_request(port, true);
+}
+
+static inline int usb4_usb3_port_clear_cm_request(struct tb_port *port)
+{
+	return usb4_usb3_port_cm_request(port, false);
+}
+
+static unsigned int usb3_bw_to_mbps(u32 bw, u8 scale)
+{
+	unsigned long uframes;
+
+	uframes = bw * 512 << scale;
+	return DIV_ROUND_CLOSEST(uframes * 8000, 1000 * 1000);
+}
+
+static u32 mbps_to_usb3_bw(unsigned int mbps, u8 scale)
+{
+	unsigned long uframes;
+
+	/* 1 uframe is 1/8 ms (125 us) -> 1 / 8000 s */
+	uframes = ((unsigned long)mbps * 1000 *  1000) / 8000;
+	return DIV_ROUND_UP(uframes, 512 << scale);
+}
+
+static int usb4_usb3_port_read_allocated_bandwidth(struct tb_port *port,
+						   int *upstream_bw,
+						   int *downstream_bw)
+{
+	u32 val, bw, scale;
+	int ret;
+
+	ret = tb_port_read(port, &val, TB_CFG_PORT,
+			   port->cap_adap + ADP_USB3_CS_2, 1);
+	if (ret)
+		return ret;
+
+	ret = tb_port_read(port, &scale, TB_CFG_PORT,
+			   port->cap_adap + ADP_USB3_CS_3, 1);
+	if (ret)
+		return ret;
+
+	scale &= ADP_USB3_CS_3_SCALE_MASK;
+
+	bw = val & ADP_USB3_CS_2_AUBW_MASK;
+	*upstream_bw = usb3_bw_to_mbps(bw, scale);
+
+	bw = (val & ADP_USB3_CS_2_ADBW_MASK) >> ADP_USB3_CS_2_ADBW_SHIFT;
+	*downstream_bw = usb3_bw_to_mbps(bw, scale);
+
+	return 0;
+}
+
+/**
+ * usb4_usb3_port_allocated_bandwidth() - Bandwidth allocated for USB3
+ * @port: USB3 adapter port
+ * @upstream_bw: Allocated upstream bandwidth is stored here
+ * @downstream_bw: Allocated downstream bandwidth is stored here
+ *
+ * Stores currently allocated USB3 bandwidth into @upstream_bw and
+ * @downstream_bw in Mb/s. Returns %0 in case of success and negative
+ * errno in case of failure.
+ */
+int usb4_usb3_port_allocated_bandwidth(struct tb_port *port, int *upstream_bw,
+				       int *downstream_bw)
+{
+	int ret;
+
+	ret = usb4_usb3_port_set_cm_request(port);
+	if (ret)
+		return ret;
+
+	ret = usb4_usb3_port_read_allocated_bandwidth(port, upstream_bw,
+						      downstream_bw);
+	usb4_usb3_port_clear_cm_request(port);
+
+	return ret;
+}
+
+static int usb4_usb3_port_read_consumed_bandwidth(struct tb_port *port,
+						  int *upstream_bw,
+						  int *downstream_bw)
+{
+	u32 val, bw, scale;
+	int ret;
+
+	ret = tb_port_read(port, &val, TB_CFG_PORT,
+			   port->cap_adap + ADP_USB3_CS_1, 1);
+	if (ret)
+		return ret;
+
+	ret = tb_port_read(port, &scale, TB_CFG_PORT,
+			   port->cap_adap + ADP_USB3_CS_3, 1);
+	if (ret)
+		return ret;
+
+	scale &= ADP_USB3_CS_3_SCALE_MASK;
+
+	bw = val & ADP_USB3_CS_1_CUBW_MASK;
+	*upstream_bw = usb3_bw_to_mbps(bw, scale);
+
+	bw = (val & ADP_USB3_CS_1_CDBW_MASK) >> ADP_USB3_CS_1_CDBW_SHIFT;
+	*downstream_bw = usb3_bw_to_mbps(bw, scale);
+
+	return 0;
+}
+
+static int usb4_usb3_port_write_allocated_bandwidth(struct tb_port *port,
+						    int upstream_bw,
+						    int downstream_bw)
+{
+	u32 val, ubw, dbw, scale;
+	int ret;
+
+	/* Read the used scale, hardware default is 0 */
+	ret = tb_port_read(port, &scale, TB_CFG_PORT,
+			   port->cap_adap + ADP_USB3_CS_3, 1);
+	if (ret)
+		return ret;
+
+	scale &= ADP_USB3_CS_3_SCALE_MASK;
+	ubw = mbps_to_usb3_bw(upstream_bw, scale);
+	dbw = mbps_to_usb3_bw(downstream_bw, scale);
+
+	ret = tb_port_read(port, &val, TB_CFG_PORT,
+			   port->cap_adap + ADP_USB3_CS_2, 1);
+	if (ret)
+		return ret;
+
+	val &= ~(ADP_USB3_CS_2_AUBW_MASK | ADP_USB3_CS_2_ADBW_MASK);
+	val |= dbw << ADP_USB3_CS_2_ADBW_SHIFT;
+	val |= ubw;
+
+	return tb_port_write(port, &val, TB_CFG_PORT,
+			     port->cap_adap + ADP_USB3_CS_2, 1);
+}
+
+/**
+ * usb4_usb3_port_allocate_bandwidth() - Allocate bandwidth for USB3
+ * @port: USB3 adapter port
+ * @upstream_bw: New upstream bandwidth
+ * @downstream_bw: New downstream bandwidth
+ *
+ * This can be used to set how much bandwidth is allocated for the USB3
+ * tunneled isochronous traffic. @upstream_bw and @downstream_bw are the
+ * new values programmed to the USB3 adapter allocation registers. If
+ * the values are lower than what is currently consumed, the allocation
+ * is set to what is currently consumed instead (consumed bandwidth
+ * cannot be taken away by CM). The actual new values are returned in
+ * @upstream_bw and @downstream_bw.
+ *
+ * Returns %0 in case of success and negative errno if there was a
+ * failure.
+ */
+int usb4_usb3_port_allocate_bandwidth(struct tb_port *port, int *upstream_bw,
+				      int *downstream_bw)
+{
+	int ret, consumed_up, consumed_down, allocate_up, allocate_down;
+
+	ret = usb4_usb3_port_set_cm_request(port);
+	if (ret)
+		return ret;
+
+	ret = usb4_usb3_port_read_consumed_bandwidth(port, &consumed_up,
+						     &consumed_down);
+	if (ret)
+		goto err_request;
+
+	/* Don't allow it to go lower than what is consumed */
+	allocate_up = max(*upstream_bw, consumed_up);
+	allocate_down = max(*downstream_bw, consumed_down);
+
+	ret = usb4_usb3_port_write_allocated_bandwidth(port, allocate_up,
+						       allocate_down);
+	if (ret)
+		goto err_request;
+
+	*upstream_bw = allocate_up;
+	*downstream_bw = allocate_down;
+
+err_request:
+	usb4_usb3_port_clear_cm_request(port);
+	return ret;
+}
+
+/**
+ * usb4_usb3_port_release_bandwidth() - Release allocated USB3 bandwidth
+ * @port: USB3 adapter port
+ * @upstream_bw: New allocated upstream bandwidth
+ * @downstream_bw: New allocated downstream bandwidth
+ *
+ * Releases USB3 allocated bandwidth down to what is actually consumed.
+ * The new bandwidth is returned in @upstream_bw and @downstream_bw.
+ *
+ * Returns %0 in case of success and negative errno in case of failure.
+ */
+int usb4_usb3_port_release_bandwidth(struct tb_port *port, int *upstream_bw,
+				     int *downstream_bw)
+{
+	int ret, consumed_up, consumed_down;
+
+	ret = usb4_usb3_port_set_cm_request(port);
+	if (ret)
+		return ret;
+
+	ret = usb4_usb3_port_read_consumed_bandwidth(port, &consumed_up,
+						     &consumed_down);
+	if (ret)
+		goto err_request;
+
+	/*
+	 * Always keep 1000 Mb/s to make sure xHCI has at least some
+	 * bandwidth available for isochronous traffic.
+	 */
+	if (consumed_up < 1000)
+		consumed_up = 1000;
+	if (consumed_down < 1000)
+		consumed_down = 1000;
+
+	ret = usb4_usb3_port_write_allocated_bandwidth(port, consumed_up,
+						       consumed_down);
+	if (ret)
+		goto err_request;
+
+	*upstream_bw = consumed_up;
+	*downstream_bw = consumed_down;
+
+err_request:
+	usb4_usb3_port_clear_cm_request(port);
+	return ret;
+}
-- 
2.27.0.rc2



* [PATCH 15/17] thunderbolt: Make tb_port_get_link_speed() available to other files
  2020-06-15 14:26 [PATCH 00/17] thunderbolt: Tunneling improvements Mika Westerberg
                   ` (13 preceding siblings ...)
  2020-06-15 14:26 ` [PATCH 14/17] thunderbolt: Implement USB3 bandwidth negotiation routines Mika Westerberg
@ 2020-06-15 14:26 ` Mika Westerberg
  2020-06-15 14:26 ` [PATCH 16/17] thunderbolt: Add USB3 bandwidth management Mika Westerberg
                   ` (2 subsequent siblings)
  17 siblings, 0 replies; 23+ messages in thread
From: Mika Westerberg @ 2020-06-15 14:26 UTC (permalink / raw)
  To: linux-usb
  Cc: Andreas Noever, Michael Jamet, Mika Westerberg, Yehezkel Bernat,
	Greg Kroah-Hartman, Rajmohan Mani, Lukas Wunner

We need to call this from tb.c when we improve the bandwidth management
to take USB3 into account.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/switch.c | 8 +++++++-
 drivers/thunderbolt/tb.h     | 2 ++
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 29db484d2c74..c01176429d5f 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -911,7 +911,13 @@ struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end,
 	return next != prev ? next : NULL;
 }
 
-static int tb_port_get_link_speed(struct tb_port *port)
+/**
+ * tb_port_get_link_speed() - Get current link speed
+ * @port: Port to check (USB4 or CIO)
+ *
+ * Returns link speed in Gb/s or negative errno in case of failure.
+ */
+int tb_port_get_link_speed(struct tb_port *port)
 {
 	u32 val, speed;
 	int ret;
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index cb53a94fe4f8..c6f18200fe92 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -759,6 +759,8 @@ struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end,
 	for ((p) = tb_next_port_on_path((src), (dst), NULL); (p);	\
 	     (p) = tb_next_port_on_path((src), (dst), (p)))
 
+int tb_port_get_link_speed(struct tb_port *port);
+
 int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec);
 int tb_switch_find_cap(struct tb_switch *sw, enum tb_switch_cap cap);
 int tb_port_find_cap(struct tb_port *port, enum tb_port_cap cap);
-- 
2.27.0.rc2



* [PATCH 16/17] thunderbolt: Add USB3 bandwidth management
  2020-06-15 14:26 [PATCH 00/17] thunderbolt: Tunneling improvements Mika Westerberg
                   ` (14 preceding siblings ...)
  2020-06-15 14:26 ` [PATCH 15/17] thunderbolt: Make tb_port_get_link_speed() available to other files Mika Westerberg
@ 2020-06-15 14:26 ` Mika Westerberg
  2020-06-15 14:26 ` [PATCH 17/17] thunderbolt: Add KUnit tests for tunneling Mika Westerberg
  2020-06-29 15:39 ` [PATCH 00/17] thunderbolt: Tunneling improvements Mika Westerberg
  17 siblings, 0 replies; 23+ messages in thread
From: Mika Westerberg @ 2020-06-15 14:26 UTC (permalink / raw)
  To: linux-usb
  Cc: Andreas Noever, Michael Jamet, Mika Westerberg, Yehezkel Bernat,
	Greg Kroah-Hartman, Rajmohan Mani, Lukas Wunner

USB3 supports both isochronous and non-isochronous traffic. The former
requires guaranteed bandwidth and can take up to 90% of the total
bandwidth. With USB4, USB3 is tunneled over the USB4 fabric, which means
that we need to make sure there is enough bandwidth allocated for the
USB3 tunnels in addition to the DisplayPort tunnels.

Whereas DisplayPort bandwidth management is static and done before the
DP tunnel is established, the USB3 bandwidth management is dynamic and
allows increasing and decreasing the allocated bandwidth according to
what is currently consumed. This is done through host router USB3
downstream adapter registers.

Add USB3 bandwidth management to the software connection manager so that
we always try to allocate the maximum bandwidth for DP tunnels; whatever
is left is then allocated for USB3.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/path.c   |  13 +-
 drivers/thunderbolt/tb.c     | 340 ++++++++++++++++++++++++++---------
 drivers/thunderbolt/tb.h     |   4 +-
 drivers/thunderbolt/tunnel.c | 255 ++++++++++++++++++++++++--
 drivers/thunderbolt/tunnel.h |  31 +++-
 5 files changed, 532 insertions(+), 111 deletions(-)

diff --git a/drivers/thunderbolt/path.c b/drivers/thunderbolt/path.c
index 854ff3412161..03e7b714deab 100644
--- a/drivers/thunderbolt/path.c
+++ b/drivers/thunderbolt/path.c
@@ -570,21 +570,20 @@ bool tb_path_is_invalid(struct tb_path *path)
 }
 
 /**
- * tb_path_switch_on_path() - Does the path go through certain switch
+ * tb_path_port_on_path() - Does the path go through a certain port
  * @path: Path to check
- * @sw: Switch to check
+ * @port: Port to check
  *
- * Goes over all hops on path and checks if @sw is any of them.
+ * Goes over all hops on path and checks if @port is any of them.
  * Direction does not matter.
  */
-bool tb_path_switch_on_path(const struct tb_path *path,
-			    const struct tb_switch *sw)
+bool tb_path_port_on_path(const struct tb_path *path, const struct tb_port *port)
 {
 	int i;
 
 	for (i = 0; i < path->path_length; i++) {
-		if (path->hops[i].in_port->sw == sw ||
-		    path->hops[i].out_port->sw == sw)
+		if (path->hops[i].in_port == port ||
+		    path->hops[i].out_port == port)
 			return true;
 	}
 
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 53f9673c1395..bbcf0f25617c 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -216,9 +216,187 @@ static struct tb_port *tb_find_usb3_down(struct tb_switch *sw,
 	return NULL;
 }
 
+static struct tb_tunnel *tb_find_tunnel(struct tb *tb, enum tb_tunnel_type type,
+					struct tb_port *src_port,
+					struct tb_port *dst_port)
+{
+	struct tb_cm *tcm = tb_priv(tb);
+	struct tb_tunnel *tunnel;
+
+	list_for_each_entry(tunnel, &tcm->tunnel_list, list) {
+		if (tunnel->type == type &&
+		    ((src_port && src_port == tunnel->src_port) ||
+		     (dst_port && dst_port == tunnel->dst_port))) {
+			return tunnel;
+		}
+	}
+
+	return NULL;
+}
+
+static struct tb_tunnel *tb_find_first_usb3_tunnel(struct tb *tb,
+						   struct tb_port *src_port,
+						   struct tb_port *dst_port)
+{
+	struct tb_port *port, *usb3_down;
+	struct tb_switch *sw;
+
+	/* Pick the router that is deepest in the topology */
+	if (dst_port->sw->config.depth > src_port->sw->config.depth)
+		sw = dst_port->sw;
+	else
+		sw = src_port->sw;
+
+	/* Can't be the host router */
+	if (sw == tb->root_switch)
+		return NULL;
+
+	/* Find the downstream USB4 port that leads to this router */
+	port = tb_port_at(tb_route(sw), tb->root_switch);
+	/* Find the corresponding host router USB3 downstream port */
+	usb3_down = usb4_switch_map_usb3_down(tb->root_switch, port);
+	if (!usb3_down)
+		return NULL;
+
+	return tb_find_tunnel(tb, TB_TUNNEL_USB3, usb3_down, NULL);
+}
+
+static int tb_available_bandwidth(struct tb *tb, struct tb_port *src_port,
+	struct tb_port *dst_port, int *available_up, int *available_down)
+{
+	int usb3_consumed_up, usb3_consumed_down, ret;
+	struct tb_cm *tcm = tb_priv(tb);
+	struct tb_tunnel *tunnel;
+	struct tb_port *port;
+
+	tb_port_dbg(dst_port, "calculating available bandwidth\n");
+
+	tunnel = tb_find_first_usb3_tunnel(tb, src_port, dst_port);
+	if (tunnel) {
+		ret = tb_tunnel_consumed_bandwidth(tunnel, &usb3_consumed_up,
+						   &usb3_consumed_down);
+		if (ret)
+			return ret;
+	} else {
+		usb3_consumed_up = 0;
+		usb3_consumed_down = 0;
+	}
+
+	*available_up = *available_down = 40000;
+
+	/* Find the minimum available bandwidth over all links */
+	tb_for_each_port_on_path(src_port, dst_port, port) {
+		int link_speed, link_width, up_bw, down_bw;
+
+		if (!tb_port_is_null(port))
+			continue;
+
+		if (tb_is_upstream_port(port)) {
+			link_speed = port->sw->link_speed;
+		} else {
+			link_speed = tb_port_get_link_speed(port);
+			if (link_speed < 0)
+				return link_speed;
+		}
+
+		link_width = port->bonded ? 2 : 1;
+
+		up_bw = link_speed * link_width * 1000; /* Mb/s */
+		/* Leave 10% guard band */
+		up_bw -= up_bw / 10;
+		down_bw = up_bw;
+
+		tb_port_dbg(port, "link total bandwidth %d Mb/s\n", up_bw);
+
+		/*
+		 * Find all DP tunnels that cross the port and reduce
+		 * their consumed bandwidth from the available.
+		 */
+		list_for_each_entry(tunnel, &tcm->tunnel_list, list) {
+			int dp_consumed_up, dp_consumed_down;
+
+			if (!tb_tunnel_is_dp(tunnel))
+				continue;
+
+			if (!tb_tunnel_port_on_path(tunnel, port))
+				continue;
+
+			ret = tb_tunnel_consumed_bandwidth(tunnel,
+							   &dp_consumed_up,
+							   &dp_consumed_down);
+			if (ret)
+				return ret;
+
+			up_bw -= dp_consumed_up;
+			down_bw -= dp_consumed_down;
+		}
+
+		/*
+		 * If USB3 is tunneled from the host router down to the
+		 * branch leading to the port, we need to take the USB3
+		 * consumed bandwidth into account regardless of whether
+		 * it actually crosses the port.
+		 */
+		up_bw -= usb3_consumed_up;
+		down_bw -= usb3_consumed_down;
+
+		if (up_bw < *available_up)
+			*available_up = up_bw;
+		if (down_bw < *available_down)
+			*available_down = down_bw;
+	}
+
+	if (*available_up < 0)
+		*available_up = 0;
+	if (*available_down < 0)
+		*available_down = 0;
+
+	return 0;
+}
+
+static int tb_release_unused_usb3_bandwidth(struct tb *tb,
+					    struct tb_port *src_port,
+					    struct tb_port *dst_port)
+{
+	struct tb_tunnel *tunnel;
+
+	tunnel = tb_find_first_usb3_tunnel(tb, src_port, dst_port);
+	return tunnel ? tb_tunnel_release_unused_bandwidth(tunnel) : 0;
+}
+
+static void tb_reclaim_usb3_bandwidth(struct tb *tb, struct tb_port *src_port,
+				      struct tb_port *dst_port)
+{
+	int ret, available_up, available_down;
+	struct tb_tunnel *tunnel;
+
+	tunnel = tb_find_first_usb3_tunnel(tb, src_port, dst_port);
+	if (!tunnel)
+		return;
+
+	tb_dbg(tb, "reclaiming unused bandwidth for USB3\n");
+
+	/*
+	 * Calculate available bandwidth for the first hop USB3 tunnel.
+	 * That determines the whole USB3 bandwidth for this branch.
+	 */
+	ret = tb_available_bandwidth(tb, tunnel->src_port, tunnel->dst_port,
+				     &available_up, &available_down);
+	if (ret) {
+		tb_warn(tb, "failed to calculate available bandwidth\n");
+		return;
+	}
+
+	tb_dbg(tb, "available bandwidth for USB3 %d/%d Mb/s\n",
+	       available_up, available_down);
+
+	tb_tunnel_reclaim_available_bandwidth(tunnel, &available_up, &available_down);
+}
+
 static int tb_tunnel_usb3(struct tb *tb, struct tb_switch *sw)
 {
 	struct tb_switch *parent = tb_switch_parent(sw);
+	int ret, available_up, available_down;
 	struct tb_port *up, *down, *port;
 	struct tb_cm *tcm = tb_priv(tb);
 	struct tb_tunnel *tunnel;
@@ -249,21 +427,48 @@ static int tb_tunnel_usb3(struct tb *tb, struct tb_switch *sw)
 		parent_up = tb_switch_find_port(parent, TB_TYPE_USB3_UP);
 		if (!parent_up || !tb_port_is_enabled(parent_up))
 			return 0;
+
+		/* Make all unused bandwidth available for the new tunnel */
+		ret = tb_release_unused_usb3_bandwidth(tb, down, up);
+		if (ret)
+			return ret;
 	}
 
-	tunnel = tb_tunnel_alloc_usb3(tb, up, down);
-	if (!tunnel)
-		return -ENOMEM;
+	ret = tb_available_bandwidth(tb, down, up, &available_up,
+				     &available_down);
+	if (ret)
+		goto err_reclaim;
+
+	tb_port_dbg(up, "available bandwidth for new USB3 tunnel %d/%d Mb/s\n",
+		    available_up, available_down);
+
+	tunnel = tb_tunnel_alloc_usb3(tb, up, down, available_up,
+				      available_down);
+	if (!tunnel) {
+		ret = -ENOMEM;
+		goto err_reclaim;
+	}
 
 	if (tb_tunnel_activate(tunnel)) {
 		tb_port_info(up,
 			     "USB3 tunnel activation failed, aborting\n");
-		tb_tunnel_free(tunnel);
-		return -EIO;
+		ret = -EIO;
+		goto err_free;
 	}
 
 	list_add_tail(&tunnel->list, &tcm->tunnel_list);
+	if (tb_route(parent))
+		tb_reclaim_usb3_bandwidth(tb, down, up);
+
 	return 0;
+
+err_free:
+	tb_tunnel_free(tunnel);
+err_reclaim:
+	if (tb_route(parent))
+		tb_reclaim_usb3_bandwidth(tb, down, up);
+
+	return ret;
 }
 
 static int tb_create_usb3_tunnels(struct tb_switch *sw)
@@ -403,40 +608,40 @@ static void tb_scan_port(struct tb_port *port)
 	tb_scan_switch(sw);
 }
 
-static struct tb_tunnel *tb_find_tunnel(struct tb *tb, enum tb_tunnel_type type,
-					struct tb_port *src_port,
-					struct tb_port *dst_port)
-{
-	struct tb_cm *tcm = tb_priv(tb);
-	struct tb_tunnel *tunnel;
-
-	list_for_each_entry(tunnel, &tcm->tunnel_list, list) {
-		if (tunnel->type == type &&
-		    ((src_port && src_port == tunnel->src_port) ||
-		     (dst_port && dst_port == tunnel->dst_port))) {
-			return tunnel;
-		}
-	}
-
-	return NULL;
-}
-
 static void tb_deactivate_and_free_tunnel(struct tb_tunnel *tunnel)
 {
+	struct tb_port *src_port, *dst_port;
+	struct tb *tb;
+
 	if (!tunnel)
 		return;
 
 	tb_tunnel_deactivate(tunnel);
 	list_del(&tunnel->list);
 
-	/*
-	 * In case of DP tunnel make sure the DP IN resource is deallocated
-	 * properly.
-	 */
-	if (tb_tunnel_is_dp(tunnel)) {
-		struct tb_port *in = tunnel->src_port;
+	tb = tunnel->tb;
+	src_port = tunnel->src_port;
+	dst_port = tunnel->dst_port;
+
+	switch (tunnel->type) {
+	case TB_TUNNEL_DP:
+		/*
+		 * In case of DP tunnel make sure the DP IN resource is
+		 * deallocated properly.
+		 */
+		tb_switch_dealloc_dp_resource(src_port->sw, src_port);
+		fallthrough;
 
-		tb_switch_dealloc_dp_resource(in->sw, in);
+	case TB_TUNNEL_USB3:
+		tb_reclaim_usb3_bandwidth(tb, src_port, dst_port);
+		break;
+
+	default:
+		/*
+		 * PCIe and DMA tunnels do not consume guaranteed
+		 * bandwidth.
+		 */
+		break;
 	}
 
 	tb_tunnel_free(tunnel);
@@ -530,46 +735,6 @@ static struct tb_port *tb_find_pcie_down(struct tb_switch *sw,
 	return tb_find_unused_port(sw, TB_TYPE_PCIE_DOWN);
 }
 
-static int tb_available_bw(struct tb_cm *tcm, struct tb_port *in,
-			   struct tb_port *out)
-{
-	struct tb_switch *sw = out->sw;
-	struct tb_tunnel *tunnel;
-	int ret, bw, available_bw = 40000;
-
-	while (sw && sw != in->sw) {
-		bw = sw->link_speed * sw->link_width * 1000; /* Mb/s */
-		/* Leave 10% guard band */
-		bw -= bw / 10;
-
-		/*
-		 * Check for any active DP tunnels that go through this
-		 * switch and reduce their consumed bandwidth from
-		 * available.
-		 */
-		list_for_each_entry(tunnel, &tcm->tunnel_list, list) {
-			int consumed_bw;
-
-			if (!tb_tunnel_switch_on_path(tunnel, sw))
-				continue;
-
-			ret = tb_tunnel_consumed_bandwidth(tunnel, NULL,
-							   &consumed_bw);
-			if (ret)
-				return ret;
-
-			bw -= consumed_bw;
-		}
-
-		if (bw < available_bw)
-			available_bw = bw;
-
-		sw = tb_switch_parent(sw);
-	}
-
-	return available_bw;
-}
-
 static struct tb_port *tb_find_dp_out(struct tb *tb, struct tb_port *in)
 {
 	struct tb_port *host_port, *port;
@@ -609,10 +774,10 @@ static struct tb_port *tb_find_dp_out(struct tb *tb, struct tb_port *in)
 
 static void tb_tunnel_dp(struct tb *tb)
 {
+	int available_up, available_down, ret;
 	struct tb_cm *tcm = tb_priv(tb);
 	struct tb_port *port, *in, *out;
 	struct tb_tunnel *tunnel;
-	int available_bw;
 
 	/*
 	 * Find pair of inactive DP IN and DP OUT adapters and then
@@ -654,32 +819,41 @@ static void tb_tunnel_dp(struct tb *tb)
 		return;
 	}
 
-	/* Calculate available bandwidth between in and out */
-	available_bw = tb_available_bw(tcm, in, out);
-	if (available_bw < 0) {
-		tb_warn(tb, "failed to determine available bandwidth\n");
-		return;
+	/* Make all unused USB3 bandwidth available for the new DP tunnel */
+	ret = tb_release_unused_usb3_bandwidth(tb, in, out);
+	if (ret) {
+		tb_warn(tb, "failed to release unused bandwidth\n");
+		goto err_dealloc_dp;
 	}
 
-	tb_dbg(tb, "available bandwidth for new DP tunnel %u Mb/s\n",
-	       available_bw);
+	ret = tb_available_bandwidth(tb, in, out, &available_up,
+				     &available_down);
+	if (ret)
+		goto err_reclaim;
+
+	tb_dbg(tb, "available bandwidth for new DP tunnel %u/%u Mb/s\n",
+	       available_up, available_down);
 
-	tunnel = tb_tunnel_alloc_dp(tb, in, out, available_bw);
+	tunnel = tb_tunnel_alloc_dp(tb, in, out, available_up, available_down);
 	if (!tunnel) {
 		tb_port_dbg(out, "could not allocate DP tunnel\n");
-		goto dealloc_dp;
+		goto err_reclaim;
 	}
 
 	if (tb_tunnel_activate(tunnel)) {
 		tb_port_info(out, "DP tunnel activation failed, aborting\n");
-		tb_tunnel_free(tunnel);
-		goto dealloc_dp;
+		goto err_free;
 	}
 
 	list_add_tail(&tunnel->list, &tcm->tunnel_list);
+	tb_reclaim_usb3_bandwidth(tb, in, out);
 	return;
 
-dealloc_dp:
+err_free:
+	tb_tunnel_free(tunnel);
+err_reclaim:
+	tb_reclaim_usb3_bandwidth(tb, in, out);
+err_dealloc_dp:
 	tb_switch_dealloc_dp_resource(in->sw, in);
 }
 
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index c6f18200fe92..a62db231f07b 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -789,8 +789,8 @@ void tb_path_free(struct tb_path *path);
 int tb_path_activate(struct tb_path *path);
 void tb_path_deactivate(struct tb_path *path);
 bool tb_path_is_invalid(struct tb_path *path);
-bool tb_path_switch_on_path(const struct tb_path *path,
-			    const struct tb_switch *sw);
+bool tb_path_port_on_path(const struct tb_path *path,
+			  const struct tb_port *port);
 
 int tb_drom_read(struct tb_switch *sw);
 int tb_drom_read_uid_only(struct tb_switch *sw, u64 *uid);
diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c
index 7896f8b7a69c..2aae2c76d880 100644
--- a/drivers/thunderbolt/tunnel.c
+++ b/drivers/thunderbolt/tunnel.c
@@ -423,7 +423,7 @@ static int tb_dp_xchg_caps(struct tb_tunnel *tunnel)
 	u32 out_dp_cap, out_rate, out_lanes, in_dp_cap, in_rate, in_lanes, bw;
 	struct tb_port *out = tunnel->dst_port;
 	struct tb_port *in = tunnel->src_port;
-	int ret;
+	int ret, max_bw;
 
 	/*
 	 * Copy DP_LOCAL_CAP register to DP_REMOTE_CAP register for
@@ -472,10 +472,15 @@ static int tb_dp_xchg_caps(struct tb_tunnel *tunnel)
 	tb_port_dbg(out, "maximum supported bandwidth %u Mb/s x%u = %u Mb/s\n",
 		    out_rate, out_lanes, bw);
 
-	if (tunnel->max_bw && bw > tunnel->max_bw) {
+	if (in->sw->config.depth < out->sw->config.depth)
+		max_bw = tunnel->max_down;
+	else
+		max_bw = tunnel->max_up;
+
+	if (max_bw && bw > max_bw) {
 		u32 new_rate, new_lanes, new_bw;
 
-		ret = tb_dp_reduce_bandwidth(tunnel->max_bw, in_rate, in_lanes,
+		ret = tb_dp_reduce_bandwidth(max_bw, in_rate, in_lanes,
 					     out_rate, out_lanes, &new_rate,
 					     &new_lanes);
 		if (ret) {
@@ -720,7 +725,10 @@ struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in)
  * @tb: Pointer to the domain structure
  * @in: DP in adapter port
  * @out: DP out adapter port
- * @max_bw: Maximum available bandwidth for the DP tunnel (%0 if not limited)
+ * @max_up: Maximum available upstream bandwidth for the DP tunnel (%0
+ *	    if not limited)
+ * @max_down: Maximum available downstream bandwidth for the DP tunnel
+ *	      (%0 if not limited)
  *
  * Allocates a tunnel between @in and @out that is capable of tunneling
  * Display Port traffic.
@@ -728,7 +736,8 @@ struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in)
  * Return: Returns a tb_tunnel on success or NULL on failure.
  */
 struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in,
-				     struct tb_port *out, int max_bw)
+				     struct tb_port *out, int max_up,
+				     int max_down)
 {
 	struct tb_tunnel *tunnel;
 	struct tb_path **paths;
@@ -746,7 +755,8 @@ struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in,
 	tunnel->consumed_bandwidth = tb_dp_consumed_bandwidth;
 	tunnel->src_port = in;
 	tunnel->dst_port = out;
-	tunnel->max_bw = max_bw;
+	tunnel->max_up = max_up;
+	tunnel->max_down = max_down;
 
 	paths = tunnel->paths;
 
@@ -866,6 +876,33 @@ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
 	return tunnel;
 }
 
+static int tb_usb3_max_link_rate(struct tb_port *up, struct tb_port *down)
+{
+	int ret, up_max_rate, down_max_rate;
+
+	ret = usb4_usb3_port_max_link_rate(up);
+	if (ret < 0)
+		return ret;
+	up_max_rate = ret;
+
+	ret = usb4_usb3_port_max_link_rate(down);
+	if (ret < 0)
+		return ret;
+	down_max_rate = ret;
+
+	return min(up_max_rate, down_max_rate);
+}
+
+static int tb_usb3_init(struct tb_tunnel *tunnel)
+{
+	tb_tunnel_dbg(tunnel, "allocating initial bandwidth %d/%d Mb/s\n",
+		      tunnel->allocated_up, tunnel->allocated_down);
+
+	return usb4_usb3_port_allocate_bandwidth(tunnel->src_port,
+						 &tunnel->allocated_up,
+						 &tunnel->allocated_down);
+}
+
 static int tb_usb3_activate(struct tb_tunnel *tunnel, bool activate)
 {
 	int res;
@@ -880,6 +917,86 @@ static int tb_usb3_activate(struct tb_tunnel *tunnel, bool activate)
 	return 0;
 }
 
+static int tb_usb3_consumed_bandwidth(struct tb_tunnel *tunnel,
+		int *consumed_up, int *consumed_down)
+{
+	/*
+	 * PCIe tunneling affects the USB3 bandwidth, so take that
+	 * into account here.
+	 */
+	*consumed_up = tunnel->allocated_up * (3 + 1) / 3;
+	*consumed_down = tunnel->allocated_down * (3 + 1) / 3;
+	return 0;
+}
+
+static int tb_usb3_release_unused_bandwidth(struct tb_tunnel *tunnel)
+{
+	int ret;
+
+	ret = usb4_usb3_port_release_bandwidth(tunnel->src_port,
+					       &tunnel->allocated_up,
+					       &tunnel->allocated_down);
+	if (ret)
+		return ret;
+
+	tb_tunnel_dbg(tunnel, "decreased bandwidth allocation to %d/%d Mb/s\n",
+		      tunnel->allocated_up, tunnel->allocated_down);
+	return 0;
+}
+
+static void tb_usb3_reclaim_available_bandwidth(struct tb_tunnel *tunnel,
+						int *available_up,
+						int *available_down)
+{
+	int ret, max_rate, allocate_up, allocate_down;
+
+	ret = usb4_usb3_port_actual_link_rate(tunnel->src_port);
+	if (ret <= 0) {
+		tb_tunnel_warn(tunnel, "tunnel is not up\n");
+		return;
+	}
+	/*
+	 * 90% of the max rate can be allocated for isochronous
+	 * transfers.
+	 */
+	max_rate = ret * 90 / 100;
+
+	/* No need to reclaim if already at maximum */
+	if (tunnel->allocated_up >= max_rate &&
+	    tunnel->allocated_down >= max_rate)
+		return;
+
+	/* Don't go lower than what is already allocated */
+	allocate_up = min(max_rate, *available_up);
+	if (allocate_up < tunnel->allocated_up)
+		allocate_up = tunnel->allocated_up;
+
+	allocate_down = min(max_rate, *available_down);
+	if (allocate_down < tunnel->allocated_down)
+		allocate_down = tunnel->allocated_down;
+
+	/* If no changes no need to do more */
+	if (allocate_up == tunnel->allocated_up &&
+	    allocate_down == tunnel->allocated_down)
+		return;
+
+	ret = usb4_usb3_port_allocate_bandwidth(tunnel->src_port, &allocate_up,
+						&allocate_down);
+	if (ret) {
+		tb_tunnel_info(tunnel, "failed to allocate bandwidth\n");
+		return;
+	}
+
+	tunnel->allocated_up = allocate_up;
+	*available_up -= tunnel->allocated_up;
+
+	tunnel->allocated_down = allocate_down;
+	*available_down -= tunnel->allocated_down;
+
+	tb_tunnel_dbg(tunnel, "increased bandwidth allocation to %d/%d Mb/s\n",
+		      tunnel->allocated_up, tunnel->allocated_down);
+}
+
 static void tb_usb3_init_path(struct tb_path *path)
 {
 	path->egress_fc_enable = TB_PATH_SOURCE | TB_PATH_INTERNAL;
@@ -960,6 +1077,29 @@ struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down)
 		goto err_deactivate;
 	}
 
+	if (!tb_route(down->sw)) {
+		int ret;
+
+		/*
+		 * Read the initial bandwidth allocation for the first
+		 * hop tunnel.
+		 */
+		ret = usb4_usb3_port_allocated_bandwidth(down,
+			&tunnel->allocated_up, &tunnel->allocated_down);
+		if (ret)
+			goto err_deactivate;
+
+		tb_tunnel_dbg(tunnel, "currently allocated bandwidth %d/%d Mb/s\n",
+			      tunnel->allocated_up, tunnel->allocated_down);
+
+		tunnel->init = tb_usb3_init;
+		tunnel->consumed_bandwidth = tb_usb3_consumed_bandwidth;
+		tunnel->release_unused_bandwidth =
+			tb_usb3_release_unused_bandwidth;
+		tunnel->reclaim_available_bandwidth =
+			tb_usb3_reclaim_available_bandwidth;
+	}
+
 	tb_tunnel_dbg(tunnel, "discovered\n");
 	return tunnel;
 
@@ -976,6 +1116,10 @@ struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down)
  * @tb: Pointer to the domain structure
  * @up: USB3 upstream adapter port
  * @down: USB3 downstream adapter port
+ * @max_up: Maximum available upstream bandwidth for the USB3 tunnel (%0
+ *	    if not limited).
+ * @max_down: Maximum available downstream bandwidth for the USB3 tunnel
+ *	      (%0 if not limited).
  *
  * Allocate an USB3 tunnel. The ports must be of type @TB_TYPE_USB3_UP and
  * @TB_TYPE_USB3_DOWN.
@@ -983,10 +1127,32 @@ struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down)
  * Return: Returns a tb_tunnel on success or %NULL on failure.
  */
 struct tb_tunnel *tb_tunnel_alloc_usb3(struct tb *tb, struct tb_port *up,
-				       struct tb_port *down)
+				       struct tb_port *down, int max_up,
+				       int max_down)
 {
 	struct tb_tunnel *tunnel;
 	struct tb_path *path;
+	int max_rate = 0;
+
+	/*
+	 * Check that we have enough bandwidth available for the new
+	 * USB3 tunnel.
+	 */
+	if (max_up > 0 || max_down > 0) {
+		max_rate = tb_usb3_max_link_rate(down, up);
+		if (max_rate < 0)
+			return NULL;
+
+		/* Only 90% can be allocated for USB3 isochronous transfers */
+		max_rate = max_rate * 90 / 100;
+		tb_port_dbg(up, "required bandwidth for USB3 tunnel %d Mb/s\n",
+			    max_rate);
+
+		if (max_rate > max_up || max_rate > max_down) {
+			tb_port_warn(up, "not enough bandwidth for USB3 tunnel\n");
+			return NULL;
+		}
+	}
 
 	tunnel = tb_tunnel_alloc(tb, 2, TB_TUNNEL_USB3);
 	if (!tunnel)
@@ -995,6 +1161,8 @@ struct tb_tunnel *tb_tunnel_alloc_usb3(struct tb *tb, struct tb_port *up,
 	tunnel->activate = tb_usb3_activate;
 	tunnel->src_port = down;
 	tunnel->dst_port = up;
+	tunnel->max_up = max_up;
+	tunnel->max_down = max_down;
 
 	path = tb_path_alloc(tb, down, TB_USB3_HOPID, up, TB_USB3_HOPID, 0,
 			     "USB3 Down");
@@ -1014,6 +1182,18 @@ struct tb_tunnel *tb_tunnel_alloc_usb3(struct tb *tb, struct tb_port *up,
 	tb_usb3_init_path(path);
 	tunnel->paths[TB_USB3_PATH_UP] = path;
 
+	if (!tb_route(down->sw)) {
+		tunnel->allocated_up = max_rate;
+		tunnel->allocated_down = max_rate;
+
+		tunnel->init = tb_usb3_init;
+		tunnel->consumed_bandwidth = tb_usb3_consumed_bandwidth;
+		tunnel->release_unused_bandwidth =
+			tb_usb3_release_unused_bandwidth;
+		tunnel->reclaim_available_bandwidth =
+			tb_usb3_reclaim_available_bandwidth;
+	}
+
 	return tunnel;
 }
 
@@ -1146,22 +1326,23 @@ void tb_tunnel_deactivate(struct tb_tunnel *tunnel)
 }
 
 /**
- * tb_tunnel_switch_on_path() - Does the tunnel go through switch
+ * tb_tunnel_port_on_path() - Does the tunnel go through port
  * @tunnel: Tunnel to check
- * @sw: Switch to check
+ * @port: Port to check
  *
- * Returns true if @tunnel goes through @sw (direction does not matter),
+ * Returns true if @tunnel goes through @port (direction does not matter),
  * false otherwise.
  */
-bool tb_tunnel_switch_on_path(const struct tb_tunnel *tunnel,
-			      const struct tb_switch *sw)
+bool tb_tunnel_port_on_path(const struct tb_tunnel *tunnel,
+			    const struct tb_port *port)
 {
 	int i;
 
 	for (i = 0; i < tunnel->npaths; i++) {
 		if (!tunnel->paths[i])
 			continue;
-		if (tb_path_switch_on_path(tunnel->paths[i], sw))
+
+		if (tb_path_port_on_path(tunnel->paths[i], port))
 			return true;
 	}
 
@@ -1221,3 +1402,51 @@ int tb_tunnel_consumed_bandwidth(struct tb_tunnel *tunnel, int *consumed_up,
 
 	return 0;
 }
+
+/**
+ * tb_tunnel_release_unused_bandwidth() - Release unused bandwidth
+ * @tunnel: Tunnel whose unused bandwidth to release
+ *
+ * If the tunnel supports dynamic bandwidth management (USB3 tunnels at
+ * the moment), this function makes it release all the unused bandwidth.
+ *
+ * Returns %0 in case of success and negative errno otherwise.
+ */
+int tb_tunnel_release_unused_bandwidth(struct tb_tunnel *tunnel)
+{
+	if (!tb_tunnel_is_active(tunnel))
+		return 0;
+
+	if (tunnel->release_unused_bandwidth) {
+		int ret;
+
+		ret = tunnel->release_unused_bandwidth(tunnel);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+/**
+ * tb_tunnel_reclaim_available_bandwidth() - Reclaim available bandwidth
+ * @tunnel: Tunnel reclaiming available bandwidth
+ * @available_up: Available upstream bandwidth (in Mb/s)
+ * @available_down: Available downstream bandwidth (in Mb/s)
+ *
+ * Reclaims bandwidth from @available_up and @available_down and updates
+ * the variables accordingly (e.g. decreases both according to what was
+ * reclaimed by the tunnel). If nothing was reclaimed, the values are
+ * kept as is.
+ */
+void tb_tunnel_reclaim_available_bandwidth(struct tb_tunnel *tunnel,
+					   int *available_up,
+					   int *available_down)
+{
+	if (!tb_tunnel_is_active(tunnel))
+		return;
+
+	if (tunnel->reclaim_available_bandwidth)
+		tunnel->reclaim_available_bandwidth(tunnel, available_up,
+						    available_down);
+}
diff --git a/drivers/thunderbolt/tunnel.h b/drivers/thunderbolt/tunnel.h
index cc952b2be792..1d2a64eb060d 100644
--- a/drivers/thunderbolt/tunnel.h
+++ b/drivers/thunderbolt/tunnel.h
@@ -29,10 +29,16 @@ enum tb_tunnel_type {
  * @init: Optional tunnel specific initialization
  * @activate: Optional tunnel specific activation/deactivation
  * @consumed_bandwidth: Return how much bandwidth the tunnel consumes
+ * @release_unused_bandwidth: Release all unused bandwidth
+ * @reclaim_available_bandwidth: Reclaim back available bandwidth
  * @list: Tunnels are linked using this field
  * @type: Type of the tunnel
- * @max_bw: Maximum bandwidth (Mb/s) available for the tunnel (only for DP).
+ * @max_up: Maximum upstream bandwidth (Mb/s) available for the tunnel.
  *	    Only set if the bandwidth needs to be limited.
+ * @max_down: Maximum downstream bandwidth (Mb/s) available for the tunnel.
+ *	      Only set if the bandwidth needs to be limited.
+ * @allocated_up: Allocated upstream bandwidth (only for USB3)
+ * @allocated_down: Allocated downstream bandwidth (only for USB3)
  */
 struct tb_tunnel {
 	struct tb *tb;
@@ -44,9 +50,16 @@ struct tb_tunnel {
 	int (*activate)(struct tb_tunnel *tunnel, bool activate);
 	int (*consumed_bandwidth)(struct tb_tunnel *tunnel, int *consumed_up,
 				  int *consumed_down);
+	int (*release_unused_bandwidth)(struct tb_tunnel *tunnel);
+	void (*reclaim_available_bandwidth)(struct tb_tunnel *tunnel,
+					    int *available_up,
+					    int *available_down);
 	struct list_head list;
 	enum tb_tunnel_type type;
-	unsigned int max_bw;
+	int max_up;
+	int max_down;
+	int allocated_up;
+	int allocated_down;
 };
 
 struct tb_tunnel *tb_tunnel_discover_pci(struct tb *tb, struct tb_port *down);
@@ -54,24 +67,30 @@ struct tb_tunnel *tb_tunnel_alloc_pci(struct tb *tb, struct tb_port *up,
 				      struct tb_port *down);
 struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in);
 struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in,
-				     struct tb_port *out, int max_bw);
+				     struct tb_port *out, int max_up,
+				     int max_down);
 struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
 				      struct tb_port *dst, int transmit_ring,
 				      int transmit_path, int receive_ring,
 				      int receive_path);
 struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down);
 struct tb_tunnel *tb_tunnel_alloc_usb3(struct tb *tb, struct tb_port *up,
-				       struct tb_port *down);
+				       struct tb_port *down, int max_up,
+				       int max_down);
 
 void tb_tunnel_free(struct tb_tunnel *tunnel);
 int tb_tunnel_activate(struct tb_tunnel *tunnel);
 int tb_tunnel_restart(struct tb_tunnel *tunnel);
 void tb_tunnel_deactivate(struct tb_tunnel *tunnel);
 bool tb_tunnel_is_invalid(struct tb_tunnel *tunnel);
-bool tb_tunnel_switch_on_path(const struct tb_tunnel *tunnel,
-			      const struct tb_switch *sw);
+bool tb_tunnel_port_on_path(const struct tb_tunnel *tunnel,
+			    const struct tb_port *port);
 int tb_tunnel_consumed_bandwidth(struct tb_tunnel *tunnel, int *consumed_up,
 				 int *consumed_down);
+int tb_tunnel_release_unused_bandwidth(struct tb_tunnel *tunnel);
+void tb_tunnel_reclaim_available_bandwidth(struct tb_tunnel *tunnel,
+					   int *available_up,
+					   int *available_down);
 
 static inline bool tb_tunnel_is_pci(const struct tb_tunnel *tunnel)
 {
-- 
2.27.0.rc2



* [PATCH 17/17] thunderbolt: Add KUnit tests for tunneling
From: Mika Westerberg @ 2020-06-15 14:26 UTC (permalink / raw)
  To: linux-usb
  Cc: Andreas Noever, Michael Jamet, Mika Westerberg, Yehezkel Bernat,
	Greg Kroah-Hartman, Rajmohan Mani, Lukas Wunner

We can test some parts of tunneling, like path allocation, without
access to test hardware, so add KUnit tests for PCIe, DP and USB3
tunneling.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/test.c | 398 +++++++++++++++++++++++++++++++++++++
 1 file changed, 398 insertions(+)

diff --git a/drivers/thunderbolt/test.c b/drivers/thunderbolt/test.c
index 9e60bab46d34..acb8b6256847 100644
--- a/drivers/thunderbolt/test.c
+++ b/drivers/thunderbolt/test.c
@@ -10,6 +10,7 @@
 #include <linux/idr.h>
 
 #include "tb.h"
+#include "tunnel.h"
 
 static int __ida_init(struct kunit_resource *res, void *context)
 {
@@ -1203,6 +1204,396 @@ static void tb_test_path_mixed_chain_reverse(struct kunit *test)
 	tb_path_free(path);
 }
 
+static void tb_test_tunnel_pcie(struct kunit *test)
+{
+	struct tb_switch *host, *dev1, *dev2;
+	struct tb_tunnel *tunnel1, *tunnel2;
+	struct tb_port *down, *up;
+
+	/*
+	 * Create PCIe tunnel between host and two devices.
+	 *
+	 *   [Host]
+	 *    1 |
+	 *    1 |
+	 *  [Device #1]
+	 *    5 |
+	 *    1 |
+	 *  [Device #2]
+	 */
+	host = alloc_host(test);
+	dev1 = alloc_dev_default(test, host, 0x1, true);
+	dev2 = alloc_dev_default(test, dev1, 0x501, true);
+
+	down = &host->ports[8];
+	up = &dev1->ports[9];
+	tunnel1 = tb_tunnel_alloc_pci(NULL, up, down);
+	KUNIT_ASSERT_TRUE(test, tunnel1 != NULL);
+	KUNIT_EXPECT_EQ(test, tunnel1->type, (enum tb_tunnel_type)TB_TUNNEL_PCI);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel1->src_port, down);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel1->dst_port, up);
+	KUNIT_ASSERT_EQ(test, tunnel1->npaths, (size_t)2);
+	KUNIT_ASSERT_EQ(test, tunnel1->paths[0]->path_length, 2);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel1->paths[0]->hops[0].in_port, down);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel1->paths[0]->hops[1].out_port, up);
+	KUNIT_ASSERT_EQ(test, tunnel1->paths[1]->path_length, 2);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel1->paths[1]->hops[0].in_port, up);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel1->paths[1]->hops[1].out_port, down);
+
+	down = &dev1->ports[10];
+	up = &dev2->ports[9];
+	tunnel2 = tb_tunnel_alloc_pci(NULL, up, down);
+	KUNIT_ASSERT_TRUE(test, tunnel2 != NULL);
+	KUNIT_EXPECT_EQ(test, tunnel2->type, (enum tb_tunnel_type)TB_TUNNEL_PCI);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel2->src_port, down);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel2->dst_port, up);
+	KUNIT_ASSERT_EQ(test, tunnel2->npaths, (size_t)2);
+	KUNIT_ASSERT_EQ(test, tunnel2->paths[0]->path_length, 2);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel2->paths[0]->hops[0].in_port, down);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel2->paths[0]->hops[1].out_port, up);
+	KUNIT_ASSERT_EQ(test, tunnel2->paths[1]->path_length, 2);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel2->paths[1]->hops[0].in_port, up);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel2->paths[1]->hops[1].out_port, down);
+
+	tb_tunnel_free(tunnel2);
+	tb_tunnel_free(tunnel1);
+}
+
+static void tb_test_tunnel_dp(struct kunit *test)
+{
+	struct tb_switch *host, *dev;
+	struct tb_port *in, *out;
+	struct tb_tunnel *tunnel;
+
+	/*
+	 * Create DP tunnel between Host and Device
+	 *
+	 *   [Host]
+	 *   1 |
+	 *   1 |
+	 *  [Device]
+	 */
+	host = alloc_host(test);
+	dev = alloc_dev_default(test, host, 0x3, true);
+
+	in = &host->ports[5];
+	out = &dev->ports[13];
+
+	tunnel = tb_tunnel_alloc_dp(NULL, in, out, 0, 0);
+	KUNIT_ASSERT_TRUE(test, tunnel != NULL);
+	KUNIT_EXPECT_EQ(test, tunnel->type, (enum tb_tunnel_type)TB_TUNNEL_DP);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, in);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->dst_port, out);
+	KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)3);
+	KUNIT_ASSERT_EQ(test, tunnel->paths[0]->path_length, 2);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].in_port, in);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[1].out_port, out);
+	KUNIT_ASSERT_EQ(test, tunnel->paths[1]->path_length, 2);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[0].in_port, in);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[1].out_port, out);
+	KUNIT_ASSERT_EQ(test, tunnel->paths[2]->path_length, 2);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[2]->hops[0].in_port, out);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[2]->hops[1].out_port, in);
+	tb_tunnel_free(tunnel);
+}
+
+static void tb_test_tunnel_dp_chain(struct kunit *test)
+{
+	struct tb_switch *host, *dev1, *dev4;
+	struct tb_port *in, *out;
+	struct tb_tunnel *tunnel;
+
+	/*
+	 * Create DP tunnel from Host DP IN to Device #4 DP OUT.
+	 *
+	 *           [Host]
+	 *            1 |
+	 *            1 |
+	 *         [Device #1]
+	 *       3 /   | 5  \ 7
+	 *      1 /    |     \ 1
+	 * [Device #2] |    [Device #4]
+	 *             | 1
+	 *         [Device #3]
+	 */
+	host = alloc_host(test);
+	dev1 = alloc_dev_default(test, host, 0x1, true);
+	alloc_dev_default(test, dev1, 0x301, true);
+	alloc_dev_default(test, dev1, 0x501, true);
+	dev4 = alloc_dev_default(test, dev1, 0x701, true);
+
+	in = &host->ports[5];
+	out = &dev4->ports[14];
+
+	tunnel = tb_tunnel_alloc_dp(NULL, in, out, 0, 0);
+	KUNIT_ASSERT_TRUE(test, tunnel != NULL);
+	KUNIT_EXPECT_EQ(test, tunnel->type, (enum tb_tunnel_type)TB_TUNNEL_DP);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, in);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->dst_port, out);
+	KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)3);
+	KUNIT_ASSERT_EQ(test, tunnel->paths[0]->path_length, 3);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].in_port, in);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[2].out_port, out);
+	KUNIT_ASSERT_EQ(test, tunnel->paths[1]->path_length, 3);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[0].in_port, in);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[2].out_port, out);
+	KUNIT_ASSERT_EQ(test, tunnel->paths[2]->path_length, 3);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[2]->hops[0].in_port, out);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[2]->hops[2].out_port, in);
+	tb_tunnel_free(tunnel);
+}
+
+static void tb_test_tunnel_dp_tree(struct kunit *test)
+{
+	struct tb_switch *host, *dev1, *dev2, *dev3, *dev5;
+	struct tb_port *in, *out;
+	struct tb_tunnel *tunnel;
+
+	/*
+	 * Create DP tunnel from Device #2 DP IN to Device #5 DP OUT.
+	 *
+	 *          [Host]
+	 *           3 |
+	 *           1 |
+	 *         [Device #1]
+	 *       3 /   | 5  \ 7
+	 *      1 /    |     \ 1
+	 * [Device #2] |    [Device #4]
+	 *             | 1
+	 *         [Device #3]
+	 *             | 5
+	 *             | 1
+	 *         [Device #5]
+	 */
+	host = alloc_host(test);
+	dev1 = alloc_dev_default(test, host, 0x3, true);
+	dev2 = alloc_dev_with_dpin(test, dev1, 0x303, true);
+	dev3 = alloc_dev_default(test, dev1, 0x503, true);
+	alloc_dev_default(test, dev1, 0x703, true);
+	dev5 = alloc_dev_default(test, dev3, 0x50503, true);
+
+	in = &dev2->ports[13];
+	out = &dev5->ports[13];
+
+	tunnel = tb_tunnel_alloc_dp(NULL, in, out, 0, 0);
+	KUNIT_ASSERT_TRUE(test, tunnel != NULL);
+	KUNIT_EXPECT_EQ(test, tunnel->type, (enum tb_tunnel_type)TB_TUNNEL_DP);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, in);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->dst_port, out);
+	KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)3);
+	KUNIT_ASSERT_EQ(test, tunnel->paths[0]->path_length, 4);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].in_port, in);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[3].out_port, out);
+	KUNIT_ASSERT_EQ(test, tunnel->paths[1]->path_length, 4);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[0].in_port, in);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[3].out_port, out);
+	KUNIT_ASSERT_EQ(test, tunnel->paths[2]->path_length, 4);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[2]->hops[0].in_port, out);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[2]->hops[3].out_port, in);
+	tb_tunnel_free(tunnel);
+}
+
+static void tb_test_tunnel_dp_max_length(struct kunit *test)
+{
+	struct tb_switch *host, *dev1, *dev2, *dev3, *dev4, *dev5, *dev6;
+	struct tb_switch *dev7, *dev8, *dev9, *dev10, *dev11, *dev12;
+	struct tb_port *in, *out;
+	struct tb_tunnel *tunnel;
+
+	/*
+	 * Creates DP tunnel from Device #6 to Device #12.
+	 *
+	 *          [Host]
+	 *         1 /  \ 3
+	 *        1 /    \ 1
+	 * [Device #1]   [Device #7]
+	 *     3 |           | 3
+	 *     1 |           | 1
+	 * [Device #2]   [Device #8]
+	 *     3 |           | 3
+	 *     1 |           | 1
+	 * [Device #3]   [Device #9]
+	 *     3 |           | 3
+	 *     1 |           | 1
+	 * [Device #4]   [Device #10]
+	 *     3 |           | 3
+	 *     1 |           | 1
+	 * [Device #5]   [Device #11]
+	 *     3 |           | 3
+	 *     1 |           | 1
+	 * [Device #6]   [Device #12]
+	 */
+	host = alloc_host(test);
+	dev1 = alloc_dev_default(test, host, 0x1, true);
+	dev2 = alloc_dev_default(test, dev1, 0x301, true);
+	dev3 = alloc_dev_default(test, dev2, 0x30301, true);
+	dev4 = alloc_dev_default(test, dev3, 0x3030301, true);
+	dev5 = alloc_dev_default(test, dev4, 0x303030301, true);
+	dev6 = alloc_dev_with_dpin(test, dev5, 0x30303030301, true);
+	dev7 = alloc_dev_default(test, host, 0x3, true);
+	dev8 = alloc_dev_default(test, dev7, 0x303, true);
+	dev9 = alloc_dev_default(test, dev8, 0x30303, true);
+	dev10 = alloc_dev_default(test, dev9, 0x3030303, true);
+	dev11 = alloc_dev_default(test, dev10, 0x303030303, true);
+	dev12 = alloc_dev_default(test, dev11, 0x30303030303, true);
+
+	in = &dev6->ports[13];
+	out = &dev12->ports[13];
+
+	tunnel = tb_tunnel_alloc_dp(NULL, in, out, 0, 0);
+	KUNIT_ASSERT_TRUE(test, tunnel != NULL);
+	KUNIT_EXPECT_EQ(test, tunnel->type, (enum tb_tunnel_type)TB_TUNNEL_DP);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, in);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->dst_port, out);
+	KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)3);
+	KUNIT_ASSERT_EQ(test, tunnel->paths[0]->path_length, 13);
+	/* First hop */
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].in_port, in);
+	/* Middle */
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[6].in_port,
+			    &host->ports[1]);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[6].out_port,
+			    &host->ports[3]);
+	/* Last */
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[12].out_port, out);
+	KUNIT_ASSERT_EQ(test, tunnel->paths[1]->path_length, 13);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[0].in_port, in);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[6].in_port,
+			    &host->ports[1]);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[6].out_port,
+			    &host->ports[3]);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[12].out_port, out);
+	KUNIT_ASSERT_EQ(test, tunnel->paths[2]->path_length, 13);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[2]->hops[0].in_port, out);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[2]->hops[6].in_port,
+			    &host->ports[3]);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[2]->hops[6].out_port,
+			    &host->ports[1]);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[2]->hops[12].out_port, in);
+	tb_tunnel_free(tunnel);
+}
+
+static void tb_test_tunnel_usb3(struct kunit *test)
+{
+	struct tb_switch *host, *dev1, *dev2;
+	struct tb_tunnel *tunnel1, *tunnel2;
+	struct tb_port *down, *up;
+
+	/*
+	 * Create USB3 tunnel between host and two devices.
+	 *
+	 *   [Host]
+	 *    1 |
+	 *    1 |
+	 *  [Device #1]
+	 *          \ 7
+	 *           \ 1
+	 *         [Device #2]
+	 */
+	host = alloc_host(test);
+	dev1 = alloc_dev_default(test, host, 0x1, true);
+	dev2 = alloc_dev_default(test, dev1, 0x701, true);
+
+	down = &host->ports[12];
+	up = &dev1->ports[16];
+	tunnel1 = tb_tunnel_alloc_usb3(NULL, up, down, 0, 0);
+	KUNIT_ASSERT_TRUE(test, tunnel1 != NULL);
+	KUNIT_EXPECT_EQ(test, tunnel1->type, (enum tb_tunnel_type)TB_TUNNEL_USB3);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel1->src_port, down);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel1->dst_port, up);
+	KUNIT_ASSERT_EQ(test, tunnel1->npaths, (size_t)2);
+	KUNIT_ASSERT_EQ(test, tunnel1->paths[0]->path_length, 2);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel1->paths[0]->hops[0].in_port, down);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel1->paths[0]->hops[1].out_port, up);
+	KUNIT_ASSERT_EQ(test, tunnel1->paths[1]->path_length, 2);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel1->paths[1]->hops[0].in_port, up);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel1->paths[1]->hops[1].out_port, down);
+
+	down = &dev1->ports[17];
+	up = &dev2->ports[16];
+	tunnel2 = tb_tunnel_alloc_usb3(NULL, up, down, 0, 0);
+	KUNIT_ASSERT_TRUE(test, tunnel2 != NULL);
+	KUNIT_EXPECT_EQ(test, tunnel2->type, (enum tb_tunnel_type)TB_TUNNEL_USB3);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel2->src_port, down);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel2->dst_port, up);
+	KUNIT_ASSERT_EQ(test, tunnel2->npaths, (size_t)2);
+	KUNIT_ASSERT_EQ(test, tunnel2->paths[0]->path_length, 2);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel2->paths[0]->hops[0].in_port, down);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel2->paths[0]->hops[1].out_port, up);
+	KUNIT_ASSERT_EQ(test, tunnel2->paths[1]->path_length, 2);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel2->paths[1]->hops[0].in_port, up);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel2->paths[1]->hops[1].out_port, down);
+
+	tb_tunnel_free(tunnel2);
+	tb_tunnel_free(tunnel1);
+}
+
+static void tb_test_tunnel_port_on_path(struct kunit *test)
+{
+	struct tb_switch *host, *dev1, *dev2, *dev3, *dev4, *dev5;
+	struct tb_port *in, *out, *port;
+	struct tb_tunnel *dp_tunnel;
+
+	/*
+	 *          [Host]
+	 *           3 |
+	 *           1 |
+	 *         [Device #1]
+	 *       3 /   | 5  \ 7
+	 *      1 /    |     \ 1
+	 * [Device #2] |    [Device #4]
+	 *             | 1
+	 *         [Device #3]
+	 *             | 5
+	 *             | 1
+	 *         [Device #5]
+	 */
+	host = alloc_host(test);
+	dev1 = alloc_dev_default(test, host, 0x3, true);
+	dev2 = alloc_dev_with_dpin(test, dev1, 0x303, true);
+	dev3 = alloc_dev_default(test, dev1, 0x503, true);
+	dev4 = alloc_dev_default(test, dev1, 0x703, true);
+	dev5 = alloc_dev_default(test, dev3, 0x50503, true);
+
+	in = &dev2->ports[13];
+	out = &dev5->ports[13];
+
+	dp_tunnel = tb_tunnel_alloc_dp(NULL, in, out, 0, 0);
+	KUNIT_ASSERT_TRUE(test, dp_tunnel != NULL);
+
+	KUNIT_EXPECT_TRUE(test, tb_tunnel_port_on_path(dp_tunnel, in));
+	KUNIT_EXPECT_TRUE(test, tb_tunnel_port_on_path(dp_tunnel, out));
+
+	port = &host->ports[8];
+	KUNIT_EXPECT_FALSE(test, tb_tunnel_port_on_path(dp_tunnel, port));
+
+	port = &host->ports[3];
+	KUNIT_EXPECT_FALSE(test, tb_tunnel_port_on_path(dp_tunnel, port));
+
+	port = &dev1->ports[1];
+	KUNIT_EXPECT_FALSE(test, tb_tunnel_port_on_path(dp_tunnel, port));
+
+	port = &dev1->ports[3];
+	KUNIT_EXPECT_TRUE(test, tb_tunnel_port_on_path(dp_tunnel, port));
+
+	port = &dev1->ports[5];
+	KUNIT_EXPECT_TRUE(test, tb_tunnel_port_on_path(dp_tunnel, port));
+
+	port = &dev1->ports[7];
+	KUNIT_EXPECT_FALSE(test, tb_tunnel_port_on_path(dp_tunnel, port));
+
+	port = &dev3->ports[1];
+	KUNIT_EXPECT_TRUE(test, tb_tunnel_port_on_path(dp_tunnel, port));
+
+	port = &dev5->ports[1];
+	KUNIT_EXPECT_TRUE(test, tb_tunnel_port_on_path(dp_tunnel, port));
+
+	port = &dev4->ports[1];
+	KUNIT_EXPECT_FALSE(test, tb_tunnel_port_on_path(dp_tunnel, port));
+
+	tb_tunnel_free(dp_tunnel);
+}
+
 static struct kunit_case tb_test_cases[] = {
 	KUNIT_CASE(tb_test_path_basic),
 	KUNIT_CASE(tb_test_path_not_connected_walk),
@@ -1218,6 +1609,13 @@ static struct kunit_case tb_test_cases[] = {
 	KUNIT_CASE(tb_test_path_not_bonded_lane1_chain_reverse),
 	KUNIT_CASE(tb_test_path_mixed_chain),
 	KUNIT_CASE(tb_test_path_mixed_chain_reverse),
+	KUNIT_CASE(tb_test_tunnel_pcie),
+	KUNIT_CASE(tb_test_tunnel_dp),
+	KUNIT_CASE(tb_test_tunnel_dp_chain),
+	KUNIT_CASE(tb_test_tunnel_dp_tree),
+	KUNIT_CASE(tb_test_tunnel_dp_max_length),
+	KUNIT_CASE(tb_test_tunnel_port_on_path),
+	KUNIT_CASE(tb_test_tunnel_usb3),
 	{ }
 };
 
-- 
2.27.0.rc2


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [PATCH 01/17] thunderbolt: Fix path indices used in USB3 tunnel discovery
  2020-06-15 14:26 ` [PATCH 01/17] thunderbolt: Fix path indices used in USB3 tunnel discovery Mika Westerberg
@ 2020-06-25 12:51   ` Mika Westerberg
  0 siblings, 0 replies; 23+ messages in thread
From: Mika Westerberg @ 2020-06-25 12:51 UTC (permalink / raw)
  To: linux-usb
  Cc: Andreas Noever, Michael Jamet, Yehezkel Bernat,
	Greg Kroah-Hartman, Rajmohan Mani, Lukas Wunner

On Mon, Jun 15, 2020 at 05:26:29PM +0300, Mika Westerberg wrote:
> The USB3 discovery used the wrong indices when a tunnel is discovered.
> It should use TB_USB3_PATH_DOWN for the path that flows downstream and
> TB_USB3_PATH_UP when it flows upstream. This should not affect
> functionality, but it is better to fix it.
> 
> Fixes: e6f818585713 ("thunderbolt: Add support for USB 3.x tunnels")
> Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
> Cc: stable@vger.kernel.org # v5.6+

Applied to thunderbolt.git/fixes.


* Re: [PATCH 00/17] thunderbolt: Tunneling improvements
  2020-06-15 14:26 [PATCH 00/17] thunderbolt: Tunneling improvements Mika Westerberg
                   ` (16 preceding siblings ...)
  2020-06-15 14:26 ` [PATCH 17/17] thunderbolt: Add KUnit tests for tunneling Mika Westerberg
@ 2020-06-29 15:39 ` Mika Westerberg
  17 siblings, 0 replies; 23+ messages in thread
From: Mika Westerberg @ 2020-06-29 15:39 UTC (permalink / raw)
  To: linux-usb
  Cc: Andreas Noever, Michael Jamet, Yehezkel Bernat,
	Greg Kroah-Hartman, Rajmohan Mani, Lukas Wunner

On Mon, Jun 15, 2020 at 05:26:28PM +0300, Mika Westerberg wrote:
> Hi all,
> 
> This series improves the Thunderbolt/USB4 driver to support tree topologies
> that are now possible with USB4 devices (this is also possible with TBT
> devices, but no such devices with more than two ports are available on
> the market).
> 
> We also take advantage of KUnit and add unit tests for path walking and
> tunneling (in cases where hardware is not needed). In addition we add
> initial support for USB3 tunnel bandwidth management so that the driver can
> share isochronous bandwidth between USB3 and DisplayPort.
> 
> Mika Westerberg (17):
>   thunderbolt: Fix path indices used in USB3 tunnel discovery
>   thunderbolt: Make tb_next_port_on_path() work with tree topologies
>   thunderbolt: Make tb_path_alloc() work with tree topologies
>   thunderbolt: Check that both ports are reachable when allocating path
>   thunderbolt: Handle incomplete PCIe/USB3 paths correctly in discovery
>   thunderbolt: Increase path length in discovery
>   thunderbolt: Add KUnit tests for path walking
>   thunderbolt: Add DP IN resources for all routers
>   thunderbolt: Do not tunnel USB3 if link is not USB4
>   thunderbolt: Make usb4_switch_map_usb3_down() also return enabled ports
>   thunderbolt: Make usb4_switch_map_pcie_down() also return enabled ports
>   thunderbolt: Report consumed bandwidth in both directions
>   thunderbolt: Increase DP DPRX wait timeout
>   thunderbolt: Implement USB3 bandwidth negotiation routines
>   thunderbolt: Make tb_port_get_link_speed() available to other files
>   thunderbolt: Add USB3 bandwidth management
>   thunderbolt: Add KUnit tests for tunneling

Queued these for v5.9.


* Re: [PATCH 09/17] thunderbolt: Do not tunnel USB3 if link is not USB4
  2020-06-15 14:26 ` [PATCH 09/17] thunderbolt: Do not tunnel USB3 if link is not USB4 Mika Westerberg
@ 2020-07-17  6:16   ` Prashant Malani
  2020-07-20  9:02     ` Mika Westerberg
  0 siblings, 1 reply; 23+ messages in thread
From: Prashant Malani @ 2020-07-17  6:16 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: linux-usb, Andreas Noever, Michael Jamet, Yehezkel Bernat,
	Greg Kroah-Hartman, Rajmohan Mani, Lukas Wunner

Hi Mika,

Sorry for the late comment..

On Mon, Jun 15, 2020 at 05:26:37PM +0300, Mika Westerberg wrote:
> USB3 tunneling is possible only over a USB4 link, so don't create USB3
> tunnels if that's not the case.
> 
> Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
> ---
>  drivers/thunderbolt/tb.c      |  3 +++
>  drivers/thunderbolt/tb.h      |  2 ++
>  drivers/thunderbolt/tb_regs.h |  1 +
>  drivers/thunderbolt/usb4.c    | 24 +++++++++++++++++++++---
>  4 files changed, 27 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
> index 55daa7f1a87d..2da82259e77c 100644
> --- a/drivers/thunderbolt/tb.c
> +++ b/drivers/thunderbolt/tb.c
> @@ -235,6 +235,9 @@ static int tb_tunnel_usb3(struct tb *tb, struct tb_switch *sw)
>  	if (!up)
>  		return 0;
>  
> +	if (!sw->link_usb4)
> +		return 0;
For both this and the previous "up" check: should we be returning 0?
Wouldn't it be better to return an appropriate error code? It sounds
like 0 is considered a success...


Best regards,

-Prashant
> +
>  	/*
> 


* Re: [PATCH 09/17] thunderbolt: Do not tunnel USB3 if link is not USB4
  2020-07-17  6:16   ` Prashant Malani
@ 2020-07-20  9:02     ` Mika Westerberg
  2020-07-22  5:45       ` Prashant Malani
  0 siblings, 1 reply; 23+ messages in thread
From: Mika Westerberg @ 2020-07-20  9:02 UTC (permalink / raw)
  To: Prashant Malani
  Cc: linux-usb, Andreas Noever, Michael Jamet, Yehezkel Bernat,
	Greg Kroah-Hartman, Rajmohan Mani, Lukas Wunner

On Thu, Jul 16, 2020 at 11:16:00PM -0700, Prashant Malani wrote:
> Hi Mika,
> 
> Sorry for the late comment..

Sorry for the late reply, was on vacation ;-)

> On Mon, Jun 15, 2020 at 05:26:37PM +0300, Mika Westerberg wrote:
> > USB3 tunneling is possible only over a USB4 link, so don't create USB3
> > tunnels if that's not the case.
> > 
> > Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
> > ---
> >  drivers/thunderbolt/tb.c      |  3 +++
> >  drivers/thunderbolt/tb.h      |  2 ++
> >  drivers/thunderbolt/tb_regs.h |  1 +
> >  drivers/thunderbolt/usb4.c    | 24 +++++++++++++++++++++---
> >  4 files changed, 27 insertions(+), 3 deletions(-)
> > 
> > diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
> > index 55daa7f1a87d..2da82259e77c 100644
> > --- a/drivers/thunderbolt/tb.c
> > +++ b/drivers/thunderbolt/tb.c
> > @@ -235,6 +235,9 @@ static int tb_tunnel_usb3(struct tb *tb, struct tb_switch *sw)
> >  	if (!up)
> >  		return 0;
> >  
> > +	if (!sw->link_usb4)
> > +		return 0;
> On both here and the previous "up" check; should we be returning 0?
> Wouldn't it be better to return an appropriate error code? It sounds
> like 0 is considered a success....

The idea here is that you can call this function for every type of
router (it can be one without USB3 adapters, i.e. TBT 1, 2 or 3) and it
creates the tunnel only if the conditions for USB3 tunneling are met.
Skipping the tunnel is not considered an error.

However, if the operations themselves fail for some reason, we return an
appropriate error code.


* Re: [PATCH 09/17] thunderbolt: Do not tunnel USB3 if link is not USB4
  2020-07-20  9:02     ` Mika Westerberg
@ 2020-07-22  5:45       ` Prashant Malani
  0 siblings, 0 replies; 23+ messages in thread
From: Prashant Malani @ 2020-07-22  5:45 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: open list:USB NETWORKING DRIVERS, Andreas Noever, Michael Jamet,
	Yehezkel Bernat, Greg Kroah-Hartman, Rajmohan Mani, Lukas Wunner

Hi Mika,

On Mon, Jul 20, 2020 at 2:02 AM Mika Westerberg
<mika.westerberg@linux.intel.com> wrote:
>
> On Thu, Jul 16, 2020 at 11:16:00PM -0700, Prashant Malani wrote:
> > Hi Mika,
> >
> > Sorry for the late comment..
>
> Sorry for the late reply, was on vacation ;-)
>
> > On Mon, Jun 15, 2020 at 05:26:37PM +0300, Mika Westerberg wrote:
> > > USB3 tunneling is possible only over a USB4 link, so don't create USB3
> > > tunnels if that's not the case.
> > >
> > > Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
> > > ---
> > >  drivers/thunderbolt/tb.c      |  3 +++
> > >  drivers/thunderbolt/tb.h      |  2 ++
> > >  drivers/thunderbolt/tb_regs.h |  1 +
> > >  drivers/thunderbolt/usb4.c    | 24 +++++++++++++++++++++---
> > >  4 files changed, 27 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
> > > index 55daa7f1a87d..2da82259e77c 100644
> > > --- a/drivers/thunderbolt/tb.c
> > > +++ b/drivers/thunderbolt/tb.c
> > > @@ -235,6 +235,9 @@ static int tb_tunnel_usb3(struct tb *tb, struct tb_switch *sw)
> > >     if (!up)
> > >             return 0;
> > >
> > > +   if (!sw->link_usb4)
> > > +           return 0;
> > On both here and the previous "up" check; should we be returning 0?
> > Wouldn't it be better to return an appropriate error code? It sounds
> > like 0 is considered a success....
>
> The idea here is that you can call this function for every type of
> router (can be one without USB3 adapters so TBT 3,1,2) and it creates
> the tunnel if conditions for USB3 tunneling are met. It is not
> considered an error.
>
> However, if the operations fail for some reason we return appropriate
> error code.

Got it. Thanks for the explanation!

BR,

-Prashant


