* [PATCH 0/5] thunderbolt: Add support for XDomain lane bonding
@ 2022-04-26 11:25 Mika Westerberg
  2022-04-26 11:25 ` [PATCH 1/5] thunderbolt: Add debug logging when lane is enabled/disabled Mika Westerberg
                   ` (5 more replies)
  0 siblings, 6 replies; 7+ messages in thread
From: Mika Westerberg @ 2022-04-26 11:25 UTC (permalink / raw)
  To: linux-usb
  Cc: Yehezkel Bernat, Michael Jamet, Andreas Noever, Lukas Wunner,
	Mika Westerberg

Hi all,

So far, when connecting two Thunderbolt/USB4 domains (hosts), the high-speed
link has used only a single lane. The reasons for this were simplicity and
the fact that the host DMA could not exceed 20 Gbit/s even when the lanes
were bonded. However, Intel Alder Lake and beyond can actually go higher
than that, so it now makes sense to take advantage of lane bonding. The
USB4 Inter-Domain Service specification describes the protocol, and this
patch series implements it for the Linux Thunderbolt/USB4 driver.

If both sides (hosts) of the link announce support for this, we establish a
bonded link. This is only possible on systems with a software based
connection manager (so Intel Alder Lake and beyond).
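The decision described above can be sketched in plain C. This is an
illustrative userspace model with a hypothetical function name, not the
driver's actual code:

```c
#include <stdbool.h>

/*
 * Illustrative sketch (hypothetical name, not the driver's API): an
 * XDomain bonded link is attempted only when both hosts announce
 * support for the Inter-Domain lane bonding protocol and the local
 * side runs a software based connection manager.
 */
static bool xdomain_bonding_possible(bool local_supported,
				     bool remote_supported,
				     bool software_cm)
{
	return local_supported && remote_supported && software_cm;
}
```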

Mika Westerberg (5):
  thunderbolt: Add debug logging when lane is enabled/disabled
  thunderbolt: Move tb_port_state() prototype to correct place
  thunderbolt: Split setting link width and lane bonding into own functions
  thunderbolt: Ignore port locked error in tb_port_wait_for_link_width()
  thunderbolt: Add support for XDomain lane bonding

 drivers/thunderbolt/switch.c  | 109 ++++--
 drivers/thunderbolt/tb.c      |   6 -
 drivers/thunderbolt/tb.h      |   4 +-
 drivers/thunderbolt/tb_msgs.h |  39 +++
 drivers/thunderbolt/tb_regs.h |   5 +
 drivers/thunderbolt/xdomain.c | 609 ++++++++++++++++++++++++++++++----
 include/linux/thunderbolt.h   |  19 +-
 7 files changed, 688 insertions(+), 103 deletions(-)

-- 
2.35.1


^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH 1/5] thunderbolt: Add debug logging when lane is enabled/disabled
  2022-04-26 11:25 [PATCH 0/5] thunderbolt: Add support for XDomain lane bonding Mika Westerberg
@ 2022-04-26 11:25 ` Mika Westerberg
  2022-04-26 11:25 ` [PATCH 2/5] thunderbolt: Move tb_port_state() prototype to correct place Mika Westerberg
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Mika Westerberg @ 2022-04-26 11:25 UTC (permalink / raw)
  To: linux-usb
  Cc: Yehezkel Bernat, Michael Jamet, Andreas Noever, Lukas Wunner,
	Mika Westerberg

Log when a lane is enabled or disabled. This is useful when debugging
possible link issues.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/switch.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index ac87e8b50e52..2d8a0fd3469c 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -693,8 +693,14 @@ static int __tb_port_enable(struct tb_port *port, bool enable)
 	else
 		phy |= LANE_ADP_CS_1_LD;
 
-	return tb_port_write(port, &phy, TB_CFG_PORT,
-			     port->cap_phy + LANE_ADP_CS_1, 1);
+
+	ret = tb_port_write(port, &phy, TB_CFG_PORT,
+			    port->cap_phy + LANE_ADP_CS_1, 1);
+	if (ret)
+		return ret;
+
+	tb_port_dbg(port, "lane %sabled\n", enable ? "en" : "dis");
+	return 0;
 }
 
 /**
-- 
2.35.1



* [PATCH 2/5] thunderbolt: Move tb_port_state() prototype to correct place
  2022-04-26 11:25 [PATCH 0/5] thunderbolt: Add support for XDomain lane bonding Mika Westerberg
  2022-04-26 11:25 ` [PATCH 1/5] thunderbolt: Add debug logging when lane is enabled/disabled Mika Westerberg
@ 2022-04-26 11:25 ` Mika Westerberg
  2022-04-26 11:25 ` [PATCH 3/5] thunderbolt: Split setting link width and lane bonding into own functions Mika Westerberg
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Mika Westerberg @ 2022-04-26 11:25 UTC (permalink / raw)
  To: linux-usb
  Cc: Yehezkel Bernat, Michael Jamet, Andreas Noever, Lukas Wunner,
	Mika Westerberg

The prototype should be placed before tb_wait_for_port(), following how the
functions in switch.c are organized.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/tb.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index ad025ff142ba..8848e8de1fc3 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -991,6 +991,7 @@ int tb_switch_pcie_l1_enable(struct tb_switch *sw);
 int tb_switch_xhci_connect(struct tb_switch *sw);
 void tb_switch_xhci_disconnect(struct tb_switch *sw);
 
+int tb_port_state(struct tb_port *port);
 int tb_wait_for_port(struct tb_port *port, bool wait_if_unplugged);
 int tb_port_add_nfc_credits(struct tb_port *port, int credits);
 int tb_port_clear_counter(struct tb_port *port, int counter);
@@ -1023,7 +1024,6 @@ static inline bool tb_port_use_credit_allocation(const struct tb_port *port)
 
 int tb_port_get_link_speed(struct tb_port *port);
 int tb_port_get_link_width(struct tb_port *port);
-int tb_port_state(struct tb_port *port);
 int tb_port_lane_bonding_enable(struct tb_port *port);
 void tb_port_lane_bonding_disable(struct tb_port *port);
 int tb_port_wait_for_link_width(struct tb_port *port, int width,
-- 
2.35.1



* [PATCH 3/5] thunderbolt: Split setting link width and lane bonding into own functions
  2022-04-26 11:25 [PATCH 0/5] thunderbolt: Add support for XDomain lane bonding Mika Westerberg
  2022-04-26 11:25 ` [PATCH 1/5] thunderbolt: Add debug logging when lane is enabled/disabled Mika Westerberg
  2022-04-26 11:25 ` [PATCH 2/5] thunderbolt: Move tb_port_state() prototype to correct place Mika Westerberg
@ 2022-04-26 11:25 ` Mika Westerberg
  2022-04-26 11:25 ` [PATCH 4/5] thunderbolt: Ignore port locked error in tb_port_wait_for_link_width() Mika Westerberg
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Mika Westerberg @ 2022-04-26 11:25 UTC (permalink / raw)
  To: linux-usb
  Cc: Yehezkel Bernat, Michael Jamet, Andreas Noever, Lukas Wunner,
	Mika Westerberg

When bonding lanes over XDomain, the host that has the "higher" UUID
triggers the link re-train for bonding, and the host that has the "lower"
UUID just waits for this to happen. To support this, split setting the link
width and triggering the actual bonding into separate functions that can be
called as needed.

While at it, remove a duplicated empty line in the kernel-doc comment of
tb_port_lane_bonding_disable().
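The role split can be illustrated with a small userspace sketch. The helper
name is hypothetical, and the UUIDs are modeled as 16 raw bytes with
memcmp() standing in for the driver's uuid_t comparison:

```c
#include <stdbool.h>
#include <string.h>

/*
 * Hypothetical illustration of the role split: the host whose UUID
 * compares "higher" initiates the link re-train for bonding; the other
 * host just waits. memcmp() over the 16 raw UUID bytes stands in for
 * the driver's comparison.
 */
static bool bonding_initiator(const unsigned char local_uuid[16],
			      const unsigned char remote_uuid[16])
{
	return memcmp(local_uuid, remote_uuid, 16) > 0;
}
```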

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/switch.c | 86 ++++++++++++++++++++++++++++++------
 drivers/thunderbolt/tb.h     |  2 +
 2 files changed, 74 insertions(+), 14 deletions(-)

diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 2d8a0fd3469c..525be2aa3ad9 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -999,7 +999,17 @@ static bool tb_port_is_width_supported(struct tb_port *port, int width)
 	return !!(widths & width);
 }
 
-static int tb_port_set_link_width(struct tb_port *port, unsigned int width)
+/**
+ * tb_port_set_link_width() - Set target link width of the lane adapter
+ * @port: Lane adapter
+ * @width: Target link width (%1 or %2)
+ *
+ * Sets the target link width of the lane adapter to @width. Does not
+ * enable/disable lane bonding. For that call tb_port_set_lane_bonding().
+ *
+ * Return: %0 in case of success and negative errno in case of error
+ */
+int tb_port_set_link_width(struct tb_port *port, unsigned int width)
 {
 	u32 val;
 	int ret;
@@ -1026,12 +1036,58 @@ static int tb_port_set_link_width(struct tb_port *port, unsigned int width)
 		return -EINVAL;
 	}
 
-	val |= LANE_ADP_CS_1_LB;
-
 	return tb_port_write(port, &val, TB_CFG_PORT,
 			     port->cap_phy + LANE_ADP_CS_1, 1);
 }
 
+/**
+ * tb_port_set_lane_bonding() - Enable/disable lane bonding
+ * @port: Lane adapter
+ * @bonding: enable/disable bonding
+ *
+ * Enables or disables lane bonding. This should be called after target
+ * link width has been set (tb_port_set_link_width()). Note in most
+ * cases one should use tb_port_lane_bonding_enable() instead to enable
+ * lane bonding.
+ *
+ * As a side effect sets @port->bonding accordingly (and does the same
+ * for lane 1 too).
+ *
+ * Return: %0 in case of success and negative errno in case of error
+ */
+int tb_port_set_lane_bonding(struct tb_port *port, bool bonding)
+{
+	u32 val;
+	int ret;
+
+	if (!port->cap_phy)
+		return -EINVAL;
+
+	ret = tb_port_read(port, &val, TB_CFG_PORT,
+			   port->cap_phy + LANE_ADP_CS_1, 1);
+	if (ret)
+		return ret;
+
+	if (bonding)
+		val |= LANE_ADP_CS_1_LB;
+	else
+		val &= ~LANE_ADP_CS_1_LB;
+
+	ret = tb_port_write(port, &val, TB_CFG_PORT,
+			    port->cap_phy + LANE_ADP_CS_1, 1);
+	if (ret)
+		return ret;
+
+	/*
+	 * When lane 0 bonding is set it will affect lane 1 too so
+	 * update both.
+	 */
+	port->bonded = bonding;
+	port->dual_link_port->bonded = bonding;
+
+	return 0;
+}
+
 /**
  * tb_port_lane_bonding_enable() - Enable bonding on port
  * @port: port to enable
@@ -1056,22 +1112,27 @@ int tb_port_lane_bonding_enable(struct tb_port *port)
 	if (ret == 1) {
 		ret = tb_port_set_link_width(port, 2);
 		if (ret)
-			return ret;
+			goto err_lane0;
 	}
 
 	ret = tb_port_get_link_width(port->dual_link_port);
 	if (ret == 1) {
 		ret = tb_port_set_link_width(port->dual_link_port, 2);
-		if (ret) {
-			tb_port_set_link_width(port, 1);
-			return ret;
-		}
+		if (ret)
+			goto err_lane0;
 	}
 
-	port->bonded = true;
-	port->dual_link_port->bonded = true;
+	ret = tb_port_set_lane_bonding(port, true);
+	if (ret)
+		goto err_lane1;
 
 	return 0;
+
+err_lane1:
+	tb_port_set_link_width(port->dual_link_port, 1);
+err_lane0:
+	tb_port_set_link_width(port, 1);
+	return ret;
 }
 
 /**
@@ -1080,13 +1141,10 @@ int tb_port_lane_bonding_enable(struct tb_port *port)
  *
  * Disable bonding by setting the link width of the port and the
  * other port in case of dual link port.
- *
  */
 void tb_port_lane_bonding_disable(struct tb_port *port)
 {
-	port->dual_link_port->bonded = false;
-	port->bonded = false;
-
+	tb_port_set_lane_bonding(port, false);
 	tb_port_set_link_width(port->dual_link_port, 1);
 	tb_port_set_link_width(port, 1);
 }
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 8848e8de1fc3..4602c69913fa 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -1024,6 +1024,8 @@ static inline bool tb_port_use_credit_allocation(const struct tb_port *port)
 
 int tb_port_get_link_speed(struct tb_port *port);
 int tb_port_get_link_width(struct tb_port *port);
+int tb_port_set_link_width(struct tb_port *port, unsigned int width);
+int tb_port_set_lane_bonding(struct tb_port *port, bool bonding);
 int tb_port_lane_bonding_enable(struct tb_port *port);
 void tb_port_lane_bonding_disable(struct tb_port *port);
 int tb_port_wait_for_link_width(struct tb_port *port, int width,
-- 
2.35.1



* [PATCH 4/5] thunderbolt: Ignore port locked error in tb_port_wait_for_link_width()
  2022-04-26 11:25 [PATCH 0/5] thunderbolt: Add support for XDomain lane bonding Mika Westerberg
                   ` (2 preceding siblings ...)
  2022-04-26 11:25 ` [PATCH 3/5] thunderbolt: Split setting link width and lane bonding into own functions Mika Westerberg
@ 2022-04-26 11:25 ` Mika Westerberg
  2022-04-26 11:25 ` [PATCH 5/5] thunderbolt: Add support for XDomain lane bonding Mika Westerberg
  2022-05-05  6:29 ` [PATCH 0/5] " Mika Westerberg
  5 siblings, 0 replies; 7+ messages in thread
From: Mika Westerberg @ 2022-04-26 11:25 UTC (permalink / raw)
  To: linux-usb
  Cc: Yehezkel Bernat, Michael Jamet, Andreas Noever, Lukas Wunner,
	Mika Westerberg

Sometimes, when polling the port after the target link width has been
changed, we get back a port locked notification (because the link actually
was reset and then re-trained). Instead of bailing out, we can ignore these
while polling for the width change, as this is expected.
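The retry logic can be modeled in userspace as follows. This is a sketch
under the assumption that the register reads are represented as a
precomputed sequence; in the driver the reads come from
tb_port_get_link_width():

```c
#include <errno.h>

/*
 * Userspace model of the polling logic: negative values are errors, as
 * in the kernel. -EACCES (port locked) is treated as transient and
 * retried; any other error aborts. Returns 0 once the desired width is
 * seen, or -ETIMEDOUT when the sequence is exhausted.
 */
static int wait_for_width(const int *reads, int nreads, int want)
{
	for (int i = 0; i < nreads; i++) {
		int ret = reads[i];

		if (ret < 0) {
			if (ret != -EACCES)
				return ret;	/* real error: bail out */
			/* port locked: ignore and retry */
		} else if (ret == want) {
			return 0;
		}
	}
	return -ETIMEDOUT;
}
```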

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/switch.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 525be2aa3ad9..561e1d77240e 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -1168,10 +1168,17 @@ int tb_port_wait_for_link_width(struct tb_port *port, int width,
 
 	do {
 		ret = tb_port_get_link_width(port);
-		if (ret < 0)
-			return ret;
-		else if (ret == width)
+		if (ret < 0) {
+			/*
+			 * Sometimes we get port locked error when
+			 * polling the lanes so we can ignore it and
+			 * retry.
+			 */
+			if (ret != -EACCES)
+				return ret;
+		} else if (ret == width) {
 			return 0;
+		}
 
 		usleep_range(1000, 2000);
 	} while (ktime_before(ktime_get(), timeout));
-- 
2.35.1



* [PATCH 5/5] thunderbolt: Add support for XDomain lane bonding
  2022-04-26 11:25 [PATCH 0/5] thunderbolt: Add support for XDomain lane bonding Mika Westerberg
                   ` (3 preceding siblings ...)
  2022-04-26 11:25 ` [PATCH 4/5] thunderbolt: Ignore port locked error in tb_port_wait_for_link_width() Mika Westerberg
@ 2022-04-26 11:25 ` Mika Westerberg
  2022-05-05  6:29 ` [PATCH 0/5] " Mika Westerberg
  5 siblings, 0 replies; 7+ messages in thread
From: Mika Westerberg @ 2022-04-26 11:25 UTC (permalink / raw)
  To: linux-usb
  Cc: Yehezkel Bernat, Michael Jamet, Andreas Noever, Lukas Wunner,
	Mika Westerberg

The USB4 Inter-Domain Service specification defines a protocol that can
be used to establish lane bonding between two USB4 domains (hosts). So
far we have not implemented it because the host controller DMA was not
fast enough to go over 20 Gbit/s even if the lanes were bonded.
However, starting from Intel Alder Lake CPUs the DMA can go over
20 Gbit/s, so now it makes more sense to add this support to the driver.

Because both ends need to negotiate the bonding, we add a simple state
machine that tracks the connection state and performs the necessary
steps described by the USB4 Inter-Domain Service specification. We only
establish lane bonding when both sides of the link support it; otherwise
we default to using a single lane. Also, this is only done when the
software connection manager is used. On systems with a firmware based
connection manager, the firmware handles the high-speed tunneling, so
bonding lanes is implementation specific (the Intel firmware based
connection manager does not support lane bonding).
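The negotiation can be pictured as a reduced state machine. This is an
illustrative userspace model using a subset of the state names from the
patch; the driver's full flow also has retry, link-state-change, and
error states:

```c
#include <stdbool.h>

/* Subset of the states the patch introduces in xdomain.c. */
enum xd_state {
	XD_INIT,
	XD_UUID,
	XD_LINK_STATUS,
	XD_BONDING,
	XD_PROPERTIES,
	XD_ENUMERATED,
};

/*
 * Illustrative reduction of the happy path: after the UUID exchange the
 * lanes are bonded only when both sides support it; otherwise the
 * domains go straight to the properties exchange.
 */
static enum xd_state xd_next(enum xd_state s, bool bonding_possible)
{
	switch (s) {
	case XD_INIT:
		return XD_UUID;
	case XD_UUID:
		return bonding_possible ? XD_LINK_STATUS : XD_PROPERTIES;
	case XD_LINK_STATUS:
		return XD_BONDING;
	case XD_BONDING:
		return XD_PROPERTIES;
	case XD_PROPERTIES:
		return XD_ENUMERATED;
	default:
		return s;
	}
}
```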

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/tb.c      |   6 -
 drivers/thunderbolt/tb_msgs.h |  39 +++
 drivers/thunderbolt/tb_regs.h |   5 +
 drivers/thunderbolt/xdomain.c | 609 ++++++++++++++++++++++++++++++----
 include/linux/thunderbolt.h   |  19 +-
 5 files changed, 595 insertions(+), 83 deletions(-)

diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 44d04b651a8b..9a3214fb5038 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -169,12 +169,6 @@ static void tb_discover_tunnels(struct tb *tb)
 
 static int tb_port_configure_xdomain(struct tb_port *port)
 {
-	/*
-	 * XDomain paths currently only support single lane so we must
-	 * disable the other lane according to USB4 spec.
-	 */
-	tb_port_disable(port->dual_link_port);
-
 	if (tb_switch_is_usb4(port->sw))
 		return usb4_port_configure_xdomain(port);
 	return tb_lc_configure_xdomain(port);
diff --git a/drivers/thunderbolt/tb_msgs.h b/drivers/thunderbolt/tb_msgs.h
index fe1afa44c56d..33c4c7aed56d 100644
--- a/drivers/thunderbolt/tb_msgs.h
+++ b/drivers/thunderbolt/tb_msgs.h
@@ -527,6 +527,10 @@ enum tb_xdp_type {
 	PROPERTIES_CHANGED_RESPONSE,
 	ERROR_RESPONSE,
 	UUID_REQUEST = 12,
+	LINK_STATE_STATUS_REQUEST = 15,
+	LINK_STATE_STATUS_RESPONSE,
+	LINK_STATE_CHANGE_REQUEST,
+	LINK_STATE_CHANGE_RESPONSE,
 };
 
 struct tb_xdp_header {
@@ -540,6 +544,41 @@ struct tb_xdp_error_response {
 	u32 error;
 };
 
+struct tb_xdp_link_state_status {
+	struct tb_xdp_header hdr;
+};
+
+struct tb_xdp_link_state_status_response {
+	union {
+		struct tb_xdp_error_response err;
+		struct {
+			struct tb_xdp_header hdr;
+			u32 status;
+			u8 slw;
+			u8 tlw;
+			u8 sls;
+			u8 tls;
+		};
+	};
+};
+
+struct tb_xdp_link_state_change {
+	struct tb_xdp_header hdr;
+	u8 tlw;
+	u8 tls;
+	u16 reserved;
+};
+
+struct tb_xdp_link_state_change_response {
+	union {
+		struct tb_xdp_error_response err;
+		struct {
+			struct tb_xdp_header hdr;
+			u32 status;
+		};
+	};
+};
+
 struct tb_xdp_uuid {
 	struct tb_xdp_header hdr;
 };
diff --git a/drivers/thunderbolt/tb_regs.h b/drivers/thunderbolt/tb_regs.h
index b301eeb0c89b..6a16f61a72a1 100644
--- a/drivers/thunderbolt/tb_regs.h
+++ b/drivers/thunderbolt/tb_regs.h
@@ -311,11 +311,16 @@ struct tb_regs_port_header {
 
 /* Lane adapter registers */
 #define LANE_ADP_CS_0				0x00
+#define LANE_ADP_CS_0_SUPPORTED_SPEED_MASK	GENMASK(19, 16)
+#define LANE_ADP_CS_0_SUPPORTED_SPEED_SHIFT	16
 #define LANE_ADP_CS_0_SUPPORTED_WIDTH_MASK	GENMASK(25, 20)
 #define LANE_ADP_CS_0_SUPPORTED_WIDTH_SHIFT	20
+#define LANE_ADP_CS_0_SUPPORTED_WIDTH_DUAL	0x2
 #define LANE_ADP_CS_0_CL0S_SUPPORT		BIT(26)
 #define LANE_ADP_CS_0_CL1_SUPPORT		BIT(27)
 #define LANE_ADP_CS_1				0x01
+#define LANE_ADP_CS_1_TARGET_SPEED_MASK		GENMASK(3, 0)
+#define LANE_ADP_CS_1_TARGET_SPEED_GEN3		0xc
 #define LANE_ADP_CS_1_TARGET_WIDTH_MASK		GENMASK(9, 4)
 #define LANE_ADP_CS_1_TARGET_WIDTH_SHIFT	4
 #define LANE_ADP_CS_1_TARGET_WIDTH_SINGLE	0x1
diff --git a/drivers/thunderbolt/xdomain.c b/drivers/thunderbolt/xdomain.c
index 01d6b724ca51..c31c0d94d8b3 100644
--- a/drivers/thunderbolt/xdomain.c
+++ b/drivers/thunderbolt/xdomain.c
@@ -19,13 +19,38 @@
 
 #include "tb.h"
 
-#define XDOMAIN_DEFAULT_TIMEOUT			1000 /* ms */
-#define XDOMAIN_UUID_RETRIES			10
-#define XDOMAIN_PROPERTIES_RETRIES		10
-#define XDOMAIN_PROPERTIES_CHANGED_RETRIES	10
-#define XDOMAIN_BONDING_WAIT			100  /* ms */
+#define XDOMAIN_SHORT_TIMEOUT			100	/* ms */
+#define XDOMAIN_DEFAULT_TIMEOUT			1000	/* ms */
+#define XDOMAIN_BONDING_TIMEOUT			10000	/* ms */
+#define XDOMAIN_RETRIES				10
 #define XDOMAIN_DEFAULT_MAX_HOPID		15
 
+enum {
+	XDOMAIN_STATE_INIT,
+	XDOMAIN_STATE_UUID,
+	XDOMAIN_STATE_LINK_STATUS,
+	XDOMAIN_STATE_LINK_STATE_CHANGE,
+	XDOMAIN_STATE_LINK_STATUS2,
+	XDOMAIN_STATE_BONDING_UUID_LOW,
+	XDOMAIN_STATE_BONDING_UUID_HIGH,
+	XDOMAIN_STATE_PROPERTIES,
+	XDOMAIN_STATE_ENUMERATED,
+	XDOMAIN_STATE_ERROR,
+};
+
+static const char * const state_names[] = {
+	[XDOMAIN_STATE_INIT] = "INIT",
+	[XDOMAIN_STATE_UUID] = "UUID",
+	[XDOMAIN_STATE_LINK_STATUS] = "LINK_STATUS",
+	[XDOMAIN_STATE_LINK_STATE_CHANGE] = "LINK_STATE_CHANGE",
+	[XDOMAIN_STATE_LINK_STATUS2] = "LINK_STATUS2",
+	[XDOMAIN_STATE_BONDING_UUID_LOW] = "BONDING_UUID_LOW",
+	[XDOMAIN_STATE_BONDING_UUID_HIGH] = "BONDING_UUID_HIGH",
+	[XDOMAIN_STATE_PROPERTIES] = "PROPERTIES",
+	[XDOMAIN_STATE_ENUMERATED] = "ENUMERATED",
+	[XDOMAIN_STATE_ERROR] = "ERROR",
+};
+
 struct xdomain_request_work {
 	struct work_struct work;
 	struct tb_xdp_header *pkg;
@@ -235,7 +260,7 @@ static int tb_xdp_handle_error(const struct tb_xdp_error_response *res)
 }
 
 static int tb_xdp_uuid_request(struct tb_ctl *ctl, u64 route, int retry,
-			       uuid_t *uuid)
+			       uuid_t *uuid, u64 *remote_route)
 {
 	struct tb_xdp_uuid_response res;
 	struct tb_xdp_uuid req;
@@ -258,6 +283,8 @@ static int tb_xdp_uuid_request(struct tb_ctl *ctl, u64 route, int retry,
 		return ret;
 
 	uuid_copy(uuid, &res.src_uuid);
+	*remote_route = (u64)res.src_route_hi << 32 | res.src_route_lo;
+
 	return 0;
 }
 
@@ -473,6 +500,112 @@ tb_xdp_properties_changed_response(struct tb_ctl *ctl, u64 route, u8 sequence)
 				     TB_CFG_PKG_XDOMAIN_RESP);
 }
 
+static int tb_xdp_link_state_status_request(struct tb_ctl *ctl, u64 route,
+					    u8 sequence, u8 *slw, u8 *tlw,
+					    u8 *sls, u8 *tls)
+{
+	struct tb_xdp_link_state_status_response res;
+	struct tb_xdp_link_state_status req;
+	int ret;
+
+	memset(&req, 0, sizeof(req));
+	tb_xdp_fill_header(&req.hdr, route, sequence, LINK_STATE_STATUS_REQUEST,
+			   sizeof(req));
+
+	memset(&res, 0, sizeof(res));
+	ret = __tb_xdomain_request(ctl, &req, sizeof(req), TB_CFG_PKG_XDOMAIN_REQ,
+				   &res, sizeof(res), TB_CFG_PKG_XDOMAIN_RESP,
+				   XDOMAIN_DEFAULT_TIMEOUT);
+	if (ret)
+		return ret;
+
+	ret = tb_xdp_handle_error(&res.err);
+	if (ret)
+		return ret;
+
+	if (res.status != 0)
+		return -EREMOTEIO;
+
+	*slw = res.slw;
+	*tlw = res.tlw;
+	*sls = res.sls;
+	*tls = res.tls;
+
+	return 0;
+}
+
+static int tb_xdp_link_state_status_response(struct tb *tb, struct tb_ctl *ctl,
+					     struct tb_xdomain *xd, u8 sequence)
+{
+	struct tb_switch *sw = tb_to_switch(xd->dev.parent);
+	struct tb_xdp_link_state_status_response res;
+	struct tb_port *port = tb_port_at(xd->route, sw);
+	u32 val[2];
+	int ret;
+
+	memset(&res, 0, sizeof(res));
+	tb_xdp_fill_header(&res.hdr, xd->route, sequence,
+			   LINK_STATE_STATUS_RESPONSE, sizeof(res));
+
+	ret = tb_port_read(port, val, TB_CFG_PORT,
+			   port->cap_phy + LANE_ADP_CS_0, ARRAY_SIZE(val));
+	if (ret)
+		return ret;
+
+	res.slw = (val[0] & LANE_ADP_CS_0_SUPPORTED_WIDTH_MASK) >>
+			LANE_ADP_CS_0_SUPPORTED_WIDTH_SHIFT;
+	res.sls = (val[0] & LANE_ADP_CS_0_SUPPORTED_SPEED_MASK) >>
+			LANE_ADP_CS_0_SUPPORTED_SPEED_SHIFT;
+	res.tls = val[1] & LANE_ADP_CS_1_TARGET_SPEED_MASK;
+	res.tlw = (val[1] & LANE_ADP_CS_1_TARGET_WIDTH_MASK) >>
+			LANE_ADP_CS_1_TARGET_WIDTH_SHIFT;
+
+	return __tb_xdomain_response(ctl, &res, sizeof(res),
+				     TB_CFG_PKG_XDOMAIN_RESP);
+}
+
+static int tb_xdp_link_state_change_request(struct tb_ctl *ctl, u64 route,
+					    u8 sequence, u8 tlw, u8 tls)
+{
+	struct tb_xdp_link_state_change_response res;
+	struct tb_xdp_link_state_change req;
+	int ret;
+
+	memset(&req, 0, sizeof(req));
+	tb_xdp_fill_header(&req.hdr, route, sequence, LINK_STATE_CHANGE_REQUEST,
+			   sizeof(req));
+	req.tlw = tlw;
+	req.tls = tls;
+
+	memset(&res, 0, sizeof(res));
+	ret = __tb_xdomain_request(ctl, &req, sizeof(req), TB_CFG_PKG_XDOMAIN_REQ,
+				   &res, sizeof(res), TB_CFG_PKG_XDOMAIN_RESP,
+				   XDOMAIN_DEFAULT_TIMEOUT);
+	if (ret)
+		return ret;
+
+	ret = tb_xdp_handle_error(&res.err);
+	if (ret)
+		return ret;
+
+	return res.status != 0 ? -EREMOTEIO : 0;
+}
+
+static int tb_xdp_link_state_change_response(struct tb_ctl *ctl, u64 route,
+					     u8 sequence, u32 status)
+{
+	struct tb_xdp_link_state_change_response res;
+
+	memset(&res, 0, sizeof(res));
+	tb_xdp_fill_header(&res.hdr, route, sequence, LINK_STATE_CHANGE_RESPONSE,
+			   sizeof(res));
+
+	res.status = status;
+
+	return __tb_xdomain_response(ctl, &res, sizeof(res),
+				     TB_CFG_PKG_XDOMAIN_RESP);
+}
+
 /**
  * tb_register_protocol_handler() - Register protocol handler
  * @handler: Handler to register
@@ -600,14 +733,13 @@ static void tb_xdp_handle_request(struct work_struct *work)
 		goto out;
 	}
 
-	tb_dbg(tb, "%llx: received XDomain request %#x\n", route, pkg->type);
-
 	xd = tb_xdomain_find_by_route_locked(tb, route);
 	if (xd)
 		update_property_block(xd);
 
 	switch (pkg->type) {
 	case PROPERTIES_REQUEST:
+		tb_dbg(tb, "%llx: received XDomain properties request\n", route);
 		if (xd) {
 			ret = tb_xdp_properties_response(tb, ctl, xd, sequence,
 				(const struct tb_xdp_properties *)pkg);
@@ -615,6 +747,9 @@ static void tb_xdp_handle_request(struct work_struct *work)
 		break;
 
 	case PROPERTIES_CHANGED_REQUEST:
+		tb_dbg(tb, "%llx: received XDomain properties changed request\n",
+		       route);
+
 		ret = tb_xdp_properties_changed_response(ctl, route, sequence);
 
 		/*
@@ -622,18 +757,51 @@ static void tb_xdp_handle_request(struct work_struct *work)
 		 * the xdomain related to this connection as well in
 		 * case there is a change in services it offers.
 		 */
-		if (xd && device_is_registered(&xd->dev)) {
-			queue_delayed_work(tb->wq, &xd->get_properties_work,
-					   msecs_to_jiffies(50));
-		}
+		if (xd && device_is_registered(&xd->dev))
+			queue_delayed_work(tb->wq, &xd->state_work,
+					   msecs_to_jiffies(XDOMAIN_SHORT_TIMEOUT));
 		break;
 
 	case UUID_REQUEST_OLD:
 	case UUID_REQUEST:
+		tb_dbg(tb, "%llx: received XDomain UUID request\n", route);
 		ret = tb_xdp_uuid_response(ctl, route, sequence, uuid);
 		break;
 
+	case LINK_STATE_STATUS_REQUEST:
+		tb_dbg(tb, "%llx: received XDomain link state status request\n",
+		       route);
+
+		if (xd) {
+			ret = tb_xdp_link_state_status_response(tb, ctl, xd,
+								sequence);
+		} else {
+			tb_xdp_error_response(ctl, route, sequence,
+					      ERROR_NOT_READY);
+		}
+		break;
+
+	case LINK_STATE_CHANGE_REQUEST:
+		tb_dbg(tb, "%llx: received XDomain link state change request\n",
+		       route);
+
+		if (xd && xd->state == XDOMAIN_STATE_BONDING_UUID_HIGH) {
+			const struct tb_xdp_link_state_change *lsc =
+				(const struct tb_xdp_link_state_change *)pkg;
+
+			ret = tb_xdp_link_state_change_response(ctl, route,
+								sequence, 0);
+			xd->target_link_width = lsc->tlw;
+			queue_delayed_work(tb->wq, &xd->state_work,
+					   msecs_to_jiffies(XDOMAIN_SHORT_TIMEOUT));
+		} else {
+			tb_xdp_error_response(ctl, route, sequence,
+					      ERROR_NOT_READY);
+		}
+		break;
+
 	default:
+		tb_dbg(tb, "%llx: unknown XDomain request %#x\n", route, pkg->type);
 		tb_xdp_error_response(ctl, route, sequence,
 				      ERROR_NOT_SUPPORTED);
 		break;
@@ -1000,32 +1168,38 @@ static int tb_xdomain_update_link_attributes(struct tb_xdomain *xd)
 	return 0;
 }
 
-static void tb_xdomain_get_uuid(struct work_struct *work)
+static int tb_xdomain_get_uuid(struct tb_xdomain *xd)
 {
-	struct tb_xdomain *xd = container_of(work, typeof(*xd),
-					     get_uuid_work.work);
 	struct tb *tb = xd->tb;
 	uuid_t uuid;
+	u64 route;
 	int ret;
 
 	dev_dbg(&xd->dev, "requesting remote UUID\n");
 
-	ret = tb_xdp_uuid_request(tb->ctl, xd->route, xd->uuid_retries, &uuid);
+	ret = tb_xdp_uuid_request(tb->ctl, xd->route, xd->state_retries, &uuid,
+				  &route);
 	if (ret < 0) {
-		if (xd->uuid_retries-- > 0) {
+		if (xd->state_retries-- > 0) {
 			dev_dbg(&xd->dev, "failed to request UUID, retrying\n");
-			queue_delayed_work(xd->tb->wq, &xd->get_uuid_work,
-					   msecs_to_jiffies(100));
+			return -EAGAIN;
 		} else {
 			dev_dbg(&xd->dev, "failed to read remote UUID\n");
 		}
-		return;
+		return ret;
 	}
 
 	dev_dbg(&xd->dev, "got remote UUID %pUb\n", &uuid);
 
-	if (uuid_equal(&uuid, xd->local_uuid))
-		dev_dbg(&xd->dev, "intra-domain loop detected\n");
+	if (uuid_equal(&uuid, xd->local_uuid)) {
+		if (route == xd->route)
+			dev_dbg(&xd->dev, "loop back detected\n");
+		else
+			dev_dbg(&xd->dev, "intra-domain loop detected\n");
+
+		/* Don't bond lanes automatically for loops */
+		xd->bonding_possible = false;
+	}
 
 	/*
 	 * If the UUID is different, there is another domain connected
@@ -1035,27 +1209,152 @@ static void tb_xdomain_get_uuid(struct work_struct *work)
 	if (xd->remote_uuid && !uuid_equal(&uuid, xd->remote_uuid)) {
 		dev_dbg(&xd->dev, "remote UUID is different, unplugging\n");
 		xd->is_unplugged = true;
-		return;
+		return -ENODEV;
 	}
 
 	/* First time fill in the missing UUID */
 	if (!xd->remote_uuid) {
 		xd->remote_uuid = kmemdup(&uuid, sizeof(uuid_t), GFP_KERNEL);
 		if (!xd->remote_uuid)
-			return;
+			return -ENOMEM;
 	}
 
-	/* Now we can start the normal properties exchange */
-	queue_delayed_work(xd->tb->wq, &xd->properties_changed_work,
-			   msecs_to_jiffies(100));
-	queue_delayed_work(xd->tb->wq, &xd->get_properties_work,
-			   msecs_to_jiffies(1000));
+	return 0;
 }
 
-static void tb_xdomain_get_properties(struct work_struct *work)
+static int tb_xdomain_get_link_status(struct tb_xdomain *xd)
+{
+	struct tb *tb = xd->tb;
+	u8 slw, tlw, sls, tls;
+	int ret;
+
+	dev_dbg(&xd->dev, "sending link state status request to %pUb\n",
+		xd->remote_uuid);
+
+	ret = tb_xdp_link_state_status_request(tb->ctl, xd->route,
+					       xd->state_retries, &slw, &tlw, &sls,
+					       &tls);
+	if (ret) {
+		if (ret != -EOPNOTSUPP && xd->state_retries-- > 0) {
+			dev_dbg(&xd->dev,
+				"failed to request remote link status, retrying\n");
+			return -EAGAIN;
+		}
+		dev_dbg(&xd->dev, "failed to receive remote link status\n");
+		return ret;
+	}
+
+	dev_dbg(&xd->dev, "remote link supports width %#x speed %#x\n", slw, sls);
+
+	if (slw < LANE_ADP_CS_0_SUPPORTED_WIDTH_DUAL) {
+		dev_dbg(&xd->dev, "remote adapter is single lane only\n");
+		return -EOPNOTSUPP;
+	}
+
+	return 0;
+}
+
+static int tb_xdomain_link_state_change(struct tb_xdomain *xd,
+					unsigned int width)
+{
+	struct tb_switch *sw = tb_to_switch(xd->dev.parent);
+	struct tb_port *port = tb_port_at(xd->route, sw);
+	struct tb *tb = xd->tb;
+	u8 tlw, tls;
+	u32 val;
+	int ret;
+
+	if (width == 2)
+		tlw = LANE_ADP_CS_1_TARGET_WIDTH_DUAL;
+	else if (width == 1)
+		tlw = LANE_ADP_CS_1_TARGET_WIDTH_SINGLE;
+	else
+		return -EINVAL;
+
+	/* Use the current target speed */
+	ret = tb_port_read(port, &val, TB_CFG_PORT, port->cap_phy + LANE_ADP_CS_1, 1);
+	if (ret)
+		return ret;
+	tls = val & LANE_ADP_CS_1_TARGET_SPEED_MASK;
+
+	dev_dbg(&xd->dev, "sending link state change request with width %#x speed %#x\n",
+		tlw, tls);
+
+	ret = tb_xdp_link_state_change_request(tb->ctl, xd->route,
+					       xd->state_retries, tlw, tls);
+	if (ret) {
+		if (ret != -EOPNOTSUPP && xd->state_retries-- > 0) {
+			dev_dbg(&xd->dev,
+				"failed to change remote link state, retrying\n");
+			return -EAGAIN;
+		}
+		dev_err(&xd->dev, "failed request link state change, aborting\n");
+		return ret;
+	}
+
+	dev_dbg(&xd->dev, "received link state change response\n");
+	return 0;
+}
+
+static int tb_xdomain_bond_lanes_uuid_high(struct tb_xdomain *xd)
+{
+	struct tb_port *port;
+	int ret, width;
+
+	if (xd->target_link_width == LANE_ADP_CS_1_TARGET_WIDTH_SINGLE) {
+		width = 1;
+	} else if (xd->target_link_width == LANE_ADP_CS_1_TARGET_WIDTH_DUAL) {
+		width = 2;
+	} else {
+		if (xd->state_retries-- > 0) {
+			dev_dbg(&xd->dev,
+				"link state change request not received yet, retrying\n");
+			return -EAGAIN;
+		}
+		dev_dbg(&xd->dev, "timeout waiting for link change request\n");
+		return -ETIMEDOUT;
+	}
+
+	port = tb_port_at(xd->route, tb_xdomain_parent(xd));
+
+	/*
+	 * We can't use tb_xdomain_lane_bonding_enable() here because it
+	 * is the other side that initiates lane bonding. So here we
+	 * just set the width to both lane adapters and wait for the
+	 * link to transition bonded.
+	 */
+	ret = tb_port_set_link_width(port->dual_link_port, width);
+	if (ret) {
+		tb_port_warn(port->dual_link_port,
+			     "failed to set link width to %d\n", width);
+		return ret;
+	}
+
+	ret = tb_port_set_link_width(port, width);
+	if (ret) {
+		tb_port_warn(port, "failed to set link width to %d\n", width);
+		return ret;
+	}
+
+	ret = tb_port_wait_for_link_width(port, width, XDOMAIN_BONDING_TIMEOUT);
+	if (ret) {
+		dev_warn(&xd->dev, "error waiting for link width to become %d\n",
+			 width);
+		return ret;
+	}
+
+	port->bonded = width == 2;
+	port->dual_link_port->bonded = width == 2;
+
+	tb_port_update_credits(port);
+	tb_xdomain_update_link_attributes(xd);
+
+	dev_dbg(&xd->dev, "lane bonding %sabled\n", width == 2 ? "en" : "dis");
+	return 0;
+}
+
+static int tb_xdomain_get_properties(struct tb_xdomain *xd)
 {
-	struct tb_xdomain *xd = container_of(work, typeof(*xd),
-					     get_properties_work.work);
 	struct tb_property_dir *dir;
 	struct tb *tb = xd->tb;
 	bool update = false;
@@ -1066,34 +1365,35 @@ static void tb_xdomain_get_properties(struct work_struct *work)
 	dev_dbg(&xd->dev, "requesting remote properties\n");
 
 	ret = tb_xdp_properties_request(tb->ctl, xd->route, xd->local_uuid,
-					xd->remote_uuid, xd->properties_retries,
+					xd->remote_uuid, xd->state_retries,
 					&block, &gen);
 	if (ret < 0) {
-		if (xd->properties_retries-- > 0) {
+		if (xd->state_retries-- > 0) {
 			dev_dbg(&xd->dev,
 				"failed to request remote properties, retrying\n");
-			queue_delayed_work(xd->tb->wq, &xd->get_properties_work,
-					   msecs_to_jiffies(1000));
+			return -EAGAIN;
 		} else {
 			/* Give up now */
 			dev_err(&xd->dev,
 				"failed to read XDomain properties from %pUb\n",
 				xd->remote_uuid);
 		}
-		return;
-	}
 
-	xd->properties_retries = XDOMAIN_PROPERTIES_RETRIES;
+		return ret;
+	}
 
 	mutex_lock(&xd->lock);
 
 	/* Only accept newer generation properties */
-	if (xd->remote_properties && gen <= xd->remote_property_block_gen)
+	if (xd->remote_properties && gen <= xd->remote_property_block_gen) {
+		ret = 0;
 		goto err_free_block;
+	}
 
 	dir = tb_property_parse_dir(block, ret);
 	if (!dir) {
 		dev_err(&xd->dev, "failed to parse XDomain properties\n");
+		ret = -ENOMEM;
 		goto err_free_block;
 	}
 
@@ -1124,9 +1424,16 @@ static void tb_xdomain_get_properties(struct work_struct *work)
 	 * registered, we notify the userspace that it has changed.
 	 */
 	if (!update) {
+		struct tb_port *port;
+
+		/* Now disable lane 1 if bonding was not enabled */
+		port = tb_port_at(xd->route, tb_xdomain_parent(xd));
+		if (!port->bonded)
+			tb_port_disable(port->dual_link_port);
+
 		if (device_add(&xd->dev)) {
 			dev_err(&xd->dev, "failed to add XDomain device\n");
-			return;
+			return -ENODEV;
 		}
 		dev_info(&xd->dev, "new host found, vendor=%#x device=%#x\n",
 			 xd->vendor, xd->device);
@@ -1138,13 +1445,193 @@ static void tb_xdomain_get_properties(struct work_struct *work)
 	}
 
 	enumerate_services(xd);
-	return;
+	return 0;
 
 err_free_dir:
 	tb_property_free_dir(dir);
 err_free_block:
 	kfree(block);
 	mutex_unlock(&xd->lock);
+
+	return ret;
+}
+
+static void tb_xdomain_queue_uuid(struct tb_xdomain *xd)
+{
+	xd->state = XDOMAIN_STATE_UUID;
+	xd->state_retries = XDOMAIN_RETRIES;
+	queue_delayed_work(xd->tb->wq, &xd->state_work,
+			   msecs_to_jiffies(XDOMAIN_SHORT_TIMEOUT));
+}
+
+static void tb_xdomain_queue_link_status(struct tb_xdomain *xd)
+{
+	xd->state = XDOMAIN_STATE_LINK_STATUS;
+	xd->state_retries = XDOMAIN_RETRIES;
+	queue_delayed_work(xd->tb->wq, &xd->state_work,
+			   msecs_to_jiffies(XDOMAIN_DEFAULT_TIMEOUT));
+}
+
+static void tb_xdomain_queue_link_status2(struct tb_xdomain *xd)
+{
+	xd->state = XDOMAIN_STATE_LINK_STATUS2;
+	xd->state_retries = XDOMAIN_RETRIES;
+	queue_delayed_work(xd->tb->wq, &xd->state_work,
+			   msecs_to_jiffies(XDOMAIN_DEFAULT_TIMEOUT));
+}
+
+static void tb_xdomain_queue_bonding(struct tb_xdomain *xd)
+{
+	if (memcmp(xd->local_uuid, xd->remote_uuid, UUID_SIZE) > 0) {
+		dev_dbg(&xd->dev, "we have higher UUID, other side bonds the lanes\n");
+		xd->state = XDOMAIN_STATE_BONDING_UUID_HIGH;
+	} else {
+		dev_dbg(&xd->dev, "we have lower UUID, bonding lanes\n");
+		xd->state = XDOMAIN_STATE_LINK_STATE_CHANGE;
+	}
+
+	xd->state_retries = XDOMAIN_RETRIES;
+	queue_delayed_work(xd->tb->wq, &xd->state_work,
+			   msecs_to_jiffies(XDOMAIN_DEFAULT_TIMEOUT));
+}
+
+static void tb_xdomain_queue_bonding_uuid_low(struct tb_xdomain *xd)
+{
+	xd->state = XDOMAIN_STATE_BONDING_UUID_LOW;
+	xd->state_retries = XDOMAIN_RETRIES;
+	queue_delayed_work(xd->tb->wq, &xd->state_work,
+			   msecs_to_jiffies(XDOMAIN_DEFAULT_TIMEOUT));
+}
+
+static void tb_xdomain_queue_properties(struct tb_xdomain *xd)
+{
+	xd->state = XDOMAIN_STATE_PROPERTIES;
+	xd->state_retries = XDOMAIN_RETRIES;
+	queue_delayed_work(xd->tb->wq, &xd->state_work,
+			   msecs_to_jiffies(XDOMAIN_DEFAULT_TIMEOUT));
+}
+
+static void tb_xdomain_queue_properties_changed(struct tb_xdomain *xd)
+{
+	xd->properties_changed_retries = XDOMAIN_RETRIES;
+	queue_delayed_work(xd->tb->wq, &xd->properties_changed_work,
+			   msecs_to_jiffies(XDOMAIN_SHORT_TIMEOUT));
+}
+
+static void tb_xdomain_state_work(struct work_struct *work)
+{
+	struct tb_xdomain *xd = container_of(work, typeof(*xd), state_work.work);
+	int ret, state = xd->state;
+
+	if (WARN_ON_ONCE(state < XDOMAIN_STATE_INIT ||
+			 state > XDOMAIN_STATE_ERROR))
+		return;
+
+	dev_dbg(&xd->dev, "running state %s\n", state_names[state]);
+
+	switch (state) {
+	case XDOMAIN_STATE_INIT:
+		if (xd->needs_uuid) {
+			tb_xdomain_queue_uuid(xd);
+		} else {
+			tb_xdomain_queue_properties_changed(xd);
+			tb_xdomain_queue_properties(xd);
+		}
+		break;
+
+	case XDOMAIN_STATE_UUID:
+		ret = tb_xdomain_get_uuid(xd);
+		if (ret) {
+			if (ret == -EAGAIN)
+				goto retry_state;
+			xd->state = XDOMAIN_STATE_ERROR;
+		} else {
+			tb_xdomain_queue_properties_changed(xd);
+			if (xd->bonding_possible)
+				tb_xdomain_queue_link_status(xd);
+			else
+				tb_xdomain_queue_properties(xd);
+		}
+		break;
+
+	case XDOMAIN_STATE_LINK_STATUS:
+		ret = tb_xdomain_get_link_status(xd);
+		if (ret) {
+			if (ret == -EAGAIN)
+				goto retry_state;
+
+			/*
+			 * If any of the lane bonding states fails, we
+			 * skip bonding completely and continue from
+			 * reading the properties.
+			 */
+			tb_xdomain_queue_properties(xd);
+		} else {
+			tb_xdomain_queue_bonding(xd);
+		}
+		break;
+
+	case XDOMAIN_STATE_LINK_STATE_CHANGE:
+		ret = tb_xdomain_link_state_change(xd, 2);
+		if (ret) {
+			if (ret == -EAGAIN)
+				goto retry_state;
+			tb_xdomain_queue_properties(xd);
+		} else {
+			tb_xdomain_queue_link_status2(xd);
+		}
+		break;
+
+	case XDOMAIN_STATE_LINK_STATUS2:
+		ret = tb_xdomain_get_link_status(xd);
+		if (ret) {
+			if (ret == -EAGAIN)
+				goto retry_state;
+			tb_xdomain_queue_properties(xd);
+		} else {
+			tb_xdomain_queue_bonding_uuid_low(xd);
+		}
+		break;
+
+	case XDOMAIN_STATE_BONDING_UUID_LOW:
+		tb_xdomain_lane_bonding_enable(xd);
+		tb_xdomain_queue_properties(xd);
+		break;
+
+	case XDOMAIN_STATE_BONDING_UUID_HIGH:
+		if (tb_xdomain_bond_lanes_uuid_high(xd) == -EAGAIN)
+			goto retry_state;
+		tb_xdomain_queue_properties(xd);
+		break;
+
+	case XDOMAIN_STATE_PROPERTIES:
+		ret = tb_xdomain_get_properties(xd);
+		if (ret) {
+			if (ret == -EAGAIN)
+				goto retry_state;
+			xd->state = XDOMAIN_STATE_ERROR;
+		} else {
+			xd->state = XDOMAIN_STATE_ENUMERATED;
+		}
+		break;
+
+	case XDOMAIN_STATE_ENUMERATED:
+		tb_xdomain_queue_properties(xd);
+		break;
+
+	case XDOMAIN_STATE_ERROR:
+		break;
+
+	default:
+		dev_warn(&xd->dev, "unexpected state %d\n", state);
+		break;
+	}
+
+	return;
+
+retry_state:
+	queue_delayed_work(xd->tb->wq, &xd->state_work,
+			   msecs_to_jiffies(XDOMAIN_DEFAULT_TIMEOUT));
 }
 
 static void tb_xdomain_properties_changed(struct work_struct *work)
@@ -1163,13 +1650,13 @@ static void tb_xdomain_properties_changed(struct work_struct *work)
 				"failed to send properties changed notification, retrying\n");
 			queue_delayed_work(xd->tb->wq,
 					   &xd->properties_changed_work,
-					   msecs_to_jiffies(1000));
+					   msecs_to_jiffies(XDOMAIN_DEFAULT_TIMEOUT));
 		}
 		dev_err(&xd->dev, "failed to send properties changed notification\n");
 		return;
 	}
 
-	xd->properties_changed_retries = XDOMAIN_PROPERTIES_CHANGED_RETRIES;
+	xd->properties_changed_retries = XDOMAIN_RETRIES;
 }
 
 static ssize_t device_show(struct device *dev, struct device_attribute *attr,
@@ -1304,31 +1791,17 @@ static void tb_xdomain_release(struct device *dev)
 
 static void start_handshake(struct tb_xdomain *xd)
 {
-	xd->uuid_retries = XDOMAIN_UUID_RETRIES;
-	xd->properties_retries = XDOMAIN_PROPERTIES_RETRIES;
-	xd->properties_changed_retries = XDOMAIN_PROPERTIES_CHANGED_RETRIES;
-
-	if (xd->needs_uuid) {
-		queue_delayed_work(xd->tb->wq, &xd->get_uuid_work,
-				   msecs_to_jiffies(100));
-	} else {
-		/* Start exchanging properties with the other host */
-		queue_delayed_work(xd->tb->wq, &xd->properties_changed_work,
-				   msecs_to_jiffies(100));
-		queue_delayed_work(xd->tb->wq, &xd->get_properties_work,
-				   msecs_to_jiffies(1000));
-	}
+	xd->state = XDOMAIN_STATE_INIT;
+	queue_delayed_work(xd->tb->wq, &xd->state_work,
+			   msecs_to_jiffies(XDOMAIN_SHORT_TIMEOUT));
 }
 
 static void stop_handshake(struct tb_xdomain *xd)
 {
-	xd->uuid_retries = 0;
-	xd->properties_retries = 0;
-	xd->properties_changed_retries = 0;
-
-	cancel_delayed_work_sync(&xd->get_uuid_work);
-	cancel_delayed_work_sync(&xd->get_properties_work);
 	cancel_delayed_work_sync(&xd->properties_changed_work);
+	cancel_delayed_work_sync(&xd->state_work);
+	xd->properties_changed_retries = 0;
+	xd->state_retries = 0;
 }
 
 static int __maybe_unused tb_xdomain_suspend(struct device *dev)
@@ -1389,8 +1862,7 @@ struct tb_xdomain *tb_xdomain_alloc(struct tb *tb, struct device *parent,
 	ida_init(&xd->in_hopids);
 	ida_init(&xd->out_hopids);
 	mutex_init(&xd->lock);
-	INIT_DELAYED_WORK(&xd->get_uuid_work, tb_xdomain_get_uuid);
-	INIT_DELAYED_WORK(&xd->get_properties_work, tb_xdomain_get_properties);
+	INIT_DELAYED_WORK(&xd->state_work, tb_xdomain_state_work);
 	INIT_DELAYED_WORK(&xd->properties_changed_work,
 			  tb_xdomain_properties_changed);
 
@@ -1405,6 +1877,7 @@ struct tb_xdomain *tb_xdomain_alloc(struct tb *tb, struct device *parent,
 			goto err_free_local_uuid;
 	} else {
 		xd->needs_uuid = true;
+		xd->bonding_possible = !!down->dual_link_port;
 	}
 
 	device_initialize(&xd->dev);
@@ -1523,9 +1996,9 @@ int tb_xdomain_lane_bonding_enable(struct tb_xdomain *xd)
 		return ret;
 	}
 
-	ret = tb_port_wait_for_link_width(port, 2, 100);
+	ret = tb_port_wait_for_link_width(port, 2, XDOMAIN_BONDING_TIMEOUT);
 	if (ret) {
-		tb_port_warn(port, "timeout enabling lane bonding\n");
+		tb_port_warn(port, "failed to enable lane bonding\n");
 		return ret;
 	}
 
diff --git a/include/linux/thunderbolt.h b/include/linux/thunderbolt.h
index 124e13cb1469..e13fe15e6a51 100644
--- a/include/linux/thunderbolt.h
+++ b/include/linux/thunderbolt.h
@@ -198,15 +198,15 @@ void tb_unregister_property_dir(const char *key, struct tb_property_dir *dir);
  * @local_property_block_len: Length of the @local_property_block in dwords
  * @remote_properties: Properties exported by the remote domain
  * @remote_property_block_gen: Generation of @remote_properties
- * @get_uuid_work: Work used to retrieve @remote_uuid
- * @uuid_retries: Number of times left @remote_uuid is requested before
- *		  giving up
- * @get_properties_work: Work used to get remote domain properties
- * @properties_retries: Number of times left to read properties
+ * @state: Next XDomain discovery state to run
+ * @state_work: Work used to run the next state
+ * @state_retries: Number of retries remaining for the state
  * @properties_changed_work: Work used to notify the remote domain that
  *			     our properties have changed
  * @properties_changed_retries: Number of times left to send properties
  *				changed notification
+ * @bonding_possible: True if lane bonding is possible on the local side
+ * @target_link_width: Target link width from the remote host
  * @link: Root switch link the remote domain is connected (ICM only)
  * @depth: Depth in the chain the remote domain is connected (ICM only)
  *
@@ -244,12 +244,13 @@ struct tb_xdomain {
 	u32 local_property_block_len;
 	struct tb_property_dir *remote_properties;
 	u32 remote_property_block_gen;
-	struct delayed_work get_uuid_work;
-	int uuid_retries;
-	struct delayed_work get_properties_work;
-	int properties_retries;
+	int state;
+	struct delayed_work state_work;
+	int state_retries;
 	struct delayed_work properties_changed_work;
 	int properties_changed_retries;
+	bool bonding_possible;
+	u8 target_link_width;
 	u8 link;
 	u8 depth;
 };
-- 
2.35.1


* Re: [PATCH 0/5] thunderbolt: Add support for XDomain lane bonding
  2022-04-26 11:25 [PATCH 0/5] thunderbolt: Add support for XDomain lane bonding Mika Westerberg
                   ` (4 preceding siblings ...)
  2022-04-26 11:25 ` [PATCH 5/5] thunderbolt: Add support for XDomain lane bonding Mika Westerberg
@ 2022-05-05  6:29 ` Mika Westerberg
  5 siblings, 0 replies; 7+ messages in thread
From: Mika Westerberg @ 2022-05-05  6:29 UTC (permalink / raw)
  To: linux-usb; +Cc: Yehezkel Bernat, Michael Jamet, Andreas Noever, Lukas Wunner

On Tue, Apr 26, 2022 at 02:25:35PM +0300, Mika Westerberg wrote:
> Hi all,
> 
> So far, when connecting two Thunderbolt/USB4 domains (hosts), the
> high-speed link has used a single lane. The reason for this was
> simplicity, and also the fact that the host DMA could not exceed
> 20 Gbit/s even if the lanes were bonded. However, Intel Alder Lake and
> beyond can go higher than that, so it now makes more sense to take
> advantage of lane bonding. The USB4 Inter-Domain Service specification
> describes the protocol, and this patch series implements it for the
> Linux Thunderbolt/USB4 driver.
> 
> If both sides (hosts) of the link announce support for this, we
> establish a bonded link. This is only possible on systems with a
> software-based connection manager (so Intel Alder Lake and beyond).
> 
> Mika Westerberg (5):
>   thunderbolt: Add debug logging when lane is enabled/disabled
>   thunderbolt: Move tb_port_state() prototype to correct place
>   thunderbolt: Split setting link width and lane bonding into own functions
>   thunderbolt: Ignore port locked error in tb_port_wait_for_link_width()
>   thunderbolt: Add support for XDomain lane bonding

Series applied to thunderbolt.git/next.

