* [PATCH 00/19] drm/i915: Add Display Port tunnel BW allocation support
@ 2024-01-23 10:28 Imre Deak
  2024-01-23 10:28 ` [PATCH 01/19] drm/dp: Add drm_dp_max_dprx_data_rate() Imre Deak
                   ` (22 more replies)
  0 siblings, 23 replies; 61+ messages in thread
From: Imre Deak @ 2024-01-23 10:28 UTC (permalink / raw)
  To: intel-gfx
  Cc: Gil Fine, dri-devel, Naama Shachar, Saranya Gopal, Pengfei Xu,
	Rajaram Regupathy, Mika Westerberg

Add support for detecting DP tunnels on (Thunderbolt) display links and
enabling the Bandwidth Allocation mode on the link. This helps to enable
the maximum resolution in any scenario on displays sharing the BW on such
links.

Kudos to all Cc'd for their advice, co-development and testing.

Cc: Mika Westerberg <mika.westerberg@linux.intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Saranya Gopal <saranya.gopal@intel.com>
Cc: Rajaram Regupathy <rajaram.regupathy@intel.com>
Cc: Gil Fine <gil.fine@intel.com>
Cc: Naama Shachar <naamax.shachar@intel.com>
Cc: Pengfei Xu <pengfei.xu@intel.com>

Imre Deak (19):
  drm/dp: Add drm_dp_max_dprx_data_rate()
  drm/dp: Add support for DP tunneling
  drm/i915/dp: Add support to notify MST connectors to retry modesets
  drm/i915/dp: Use drm_dp_max_dprx_data_rate()
  drm/i915/dp: Factor out intel_dp_config_required_rate()
  drm/i915/dp: Export intel_dp_max_common_rate/lane_count()
  drm/i915/dp: Factor out intel_dp_update_sink_caps()
  drm/i915/dp: Factor out intel_dp_read_dprx_caps()
  drm/i915/dp: Add intel_dp_max_link_data_rate()
  drm/i915/dp: Add way to get active pipes with syncing commits
  drm/i915/dp: Add support for DP tunnel BW allocation
  drm/i915/dp: Add DP tunnel atomic state and check BW limit
  drm/i915/dp: Account for tunnel BW limit in
    intel_dp_max_link_data_rate()
  drm/i915/dp: Compute DP tunnel BW during encoder state computation
  drm/i915/dp: Allocate/free DP tunnel BW in the encoder enable/disable
    hooks
  drm/i915/dp: Handle DP tunnel IRQs
  drm/i915/dp: Call intel_dp_sync_state() always for DDI DP encoders
  drm/i915/dp: Suspend/resume DP tunnels
  drm/i915/dp: Enable DP tunnel BW allocation mode

 drivers/gpu/drm/display/Kconfig               |   17 +
 drivers/gpu/drm/display/Makefile              |    2 +
 drivers/gpu/drm/display/drm_dp_helper.c       |   58 +
 drivers/gpu/drm/display/drm_dp_tunnel.c       | 1715 +++++++++++++++++
 drivers/gpu/drm/i915/Kconfig                  |   13 +
 drivers/gpu/drm/i915/Kconfig.debug            |    1 +
 drivers/gpu/drm/i915/Makefile                 |    3 +
 drivers/gpu/drm/i915/display/g4x_dp.c         |   28 +
 drivers/gpu/drm/i915/display/intel_atomic.c   |   10 +
 drivers/gpu/drm/i915/display/intel_ddi.c      |    9 +-
 drivers/gpu/drm/i915/display/intel_display.c  |   26 +-
 .../gpu/drm/i915/display/intel_display_core.h |    1 +
 .../drm/i915/display/intel_display_driver.c   |   20 +-
 .../drm/i915/display/intel_display_types.h    |    9 +
 drivers/gpu/drm/i915/display/intel_dp.c       |  309 ++-
 drivers/gpu/drm/i915/display/intel_dp.h       |   21 +-
 .../drm/i915/display/intel_dp_link_training.c |   33 +-
 .../drm/i915/display/intel_dp_link_training.h |    1 +
 drivers/gpu/drm/i915/display/intel_dp_mst.c   |   18 +-
 .../gpu/drm/i915/display/intel_dp_tunnel.c    |  642 ++++++
 .../gpu/drm/i915/display/intel_dp_tunnel.h    |  131 ++
 drivers/gpu/drm/i915/display/intel_link_bw.c  |    5 +
 drivers/gpu/drm/i915/display/intel_tc.c       |    4 +-
 include/drm/display/drm_dp.h                  |   61 +
 include/drm/display/drm_dp_helper.h           |    2 +
 include/drm/display/drm_dp_tunnel.h           |  270 +++
 26 files changed, 3292 insertions(+), 117 deletions(-)
 create mode 100644 drivers/gpu/drm/display/drm_dp_tunnel.c
 create mode 100644 drivers/gpu/drm/i915/display/intel_dp_tunnel.c
 create mode 100644 drivers/gpu/drm/i915/display/intel_dp_tunnel.h
 create mode 100644 include/drm/display/drm_dp_tunnel.h

-- 
2.39.2


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [PATCH 01/19] drm/dp: Add drm_dp_max_dprx_data_rate()
  2024-01-23 10:28 [PATCH 00/19] drm/i915: Add Display Port tunnel BW allocation support Imre Deak
@ 2024-01-23 10:28 ` Imre Deak
  2024-01-26 11:36   ` Ville Syrjälä
  2024-01-23 10:28 ` [PATCH 02/19] drm/dp: Add support for DP tunneling Imre Deak
                   ` (21 subsequent siblings)
  22 siblings, 1 reply; 61+ messages in thread
From: Imre Deak @ 2024-01-23 10:28 UTC (permalink / raw)
  To: intel-gfx; +Cc: dri-devel

Copy intel_dp_max_data_rate() to DRM core. It will be needed by a
follow-up DP tunnel patch, which checks the maximum rate the DPRX (sink)
supports. Accordingly use the drm_dp_max_dprx_data_rate() name for
clarity. A later patch in this patchset will also switch i915 to call
the new DRM function instead of intel_dp_max_data_rate().

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/display/drm_dp_helper.c | 58 +++++++++++++++++++++++++
 include/drm/display/drm_dp_helper.h     |  2 +
 2 files changed, 60 insertions(+)
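
For reference, a rough sketch of the values the new helper is expected to
return (illustrative only, not part of the patch; the link rate argument is
in 10 kbit/s units as elsewhere in the DP helpers, the result is in kBps):

	drm_dp_max_dprx_data_rate(162000, 4);	/* RBR x4:    1.62 Gbps * 4 * 0.8    / 8 =  648000 kBps */
	drm_dp_max_dprx_data_rate(810000, 4);	/* HBR3 x4:   8.1 Gbps  * 4 * 0.8    / 8 = 3240000 kBps */
	drm_dp_max_dprx_data_rate(1000000, 4);	/* UHBR10 x4: 10 Gbps   * 4 * 0.9671 / 8 = 4835500 kBps */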

diff --git a/drivers/gpu/drm/display/drm_dp_helper.c b/drivers/gpu/drm/display/drm_dp_helper.c
index b1ca3a1100dab..24911243d4d3a 100644
--- a/drivers/gpu/drm/display/drm_dp_helper.c
+++ b/drivers/gpu/drm/display/drm_dp_helper.c
@@ -4058,3 +4058,61 @@ int drm_dp_bw_channel_coding_efficiency(bool is_uhbr)
 		return 800000;
 }
 EXPORT_SYMBOL(drm_dp_bw_channel_coding_efficiency);
+
+/*
+ * Given a link rate and lanes, get the data bandwidth.
+ *
+ * Data bandwidth is the actual payload rate, which depends on the data
+ * bandwidth efficiency and the link rate.
+ *
+ * For 8b/10b channel encoding, SST and non-FEC, the data bandwidth efficiency
+ * is 80%. For example, for a 1.62 Gbps link, 1.62*10^9 bps * 0.80 * (1/8) =
+ * 162000 kBps. With 8-bit symbols, we have 162000 kHz symbol clock. Just by
+ * coincidence, the port clock in kHz matches the data bandwidth in kBps, and
+ * they equal the link bit rate in Gbps multiplied by 100000. (Note that this no
+ * longer holds for data bandwidth as soon as FEC or MST is taken into account!)
+ *
+ * For 128b/132b channel encoding, the data bandwidth efficiency is 96.71%. For
+ * example, for a 10 Gbps link, 10*10^9 bps * 0.9671 * (1/8) = 1208875
+ * kBps. With 32-bit symbols, we have 312500 kHz symbol clock. The value 1000000
+ * does not match the symbol clock, the port clock (not even if you think in
+ * terms of a byte clock), nor the data bandwidth. It only matches the link bit
+ * rate in units of 10000 bps.
+ *
+ * Note that protocol layers above the DPRX link level considered here can
+ * further limit the maximum data rate. Such layers are the MST topology (with
+ * limits on the link between the source and first branch device as well as on
+ * the whole MST path until the DPRX link) and (Thunderbolt) DP tunnels -
+ * which in turn can encapsulate an MST link with its own limit - with each
+ * SST or MST encapsulated tunnel sharing the BW of a tunnel group.
+ *
+ * TODO: Add support for querying the max data rate with the above limits as
+ * well.
+ *
+ * Returns the maximum data rate in kBps units.
+ */
+int drm_dp_max_dprx_data_rate(int max_link_rate, int max_lanes)
+{
+	int ch_coding_efficiency =
+		drm_dp_bw_channel_coding_efficiency(drm_dp_is_uhbr_rate(max_link_rate));
+	int max_link_rate_kbps = max_link_rate * 10;
+
+	/*
+	 * UHBR rates always use 128b/132b channel encoding, and have
+	 * 96.71% data bandwidth efficiency. Consider max_link_rate the
+	 * link bit rate in units of 10000 bps.
+	 */
+	/*
+	 * Lower than UHBR rates always use 8b/10b channel encoding, and have
+	 * 80% data bandwidth efficiency for SST non-FEC. However, this turns
+	 * out to be a nop by coincidence:
+	 *
+	 *	int max_link_rate_kbps = max_link_rate * 10;
+	 *	max_link_rate_kbps = DIV_ROUND_DOWN_ULL(max_link_rate_kbps * 8, 10);
+	 *	max_link_rate = max_link_rate_kbps / 8;
+	 */
+	return DIV_ROUND_DOWN_ULL(mul_u32_u32(max_link_rate_kbps * max_lanes,
+					      ch_coding_efficiency),
+				  1000000 * 8);
+}
+EXPORT_SYMBOL(drm_dp_max_dprx_data_rate);
diff --git a/include/drm/display/drm_dp_helper.h b/include/drm/display/drm_dp_helper.h
index 863b2e7add29e..454ae7517419a 100644
--- a/include/drm/display/drm_dp_helper.h
+++ b/include/drm/display/drm_dp_helper.h
@@ -813,4 +813,6 @@ int drm_dp_bw_overhead(int lane_count, int hactive,
 		       int bpp_x16, unsigned long flags);
 int drm_dp_bw_channel_coding_efficiency(bool is_uhbr);
 
+int drm_dp_max_dprx_data_rate(int max_link_rate, int max_lanes);
+
 #endif /* _DRM_DP_HELPER_H_ */
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 02/19] drm/dp: Add support for DP tunneling
  2024-01-23 10:28 [PATCH 00/19] drm/i915: Add Display Port tunnel BW allocation support Imre Deak
  2024-01-23 10:28 ` [PATCH 01/19] drm/dp: Add drm_dp_max_dprx_data_rate() Imre Deak
@ 2024-01-23 10:28 ` Imre Deak
  2024-01-31 12:50   ` Hogander, Jouni
                     ` (2 more replies)
  2024-01-23 10:28 ` [PATCH 03/19] drm/i915/dp: Add support to notify MST connectors to retry modesets Imre Deak
                   ` (20 subsequent siblings)
  22 siblings, 3 replies; 61+ messages in thread
From: Imre Deak @ 2024-01-23 10:28 UTC (permalink / raw)
  To: intel-gfx; +Cc: dri-devel

Add support for DisplayPort tunneling. For now this includes support
for the Bandwidth Allocation Mode (BWA), leaving Panel Replay support
for later.

BWA allows displays that share the same (Thunderbolt) link to use their
maximum resolution. At the moment this may not be possible due to the
coarse granularity of partitioning the link BW among the displays on the
link: the BW allocation policy is in a SW/FW/HW component on the link
(on Thunderbolt it's the SW or FW Connection Manager), independent of
the driver. This policy will set the DPRX maximum rate and lane count
DPCD registers the GFX driver will see (0x00000, 0x00001, 0x02200,
0x02201) based on the available link BW.

The granularity of the current BW allocation policy is coarse, based on
the required link rate in the 1.62Gbps..8.1Gbps range, and it may prevent
using higher resolutions altogether: the display connected first will
get a share of the link BW which corresponds to its full DPRX capability
(regardless of the actual mode it uses). A subsequently connected display
will only get the remaining BW, which could be well below its full
capability.

BWA solves the above coarse granularity (reducing it to a 250Mbps..1Gbps
range) and first-come/first-served issues by letting the driver request
the BW for each display on a link based on the actual modes the
displays use.
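
As a rough, illustrative example: a DPRX advertising HBR3 x 4 lanes
corresponds to 8.1 Gbps * 4 * 0.8 = ~25.9 Gbps of payload BW, which the
current policy reserves for that display regardless of its mode, while a
3840x2160@60 8 bpc mode needs only on the order of 12-13 Gbps. With BWA
and a 250 Mbps granularity the driver can instead request about
ceil(12.5 Gbps / 0.25 Gbps) = 50 granularity units for that mode and
leave the rest of the link BW to other displays in the group.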

This patch adds the DRM core helper functions, while a follow-up change
in the patchset takes them into use in the i915 driver.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/display/Kconfig         |   17 +
 drivers/gpu/drm/display/Makefile        |    2 +
 drivers/gpu/drm/display/drm_dp_tunnel.c | 1715 +++++++++++++++++++++++
 include/drm/display/drm_dp.h            |   60 +
 include/drm/display/drm_dp_tunnel.h     |  270 ++++
 5 files changed, 2064 insertions(+)
 create mode 100644 drivers/gpu/drm/display/drm_dp_tunnel.c
 create mode 100644 include/drm/display/drm_dp_tunnel.h
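
For readers new to the API, below is a rough sketch of how a driver could
wire up the helpers added by this patch (illustrative only, not the i915
code from this series; all local variables are placeholders, BW values are
in kB/s units and error handling is trimmed):

	/* Once at driver init: */
	mgr = drm_dp_tunnel_mgr_create(dev, max_group_count);

	/* At connector detection time, on the DP-IN adapter's AUX: */
	tunnel = drm_dp_tunnel_detect(mgr, aux);
	if (!IS_ERR(tunnel))
		drm_dp_tunnel_enable_bw_alloc(tunnel);

	/* During atomic check, for each stream going through the tunnel: */
	drm_dp_tunnel_atomic_set_stream_bw(state, tunnel, stream_id, stream_bw);
	drm_dp_tunnel_atomic_check_stream_bws(state, &failed_stream_mask);

	/* During the commit/enable phase: */
	drm_dp_tunnel_alloc_bw(tunnel,
			       drm_dp_tunnel_atomic_get_tunnel_bw(new_tunnel_state));

	/* From the HPD IRQ handler; a return value of 1 asks for a re-probe: */
	if (drm_dp_tunnel_handle_irq(mgr, aux) == 1)
		drm_dp_tunnel_update_state(tunnel);

	/* On disconnect / driver removal: */
	drm_dp_tunnel_destroy(tunnel);
	drm_dp_tunnel_mgr_destroy(mgr);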

diff --git a/drivers/gpu/drm/display/Kconfig b/drivers/gpu/drm/display/Kconfig
index 09712b88a5b83..b024a84b94c1c 100644
--- a/drivers/gpu/drm/display/Kconfig
+++ b/drivers/gpu/drm/display/Kconfig
@@ -17,6 +17,23 @@ config DRM_DISPLAY_DP_HELPER
 	help
 	  DRM display helpers for DisplayPort.
 
+config DRM_DISPLAY_DP_TUNNEL
+	bool
+	select DRM_DISPLAY_DP_HELPER
+	help
+	  Enable support for DisplayPort tunnels.
+
+config DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
+	bool "Enable debugging the DP tunnel state"
+	depends on REF_TRACKER
+	depends on DRM_DISPLAY_DP_TUNNEL
+	depends on DEBUG_KERNEL
+	depends on EXPERT
+	help
+	  Enables debugging the DP tunnel manager's status.
+
+	  If in doubt, say "N".
+
 config DRM_DISPLAY_HDCP_HELPER
 	bool
 	depends on DRM_DISPLAY_HELPER
diff --git a/drivers/gpu/drm/display/Makefile b/drivers/gpu/drm/display/Makefile
index 17ac4a1006a80..7ca61333c6696 100644
--- a/drivers/gpu/drm/display/Makefile
+++ b/drivers/gpu/drm/display/Makefile
@@ -8,6 +8,8 @@ drm_display_helper-$(CONFIG_DRM_DISPLAY_DP_HELPER) += \
 	drm_dp_helper.o \
 	drm_dp_mst_topology.o \
 	drm_dsc_helper.o
+drm_display_helper-$(CONFIG_DRM_DISPLAY_DP_TUNNEL) += \
+	drm_dp_tunnel.o
 drm_display_helper-$(CONFIG_DRM_DISPLAY_HDCP_HELPER) += drm_hdcp_helper.o
 drm_display_helper-$(CONFIG_DRM_DISPLAY_HDMI_HELPER) += \
 	drm_hdmi_helper.o \
diff --git a/drivers/gpu/drm/display/drm_dp_tunnel.c b/drivers/gpu/drm/display/drm_dp_tunnel.c
new file mode 100644
index 0000000000000..58f6330db7d9d
--- /dev/null
+++ b/drivers/gpu/drm/display/drm_dp_tunnel.c
@@ -0,0 +1,1715 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2023 Intel Corporation
+ */
+
+#include <linux/ref_tracker.h>
+#include <linux/types.h>
+
+#include <drm/drm_atomic_state_helper.h>
+
+#include <drm/drm_atomic.h>
+#include <drm/drm_print.h>
+#include <drm/display/drm_dp.h>
+#include <drm/display/drm_dp_helper.h>
+#include <drm/display/drm_dp_tunnel.h>
+
+#define to_group(__private_obj) \
+	container_of(__private_obj, struct drm_dp_tunnel_group, base)
+
+#define to_group_state(__private_state) \
+	container_of(__private_state, struct drm_dp_tunnel_group_state, base)
+
+#define is_dp_tunnel_private_obj(__obj) \
+	((__obj)->funcs == &tunnel_group_funcs)
+
+#define for_each_new_group_in_state(__state, __new_group_state, __i) \
+	for ((__i) = 0; \
+	     (__i) < (__state)->num_private_objs; \
+	     (__i)++) \
+		for_each_if ((__state)->private_objs[__i].ptr && \
+			     is_dp_tunnel_private_obj((__state)->private_objs[__i].ptr) && \
+			     ((__new_group_state) = \
+				to_group_state((__state)->private_objs[__i].new_state), 1))
+
+#define for_each_old_group_in_state(__state, __old_group_state, __i) \
+	for ((__i) = 0; \
+	     (__i) < (__state)->num_private_objs; \
+	     (__i)++) \
+		for_each_if ((__state)->private_objs[__i].ptr && \
+			     is_dp_tunnel_private_obj((__state)->private_objs[__i].ptr) && \
+			     ((__old_group_state) = \
+				to_group_state((__state)->private_objs[__i].old_state), 1))
+
+#define for_each_tunnel_in_group(__group, __tunnel) \
+	list_for_each_entry(__tunnel, &(__group)->tunnels, node)
+
+#define for_each_tunnel_state(__group_state, __tunnel_state) \
+	list_for_each_entry(__tunnel_state, &(__group_state)->tunnel_states, node)
+
+#define for_each_tunnel_state_safe(__group_state, __tunnel_state, __tunnel_state_tmp) \
+	list_for_each_entry_safe(__tunnel_state, __tunnel_state_tmp, \
+				 &(__group_state)->tunnel_states, node)
+
+#define kbytes_to_mbits(__kbytes) \
+	DIV_ROUND_UP((__kbytes) * 8, 1000)
+
+#define DPTUN_BW_ARG(__bw) ((__bw) < 0 ? (__bw) : kbytes_to_mbits(__bw))
+
+#define __tun_prn(__tunnel, __level, __type, __fmt, ...) \
+	drm_##__level##__type((__tunnel)->group->mgr->dev, \
+			      "[DPTUN %s][%s] " __fmt, \
+			      drm_dp_tunnel_name(__tunnel), \
+			      (__tunnel)->aux->name, ## \
+			      __VA_ARGS__)
+
+#define tun_dbg(__tunnel, __fmt, ...) \
+	__tun_prn(__tunnel, dbg, _kms, __fmt, ## __VA_ARGS__)
+
+#define tun_dbg_stat(__tunnel, __err, __fmt, ...) do { \
+	if (__err) \
+		__tun_prn(__tunnel, dbg, _kms, __fmt " (Failed, err: %pe)\n", \
+			  ## __VA_ARGS__, ERR_PTR(__err)); \
+	else \
+		__tun_prn(__tunnel, dbg, _kms, __fmt " (Ok)\n", \
+			  ## __VA_ARGS__); \
+} while (0)
+
+#define tun_dbg_atomic(__tunnel, __fmt, ...) \
+	__tun_prn(__tunnel, dbg, _atomic, __fmt, ## __VA_ARGS__)
+
+#define tun_grp_dbg(__group, __fmt, ...) \
+	drm_dbg_kms((__group)->mgr->dev, \
+		    "[DPTUN %s] " __fmt, \
+		    drm_dp_tunnel_group_name(__group), ## \
+		    __VA_ARGS__)
+
+#define DP_TUNNELING_BASE DP_TUNNELING_OUI
+
+#define __DPTUN_REG_RANGE(start, size) \
+	GENMASK_ULL((start) + (size) - 1, (start))
+
+#define DPTUN_REG_RANGE(addr, size) \
+	__DPTUN_REG_RANGE((addr) - DP_TUNNELING_BASE, size)
+
+#define DPTUN_REG(addr) DPTUN_REG_RANGE(addr, 1)
+
+#define DPTUN_INFO_REG_MASK ( \
+	DPTUN_REG_RANGE(DP_TUNNELING_OUI, DP_TUNNELING_OUI_BYTES) | \
+	DPTUN_REG_RANGE(DP_TUNNELING_DEV_ID, DP_TUNNELING_DEV_ID_BYTES) | \
+	DPTUN_REG(DP_TUNNELING_HW_REV) | \
+	DPTUN_REG(DP_TUNNELING_SW_REV_MAJOR) | \
+	DPTUN_REG(DP_TUNNELING_SW_REV_MINOR) | \
+	DPTUN_REG(DP_TUNNELING_CAPABILITIES) | \
+	DPTUN_REG(DP_IN_ADAPTER_INFO) | \
+	DPTUN_REG(DP_USB4_DRIVER_ID) | \
+	DPTUN_REG(DP_USB4_DRIVER_BW_CAPABILITY) | \
+	DPTUN_REG(DP_IN_ADAPTER_TUNNEL_INFORMATION) | \
+	DPTUN_REG(DP_BW_GRANULARITY) | \
+	DPTUN_REG(DP_ESTIMATED_BW) | \
+	DPTUN_REG(DP_ALLOCATED_BW) | \
+	DPTUN_REG(DP_TUNNELING_MAX_LINK_RATE) | \
+	DPTUN_REG(DP_TUNNELING_MAX_LANE_COUNT) | \
+	DPTUN_REG(DP_DPTX_BW_ALLOCATION_MODE_CONTROL))
+
+static const DECLARE_BITMAP(dptun_info_regs, 64) = {
+	DPTUN_INFO_REG_MASK & -1UL,
+#if BITS_PER_LONG == 32
+	DPTUN_INFO_REG_MASK >> 32,
+#endif
+};
+
+struct drm_dp_tunnel_regs {
+	u8 buf[HWEIGHT64(DPTUN_INFO_REG_MASK)];
+};
+
+struct drm_dp_tunnel_group;
+
+struct drm_dp_tunnel {
+	struct drm_dp_tunnel_group *group;
+
+	struct list_head node;
+
+	struct kref kref;
+#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
+	struct ref_tracker *tracker;
+#endif
+	struct drm_dp_aux *aux;
+	char name[8];
+
+	int bw_granularity;
+	int estimated_bw;
+	int allocated_bw;
+
+	int max_dprx_rate;
+	u8 max_dprx_lane_count;
+
+	u8 adapter_id;
+
+	bool bw_alloc_supported:1;
+	bool bw_alloc_enabled:1;
+	bool has_io_error:1;
+	bool destroyed:1;
+};
+
+struct drm_dp_tunnel_group_state;
+
+struct drm_dp_tunnel_state {
+	struct drm_dp_tunnel_group_state *group_state;
+
+	struct drm_dp_tunnel_ref tunnel_ref;
+
+	struct list_head node;
+
+	u32 stream_mask;
+	int *stream_bw;
+};
+
+struct drm_dp_tunnel_group_state {
+	struct drm_private_state base;
+
+	struct list_head tunnel_states;
+};
+
+struct drm_dp_tunnel_group {
+	struct drm_private_obj base;
+	struct drm_dp_tunnel_mgr *mgr;
+
+	struct list_head tunnels;
+
+	int available_bw;	/* available BW including the allocated_bw of all tunnels */
+	int drv_group_id;
+
+	char name[8];
+
+	bool active:1;
+};
+
+struct drm_dp_tunnel_mgr {
+	struct drm_device *dev;
+
+	int group_count;
+	struct drm_dp_tunnel_group *groups;
+	wait_queue_head_t bw_req_queue;
+
+#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
+	struct ref_tracker_dir ref_tracker;
+#endif
+};
+
+static int next_reg_area(int *offset)
+{
+	*offset = find_next_bit(dptun_info_regs, 64, *offset);
+
+	return find_next_zero_bit(dptun_info_regs, 64, *offset + 1) - *offset;
+}
+
+#define tunnel_reg_ptr(__regs, __address) ({ \
+	WARN_ON(!test_bit((__address) - DP_TUNNELING_BASE, dptun_info_regs)); \
+	&(__regs)->buf[bitmap_weight(dptun_info_regs, (__address) - DP_TUNNELING_BASE)]; \
+})
+
+static int read_tunnel_regs(struct drm_dp_aux *aux, struct drm_dp_tunnel_regs *regs)
+{
+	int offset = 0;
+	int len;
+
+	while ((len = next_reg_area(&offset))) {
+		int address = DP_TUNNELING_BASE + offset;
+
+		if (drm_dp_dpcd_read(aux, address, tunnel_reg_ptr(regs, address), len) < 0)
+			return -EIO;
+
+		offset += len;
+	}
+
+	return 0;
+}
+
+static u8 tunnel_reg(const struct drm_dp_tunnel_regs *regs, int address)
+{
+	return *tunnel_reg_ptr(regs, address);
+}
+
+static int tunnel_reg_drv_group_id(const struct drm_dp_tunnel_regs *regs)
+{
+	int drv_id = tunnel_reg(regs, DP_USB4_DRIVER_ID) & DP_USB4_DRIVER_ID_MASK;
+	int group_id = tunnel_reg(regs, DP_IN_ADAPTER_TUNNEL_INFORMATION) & DP_GROUP_ID_MASK;
+
+	if (!group_id)
+		return 0;
+
+	return (drv_id << DP_GROUP_ID_BITS) | group_id;
+}
+
+/* Return granularity in kB/s units */
+static int tunnel_reg_bw_granularity(const struct drm_dp_tunnel_regs *regs)
+{
+	int gr = tunnel_reg(regs, DP_BW_GRANULARITY) & DP_BW_GRANULARITY_MASK;
+
+	WARN_ON(gr > 2);
+
+	return (250000 << gr) / 8;
+}
+
+static int tunnel_reg_max_dprx_rate(const struct drm_dp_tunnel_regs *regs)
+{
+	u8 bw_code = tunnel_reg(regs, DP_TUNNELING_MAX_LINK_RATE);
+
+	return drm_dp_bw_code_to_link_rate(bw_code);
+}
+
+static int tunnel_reg_max_dprx_lane_count(const struct drm_dp_tunnel_regs *regs)
+{
+	u8 lane_count = tunnel_reg(regs, DP_TUNNELING_MAX_LANE_COUNT) &
+			DP_TUNNELING_MAX_LANE_COUNT_MASK;
+
+	return lane_count;
+}
+
+static bool tunnel_reg_bw_alloc_supported(const struct drm_dp_tunnel_regs *regs)
+{
+	u8 cap_mask = DP_TUNNELING_SUPPORT | DP_IN_BW_ALLOCATION_MODE_SUPPORT;
+
+	if ((tunnel_reg(regs, DP_TUNNELING_CAPABILITIES) & cap_mask) != cap_mask)
+		return false;
+
+	return tunnel_reg(regs, DP_USB4_DRIVER_BW_CAPABILITY) &
+	       DP_USB4_DRIVER_BW_ALLOCATION_MODE_SUPPORT;
+}
+
+static bool tunnel_reg_bw_alloc_enabled(const struct drm_dp_tunnel_regs *regs)
+{
+	return tunnel_reg(regs, DP_DPTX_BW_ALLOCATION_MODE_CONTROL) &
+		DP_DISPLAY_DRIVER_BW_ALLOCATION_MODE_ENABLE;
+}
+
+static int tunnel_group_drv_id(int drv_group_id)
+{
+	return drv_group_id >> DP_GROUP_ID_BITS;
+}
+
+static int tunnel_group_id(int drv_group_id)
+{
+	return drv_group_id & DP_GROUP_ID_MASK;
+}
+
+const char *drm_dp_tunnel_name(const struct drm_dp_tunnel *tunnel)
+{
+	return tunnel->name;
+}
+EXPORT_SYMBOL(drm_dp_tunnel_name);
+
+static const char *drm_dp_tunnel_group_name(const struct drm_dp_tunnel_group *group)
+{
+	return group->name;
+}
+
+static struct drm_dp_tunnel_group *
+lookup_or_alloc_group(struct drm_dp_tunnel_mgr *mgr, int drv_group_id)
+{
+	struct drm_dp_tunnel_group *group = NULL;
+	int i;
+
+	for (i = 0; i < mgr->group_count; i++) {
+		/*
+		 * A tunnel group with 0 group ID shouldn't have more than
+		 * one tunnel.
+		 */
+		if (tunnel_group_id(drv_group_id) &&
+		    mgr->groups[i].drv_group_id == drv_group_id)
+			return &mgr->groups[i];
+
+		if (!group && !mgr->groups[i].active)
+			group = &mgr->groups[i];
+	}
+
+	if (!group) {
+		drm_dbg_kms(mgr->dev,
+			    "DPTUN: Can't allocate more tunnel groups\n");
+		return NULL;
+	}
+
+	group->drv_group_id = drv_group_id;
+	group->active = true;
+
+	snprintf(group->name, sizeof(group->name), "%d:%d:*",
+		 tunnel_group_drv_id(drv_group_id) & ((1 << DP_USB4_DRIVER_ID_BITS) - 1),
+		 tunnel_group_id(drv_group_id) & ((1 << DP_GROUP_ID_BITS) - 1));
+
+	return group;
+}
+
+static void free_group(struct drm_dp_tunnel_group *group)
+{
+	struct drm_dp_tunnel_mgr *mgr = group->mgr;
+
+	if (drm_WARN_ON(mgr->dev, !list_empty(&group->tunnels)))
+		return;
+
+	group->drv_group_id = 0;
+	group->available_bw = -1;
+	group->active = false;
+}
+
+static struct drm_dp_tunnel *
+tunnel_get(struct drm_dp_tunnel *tunnel)
+{
+	kref_get(&tunnel->kref);
+
+	return tunnel;
+}
+
+static void free_tunnel(struct kref *kref)
+{
+	struct drm_dp_tunnel *tunnel = container_of(kref, typeof(*tunnel), kref);
+	struct drm_dp_tunnel_group *group = tunnel->group;
+
+	list_del(&tunnel->node);
+	if (list_empty(&group->tunnels))
+		free_group(group);
+
+	kfree(tunnel);
+}
+
+static void tunnel_put(struct drm_dp_tunnel *tunnel)
+{
+	kref_put(&tunnel->kref, free_tunnel);
+}
+
+#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
+static void track_tunnel_ref(struct drm_dp_tunnel *tunnel,
+			     struct ref_tracker **tracker)
+{
+	ref_tracker_alloc(&tunnel->group->mgr->ref_tracker,
+			  tracker, GFP_KERNEL);
+}
+
+static void untrack_tunnel_ref(struct drm_dp_tunnel *tunnel,
+			       struct ref_tracker **tracker)
+{
+	ref_tracker_free(&tunnel->group->mgr->ref_tracker,
+			 tracker);
+}
+
+struct drm_dp_tunnel *
+drm_dp_tunnel_get_untracked(struct drm_dp_tunnel *tunnel)
+{
+	track_tunnel_ref(tunnel, NULL);
+
+	return tunnel_get(tunnel);
+}
+EXPORT_SYMBOL(drm_dp_tunnel_get_untracked);
+
+void drm_dp_tunnel_put_untracked(struct drm_dp_tunnel *tunnel)
+{
+	tunnel_put(tunnel);
+	untrack_tunnel_ref(tunnel, NULL);
+}
+EXPORT_SYMBOL(drm_dp_tunnel_put_untracked);
+
+struct drm_dp_tunnel *
+drm_dp_tunnel_get(struct drm_dp_tunnel *tunnel,
+		    struct ref_tracker **tracker)
+{
+	track_tunnel_ref(tunnel, tracker);
+
+	return tunnel_get(tunnel);
+}
+EXPORT_SYMBOL(drm_dp_tunnel_get);
+
+void drm_dp_tunnel_put(struct drm_dp_tunnel *tunnel,
+			 struct ref_tracker **tracker)
+{
+	untrack_tunnel_ref(tunnel, tracker);
+	tunnel_put(tunnel);
+}
+EXPORT_SYMBOL(drm_dp_tunnel_put);
+#else
+#define track_tunnel_ref(tunnel, tracker) do {} while (0)
+#define untrack_tunnel_ref(tunnel, tracker) do {} while (0)
+
+struct drm_dp_tunnel *
+drm_dp_tunnel_get_untracked(struct drm_dp_tunnel *tunnel)
+{
+	return tunnel_get(tunnel);
+}
+EXPORT_SYMBOL(drm_dp_tunnel_get_untracked);
+
+void drm_dp_tunnel_put_untracked(struct drm_dp_tunnel *tunnel)
+{
+	tunnel_put(tunnel);
+}
+EXPORT_SYMBOL(drm_dp_tunnel_put_untracked);
+#endif
+
+static bool add_tunnel_to_group(struct drm_dp_tunnel_mgr *mgr,
+				int drv_group_id,
+				struct drm_dp_tunnel *tunnel)
+{
+	struct drm_dp_tunnel_group *group =
+		lookup_or_alloc_group(mgr, drv_group_id);
+
+	if (!group)
+		return false;
+
+	tunnel->group = group;
+	list_add(&tunnel->node, &group->tunnels);
+
+	return true;
+}
+
+static struct drm_dp_tunnel *
+create_tunnel(struct drm_dp_tunnel_mgr *mgr,
+	      struct drm_dp_aux *aux,
+	      const struct drm_dp_tunnel_regs *regs)
+{
+	int drv_group_id = tunnel_reg_drv_group_id(regs);
+	struct drm_dp_tunnel *tunnel;
+
+	tunnel = kzalloc(sizeof(*tunnel), GFP_KERNEL);
+	if (!tunnel)
+		return NULL;
+
+	INIT_LIST_HEAD(&tunnel->node);
+
+	kref_init(&tunnel->kref);
+
+	tunnel->aux = aux;
+
+	tunnel->adapter_id = tunnel_reg(regs, DP_IN_ADAPTER_INFO) & DP_IN_ADAPTER_NUMBER_MASK;
+
+	snprintf(tunnel->name, sizeof(tunnel->name), "%d:%d:%d",
+		 tunnel_group_drv_id(drv_group_id) & ((1 << DP_USB4_DRIVER_ID_BITS) - 1),
+		 tunnel_group_id(drv_group_id) & ((1 << DP_GROUP_ID_BITS) - 1),
+		 tunnel->adapter_id & ((1 << DP_IN_ADAPTER_NUMBER_BITS) - 1));
+
+	tunnel->bw_granularity = tunnel_reg_bw_granularity(regs);
+	tunnel->allocated_bw = tunnel_reg(regs, DP_ALLOCATED_BW) *
+			       tunnel->bw_granularity;
+
+	tunnel->bw_alloc_supported = tunnel_reg_bw_alloc_supported(regs);
+	tunnel->bw_alloc_enabled = tunnel_reg_bw_alloc_enabled(regs);
+
+	if (!add_tunnel_to_group(mgr, drv_group_id, tunnel)) {
+		kfree(tunnel);
+
+		return NULL;
+	}
+
+	track_tunnel_ref(tunnel, &tunnel->tracker);
+
+	return tunnel;
+}
+
+static void destroy_tunnel(struct drm_dp_tunnel *tunnel)
+{
+	untrack_tunnel_ref(tunnel, &tunnel->tracker);
+	tunnel_put(tunnel);
+}
+
+void drm_dp_tunnel_set_io_error(struct drm_dp_tunnel *tunnel)
+{
+	tunnel->has_io_error = true;
+}
+EXPORT_SYMBOL(drm_dp_tunnel_set_io_error);
+
+static char yes_no_chr(int val)
+{
+	return val ? 'Y' : 'N';
+}
+
+#define SKIP_DPRX_CAPS_CHECK		BIT(0)
+#define ALLOW_ALLOCATED_BW_CHANGE	BIT(1)
+
+static bool tunnel_regs_are_valid(struct drm_dp_tunnel_mgr *mgr,
+				  const struct drm_dp_tunnel_regs *regs,
+				  unsigned int flags)
+{
+	int drv_group_id = tunnel_reg_drv_group_id(regs);
+	bool check_dprx = !(flags & SKIP_DPRX_CAPS_CHECK);
+	bool ret = true;
+
+	if (!tunnel_reg_bw_alloc_supported(regs)) {
+		if (tunnel_group_id(drv_group_id)) {
+			drm_dbg_kms(mgr->dev,
+				    "DPTUN: A non-zero group ID is only allowed with BWA support\n");
+			ret = false;
+		}
+
+		if (tunnel_reg(regs, DP_ALLOCATED_BW)) {
+			drm_dbg_kms(mgr->dev,
+				    "DPTUN: BW is allocated without BWA support\n");
+			ret = false;
+		}
+
+		return ret;
+	}
+
+	if (!tunnel_group_id(drv_group_id)) {
+		drm_dbg_kms(mgr->dev,
+			    "DPTUN: BWA support requires a non-zero group ID\n");
+		ret = false;
+	}
+
+	if (check_dprx && hweight8(tunnel_reg_max_dprx_lane_count(regs)) != 1) {
+		drm_dbg_kms(mgr->dev,
+			    "DPTUN: Invalid DPRX lane count: %d\n",
+			    tunnel_reg_max_dprx_lane_count(regs));
+
+		ret = false;
+	}
+
+	if (check_dprx && !tunnel_reg_max_dprx_rate(regs)) {
+		drm_dbg_kms(mgr->dev,
+			    "DPTUN: DPRX rate is 0\n");
+
+		ret = false;
+	}
+
+	if (tunnel_reg(regs, DP_ALLOCATED_BW) > tunnel_reg(regs, DP_ESTIMATED_BW)) {
+		drm_dbg_kms(mgr->dev,
+			    "DPTUN: Allocated BW %d > estimated BW %d Mb/s\n",
+			    DPTUN_BW_ARG(tunnel_reg(regs, DP_ALLOCATED_BW) *
+					 tunnel_reg_bw_granularity(regs)),
+			    DPTUN_BW_ARG(tunnel_reg(regs, DP_ESTIMATED_BW) *
+					 tunnel_reg_bw_granularity(regs)));
+
+		ret = false;
+	}
+
+	return ret;
+}
+
+static bool tunnel_info_changes_are_valid(struct drm_dp_tunnel *tunnel,
+					  const struct drm_dp_tunnel_regs *regs,
+					  unsigned int flags)
+{
+	int new_drv_group_id = tunnel_reg_drv_group_id(regs);
+	bool ret = true;
+
+	if (tunnel->bw_alloc_supported != tunnel_reg_bw_alloc_supported(regs)) {
+		tun_dbg(tunnel,
+			"BW alloc support has changed %c -> %c\n",
+			yes_no_chr(tunnel->bw_alloc_supported),
+			yes_no_chr(tunnel_reg_bw_alloc_supported(regs)));
+
+		ret = false;
+	}
+
+	if (tunnel->group->drv_group_id != new_drv_group_id) {
+		tun_dbg(tunnel,
+			"Driver/group ID has changed %d:%d:* -> %d:%d:*\n",
+			tunnel_group_drv_id(tunnel->group->drv_group_id),
+			tunnel_group_id(tunnel->group->drv_group_id),
+			tunnel_group_drv_id(new_drv_group_id),
+			tunnel_group_id(new_drv_group_id));
+
+		ret = false;
+	}
+
+	if (!tunnel->bw_alloc_supported)
+		return ret;
+
+	if (tunnel->bw_granularity != tunnel_reg_bw_granularity(regs)) {
+		tun_dbg(tunnel,
+			"BW granularity has changed: %d -> %d Mb/s\n",
+			DPTUN_BW_ARG(tunnel->bw_granularity),
+			DPTUN_BW_ARG(tunnel_reg_bw_granularity(regs)));
+
+		ret = false;
+	}
+
+	/*
+	 * On some devices at least the BW alloc mode enabled status is always
+	 * reported as 0, so skip checking that here.
+	 */
+
+	if (!(flags & ALLOW_ALLOCATED_BW_CHANGE) &&
+	    tunnel->allocated_bw !=
+	    tunnel_reg(regs, DP_ALLOCATED_BW) * tunnel->bw_granularity) {
+		tun_dbg(tunnel,
+			"Allocated BW has changed: %d -> %d Mb/s\n",
+			DPTUN_BW_ARG(tunnel->allocated_bw),
+			DPTUN_BW_ARG(tunnel_reg(regs, DP_ALLOCATED_BW) * tunnel->bw_granularity));
+
+		ret = false;
+	}
+
+	return ret;
+}
+
+static int
+read_and_verify_tunnel_regs(struct drm_dp_tunnel *tunnel,
+			    struct drm_dp_tunnel_regs *regs,
+			    unsigned int flags)
+{
+	int err;
+
+	err = read_tunnel_regs(tunnel->aux, regs);
+	if (err < 0) {
+		drm_dp_tunnel_set_io_error(tunnel);
+
+		return err;
+	}
+
+	if (!tunnel_regs_are_valid(tunnel->group->mgr, regs, flags))
+		return -EINVAL;
+
+	if (!tunnel_info_changes_are_valid(tunnel, regs, flags))
+		return -EINVAL;
+
+	return 0;
+}
+
+static bool update_dprx_caps(struct drm_dp_tunnel *tunnel, const struct drm_dp_tunnel_regs *regs)
+{
+	bool changed = false;
+
+	if (tunnel_reg_max_dprx_rate(regs) != tunnel->max_dprx_rate) {
+		tunnel->max_dprx_rate = tunnel_reg_max_dprx_rate(regs);
+		changed = true;
+	}
+
+	if (tunnel_reg_max_dprx_lane_count(regs) != tunnel->max_dprx_lane_count) {
+		tunnel->max_dprx_lane_count = tunnel_reg_max_dprx_lane_count(regs);
+		changed = true;
+	}
+
+	return changed;
+}
+
+static int dev_id_len(const u8 *dev_id, int max_len)
+{
+	while (max_len && dev_id[max_len - 1] == '\0')
+		max_len--;
+
+	return max_len;
+}
+
+static int get_max_dprx_bw(const struct drm_dp_tunnel *tunnel)
+{
+	int bw = drm_dp_max_dprx_data_rate(tunnel->max_dprx_rate,
+					   tunnel->max_dprx_lane_count);
+
+	return min(roundup(bw, tunnel->bw_granularity),
+		   MAX_DP_REQUEST_BW * tunnel->bw_granularity);
+}
+
+static int get_max_tunnel_bw(const struct drm_dp_tunnel *tunnel)
+{
+	return min(get_max_dprx_bw(tunnel), tunnel->group->available_bw);
+}
+
+/**
+ * drm_dp_tunnel_detect - Detect DP tunnel on the link
+ * @mgr: Tunnel manager
+ * @aux: DP AUX on which the tunnel will be detected
+ *
+ * Detect if there is any DP tunnel on the link and add it to the tunnel
+ * group's tunnel list.
+ *
+ * Returns a pointer to the tunnel on success, an ERR_PTR() value on failure.
+ */
+struct drm_dp_tunnel *
+drm_dp_tunnel_detect(struct drm_dp_tunnel_mgr *mgr,
+		       struct drm_dp_aux *aux)
+{
+	struct drm_dp_tunnel_regs regs;
+	struct drm_dp_tunnel *tunnel;
+	int err;
+
+	err = read_tunnel_regs(aux, &regs);
+	if (err)
+		return ERR_PTR(err);
+
+	if (!(tunnel_reg(&regs, DP_TUNNELING_CAPABILITIES) &
+	      DP_TUNNELING_SUPPORT))
+		return ERR_PTR(-ENODEV);
+
+	/* The DPRX caps are valid only after enabling BW alloc mode. */
+	if (!tunnel_regs_are_valid(mgr, &regs, SKIP_DPRX_CAPS_CHECK))
+		return ERR_PTR(-EINVAL);
+
+	tunnel = create_tunnel(mgr, aux, &regs);
+	if (!tunnel)
+		return ERR_PTR(-ENOMEM);
+
+	tun_dbg(tunnel,
+		"OUI:%*phD DevID:%*pE Rev-HW:%d.%d SW:%d.%d PR-Sup:%c BWA-Sup:%c BWA-En:%c\n",
+		DP_TUNNELING_OUI_BYTES,
+			tunnel_reg_ptr(&regs, DP_TUNNELING_OUI),
+		dev_id_len(tunnel_reg_ptr(&regs, DP_TUNNELING_DEV_ID), DP_TUNNELING_DEV_ID_BYTES),
+			tunnel_reg_ptr(&regs, DP_TUNNELING_DEV_ID),
+		(tunnel_reg(&regs, DP_TUNNELING_HW_REV) & DP_TUNNELING_HW_REV_MAJOR_MASK) >>
+			DP_TUNNELING_HW_REV_MAJOR_SHIFT,
+		(tunnel_reg(&regs, DP_TUNNELING_HW_REV) & DP_TUNNELING_HW_REV_MINOR_MASK) >>
+			DP_TUNNELING_HW_REV_MINOR_SHIFT,
+		tunnel_reg(&regs, DP_TUNNELING_SW_REV_MAJOR),
+		tunnel_reg(&regs, DP_TUNNELING_SW_REV_MINOR),
+		yes_no_chr(tunnel_reg(&regs, DP_TUNNELING_CAPABILITIES) &
+			   DP_PANEL_REPLAY_OPTIMIZATION_SUPPORT),
+		yes_no_chr(tunnel->bw_alloc_supported),
+		yes_no_chr(tunnel->bw_alloc_enabled));
+
+	return tunnel;
+}
+EXPORT_SYMBOL(drm_dp_tunnel_detect);
+
+/**
+ * drm_dp_tunnel_destroy - Destroy tunnel object
+ * @tunnel: Tunnel object
+ *
+ * Remove the tunnel from the tunnel topology and destroy it.
+ */
+int drm_dp_tunnel_destroy(struct drm_dp_tunnel *tunnel)
+{
+	if (drm_WARN_ON(tunnel->group->mgr->dev, tunnel->destroyed))
+		return -ENODEV;
+
+	tun_dbg(tunnel, "destroying\n");
+
+	tunnel->destroyed = true;
+	destroy_tunnel(tunnel);
+
+	return 0;
+}
+EXPORT_SYMBOL(drm_dp_tunnel_destroy);
+
+static int check_tunnel(const struct drm_dp_tunnel *tunnel)
+{
+	if (tunnel->destroyed)
+		return -ENODEV;
+
+	if (tunnel->has_io_error)
+		return -EIO;
+
+	return 0;
+}
+
+static int group_allocated_bw(struct drm_dp_tunnel_group *group)
+{
+	struct drm_dp_tunnel *tunnel;
+	int group_allocated_bw = 0;
+
+	for_each_tunnel_in_group(group, tunnel) {
+		if (check_tunnel(tunnel) == 0 &&
+		    tunnel->bw_alloc_enabled)
+			group_allocated_bw += tunnel->allocated_bw;
+	}
+
+	return group_allocated_bw;
+}
+
+static int calc_group_available_bw(const struct drm_dp_tunnel *tunnel)
+{
+	return group_allocated_bw(tunnel->group) -
+	       tunnel->allocated_bw +
+	       tunnel->estimated_bw;
+}
+
+static int update_group_available_bw(struct drm_dp_tunnel *tunnel,
+				     const struct drm_dp_tunnel_regs *regs)
+{
+	struct drm_dp_tunnel *tunnel_iter;
+	int group_available_bw;
+	bool changed;
+
+	tunnel->estimated_bw = tunnel_reg(regs, DP_ESTIMATED_BW) * tunnel->bw_granularity;
+
+	if (calc_group_available_bw(tunnel) == tunnel->group->available_bw)
+		return 0;
+
+	for_each_tunnel_in_group(tunnel->group, tunnel_iter) {
+		int err;
+
+		if (tunnel_iter == tunnel)
+			continue;
+
+		if (check_tunnel(tunnel_iter) != 0 ||
+		    !tunnel_iter->bw_alloc_enabled)
+			continue;
+
+		err = drm_dp_dpcd_probe(tunnel_iter->aux, DP_DPCD_REV);
+		if (err) {
+			tun_dbg(tunnel_iter,
+				"Probe failed, assume disconnected (err %pe)\n",
+				ERR_PTR(err));
+			drm_dp_tunnel_set_io_error(tunnel_iter);
+		}
+	}
+
+	group_available_bw = calc_group_available_bw(tunnel);
+
+	tun_dbg(tunnel, "Updated group available BW: %d->%d\n",
+		DPTUN_BW_ARG(tunnel->group->available_bw),
+		DPTUN_BW_ARG(group_available_bw));
+
+	changed = tunnel->group->available_bw != group_available_bw;
+
+	tunnel->group->available_bw = group_available_bw;
+
+	return changed ? 1 : 0;
+}
+
+static int set_bw_alloc_mode(struct drm_dp_tunnel *tunnel, bool enable)
+{
+	u8 mask = DP_DISPLAY_DRIVER_BW_ALLOCATION_MODE_ENABLE | DP_UNMASK_BW_ALLOCATION_IRQ;
+	u8 val;
+
+	if (drm_dp_dpcd_readb(tunnel->aux, DP_DPTX_BW_ALLOCATION_MODE_CONTROL, &val) < 0)
+		goto out_err;
+
+	if (enable)
+		val |= mask;
+	else
+		val &= ~mask;
+
+	if (drm_dp_dpcd_writeb(tunnel->aux, DP_DPTX_BW_ALLOCATION_MODE_CONTROL, val) < 0)
+		goto out_err;
+
+	tunnel->bw_alloc_enabled = enable;
+
+	return 0;
+
+out_err:
+	drm_dp_tunnel_set_io_error(tunnel);
+
+	return -EIO;
+}
+
+/**
+ * drm_dp_tunnel_enable_bw_alloc - Enable DP tunnel BW allocation mode
+ * @tunnel: Tunnel object
+ *
+ * Enable the DP tunnel BW allocation mode on @tunnel if it supports it.
+ *
+ * Returns 0 in case of success, negative error code otherwise.
+ */
+int drm_dp_tunnel_enable_bw_alloc(struct drm_dp_tunnel *tunnel)
+{
+	struct drm_dp_tunnel_regs regs;
+	int err = check_tunnel(tunnel);
+
+	if (err)
+		return err;
+
+	if (!tunnel->bw_alloc_supported)
+		return -EOPNOTSUPP;
+
+	if (!tunnel_group_id(tunnel->group->drv_group_id))
+		return -EINVAL;
+
+	err = set_bw_alloc_mode(tunnel, true);
+	if (err)
+		goto out;
+
+	err = read_and_verify_tunnel_regs(tunnel, &regs, 0);
+	if (err) {
+		set_bw_alloc_mode(tunnel, false);
+
+		goto out;
+	}
+
+	if (!tunnel->max_dprx_rate)
+		update_dprx_caps(tunnel, &regs);
+
+	if (tunnel->group->available_bw == -1) {
+		err = update_group_available_bw(tunnel, &regs);
+		if (err > 0)
+			err = 0;
+	}
+out:
+	tun_dbg_stat(tunnel, err,
+		     "Enabling BW alloc mode: DPRX:%dx%d Group alloc:%d/%d Mb/s",
+		     tunnel->max_dprx_rate / 100, tunnel->max_dprx_lane_count,
+		     DPTUN_BW_ARG(group_allocated_bw(tunnel->group)),
+		     DPTUN_BW_ARG(tunnel->group->available_bw));
+
+	return err;
+}
+EXPORT_SYMBOL(drm_dp_tunnel_enable_bw_alloc);
+
+/**
+ * drm_dp_tunnel_disable_bw_alloc - Disable DP tunnel BW allocation mode
+ * @tunnel: Tunnel object
+ *
+ * Disable the DP tunnel BW allocation mode on @tunnel.
+ *
+ * Returns 0 in case of success, negative error code otherwise.
+ */
+int drm_dp_tunnel_disable_bw_alloc(struct drm_dp_tunnel *tunnel)
+{
+	int err = check_tunnel(tunnel);
+
+	if (err)
+		return err;
+
+	err = set_bw_alloc_mode(tunnel, false);
+
+	tun_dbg_stat(tunnel, err, "Disabling BW alloc mode");
+
+	return err;
+}
+EXPORT_SYMBOL(drm_dp_tunnel_disable_bw_alloc);
+
+bool drm_dp_tunnel_bw_alloc_is_enabled(const struct drm_dp_tunnel *tunnel)
+{
+	return tunnel->bw_alloc_enabled;
+}
+EXPORT_SYMBOL(drm_dp_tunnel_bw_alloc_is_enabled);
+
+static int bw_req_complete(struct drm_dp_aux *aux, bool *status_changed)
+{
+	u8 bw_req_mask = DP_BW_REQUEST_SUCCEEDED | DP_BW_REQUEST_FAILED;
+	u8 status_change_mask = DP_BW_ALLOCATION_CAPABILITY_CHANGED | DP_ESTIMATED_BW_CHANGED;
+	u8 val;
+
+	if (drm_dp_dpcd_readb(aux, DP_TUNNELING_STATUS, &val) < 0)
+		return -EIO;
+
+	*status_changed = val & status_change_mask;
+
+	val &= bw_req_mask;
+
+	if (!val)
+		return -EAGAIN;
+
+	if (drm_dp_dpcd_writeb(aux, DP_TUNNELING_STATUS, val) < 0)
+		return -EIO;
+
+	return val == DP_BW_REQUEST_SUCCEEDED ? 0 : -ENOSPC;
+}
+
+static int allocate_tunnel_bw(struct drm_dp_tunnel *tunnel, int bw)
+{
+	struct drm_dp_tunnel_mgr *mgr = tunnel->group->mgr;
+	int request_bw = DIV_ROUND_UP(bw, tunnel->bw_granularity);
+	unsigned long wait_expires;
+	DEFINE_WAIT(wait);
+	int err;
+
+	/* Atomic check should prevent the following. */
+	if (drm_WARN_ON(mgr->dev, request_bw > MAX_DP_REQUEST_BW)) {
+		err = -EINVAL;
+		goto out;
+	}
+
+	if (drm_dp_dpcd_writeb(tunnel->aux, DP_REQUEST_BW, request_bw) < 0) {
+		err = -EIO;
+		goto out;
+	}
+
+	wait_expires = jiffies + msecs_to_jiffies(3000);
+
+	for (;;) {
+		bool status_changed;
+
+		err = bw_req_complete(tunnel->aux, &status_changed);
+		if (err != -EAGAIN)
+			break;
+
+		if (status_changed) {
+			struct drm_dp_tunnel_regs regs;
+
+			err = read_and_verify_tunnel_regs(tunnel, &regs,
+							  ALLOW_ALLOCATED_BW_CHANGE);
+			if (err)
+				break;
+		}
+
+		if (time_after(jiffies, wait_expires)) {
+			err = -ETIMEDOUT;
+			break;
+		}
+
+		prepare_to_wait(&mgr->bw_req_queue, &wait, TASK_UNINTERRUPTIBLE);
+		schedule_timeout(msecs_to_jiffies(200));
+	}
+
+	finish_wait(&mgr->bw_req_queue, &wait);
+
+	if (err)
+		goto out;
+
+	tunnel->allocated_bw = request_bw * tunnel->bw_granularity;
+
+out:
+	tun_dbg_stat(tunnel, err, "Allocating %d/%d Mb/s for tunnel: Group alloc:%d/%d Mb/s",
+		     DPTUN_BW_ARG(request_bw * tunnel->bw_granularity),
+		     DPTUN_BW_ARG(get_max_tunnel_bw(tunnel)),
+		     DPTUN_BW_ARG(group_allocated_bw(tunnel->group)),
+		     DPTUN_BW_ARG(tunnel->group->available_bw));
+
+	if (err == -EIO)
+		drm_dp_tunnel_set_io_error(tunnel);
+
+	return err;
+}
+
+int drm_dp_tunnel_alloc_bw(struct drm_dp_tunnel *tunnel, int bw)
+{
+	int err = check_tunnel(tunnel);
+
+	if (err)
+		return err;
+
+	return allocate_tunnel_bw(tunnel, bw);
+}
+EXPORT_SYMBOL(drm_dp_tunnel_alloc_bw);
+
+static int check_and_clear_status_change(struct drm_dp_tunnel *tunnel)
+{
+	u8 mask = DP_BW_ALLOCATION_CAPABILITY_CHANGED | DP_ESTIMATED_BW_CHANGED;
+	u8 val;
+
+	if (drm_dp_dpcd_readb(tunnel->aux, DP_TUNNELING_STATUS, &val) < 0)
+		goto out_err;
+
+	val &= mask;
+
+	if (val) {
+		if (drm_dp_dpcd_writeb(tunnel->aux, DP_TUNNELING_STATUS, val) < 0)
+			goto out_err;
+
+		return 1;
+	}
+
+	if (!drm_dp_tunnel_bw_alloc_is_enabled(tunnel))
+		return 0;
+
+	/*
+	 * Check for estimated BW changes explicitly to account for lost
+	 * BW change notifications.
+	 */
+	if (drm_dp_dpcd_readb(tunnel->aux, DP_ESTIMATED_BW, &val) < 0)
+		goto out_err;
+
+	if (val * tunnel->bw_granularity != tunnel->estimated_bw)
+		return 1;
+
+	return 0;
+
+out_err:
+	drm_dp_tunnel_set_io_error(tunnel);
+
+	return -EIO;
+}
+
+/**
+ * drm_dp_tunnel_update_state - Update DP tunnel SW state with the HW state
+ * @tunnel: Tunnel object
+ *
+ * Update the SW state of @tunnel with the HW state.
+ *
+ * Returns 0 if the state has not changed, 1 if it has changed and got updated
+ * successfully and a negative error code otherwise.
+ */
+int drm_dp_tunnel_update_state(struct drm_dp_tunnel *tunnel)
+{
+	struct drm_dp_tunnel_regs regs;
+	bool changed = false;
+	int ret = check_tunnel(tunnel);
+
+	if (ret < 0)
+		return ret;
+
+	ret = check_and_clear_status_change(tunnel);
+	if (ret < 0)
+		goto out;
+
+	if (!ret)
+		return 0;
+
+	ret = read_and_verify_tunnel_regs(tunnel, &regs, 0);
+	if (ret)
+		goto out;
+
+	if (update_dprx_caps(tunnel, &regs))
+		changed = true;
+
+	ret = update_group_available_bw(tunnel, &regs);
+	if (ret == 1)
+		changed = true;
+
+out:
+	tun_dbg_stat(tunnel, ret < 0 ? ret : 0,
+		     "State update: Changed:%c DPRX:%dx%d Tunnel alloc:%d/%d Group alloc:%d/%d Mb/s",
+		     yes_no_chr(changed),
+		     tunnel->max_dprx_rate / 100, tunnel->max_dprx_lane_count,
+		     DPTUN_BW_ARG(tunnel->allocated_bw),
+		     DPTUN_BW_ARG(get_max_tunnel_bw(tunnel)),
+		     DPTUN_BW_ARG(group_allocated_bw(tunnel->group)),
+		     DPTUN_BW_ARG(tunnel->group->available_bw));
+
+	if (ret < 0)
+		return ret;
+
+	if (changed)
+		return 1;
+
+	return 0;
+}
+EXPORT_SYMBOL(drm_dp_tunnel_update_state);
+
+/*
+ * Returns 0 if no re-probe is needed, 1 if a re-probe is needed,
+ * a negative error code otherwise.
+ */
+int drm_dp_tunnel_handle_irq(struct drm_dp_tunnel_mgr *mgr, struct drm_dp_aux *aux)
+{
+	u8 val;
+
+	if (drm_dp_dpcd_readb(aux, DP_TUNNELING_STATUS, &val) < 0)
+		return -EIO;
+
+	if (val & (DP_BW_REQUEST_SUCCEEDED | DP_BW_REQUEST_FAILED))
+		wake_up_all(&mgr->bw_req_queue);
+
+	if (val & (DP_BW_ALLOCATION_CAPABILITY_CHANGED | DP_ESTIMATED_BW_CHANGED))
+		return 1;
+
+	return 0;
+}
+EXPORT_SYMBOL(drm_dp_tunnel_handle_irq);
+
+/**
+ * drm_dp_tunnel_max_dprx_rate - Query the maximum rate of the tunnel's DPRX
+ * @tunnel: Tunnel object
+ *
+ * The function is used to query the maximum link rate of the DPRX connected
+ * to @tunnel. Note that this rate will not be limited by the BW limit of the
+ * tunnel, as opposed to the standard and extended DP_MAX_LINK_RATE DPCD
+ * registers.
+ *
+ * Returns the maximum link rate in 10 kbit/s units.
+ */
+int drm_dp_tunnel_max_dprx_rate(const struct drm_dp_tunnel *tunnel)
+{
+	return tunnel->max_dprx_rate;
+}
+EXPORT_SYMBOL(drm_dp_tunnel_max_dprx_rate);
+
+/**
+ * drm_dp_tunnel_max_dprx_lane_count - Query the maximum lane count of the tunnel's DPRX
+ * @tunnel: Tunnel object
+ *
+ * The function is used to query the maximum lane count of the DPRX connected
+ * to @tunnel. Note that this lane count will not be limited by the BW limit of
+ * the tunnel, as opposed to the standard and extended DP_MAX_LANE_COUNT DPCD
+ * registers.
+ *
+ * Returns the maximum lane count.
+ */
+int drm_dp_tunnel_max_dprx_lane_count(const struct drm_dp_tunnel *tunnel)
+{
+	return tunnel->max_dprx_lane_count;
+}
+EXPORT_SYMBOL(drm_dp_tunnel_max_dprx_lane_count);
+
+/**
+ * drm_dp_tunnel_available_bw - Query the estimated total available BW of the tunnel
+ * @tunnel: Tunnel object
+ *
+ * This function is used to query the estimated total available BW of the
+ * tunnel. This includes the currently allocated and free BW for all the
+ * tunnels in @tunnel's group. The available BW is valid only after the BW
+ * allocation mode has been enabled for the tunnel and its state has been
+ * updated by calling drm_dp_tunnel_update_state().
+ *
+ * Returns the @tunnel group's estimated total available bandwidth in kB/s
+ * units, or -1 if the available BW isn't valid (the BW allocation mode is
+ * not enabled or the tunnel's state hasn't been updated).
+ */
+int drm_dp_tunnel_available_bw(const struct drm_dp_tunnel *tunnel)
+{
+	return tunnel->group->available_bw;
+}
+EXPORT_SYMBOL(drm_dp_tunnel_available_bw);
+
+static struct drm_dp_tunnel_group_state *
+drm_dp_tunnel_atomic_get_group_state(struct drm_atomic_state *state,
+				     const struct drm_dp_tunnel *tunnel)
+{
+	return (struct drm_dp_tunnel_group_state *)
+		drm_atomic_get_private_obj_state(state,
+						 &tunnel->group->base);
+}
+
+static struct drm_dp_tunnel_state *
+add_tunnel_state(struct drm_dp_tunnel_group_state *group_state,
+		 struct drm_dp_tunnel *tunnel)
+{
+	struct drm_dp_tunnel_state *tunnel_state;
+
+	tun_dbg_atomic(tunnel,
+		       "Adding state for tunnel %p to group state %p\n",
+		       tunnel, group_state);
+
+	tunnel_state = kzalloc(sizeof(*tunnel_state), GFP_KERNEL);
+	if (!tunnel_state)
+		return NULL;
+
+	tunnel_state->group_state = group_state;
+
+	drm_dp_tunnel_ref_get(tunnel, &tunnel_state->tunnel_ref);
+
+	INIT_LIST_HEAD(&tunnel_state->node);
+	list_add(&tunnel_state->node, &group_state->tunnel_states);
+
+	return tunnel_state;
+}
+
+void drm_dp_tunnel_atomic_clear_state(struct drm_dp_tunnel_state *tunnel_state)
+{
+	tun_dbg_atomic(tunnel_state->tunnel_ref.tunnel,
+		       "Clearing state for tunnel %p\n",
+		       tunnel_state->tunnel_ref.tunnel);
+
+	list_del(&tunnel_state->node);
+
+	kfree(tunnel_state->stream_bw);
+	drm_dp_tunnel_ref_put(&tunnel_state->tunnel_ref);
+
+	kfree(tunnel_state);
+}
+EXPORT_SYMBOL(drm_dp_tunnel_atomic_clear_state);
+
+static void clear_tunnel_group_state(struct drm_dp_tunnel_group_state *group_state)
+{
+	struct drm_dp_tunnel_state *tunnel_state;
+	struct drm_dp_tunnel_state *tunnel_state_tmp;
+
+	for_each_tunnel_state_safe(group_state, tunnel_state, tunnel_state_tmp)
+		drm_dp_tunnel_atomic_clear_state(tunnel_state);
+}
+
+static struct drm_dp_tunnel_state *
+get_tunnel_state(struct drm_dp_tunnel_group_state *group_state,
+		 const struct drm_dp_tunnel *tunnel)
+{
+	struct drm_dp_tunnel_state *tunnel_state;
+
+	for_each_tunnel_state(group_state, tunnel_state)
+		if (tunnel_state->tunnel_ref.tunnel == tunnel)
+			return tunnel_state;
+
+	return NULL;
+}
+
+static struct drm_dp_tunnel_state *
+get_or_add_tunnel_state(struct drm_dp_tunnel_group_state *group_state,
+			struct drm_dp_tunnel *tunnel)
+{
+	struct drm_dp_tunnel_state *tunnel_state;
+
+	tunnel_state = get_tunnel_state(group_state, tunnel);
+	if (tunnel_state)
+		return tunnel_state;
+
+	return add_tunnel_state(group_state, tunnel);
+}
+
+static struct drm_private_state *
+tunnel_group_duplicate_state(struct drm_private_obj *obj)
+{
+	struct drm_dp_tunnel_group_state *group_state = to_group_state(obj->state);
+	struct drm_dp_tunnel_state *tunnel_state;
+
+	group_state = kzalloc(sizeof(*group_state), GFP_KERNEL);
+	if (!group_state)
+		return NULL;
+
+	INIT_LIST_HEAD(&group_state->tunnel_states);
+
+	__drm_atomic_helper_private_obj_duplicate_state(obj, &group_state->base);
+
+	for_each_tunnel_state(to_group_state(obj->state), tunnel_state) {
+		struct drm_dp_tunnel_state *new_tunnel_state;
+
+		new_tunnel_state = get_or_add_tunnel_state(group_state,
+							   tunnel_state->tunnel_ref.tunnel);
+		if (!new_tunnel_state)
+			goto out_free_state;
+
+		new_tunnel_state->stream_mask = tunnel_state->stream_mask;
+		new_tunnel_state->stream_bw = kmemdup(tunnel_state->stream_bw,
+						      sizeof(*tunnel_state->stream_bw) *
+							hweight32(tunnel_state->stream_mask),
+						      GFP_KERNEL);
+
+		if (!new_tunnel_state->stream_bw)
+			goto out_free_state;
+	}
+
+	return &group_state->base;
+
+out_free_state:
+	clear_tunnel_group_state(group_state);
+	kfree(group_state);
+
+	return NULL;
+}
+
+static void tunnel_group_destroy_state(struct drm_private_obj *obj, struct drm_private_state *state)
+{
+	struct drm_dp_tunnel_group_state *group_state = to_group_state(state);
+
+	clear_tunnel_group_state(group_state);
+	kfree(group_state);
+}
+
+static const struct drm_private_state_funcs tunnel_group_funcs = {
+	.atomic_duplicate_state = tunnel_group_duplicate_state,
+	.atomic_destroy_state = tunnel_group_destroy_state,
+};
+
+struct drm_dp_tunnel_state *
+drm_dp_tunnel_atomic_get_state(struct drm_atomic_state *state,
+			       struct drm_dp_tunnel *tunnel)
+{
+	struct drm_dp_tunnel_group_state *group_state =
+		drm_dp_tunnel_atomic_get_group_state(state, tunnel);
+	struct drm_dp_tunnel_state *tunnel_state;
+
+	if (IS_ERR(group_state))
+		return ERR_CAST(group_state);
+
+	tunnel_state = get_or_add_tunnel_state(group_state, tunnel);
+	if (!tunnel_state)
+		return ERR_PTR(-ENOMEM);
+
+	return tunnel_state;
+}
+EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_state);
+
+struct drm_dp_tunnel_state *
+drm_dp_tunnel_atomic_get_new_state(struct drm_atomic_state *state,
+				   const struct drm_dp_tunnel *tunnel)
+{
+	struct drm_dp_tunnel_group_state *new_group_state;
+	int i;
+
+	for_each_new_group_in_state(state, new_group_state, i)
+		if (to_group(new_group_state->base.obj) == tunnel->group)
+			return get_tunnel_state(new_group_state, tunnel);
+
+	return NULL;
+}
+EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_new_state);
+
+static bool init_group(struct drm_dp_tunnel_mgr *mgr, struct drm_dp_tunnel_group *group)
+{
+	struct drm_dp_tunnel_group_state *group_state = kzalloc(sizeof(*group_state), GFP_KERNEL);
+
+	if (!group_state)
+		return false;
+
+	INIT_LIST_HEAD(&group_state->tunnel_states);
+
+	group->mgr = mgr;
+	group->available_bw = -1;
+	INIT_LIST_HEAD(&group->tunnels);
+
+	drm_atomic_private_obj_init(mgr->dev, &group->base, &group_state->base,
+				    &tunnel_group_funcs);
+
+	return true;
+}
+
+static void cleanup_group(struct drm_dp_tunnel_group *group)
+{
+	drm_atomic_private_obj_fini(&group->base);
+}
+
+#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
+static void check_unique_stream_ids(const struct drm_dp_tunnel_group_state *group_state)
+{
+	const struct drm_dp_tunnel_state *tunnel_state;
+	u32 stream_mask = 0;
+
+	for_each_tunnel_state(group_state, tunnel_state) {
+		drm_WARN(to_group(group_state->base.obj)->mgr->dev,
+			 tunnel_state->stream_mask & stream_mask,
+			 "[DPTUN %s]: conflicting stream IDs %x (IDs in other tunnels %x)\n",
+			 tunnel_state->tunnel_ref.tunnel->name,
+			 tunnel_state->stream_mask,
+			 stream_mask);
+
+		stream_mask |= tunnel_state->stream_mask;
+	}
+}
+#else
+static void check_unique_stream_ids(const struct drm_dp_tunnel_group_state *group_state)
+{
+}
+#endif
+
+static int stream_id_to_idx(u32 stream_mask, u8 stream_id)
+{
+	return hweight32(stream_mask & (BIT(stream_id) - 1));
+}
+
+static int resize_bw_array(struct drm_dp_tunnel_state *tunnel_state,
+			   unsigned long old_mask, unsigned long new_mask)
+{
+	unsigned long move_mask = old_mask & new_mask;
+	int *new_bws = NULL;
+	int id;
+
+	WARN_ON(!new_mask);
+
+	if (old_mask == new_mask)
+		return 0;
+
+	new_bws = kcalloc(hweight32(new_mask), sizeof(*new_bws), GFP_KERNEL);
+	if (!new_bws)
+		return -ENOMEM;
+
+	for_each_set_bit(id, &move_mask, BITS_PER_TYPE(move_mask))
+		new_bws[stream_id_to_idx(new_mask, id)] =
+			tunnel_state->stream_bw[stream_id_to_idx(old_mask, id)];
+
+	kfree(tunnel_state->stream_bw);
+	tunnel_state->stream_bw = new_bws;
+	tunnel_state->stream_mask = new_mask;
+
+	return 0;
+}
+
+static int set_stream_bw(struct drm_dp_tunnel_state *tunnel_state,
+			 u8 stream_id, int bw)
+{
+	int err;
+
+	err = resize_bw_array(tunnel_state,
+			      tunnel_state->stream_mask,
+			      tunnel_state->stream_mask | BIT(stream_id));
+	if (err)
+		return err;
+
+	tunnel_state->stream_bw[stream_id_to_idx(tunnel_state->stream_mask, stream_id)] = bw;
+
+	return 0;
+}
+
+static int clear_stream_bw(struct drm_dp_tunnel_state *tunnel_state,
+			   u8 stream_id)
+{
+	if (!(tunnel_state->stream_mask & ~BIT(stream_id))) {
+		drm_dp_tunnel_atomic_clear_state(tunnel_state);
+		return 0;
+	}
+
+	return resize_bw_array(tunnel_state,
+			       tunnel_state->stream_mask,
+			       tunnel_state->stream_mask & ~BIT(stream_id));
+}
+
+int drm_dp_tunnel_atomic_set_stream_bw(struct drm_atomic_state *state,
+					 struct drm_dp_tunnel *tunnel,
+					 u8 stream_id, int bw)
+{
+	struct drm_dp_tunnel_group_state *new_group_state =
+		drm_dp_tunnel_atomic_get_group_state(state, tunnel);
+	struct drm_dp_tunnel_state *tunnel_state;
+	int err;
+
+	if (drm_WARN_ON(tunnel->group->mgr->dev,
+			stream_id > BITS_PER_TYPE(tunnel_state->stream_mask)))
+		return -EINVAL;
+
+	tun_dbg(tunnel,
+		"Setting %d Mb/s for stream %d\n",
+		DPTUN_BW_ARG(bw), stream_id);
+
+	if (bw == 0) {
+		tunnel_state = get_tunnel_state(new_group_state, tunnel);
+		if (!tunnel_state)
+			return 0;
+
+		return clear_stream_bw(tunnel_state, stream_id);
+	}
+
+	tunnel_state = get_or_add_tunnel_state(new_group_state, tunnel);
+	if (drm_WARN_ON(state->dev, !tunnel_state))
+		return -EINVAL;
+
+	err = set_stream_bw(tunnel_state, stream_id, bw);
+	if (err)
+		return err;
+
+	check_unique_stream_ids(new_group_state);
+
+	return 0;
+}
+EXPORT_SYMBOL(drm_dp_tunnel_atomic_set_stream_bw);
+
+int drm_dp_tunnel_atomic_get_tunnel_bw(const struct drm_dp_tunnel_state *tunnel_state)
+{
+	int tunnel_bw = 0;
+	int i;
+
+	for (i = 0; i < hweight32(tunnel_state->stream_mask); i++)
+		tunnel_bw += tunnel_state->stream_bw[i];
+
+	return tunnel_bw;
+}
+EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_tunnel_bw);
+
+int drm_dp_tunnel_atomic_get_group_streams_in_state(struct drm_atomic_state *state,
+						    const struct drm_dp_tunnel *tunnel,
+						    u32 *stream_mask)
+{
+	struct drm_dp_tunnel_group_state *group_state =
+		drm_dp_tunnel_atomic_get_group_state(state, tunnel);
+	struct drm_dp_tunnel_state *tunnel_state;
+
+	if (IS_ERR(group_state))
+		return PTR_ERR(group_state);
+
+	*stream_mask = 0;
+	for_each_tunnel_state(group_state, tunnel_state)
+		*stream_mask |= tunnel_state->stream_mask;
+
+	return 0;
+}
+EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_group_streams_in_state);
+
+static int
+drm_dp_tunnel_atomic_check_group_bw(struct drm_dp_tunnel_group_state *new_group_state,
+				    u32 *failed_stream_mask)
+{
+	struct drm_dp_tunnel_group *group = to_group(new_group_state->base.obj);
+	struct drm_dp_tunnel_state *new_tunnel_state;
+	u32 group_stream_mask = 0;
+	int group_bw = 0;
+
+	for_each_tunnel_state(new_group_state, new_tunnel_state) {
+		struct drm_dp_tunnel *tunnel = new_tunnel_state->tunnel_ref.tunnel;
+		int max_dprx_bw = get_max_dprx_bw(tunnel);
+		int tunnel_bw = drm_dp_tunnel_atomic_get_tunnel_bw(new_tunnel_state);
+
+		tun_dbg(tunnel,
+			"%sRequired %d/%d Mb/s total for tunnel.\n",
+			tunnel_bw > max_dprx_bw ? "Not enough BW: " : "",
+			DPTUN_BW_ARG(tunnel_bw),
+			DPTUN_BW_ARG(max_dprx_bw));
+
+		if (tunnel_bw > max_dprx_bw) {
+			*failed_stream_mask = new_tunnel_state->stream_mask;
+			return -ENOSPC;
+		}
+
+		group_bw += min(roundup(tunnel_bw, tunnel->bw_granularity),
+				max_dprx_bw);
+		group_stream_mask |= new_tunnel_state->stream_mask;
+	}
+
+	tun_grp_dbg(group,
+		    "%sRequired %d/%d Mb/s total for tunnel group.\n",
+		    group_bw > group->available_bw ? "Not enough BW: " : "",
+		    DPTUN_BW_ARG(group_bw),
+		    DPTUN_BW_ARG(group->available_bw));
+
+	if (group_bw > group->available_bw) {
+		*failed_stream_mask = group_stream_mask;
+		return -ENOSPC;
+	}
+
+	return 0;
+}
+
+int drm_dp_tunnel_atomic_check_stream_bws(struct drm_atomic_state *state,
+					  u32 *failed_stream_mask)
+{
+	struct drm_dp_tunnel_group_state *new_group_state;
+	int i;
+
+	for_each_new_group_in_state(state, new_group_state, i) {
+		int ret;
+
+		ret = drm_dp_tunnel_atomic_check_group_bw(new_group_state,
+							  failed_stream_mask);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(drm_dp_tunnel_atomic_check_stream_bws);
+
+static void destroy_mgr(struct drm_dp_tunnel_mgr *mgr)
+{
+	int i;
+
+	for (i = 0; i < mgr->group_count; i++) {
+		cleanup_group(&mgr->groups[i]);
+		drm_WARN_ON(mgr->dev, !list_empty(&mgr->groups[i].tunnels));
+	}
+
+#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
+	ref_tracker_dir_exit(&mgr->ref_tracker);
+#endif
+
+	kfree(mgr->groups);
+	kfree(mgr);
+}
+
+/**
+ * drm_dp_tunnel_mgr_create - Create a DP tunnel manager
+ * @dev: DRM device object
+ * @max_group_count: Maximum number of tunnel groups
+ *
+ * Creates a DP tunnel manager for @dev.
+ *
+ * Returns a pointer to the tunnel manager if created successfully or NULL in
+ * case of an error.
+ */
+struct drm_dp_tunnel_mgr *
+drm_dp_tunnel_mgr_create(struct drm_device *dev, int max_group_count)
+{
+	struct drm_dp_tunnel_mgr *mgr = kzalloc(sizeof(*mgr), GFP_KERNEL);
+	int i;
+
+	if (!mgr)
+		return NULL;
+
+	mgr->dev = dev;
+	init_waitqueue_head(&mgr->bw_req_queue);
+
+	mgr->groups = kcalloc(max_group_count, sizeof(*mgr->groups), GFP_KERNEL);
+	if (!mgr->groups) {
+		kfree(mgr);
+
+		return NULL;
+	}
+
+#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
+	ref_tracker_dir_init(&mgr->ref_tracker, 16, "dptun");
+#endif
+
+	for (i = 0; i < max_group_count; i++) {
+		if (!init_group(mgr, &mgr->groups[i])) {
+			destroy_mgr(mgr);
+
+			return NULL;
+		}
+
+		mgr->group_count++;
+	}
+
+	return mgr;
+}
+EXPORT_SYMBOL(drm_dp_tunnel_mgr_create);
+
+/**
+ * drm_dp_tunnel_mgr_destroy - Destroy DP tunnel manager
+ * @mgr: Tunnel manager object
+ *
+ * Destroy the tunnel manager.
+ */
+void drm_dp_tunnel_mgr_destroy(struct drm_dp_tunnel_mgr *mgr)
+{
+	destroy_mgr(mgr);
+}
+EXPORT_SYMBOL(drm_dp_tunnel_mgr_destroy);
diff --git a/include/drm/display/drm_dp.h b/include/drm/display/drm_dp.h
index 281afff6ee4e5..8bfd5d007be8d 100644
--- a/include/drm/display/drm_dp.h
+++ b/include/drm/display/drm_dp.h
@@ -1382,6 +1382,66 @@
 #define DP_HDCP_2_2_REG_STREAM_TYPE_OFFSET	0x69494
 #define DP_HDCP_2_2_REG_DBG_OFFSET		0x69518
 
+/* DP-tunneling */
+#define DP_TUNNELING_OUI				0xe0000
+#define  DP_TUNNELING_OUI_BYTES				3
+
+#define DP_TUNNELING_DEV_ID				0xe0003
+#define  DP_TUNNELING_DEV_ID_BYTES			6
+
+#define DP_TUNNELING_HW_REV				0xe0009
+#define  DP_TUNNELING_HW_REV_MAJOR_SHIFT		4
+#define  DP_TUNNELING_HW_REV_MAJOR_MASK			(0xf << DP_TUNNELING_HW_REV_MAJOR_SHIFT)
+#define  DP_TUNNELING_HW_REV_MINOR_SHIFT		0
+#define  DP_TUNNELING_HW_REV_MINOR_MASK			(0xf << DP_TUNNELING_HW_REV_MINOR_SHIFT)
+
+#define DP_TUNNELING_SW_REV_MAJOR			0xe000a
+#define DP_TUNNELING_SW_REV_MINOR			0xe000b
+
+#define DP_TUNNELING_CAPABILITIES			0xe000d
+#define  DP_IN_BW_ALLOCATION_MODE_SUPPORT		(1 << 7)
+#define  DP_PANEL_REPLAY_OPTIMIZATION_SUPPORT		(1 << 6)
+#define  DP_TUNNELING_SUPPORT				(1 << 0)
+
+#define DP_IN_ADAPTER_INFO				0xe000e
+#define  DP_IN_ADAPTER_NUMBER_BITS			7
+#define  DP_IN_ADAPTER_NUMBER_MASK			((1 << DP_IN_ADAPTER_NUMBER_BITS) - 1)
+
+#define DP_USB4_DRIVER_ID				0xe000f
+#define  DP_USB4_DRIVER_ID_BITS				4
+#define  DP_USB4_DRIVER_ID_MASK				((1 << DP_USB4_DRIVER_ID_BITS) - 1)
+
+#define DP_USB4_DRIVER_BW_CAPABILITY			0xe0020
+#define  DP_USB4_DRIVER_BW_ALLOCATION_MODE_SUPPORT	(1 << 7)
+
+#define DP_IN_ADAPTER_TUNNEL_INFORMATION		0xe0021
+#define  DP_GROUP_ID_BITS				3
+#define  DP_GROUP_ID_MASK				((1 << DP_GROUP_ID_BITS) - 1)
+
+#define DP_BW_GRANULARITY				0xe0022
+#define  DP_BW_GRANULARITY_MASK				0x3
+
+#define DP_ESTIMATED_BW					0xe0023
+#define DP_ALLOCATED_BW					0xe0024
+
+#define DP_TUNNELING_STATUS				0xe0025
+#define  DP_BW_ALLOCATION_CAPABILITY_CHANGED		(1 << 3)
+#define  DP_ESTIMATED_BW_CHANGED			(1 << 2)
+#define  DP_BW_REQUEST_SUCCEEDED			(1 << 1)
+#define  DP_BW_REQUEST_FAILED				(1 << 0)
+
+#define DP_TUNNELING_MAX_LINK_RATE			0xe0028
+
+#define DP_TUNNELING_MAX_LANE_COUNT			0xe0029
+#define  DP_TUNNELING_MAX_LANE_COUNT_MASK		0x1f
+
+#define DP_DPTX_BW_ALLOCATION_MODE_CONTROL		0xe0030
+#define  DP_DISPLAY_DRIVER_BW_ALLOCATION_MODE_ENABLE	(1 << 7)
+#define  DP_UNMASK_BW_ALLOCATION_IRQ			(1 << 6)
+
+#define DP_REQUEST_BW					0xe0031
+#define  MAX_DP_REQUEST_BW				255
+
 /* LTTPR: Link Training (LT)-tunable PHY Repeaters */
 #define DP_LT_TUNABLE_PHY_REPEATER_FIELD_DATA_STRUCTURE_REV 0xf0000 /* 1.3 */
 #define DP_MAX_LINK_RATE_PHY_REPEATER			    0xf0001 /* 1.4a */
diff --git a/include/drm/display/drm_dp_tunnel.h b/include/drm/display/drm_dp_tunnel.h
new file mode 100644
index 0000000000000..f6449b1b4e6e9
--- /dev/null
+++ b/include/drm/display/drm_dp_tunnel.h
@@ -0,0 +1,270 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2023 Intel Corporation
+ */
+
+#ifndef __DRM_DP_TUNNEL_H__
+#define __DRM_DP_TUNNEL_H__
+
+#include <linux/err.h>
+#include <linux/errno.h>
+#include <linux/types.h>
+
+struct drm_dp_aux;
+
+struct drm_device;
+
+struct drm_atomic_state;
+struct drm_dp_tunnel_mgr;
+struct drm_dp_tunnel_state;
+
+struct ref_tracker;
+
+struct drm_dp_tunnel_ref {
+	struct drm_dp_tunnel *tunnel;
+#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
+	struct ref_tracker *tracker;
+#endif
+};
+
+#ifdef CONFIG_DRM_DISPLAY_DP_TUNNEL
+
+struct drm_dp_tunnel *
+drm_dp_tunnel_get_untracked(struct drm_dp_tunnel *tunnel);
+void drm_dp_tunnel_put_untracked(struct drm_dp_tunnel *tunnel);
+
+#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
+struct drm_dp_tunnel *
+drm_dp_tunnel_get(struct drm_dp_tunnel *tunnel, struct ref_tracker **tracker);
+
+void
+drm_dp_tunnel_put(struct drm_dp_tunnel *tunnel, struct ref_tracker **tracker);
+#else
+#define drm_dp_tunnel_get(tunnel, tracker) \
+	drm_dp_tunnel_get_untracked(tunnel)
+
+#define drm_dp_tunnel_put(tunnel, tracker) \
+	drm_dp_tunnel_put_untracked(tunnel)
+
+#endif
+
+static inline void drm_dp_tunnel_ref_get(struct drm_dp_tunnel *tunnel,
+					   struct drm_dp_tunnel_ref *tunnel_ref)
+{
+	tunnel_ref->tunnel = drm_dp_tunnel_get(tunnel, &tunnel_ref->tracker);
+}
+
+static inline void drm_dp_tunnel_ref_put(struct drm_dp_tunnel_ref *tunnel_ref)
+{
+	drm_dp_tunnel_put(tunnel_ref->tunnel, &tunnel_ref->tracker);
+}
+
+struct drm_dp_tunnel *
+drm_dp_tunnel_detect(struct drm_dp_tunnel_mgr *mgr,
+		     struct drm_dp_aux *aux);
+int drm_dp_tunnel_destroy(struct drm_dp_tunnel *tunnel);
+
+int drm_dp_tunnel_enable_bw_alloc(struct drm_dp_tunnel *tunnel);
+int drm_dp_tunnel_disable_bw_alloc(struct drm_dp_tunnel *tunnel);
+bool drm_dp_tunnel_bw_alloc_is_enabled(const struct drm_dp_tunnel *tunnel);
+int drm_dp_tunnel_alloc_bw(struct drm_dp_tunnel *tunnel, int bw);
+int drm_dp_tunnel_check_state(struct drm_dp_tunnel *tunnel);
+int drm_dp_tunnel_update_state(struct drm_dp_tunnel *tunnel);
+
+void drm_dp_tunnel_set_io_error(struct drm_dp_tunnel *tunnel);
+
+int drm_dp_tunnel_handle_irq(struct drm_dp_tunnel_mgr *mgr,
+			     struct drm_dp_aux *aux);
+
+int drm_dp_tunnel_max_dprx_rate(const struct drm_dp_tunnel *tunnel);
+int drm_dp_tunnel_max_dprx_lane_count(const struct drm_dp_tunnel *tunnel);
+int drm_dp_tunnel_available_bw(const struct drm_dp_tunnel *tunnel);
+
+const char *drm_dp_tunnel_name(const struct drm_dp_tunnel *tunnel);
+
+struct drm_dp_tunnel_state *
+drm_dp_tunnel_atomic_get_state(struct drm_atomic_state *state,
+			       struct drm_dp_tunnel *tunnel);
+struct drm_dp_tunnel_state *
+drm_dp_tunnel_atomic_get_new_state(struct drm_atomic_state *state,
+				   const struct drm_dp_tunnel *tunnel);
+
+void drm_dp_tunnel_atomic_clear_state(struct drm_dp_tunnel_state *tunnel_state);
+
+int drm_dp_tunnel_atomic_set_stream_bw(struct drm_atomic_state *state,
+				       struct drm_dp_tunnel *tunnel,
+				       u8 stream_id, int bw);
+int drm_dp_tunnel_atomic_get_group_streams_in_state(struct drm_atomic_state *state,
+						    const struct drm_dp_tunnel *tunnel,
+						    u32 *stream_mask);
+
+int drm_dp_tunnel_atomic_check_stream_bws(struct drm_atomic_state *state,
+					  u32 *failed_stream_mask);
+
+int drm_dp_tunnel_atomic_get_tunnel_bw(const struct drm_dp_tunnel_state *tunnel_state);
+
+struct drm_dp_tunnel_mgr *
+drm_dp_tunnel_mgr_create(struct drm_device *dev, int max_group_count);
+void drm_dp_tunnel_mgr_destroy(struct drm_dp_tunnel_mgr *mgr);
+
+#else
+
+static inline struct drm_dp_tunnel *
+drm_dp_tunnel_get_untracked(struct drm_dp_tunnel *tunnel)
+{
+	return NULL;
+}
+
+static inline void
+drm_dp_tunnel_put_untracked(struct drm_dp_tunnel *tunnel) {}
+
+static inline struct drm_dp_tunnel *
+drm_dp_tunnel_get(struct drm_dp_tunnel *tunnel, struct ref_tracker **tracker)
+{
+	return NULL;
+}
+
+static inline void
+drm_dp_tunnel_put(struct drm_dp_tunnel *tunnel, struct ref_tracker **tracker) {}
+
+static inline void drm_dp_tunnel_ref_get(struct drm_dp_tunnel *tunnel,
+					   struct drm_dp_tunnel_ref *tunnel_ref) {}
+static inline void drm_dp_tunnel_ref_put(struct drm_dp_tunnel_ref *tunnel_ref) {}
+
+static inline struct drm_dp_tunnel *
+drm_dp_tunnel_detect(struct drm_dp_tunnel_mgr *mgr,
+		     struct drm_dp_aux *aux)
+{
+	return ERR_PTR(-EOPNOTSUPP);
+}
+
+static inline int
+drm_dp_tunnel_destroy(struct drm_dp_tunnel *tunnel)
+{
+	return 0;
+}
+
+static inline int drm_dp_tunnel_enable_bw_alloc(struct drm_dp_tunnel *tunnel)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int drm_dp_tunnel_disable_bw_alloc(struct drm_dp_tunnel *tunnel)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline bool drm_dp_tunnel_bw_alloc_is_enabled(const struct drm_dp_tunnel *tunnel)
+{
+	return false;
+}
+
+static inline int
+drm_dp_tunnel_alloc_bw(struct drm_dp_tunnel *tunnel, int bw)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int
+drm_dp_tunnel_check_state(struct drm_dp_tunnel *tunnel)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int
+drm_dp_tunnel_update_state(struct drm_dp_tunnel *tunnel)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline void drm_dp_tunnel_set_io_error(struct drm_dp_tunnel *tunnel) {}
+static inline int
+drm_dp_tunnel_handle_irq(struct drm_dp_tunnel_mgr *mgr,
+			 struct drm_dp_aux *aux)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int
+drm_dp_tunnel_max_dprx_rate(const struct drm_dp_tunnel *tunnel)
+{
+	return 0;
+}
+
+static inline int
+drm_dp_tunnel_max_dprx_lane_count(const struct drm_dp_tunnel *tunnel)
+{
+	return 0;
+}
+
+static inline int
+drm_dp_tunnel_available_bw(const struct drm_dp_tunnel *tunnel)
+{
+	return -1;
+}
+
+static inline const char *
+drm_dp_tunnel_name(const struct drm_dp_tunnel *tunnel)
+{
+	return NULL;
+}
+
+static inline struct drm_dp_tunnel_state *
+drm_dp_tunnel_atomic_get_state(struct drm_atomic_state *state,
+			       struct drm_dp_tunnel *tunnel)
+{
+	return ERR_PTR(-EOPNOTSUPP);
+}
+
+static inline struct drm_dp_tunnel_state *
+drm_dp_tunnel_atomic_get_new_state(struct drm_atomic_state *state,
+				   const struct drm_dp_tunnel *tunnel)
+{
+	return ERR_PTR(-EOPNOTSUPP);
+}
+
+static inline void
+drm_dp_tunnel_atomic_clear_state(struct drm_dp_tunnel_state *tunnel_state) {}
+
+static inline int
+drm_dp_tunnel_atomic_set_stream_bw(struct drm_atomic_state *state,
+				   struct drm_dp_tunnel *tunnel,
+				   u8 stream_id, int bw)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int
+drm_dp_tunnel_atomic_get_group_streams_in_state(struct drm_atomic_state *state,
+						const struct drm_dp_tunnel *tunnel,
+						u32 *stream_mask)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int
+drm_dp_tunnel_atomic_check_stream_bws(struct drm_atomic_state *state,
+				      u32 *failed_stream_mask)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int
+drm_dp_tunnel_atomic_get_tunnel_bw(const struct drm_dp_tunnel_state *tunnel_state)
+{
+	return 0;
+}
+
+static inline struct drm_dp_tunnel_mgr *
+drm_dp_tunnel_mgr_create(struct drm_device *dev, int max_group_count)
+{
+	return ERR_PTR(-EOPNOTSUPP);
+}
+
+static inline
+void drm_dp_tunnel_mgr_destroy(struct drm_dp_tunnel_mgr *mgr) {}
+
+
+#endif /* CONFIG_DRM_DISPLAY_DP_TUNNEL */
+
+#endif /* __DRM_DP_TUNNEL_H__ */
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 03/19] drm/i915/dp: Add support to notify MST connectors to retry modesets
  2024-01-23 10:28 [PATCH 00/19] drm/i915: Add Display Port tunnel BW allocation support Imre Deak
  2024-01-23 10:28 ` [PATCH 01/19] drm/dp: Add drm_dp_max_dprx_data_rate() Imre Deak
  2024-01-23 10:28 ` [PATCH 02/19] drm/dp: Add support for DP tunneling Imre Deak
@ 2024-01-23 10:28 ` Imre Deak
  2024-01-29 10:36   ` Hogander, Jouni
  2024-01-23 10:28 ` [PATCH 04/19] drm/i915/dp: Use drm_dp_max_dprx_data_rate() Imre Deak
                   ` (19 subsequent siblings)
  22 siblings, 1 reply; 61+ messages in thread
From: Imre Deak @ 2024-01-23 10:28 UTC (permalink / raw)
  To: intel-gfx; +Cc: dri-devel

On shared (Thunderbolt) links with DP tunnels, a modeset may need to be
retried on all connectors on the link, due to a link BW limitation that
arises only after the atomic check phase. To support this, add a helper
function that queues a work to retry the modeset on a given port's
connector and, at the same time, on any MST connector with streams
through the same port. A follow-up change enabling the DP tunnel
Bandwidth Allocation Mode will make use of this.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
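For illustration only (the condition and calling context below are
assumptions, not code from this series), a caller detecting a tunnel BW
limit after the atomic check could do:

	if (link_bw_limit_hit)
		intel_dp_queue_modeset_retry_for_link(state, encoder,
						      crtc_state,
						      conn_state);
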
 drivers/gpu/drm/i915/display/intel_display.c  |  5 +-
 drivers/gpu/drm/i915/display/intel_dp.c       | 55 ++++++++++++++++++-
 drivers/gpu/drm/i915/display/intel_dp.h       |  8 +++
 .../drm/i915/display/intel_dp_link_training.c |  3 +-
 drivers/gpu/drm/i915/display/intel_dp_mst.c   |  2 +
 5 files changed, 67 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index a92e959c8ac7b..0caebbb3e2dbb 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -8060,8 +8060,9 @@ void intel_hpd_poll_fini(struct drm_i915_private *i915)
 	/* Kill all the work that may have been queued by hpd. */
 	drm_connector_list_iter_begin(&i915->drm, &conn_iter);
 	for_each_intel_connector_iter(connector, &conn_iter) {
-		if (connector->modeset_retry_work.func)
-			cancel_work_sync(&connector->modeset_retry_work);
+		if (connector->modeset_retry_work.func &&
+		    cancel_work_sync(&connector->modeset_retry_work))
+			drm_connector_put(&connector->base);
 		if (connector->hdcp.shim) {
 			cancel_delayed_work_sync(&connector->hdcp.check_work);
 			cancel_work_sync(&connector->hdcp.prop_work);
diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index ab415f41924d7..4e36c2c39888e 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -2837,6 +2837,50 @@ intel_dp_audio_compute_config(struct intel_encoder *encoder,
 					intel_dp_is_uhbr(pipe_config);
 }
 
+void intel_dp_queue_modeset_retry_work(struct intel_connector *connector)
+{
+	struct drm_i915_private *i915 = to_i915(connector->base.dev);
+
+	drm_connector_get(&connector->base);
+	if (!queue_work(i915->unordered_wq, &connector->modeset_retry_work))
+		drm_connector_put(&connector->base);
+}
+
+void
+intel_dp_queue_modeset_retry_for_link(struct intel_atomic_state *state,
+				      struct intel_encoder *encoder,
+				      const struct intel_crtc_state *crtc_state,
+				      const struct drm_connector_state *conn_state)
+{
+	struct drm_i915_private *i915 = to_i915(crtc_state->uapi.crtc->dev);
+	struct intel_connector *connector;
+	struct intel_digital_connector_state *iter_conn_state;
+	struct intel_dp *intel_dp;
+	int i;
+
+	if (conn_state) {
+		connector = to_intel_connector(conn_state->connector);
+		intel_dp_queue_modeset_retry_work(connector);
+
+		return;
+	}
+
+	if (drm_WARN_ON(&i915->drm,
+			!intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DP_MST)))
+		return;
+
+	intel_dp = enc_to_intel_dp(encoder);
+
+	for_each_new_intel_connector_in_state(state, connector, iter_conn_state, i) {
+		(void)iter_conn_state;
+
+		if (connector->mst_port != intel_dp)
+			continue;
+
+		intel_dp_queue_modeset_retry_work(connector);
+	}
+}
+
 int
 intel_dp_compute_config(struct intel_encoder *encoder,
 			struct intel_crtc_state *pipe_config,
@@ -6436,6 +6480,14 @@ static void intel_dp_modeset_retry_work_fn(struct work_struct *work)
 	mutex_unlock(&connector->dev->mode_config.mutex);
 	/* Send Hotplug uevent so userspace can reprobe */
 	drm_kms_helper_connector_hotplug_event(connector);
+
+	drm_connector_put(connector);
+}
+
+void intel_dp_init_modeset_retry_work(struct intel_connector *connector)
+{
+	INIT_WORK(&connector->modeset_retry_work,
+		  intel_dp_modeset_retry_work_fn);
 }
 
 bool
@@ -6452,8 +6504,7 @@ intel_dp_init_connector(struct intel_digital_port *dig_port,
 	int type;
 
 	/* Initialize the work for modeset in case of link train failure */
-	INIT_WORK(&intel_connector->modeset_retry_work,
-		  intel_dp_modeset_retry_work_fn);
+	intel_dp_init_modeset_retry_work(intel_connector);
 
 	if (drm_WARN(dev, dig_port->max_lanes < 1,
 		     "Not enough lanes (%d) for DP on [ENCODER:%d:%s]\n",
diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
index 530cc97bc42f4..105c2086310db 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.h
+++ b/drivers/gpu/drm/i915/display/intel_dp.h
@@ -23,6 +23,8 @@ struct intel_digital_port;
 struct intel_dp;
 struct intel_encoder;
 
+struct work_struct;
+
 struct link_config_limits {
 	int min_rate, max_rate;
 	int min_lane_count, max_lane_count;
@@ -43,6 +45,12 @@ void intel_dp_adjust_compliance_config(struct intel_dp *intel_dp,
 bool intel_dp_limited_color_range(const struct intel_crtc_state *crtc_state,
 				  const struct drm_connector_state *conn_state);
 int intel_dp_min_bpp(enum intel_output_format output_format);
+void intel_dp_init_modeset_retry_work(struct intel_connector *connector);
+void intel_dp_queue_modeset_retry_work(struct intel_connector *connector);
+void intel_dp_queue_modeset_retry_for_link(struct intel_atomic_state *state,
+					   struct intel_encoder *encoder,
+					   const struct intel_crtc_state *crtc_state,
+					   const struct drm_connector_state *conn_state);
 bool intel_dp_init_connector(struct intel_digital_port *dig_port,
 			     struct intel_connector *intel_connector);
 void intel_dp_set_link_params(struct intel_dp *intel_dp,
diff --git a/drivers/gpu/drm/i915/display/intel_dp_link_training.c b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
index 1abfafbbfa757..7b140cbf8dd31 100644
--- a/drivers/gpu/drm/i915/display/intel_dp_link_training.c
+++ b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
@@ -1075,7 +1075,6 @@ static void intel_dp_schedule_fallback_link_training(struct intel_dp *intel_dp,
 						     const struct intel_crtc_state *crtc_state)
 {
 	struct intel_connector *intel_connector = intel_dp->attached_connector;
-	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
 
 	if (!intel_digital_port_connected(&dp_to_dig_port(intel_dp)->base)) {
 		lt_dbg(intel_dp, DP_PHY_DPRX, "Link Training failed on disconnected sink.\n");
@@ -1093,7 +1092,7 @@ static void intel_dp_schedule_fallback_link_training(struct intel_dp *intel_dp,
 	}
 
 	/* Schedule a Hotplug Uevent to userspace to start modeset */
-	queue_work(i915->unordered_wq, &intel_connector->modeset_retry_work);
+	intel_dp_queue_modeset_retry_work(intel_connector);
 }
 
 /* Perform the link training on all LTTPRs and the DPRX on a link. */
diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
index 5fa25a5a36b55..b15e43ebf138b 100644
--- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
+++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
@@ -1542,6 +1542,8 @@ static struct drm_connector *intel_dp_add_mst_connector(struct drm_dp_mst_topolo
 	intel_connector->port = port;
 	drm_dp_mst_get_port_malloc(port);
 
+	intel_dp_init_modeset_retry_work(intel_connector);
+
 	intel_connector->dp.dsc_decompression_aux = drm_dp_mst_dsc_aux_for_port(port);
 	intel_dp_mst_read_decompression_port_dsc_caps(intel_dp, intel_connector);
 	intel_connector->dp.dsc_hblank_expansion_quirk =
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 04/19] drm/i915/dp: Use drm_dp_max_dprx_data_rate()
  2024-01-23 10:28 [PATCH 00/19] drm/i915: Add Display Port tunnel BW allocation support Imre Deak
                   ` (2 preceding siblings ...)
  2024-01-23 10:28 ` [PATCH 03/19] drm/i915/dp: Add support to notify MST connectors to retry modesets Imre Deak
@ 2024-01-23 10:28 ` Imre Deak
  2024-02-06 20:27   ` Shankar, Uma
  2024-01-23 10:28 ` [PATCH 05/19] drm/i915/dp: Factor out intel_dp_config_required_rate() Imre Deak
                   ` (18 subsequent siblings)
  22 siblings, 1 reply; 61+ messages in thread
From: Imre Deak @ 2024-01-23 10:28 UTC (permalink / raw)
  To: intel-gfx; +Cc: dri-devel

Instead of intel_dp_max_data_rate(), use the equivalent
drm_dp_max_dprx_data_rate(), which was copied from the former in a
previous patch.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_display.c |  2 +-
 drivers/gpu/drm/i915/display/intel_dp.c      | 62 +++-----------------
 drivers/gpu/drm/i915/display/intel_dp.h      |  1 -
 drivers/gpu/drm/i915/display/intel_dp_mst.c  |  2 +-
 4 files changed, 10 insertions(+), 57 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index 0caebbb3e2dbb..b9f985a5e705b 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -2478,7 +2478,7 @@ intel_link_compute_m_n(u16 bits_per_pixel_x16, int nlanes,
 	u32 link_symbol_clock = intel_dp_link_symbol_clock(link_clock);
 	u32 data_m = intel_dp_effective_data_rate(pixel_clock, bits_per_pixel_x16,
 						  bw_overhead);
-	u32 data_n = intel_dp_max_data_rate(link_clock, nlanes);
+	u32 data_n = drm_dp_max_dprx_data_rate(link_clock, nlanes);
 
 	/*
 	 * Windows/BIOS uses fixed M/N values always. Follow suit.
diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 4e36c2c39888e..c7b06a9b197cc 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -383,52 +383,6 @@ int intel_dp_effective_data_rate(int pixel_clock, int bpp_x16,
 				1000000 * 16 * 8);
 }
 
-/*
- * Given a link rate and lanes, get the data bandwidth.
- *
- * Data bandwidth is the actual payload rate, which depends on the data
- * bandwidth efficiency and the link rate.
- *
- * For 8b/10b channel encoding, SST and non-FEC, the data bandwidth efficiency
- * is 80%. For example, for a 1.62 Gbps link, 1.62*10^9 bps * 0.80 * (1/8) =
- * 162000 kBps. With 8-bit symbols, we have 162000 kHz symbol clock. Just by
- * coincidence, the port clock in kHz matches the data bandwidth in kBps, and
- * they equal the link bit rate in Gbps multiplied by 100000. (Note that this no
- * longer holds for data bandwidth as soon as FEC or MST is taken into account!)
- *
- * For 128b/132b channel encoding, the data bandwidth efficiency is 96.71%. For
- * example, for a 10 Gbps link, 10*10^9 bps * 0.9671 * (1/8) = 1208875
- * kBps. With 32-bit symbols, we have 312500 kHz symbol clock. The value 1000000
- * does not match the symbol clock, the port clock (not even if you think in
- * terms of a byte clock), nor the data bandwidth. It only matches the link bit
- * rate in units of 10000 bps.
- */
-int
-intel_dp_max_data_rate(int max_link_rate, int max_lanes)
-{
-	int ch_coding_efficiency =
-		drm_dp_bw_channel_coding_efficiency(drm_dp_is_uhbr_rate(max_link_rate));
-	int max_link_rate_kbps = max_link_rate * 10;
-
-	/*
-	 * UHBR rates always use 128b/132b channel encoding, and have
-	 * 97.71% data bandwidth efficiency. Consider max_link_rate the
-	 * link bit rate in units of 10000 bps.
-	 */
-	/*
-	 * Lower than UHBR rates always use 8b/10b channel encoding, and have
-	 * 80% data bandwidth efficiency for SST non-FEC. However, this turns
-	 * out to be a nop by coincidence:
-	 *
-	 *	int max_link_rate_kbps = max_link_rate * 10;
-	 *	max_link_rate_kbps = DIV_ROUND_DOWN_ULL(max_link_rate_kbps * 8, 10);
-	 *	max_link_rate = max_link_rate_kbps / 8;
-	 */
-	return DIV_ROUND_DOWN_ULL(mul_u32_u32(max_link_rate_kbps * max_lanes,
-					      ch_coding_efficiency),
-				  1000000 * 8);
-}
-
 bool intel_dp_can_bigjoiner(struct intel_dp *intel_dp)
 {
 	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
@@ -658,7 +612,7 @@ static bool intel_dp_can_link_train_fallback_for_edp(struct intel_dp *intel_dp,
 	int mode_rate, max_rate;
 
 	mode_rate = intel_dp_link_required(fixed_mode->clock, 18);
-	max_rate = intel_dp_max_data_rate(link_rate, lane_count);
+	max_rate = drm_dp_max_dprx_data_rate(link_rate, lane_count);
 	if (mode_rate > max_rate)
 		return false;
 
@@ -1260,7 +1214,7 @@ intel_dp_mode_valid(struct drm_connector *_connector,
 	max_link_clock = intel_dp_max_link_rate(intel_dp);
 	max_lanes = intel_dp_max_lane_count(intel_dp);
 
-	max_rate = intel_dp_max_data_rate(max_link_clock, max_lanes);
+	max_rate = drm_dp_max_dprx_data_rate(max_link_clock, max_lanes);
 	mode_rate = intel_dp_link_required(target_clock,
 					   intel_dp_mode_min_output_bpp(connector, mode));
 
@@ -1610,8 +1564,8 @@ intel_dp_compute_link_config_wide(struct intel_dp *intel_dp,
 			for (lane_count = limits->min_lane_count;
 			     lane_count <= limits->max_lane_count;
 			     lane_count <<= 1) {
-				link_avail = intel_dp_max_data_rate(link_rate,
-								    lane_count);
+				link_avail = drm_dp_max_dprx_data_rate(link_rate,
+								       lane_count);
 
 				if (mode_rate <= link_avail) {
 					pipe_config->lane_count = lane_count;
@@ -2462,8 +2416,8 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
 			    "DP link rate required %i available %i\n",
 			    intel_dp_link_required(adjusted_mode->crtc_clock,
 						   to_bpp_int_roundup(pipe_config->dsc.compressed_bpp_x16)),
-			    intel_dp_max_data_rate(pipe_config->port_clock,
-						   pipe_config->lane_count));
+			    drm_dp_max_dprx_data_rate(pipe_config->port_clock,
+						      pipe_config->lane_count));
 	} else {
 		drm_dbg_kms(&i915->drm, "DP lane count %d clock %d bpp %d\n",
 			    pipe_config->lane_count, pipe_config->port_clock,
@@ -2473,8 +2427,8 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
 			    "DP link rate required %i available %i\n",
 			    intel_dp_link_required(adjusted_mode->crtc_clock,
 						   pipe_config->pipe_bpp),
-			    intel_dp_max_data_rate(pipe_config->port_clock,
-						   pipe_config->lane_count));
+			    drm_dp_max_dprx_data_rate(pipe_config->port_clock,
+						      pipe_config->lane_count));
 	}
 	return 0;
 }
diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
index 105c2086310db..46f79747f807d 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.h
+++ b/drivers/gpu/drm/i915/display/intel_dp.h
@@ -113,7 +113,6 @@ bool intel_dp_get_colorimetry_status(struct intel_dp *intel_dp);
 int intel_dp_link_required(int pixel_clock, int bpp);
 int intel_dp_effective_data_rate(int pixel_clock, int bpp_x16,
 				 int bw_overhead);
-int intel_dp_max_data_rate(int max_link_rate, int max_lanes);
 bool intel_dp_can_bigjoiner(struct intel_dp *intel_dp);
 bool intel_dp_needs_vsc_sdp(const struct intel_crtc_state *crtc_state,
 			    const struct drm_connector_state *conn_state);
diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
index b15e43ebf138b..cfcc157b7d41d 100644
--- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
+++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
@@ -1295,7 +1295,7 @@ intel_dp_mst_mode_valid_ctx(struct drm_connector *connector,
 	max_link_clock = intel_dp_max_link_rate(intel_dp);
 	max_lanes = intel_dp_max_lane_count(intel_dp);
 
-	max_rate = intel_dp_max_data_rate(max_link_clock, max_lanes);
+	max_rate = drm_dp_max_dprx_data_rate(max_link_clock, max_lanes);
 	mode_rate = intel_dp_link_required(mode->clock, min_bpp);
 
 	ret = drm_modeset_lock(&mgr->base.lock, ctx);
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 05/19] drm/i915/dp: Factor out intel_dp_config_required_rate()
  2024-01-23 10:28 [PATCH 00/19] drm/i915: Add Display Port tunnel BW allocation support Imre Deak
                   ` (3 preceding siblings ...)
  2024-01-23 10:28 ` [PATCH 04/19] drm/i915/dp: Use drm_dp_max_dprx_data_rate() Imre Deak
@ 2024-01-23 10:28 ` Imre Deak
  2024-02-06 20:32   ` Shankar, Uma
  2024-01-23 10:28 ` [PATCH 06/19] drm/i915/dp: Export intel_dp_max_common_rate/lane_count() Imre Deak
                   ` (17 subsequent siblings)
  22 siblings, 1 reply; 61+ messages in thread
From: Imre Deak @ 2024-01-23 10:28 UTC (permalink / raw)
  To: intel-gfx; +Cc: dri-devel

Factor out intel_dp_config_required_rate() used by a follow-up patch
enabling the DP tunnel BW allocation mode.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 43 +++++++++++--------------
 drivers/gpu/drm/i915/display/intel_dp.h |  1 +
 2 files changed, 20 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index c7b06a9b197cc..0a5c60428ffb7 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -2338,6 +2338,17 @@ intel_dp_compute_config_limits(struct intel_dp *intel_dp,
 						       limits);
 }
 
+int intel_dp_config_required_rate(const struct intel_crtc_state *crtc_state)
+{
+	const struct drm_display_mode *adjusted_mode =
+		&crtc_state->hw.adjusted_mode;
+	int bpp = crtc_state->dsc.compression_enable ?
+		to_bpp_int_roundup(crtc_state->dsc.compressed_bpp_x16) :
+		crtc_state->pipe_bpp;
+
+	return intel_dp_link_required(adjusted_mode->crtc_clock, bpp);
+}
+
 static int
 intel_dp_compute_link_config(struct intel_encoder *encoder,
 			     struct intel_crtc_state *pipe_config,
@@ -2405,31 +2416,15 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
 			return ret;
 	}
 
-	if (pipe_config->dsc.compression_enable) {
-		drm_dbg_kms(&i915->drm,
-			    "DP lane count %d clock %d Input bpp %d Compressed bpp " BPP_X16_FMT "\n",
-			    pipe_config->lane_count, pipe_config->port_clock,
-			    pipe_config->pipe_bpp,
-			    BPP_X16_ARGS(pipe_config->dsc.compressed_bpp_x16));
+	drm_dbg_kms(&i915->drm,
+		    "DP lane count %d clock %d bpp input %d compressed " BPP_X16_FMT " link rate required %d available %d\n",
+		    pipe_config->lane_count, pipe_config->port_clock,
+		    pipe_config->pipe_bpp,
+		    BPP_X16_ARGS(pipe_config->dsc.compressed_bpp_x16),
+		    intel_dp_config_required_rate(pipe_config),
+		    drm_dp_max_dprx_data_rate(pipe_config->port_clock,
+					      pipe_config->lane_count));
 
-		drm_dbg_kms(&i915->drm,
-			    "DP link rate required %i available %i\n",
-			    intel_dp_link_required(adjusted_mode->crtc_clock,
-						   to_bpp_int_roundup(pipe_config->dsc.compressed_bpp_x16)),
-			    drm_dp_max_dprx_data_rate(pipe_config->port_clock,
-						      pipe_config->lane_count));
-	} else {
-		drm_dbg_kms(&i915->drm, "DP lane count %d clock %d bpp %d\n",
-			    pipe_config->lane_count, pipe_config->port_clock,
-			    pipe_config->pipe_bpp);
-
-		drm_dbg_kms(&i915->drm,
-			    "DP link rate required %i available %i\n",
-			    intel_dp_link_required(adjusted_mode->crtc_clock,
-						   pipe_config->pipe_bpp),
-			    drm_dp_max_dprx_data_rate(pipe_config->port_clock,
-						      pipe_config->lane_count));
-	}
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
index 46f79747f807d..37274e3c2902f 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.h
+++ b/drivers/gpu/drm/i915/display/intel_dp.h
@@ -102,6 +102,7 @@ void intel_dp_mst_suspend(struct drm_i915_private *dev_priv);
 void intel_dp_mst_resume(struct drm_i915_private *dev_priv);
 int intel_dp_max_link_rate(struct intel_dp *intel_dp);
 int intel_dp_max_lane_count(struct intel_dp *intel_dp);
+int intel_dp_config_required_rate(const struct intel_crtc_state *crtc_state);
 int intel_dp_rate_select(struct intel_dp *intel_dp, int rate);
 
 void intel_dp_compute_rate(struct intel_dp *intel_dp, int port_clock,
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 06/19] drm/i915/dp: Export intel_dp_max_common_rate/lane_count()
  2024-01-23 10:28 [PATCH 00/19] drm/i915: Add Display Port tunnel BW allocation support Imre Deak
                   ` (4 preceding siblings ...)
  2024-01-23 10:28 ` [PATCH 05/19] drm/i915/dp: Factor out intel_dp_config_required_rate() Imre Deak
@ 2024-01-23 10:28 ` Imre Deak
  2024-02-06 20:34   ` Shankar, Uma
  2024-01-23 10:28 ` [PATCH 07/19] drm/i915/dp: Factor out intel_dp_update_sink_caps() Imre Deak
                   ` (16 subsequent siblings)
  22 siblings, 1 reply; 61+ messages in thread
From: Imre Deak @ 2024-01-23 10:28 UTC (permalink / raw)
  To: intel-gfx; +Cc: dri-devel

Export intel_dp_max_common_rate() and intel_dp_max_lane_count() used by
a follow-up patch enabling the DP tunnel BW allocation mode.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 4 ++--
 drivers/gpu/drm/i915/display/intel_dp.h | 2 ++
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 0a5c60428ffb7..f40706c5d1aad 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -309,7 +309,7 @@ static int intel_dp_common_rate(struct intel_dp *intel_dp, int index)
 }
 
 /* Theoretical max between source and sink */
-static int intel_dp_max_common_rate(struct intel_dp *intel_dp)
+int intel_dp_max_common_rate(struct intel_dp *intel_dp)
 {
 	return intel_dp_common_rate(intel_dp, intel_dp->num_common_rates - 1);
 }
@@ -326,7 +326,7 @@ static int intel_dp_max_source_lane_count(struct intel_digital_port *dig_port)
 }
 
 /* Theoretical max between source and sink */
-static int intel_dp_max_common_lane_count(struct intel_dp *intel_dp)
+int intel_dp_max_common_lane_count(struct intel_dp *intel_dp)
 {
 	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
 	int source_max = intel_dp_max_source_lane_count(dig_port);
diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
index 37274e3c2902f..a7906d8738c4a 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.h
+++ b/drivers/gpu/drm/i915/display/intel_dp.h
@@ -104,6 +104,8 @@ int intel_dp_max_link_rate(struct intel_dp *intel_dp);
 int intel_dp_max_lane_count(struct intel_dp *intel_dp);
 int intel_dp_config_required_rate(const struct intel_crtc_state *crtc_state);
 int intel_dp_rate_select(struct intel_dp *intel_dp, int rate);
+int intel_dp_max_common_rate(struct intel_dp *intel_dp);
+int intel_dp_max_common_lane_count(struct intel_dp *intel_dp);
 
 void intel_dp_compute_rate(struct intel_dp *intel_dp, int port_clock,
 			   u8 *link_bw, u8 *rate_select);
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 07/19] drm/i915/dp: Factor out intel_dp_update_sink_caps()
  2024-01-23 10:28 [PATCH 00/19] drm/i915: Add Display Port tunnel BW allocation support Imre Deak
                   ` (5 preceding siblings ...)
  2024-01-23 10:28 ` [PATCH 06/19] drm/i915/dp: Export intel_dp_max_common_rate/lane_count() Imre Deak
@ 2024-01-23 10:28 ` Imre Deak
  2024-02-06 20:35   ` Shankar, Uma
  2024-01-23 10:28 ` [PATCH 08/19] drm/i915/dp: Factor out intel_dp_read_dprx_caps() Imre Deak
                   ` (15 subsequent siblings)
  22 siblings, 1 reply; 61+ messages in thread
From: Imre Deak @ 2024-01-23 10:28 UTC (permalink / raw)
  To: intel-gfx; +Cc: dri-devel

Factor out a function updating the sink's link rate and lane count
capabilities, used by a follow-up patch enabling the DP tunnel BW
allocation mode.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 11 ++++++++---
 drivers/gpu/drm/i915/display/intel_dp.h |  1 +
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index f40706c5d1aad..23434d0aba188 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -3949,6 +3949,13 @@ intel_dp_has_sink_count(struct intel_dp *intel_dp)
 					  &intel_dp->desc);
 }
 
+void intel_dp_update_sink_caps(struct intel_dp *intel_dp)
+{
+	intel_dp_set_sink_rates(intel_dp);
+	intel_dp_set_max_sink_lane_count(intel_dp);
+	intel_dp_set_common_rates(intel_dp);
+}
+
 static bool
 intel_dp_get_dpcd(struct intel_dp *intel_dp)
 {
@@ -3965,9 +3972,7 @@ intel_dp_get_dpcd(struct intel_dp *intel_dp)
 		drm_dp_read_desc(&intel_dp->aux, &intel_dp->desc,
 				 drm_dp_is_branch(intel_dp->dpcd));
 
-		intel_dp_set_sink_rates(intel_dp);
-		intel_dp_set_max_sink_lane_count(intel_dp);
-		intel_dp_set_common_rates(intel_dp);
+		intel_dp_update_sink_caps(intel_dp);
 	}
 
 	if (intel_dp_has_sink_count(intel_dp)) {
diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
index a7906d8738c4a..49553e43add22 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.h
+++ b/drivers/gpu/drm/i915/display/intel_dp.h
@@ -106,6 +106,7 @@ int intel_dp_config_required_rate(const struct intel_crtc_state *crtc_state);
 int intel_dp_rate_select(struct intel_dp *intel_dp, int rate);
 int intel_dp_max_common_rate(struct intel_dp *intel_dp);
 int intel_dp_max_common_lane_count(struct intel_dp *intel_dp);
+void intel_dp_update_sink_caps(struct intel_dp *intel_dp);
 
 void intel_dp_compute_rate(struct intel_dp *intel_dp, int port_clock,
 			   u8 *link_bw, u8 *rate_select);
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 08/19] drm/i915/dp: Factor out intel_dp_read_dprx_caps()
  2024-01-23 10:28 [PATCH 00/19] drm/i915: Add Display Port tunnel BW allocation support Imre Deak
                   ` (6 preceding siblings ...)
  2024-01-23 10:28 ` [PATCH 07/19] drm/i915/dp: Factor out intel_dp_update_sink_caps() Imre Deak
@ 2024-01-23 10:28 ` Imre Deak
  2024-02-06 20:36   ` Shankar, Uma
  2024-01-23 10:28 ` [PATCH 09/19] drm/i915/dp: Add intel_dp_max_link_data_rate() Imre Deak
                   ` (14 subsequent siblings)
  22 siblings, 1 reply; 61+ messages in thread
From: Imre Deak @ 2024-01-23 10:28 UTC (permalink / raw)
  To: intel-gfx; +Cc: dri-devel

Factor out a function to read the sink's DPRX capabilities used by a
follow-up patch enabling the DP tunnel BW allocation mode.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 .../drm/i915/display/intel_dp_link_training.c | 30 +++++++++++++++----
 .../drm/i915/display/intel_dp_link_training.h |  1 +
 2 files changed, 26 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp_link_training.c b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
index 7b140cbf8dd31..fb84ca98bb7ab 100644
--- a/drivers/gpu/drm/i915/display/intel_dp_link_training.c
+++ b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
@@ -162,6 +162,28 @@ static int intel_dp_init_lttpr(struct intel_dp *intel_dp, const u8 dpcd[DP_RECEI
 	return lttpr_count;
 }
 
+int intel_dp_read_dprx_caps(struct intel_dp *intel_dp, u8 dpcd[DP_RECEIVER_CAP_SIZE])
+{
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
+
+	if (intel_dp_is_edp(intel_dp))
+		return 0;
+
+	/*
+	 * Detecting LTTPRs must be avoided on platforms with an AUX timeout
+	 * period < 3.2ms. (see DP Standard v2.0, 2.11.2, 3.6.6.1).
+	 */
+	if (DISPLAY_VER(i915) >= 10 && !IS_GEMINILAKE(i915))
+		if (drm_dp_dpcd_probe(&intel_dp->aux,
+				      DP_LT_TUNABLE_PHY_REPEATER_FIELD_DATA_STRUCTURE_REV))
+			return -EIO;
+
+	if (drm_dp_read_dpcd_caps(&intel_dp->aux, dpcd))
+		return -EIO;
+
+	return 0;
+}
+
 /**
  * intel_dp_init_lttpr_and_dprx_caps - detect LTTPR and DPRX caps, init the LTTPR link training mode
  * @intel_dp: Intel DP struct
@@ -192,12 +214,10 @@ int intel_dp_init_lttpr_and_dprx_caps(struct intel_dp *intel_dp)
 	if (!intel_dp_is_edp(intel_dp) &&
 	    (DISPLAY_VER(i915) >= 10 && !IS_GEMINILAKE(i915))) {
 		u8 dpcd[DP_RECEIVER_CAP_SIZE];
+		int err = intel_dp_read_dprx_caps(intel_dp, dpcd);
 
-		if (drm_dp_dpcd_probe(&intel_dp->aux, DP_LT_TUNABLE_PHY_REPEATER_FIELD_DATA_STRUCTURE_REV))
-			return -EIO;
-
-		if (drm_dp_read_dpcd_caps(&intel_dp->aux, dpcd))
-			return -EIO;
+		if (err != 0)
+			return err;
 
 		lttpr_count = intel_dp_init_lttpr(intel_dp, dpcd);
 	}
diff --git a/drivers/gpu/drm/i915/display/intel_dp_link_training.h b/drivers/gpu/drm/i915/display/intel_dp_link_training.h
index 2c8f2775891b0..19836a8a4f904 100644
--- a/drivers/gpu/drm/i915/display/intel_dp_link_training.h
+++ b/drivers/gpu/drm/i915/display/intel_dp_link_training.h
@@ -11,6 +11,7 @@
 struct intel_crtc_state;
 struct intel_dp;
 
+int intel_dp_read_dprx_caps(struct intel_dp *intel_dp, u8 dpcd[DP_RECEIVER_CAP_SIZE]);
 int intel_dp_init_lttpr_and_dprx_caps(struct intel_dp *intel_dp);
 
 void intel_dp_get_adjust_train(struct intel_dp *intel_dp,
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 09/19] drm/i915/dp: Add intel_dp_max_link_data_rate()
  2024-01-23 10:28 [PATCH 00/19] drm/i915: Add Display Port tunnel BW allocation support Imre Deak
                   ` (7 preceding siblings ...)
  2024-01-23 10:28 ` [PATCH 08/19] drm/i915/dp: Factor out intel_dp_read_dprx_caps() Imre Deak
@ 2024-01-23 10:28 ` Imre Deak
  2024-02-06 20:37   ` Shankar, Uma
  2024-01-23 10:28 ` [PATCH 10/19] drm/i915/dp: Add way to get active pipes with syncing commits Imre Deak
                   ` (13 subsequent siblings)
  22 siblings, 1 reply; 61+ messages in thread
From: Imre Deak @ 2024-01-23 10:28 UTC (permalink / raw)
  To: intel-gfx; +Cc: dri-devel

Add intel_dp_max_link_data_rate() to get the link BW, as opposed to the
sink DPRX BW; it will be used by a follow-up patch enabling the DP
tunnel BW allocation mode. The link BW can be below the DPRX BW due to
a BW limitation on a link shared by multiple sinks.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
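As a rough worked example (numbers are illustrative only): a DPRX
running at HBR3 x 4 lanes has a max data rate of
8.1 Gbps * 4 * 0.8 / 8 = 3240000 kBps (~25.9 Gbit/s of payload). Two
such sinks sharing a ~40 Gbit/s Thunderbolt link would need
~51.8 Gbit/s in total, so the per-sink link BW returned by
intel_dp_max_link_data_rate() can end up below the DPRX BW returned by
drm_dp_max_dprx_data_rate().
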
 drivers/gpu/drm/i915/display/intel_dp.c     | 32 +++++++++++++++++----
 drivers/gpu/drm/i915/display/intel_dp.h     |  2 ++
 drivers/gpu/drm/i915/display/intel_dp_mst.c |  3 +-
 3 files changed, 30 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 23434d0aba188..9cd675c6d0ee8 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -383,6 +383,22 @@ int intel_dp_effective_data_rate(int pixel_clock, int bpp_x16,
 				1000000 * 16 * 8);
 }
 
+/**
+ * intel_dp_max_link_data_rate: Calculate the maximum rate for the given link params
+ * @intel_dp: Intel DP object
+ * @max_dprx_rate: Maximum data rate of the DPRX
+ * @max_dprx_lanes: Maximum lane count of the DPRX
+ *
+ * Calculate the maximum data rate for the provided link parameters.
+ *
+ * Returns the maximum data rate in kBps units.
+ */
+int intel_dp_max_link_data_rate(struct intel_dp *intel_dp,
+				int max_dprx_rate, int max_dprx_lanes)
+{
+	return drm_dp_max_dprx_data_rate(max_dprx_rate, max_dprx_lanes);
+}
+
 bool intel_dp_can_bigjoiner(struct intel_dp *intel_dp)
 {
 	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
@@ -612,7 +628,7 @@ static bool intel_dp_can_link_train_fallback_for_edp(struct intel_dp *intel_dp,
 	int mode_rate, max_rate;
 
 	mode_rate = intel_dp_link_required(fixed_mode->clock, 18);
-	max_rate = drm_dp_max_dprx_data_rate(link_rate, lane_count);
+	max_rate = intel_dp_max_link_data_rate(intel_dp, link_rate, lane_count);
 	if (mode_rate > max_rate)
 		return false;
 
@@ -1214,7 +1230,8 @@ intel_dp_mode_valid(struct drm_connector *_connector,
 	max_link_clock = intel_dp_max_link_rate(intel_dp);
 	max_lanes = intel_dp_max_lane_count(intel_dp);
 
-	max_rate = drm_dp_max_dprx_data_rate(max_link_clock, max_lanes);
+	max_rate = intel_dp_max_link_data_rate(intel_dp, max_link_clock, max_lanes);
+
 	mode_rate = intel_dp_link_required(target_clock,
 					   intel_dp_mode_min_output_bpp(connector, mode));
 
@@ -1564,8 +1581,10 @@ intel_dp_compute_link_config_wide(struct intel_dp *intel_dp,
 			for (lane_count = limits->min_lane_count;
 			     lane_count <= limits->max_lane_count;
 			     lane_count <<= 1) {
-				link_avail = drm_dp_max_dprx_data_rate(link_rate,
-								       lane_count);
+				link_avail = intel_dp_max_link_data_rate(intel_dp,
+									 link_rate,
+									 lane_count);
+
 
 				if (mode_rate <= link_avail) {
 					pipe_config->lane_count = lane_count;
@@ -2422,8 +2441,9 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
 		    pipe_config->pipe_bpp,
 		    BPP_X16_ARGS(pipe_config->dsc.compressed_bpp_x16),
 		    intel_dp_config_required_rate(pipe_config),
-		    drm_dp_max_dprx_data_rate(pipe_config->port_clock,
-					      pipe_config->lane_count));
+		    intel_dp_max_link_data_rate(intel_dp,
+						pipe_config->port_clock,
+						pipe_config->lane_count));
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
index 49553e43add22..8b0dfbf06afff 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.h
+++ b/drivers/gpu/drm/i915/display/intel_dp.h
@@ -117,6 +117,8 @@ bool intel_dp_get_colorimetry_status(struct intel_dp *intel_dp);
 int intel_dp_link_required(int pixel_clock, int bpp);
 int intel_dp_effective_data_rate(int pixel_clock, int bpp_x16,
 				 int bw_overhead);
+int intel_dp_max_link_data_rate(struct intel_dp *intel_dp,
+				int max_dprx_rate, int max_dprx_lanes);
 bool intel_dp_can_bigjoiner(struct intel_dp *intel_dp);
 bool intel_dp_needs_vsc_sdp(const struct intel_crtc_state *crtc_state,
 			    const struct drm_connector_state *conn_state);
diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
index cfcc157b7d41d..520393dc8b453 100644
--- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
+++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
@@ -1295,7 +1295,8 @@ intel_dp_mst_mode_valid_ctx(struct drm_connector *connector,
 	max_link_clock = intel_dp_max_link_rate(intel_dp);
 	max_lanes = intel_dp_max_lane_count(intel_dp);
 
-	max_rate = drm_dp_max_dprx_data_rate(max_link_clock, max_lanes);
+	max_rate = intel_dp_max_link_data_rate(intel_dp,
+					       max_link_clock, max_lanes);
 	mode_rate = intel_dp_link_required(mode->clock, min_bpp);
 
 	ret = drm_modeset_lock(&mgr->base.lock, ctx);
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 10/19] drm/i915/dp: Add way to get active pipes with syncing commits
  2024-01-23 10:28 [PATCH 00/19] drm/i915: Add Display Port tunnel BW allocation support Imre Deak
                   ` (8 preceding siblings ...)
  2024-01-23 10:28 ` [PATCH 09/19] drm/i915/dp: Add intel_dp_max_link_data_rate() Imre Deak
@ 2024-01-23 10:28 ` Imre Deak
  2024-01-23 10:28 ` [PATCH 11/19] drm/i915/dp: Add support for DP tunnel BW allocation Imre Deak
                   ` (12 subsequent siblings)
  22 siblings, 0 replies; 61+ messages in thread
From: Imre Deak @ 2024-01-23 10:28 UTC (permalink / raw)
  To: intel-gfx; +Cc: dri-devel

Add a way to get the active pipes through a given DP port by syncing
against a related pending non-blocking commit. At the moment
intel_dp_get_active_pipes() will only try to sync a given pipe, and if
that would block it ignores the pipe. A follow-up change enabling the
DP tunnel BW allocation mode will need to ensure that all active pipes
are returned.

A follow-up patchset will also add a no-sync mode, needed by the
current intel_tc_port_link_reset() user, which at the moment
incorrectly ignores active pipes for which the syncing would block (but
otherwise doesn't require an actual sync).

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
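For illustration only (the calling context is an assumption, not code
from this series), the syncing variant would be used as:

	u8 pipe_mask;
	int ret;

	ret = intel_dp_get_active_pipes(intel_dp, ctx,
					INTEL_DP_GET_PIPES_SYNC,
					&pipe_mask);
	if (ret)
		return ret;
	/* pipe_mask now covers all active pipes on the port's link. */
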
 drivers/gpu/drm/i915/display/intel_dp.c | 25 +++++++++++++++++++++----
 drivers/gpu/drm/i915/display/intel_dp.h |  6 ++++++
 drivers/gpu/drm/i915/display/intel_tc.c |  4 +++-
 3 files changed, 30 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 9cd675c6d0ee8..323475569ee7f 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -5019,6 +5019,7 @@ static bool intel_dp_has_connector(struct intel_dp *intel_dp,
 
 int intel_dp_get_active_pipes(struct intel_dp *intel_dp,
 			      struct drm_modeset_acquire_ctx *ctx,
+			      enum intel_dp_get_pipes_mode mode,
 			      u8 *pipe_mask)
 {
 	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
@@ -5053,9 +5054,23 @@ int intel_dp_get_active_pipes(struct intel_dp *intel_dp,
 		if (!crtc_state->hw.active)
 			continue;
 
-		if (conn_state->commit &&
-		    !try_wait_for_completion(&conn_state->commit->hw_done))
-			continue;
+		if (conn_state->commit) {
+			bool synced;
+
+			switch (mode) {
+			case INTEL_DP_GET_PIPES_TRY_SYNC:
+				if (!try_wait_for_completion(&conn_state->commit->hw_done))
+					continue;
+				break;
+			case INTEL_DP_GET_PIPES_SYNC:
+				synced = wait_for_completion_timeout(&conn_state->commit->hw_done,
+								     msecs_to_jiffies(5000));
+				drm_WARN_ON(&i915->drm, !synced);
+				break;
+			default:
+				MISSING_CASE(mode);
+			}
+		}
 
 		*pipe_mask |= BIT(crtc->pipe);
 	}
@@ -5092,7 +5107,9 @@ int intel_dp_retrain_link(struct intel_encoder *encoder,
 	if (!intel_dp_needs_link_retrain(intel_dp))
 		return 0;
 
-	ret = intel_dp_get_active_pipes(intel_dp, ctx, &pipe_mask);
+	ret = intel_dp_get_active_pipes(intel_dp, ctx,
+					INTEL_DP_GET_PIPES_TRY_SYNC,
+					&pipe_mask);
 	if (ret)
 		return ret;
 
diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
index 8b0dfbf06afff..1a7b87787dfa9 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.h
+++ b/drivers/gpu/drm/i915/display/intel_dp.h
@@ -25,6 +25,11 @@ struct intel_encoder;
 
 struct work_struct;
 
+enum intel_dp_get_pipes_mode {
+	INTEL_DP_GET_PIPES_TRY_SYNC,
+	INTEL_DP_GET_PIPES_SYNC,
+};
+
 struct link_config_limits {
 	int min_rate, max_rate;
 	int min_lane_count, max_lane_count;
@@ -59,6 +64,7 @@ int intel_dp_get_link_train_fallback_values(struct intel_dp *intel_dp,
 					    int link_rate, u8 lane_count);
 int intel_dp_get_active_pipes(struct intel_dp *intel_dp,
 			      struct drm_modeset_acquire_ctx *ctx,
+			      enum intel_dp_get_pipes_mode mode,
 			      u8 *pipe_mask);
 int intel_dp_retrain_link(struct intel_encoder *encoder,
 			  struct drm_modeset_acquire_ctx *ctx);
diff --git a/drivers/gpu/drm/i915/display/intel_tc.c b/drivers/gpu/drm/i915/display/intel_tc.c
index f34743e6eeed2..561d6f97ff189 100644
--- a/drivers/gpu/drm/i915/display/intel_tc.c
+++ b/drivers/gpu/drm/i915/display/intel_tc.c
@@ -1655,7 +1655,9 @@ static int reset_link_commit(struct intel_tc_port *tc,
 	if (ret)
 		return ret;
 
-	ret = intel_dp_get_active_pipes(intel_dp, ctx, &pipe_mask);
+	ret = intel_dp_get_active_pipes(intel_dp, ctx,
+					INTEL_DP_GET_PIPES_TRY_SYNC,
+					&pipe_mask);
 	if (ret)
 		return ret;
 
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 11/19] drm/i915/dp: Add support for DP tunnel BW allocation
  2024-01-23 10:28 [PATCH 00/19] drm/i915: Add Display Port tunnel BW allocation support Imre Deak
                   ` (9 preceding siblings ...)
  2024-01-23 10:28 ` [PATCH 10/19] drm/i915/dp: Add way to get active pipes with syncing commits Imre Deak
@ 2024-01-23 10:28 ` Imre Deak
  2024-02-05 22:47   ` Ville Syrjälä
  2024-02-06 23:08   ` Ville Syrjälä
  2024-01-23 10:28 ` [PATCH 12/19] drm/i915/dp: Add DP tunnel atomic state and check BW limit Imre Deak
                   ` (11 subsequent siblings)
  22 siblings, 2 replies; 61+ messages in thread
From: Imre Deak @ 2024-01-23 10:28 UTC (permalink / raw)
  To: intel-gfx; +Cc: dri-devel

Add support for enabling the DP tunnel BW allocation mode. Follow-up
patches will call the required helpers added here to prepare for a
modeset on a link with DP tunnels, with the last change in the patchset
actually enabling BWA.

With BWA enabled, the driver will expose the full mode list a display
supports, regardless of any BW limitation on a shared (Thunderbolt)
link. Such BW limits will be checked against only during a modeset,
when the driver has full knowledge of each display's BW requirement.

If the link BW changes in a way that a connector's mode list may also
change, userspace will get a hotplug notification for all the
connectors sharing the same link (so it can adjust the mode used for a
display).

The BW limitation can change at any point, asynchronously to modesets
on a given connector, so a modeset can fail even though the atomic
check for it passed. In such scenarios userspace will get a bad link
notification and in response is supposed to retry the modeset.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
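A rough sketch of how a driver is expected to drive the low-level
drm_dp_tunnel helpers underneath the i915 code added here (the calling
contexts, variable names and the use of the pipe as the stream ID are
assumptions for illustration, not code from this series):

	struct drm_dp_tunnel *tunnel;
	u32 failed_streams;
	int err;

	/* At connector detection time: */
	tunnel = drm_dp_tunnel_detect(mgr, &intel_dp->aux);
	if (!IS_ERR(tunnel))
		err = drm_dp_tunnel_enable_bw_alloc(tunnel);

	/* At atomic check time, for each enabled stream: */
	err = drm_dp_tunnel_atomic_set_stream_bw(&state->base, tunnel,
						 crtc->pipe, required_bw);
	if (!err)
		err = drm_dp_tunnel_atomic_check_stream_bws(&state->base,
							    &failed_streams);

	/* At commit time: */
	err = drm_dp_tunnel_alloc_bw(tunnel, required_bw);
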
 drivers/gpu/drm/i915/Kconfig                  |  13 +
 drivers/gpu/drm/i915/Kconfig.debug            |   1 +
 drivers/gpu/drm/i915/Makefile                 |   3 +
 drivers/gpu/drm/i915/display/intel_atomic.c   |   2 +
 .../gpu/drm/i915/display/intel_display_core.h |   1 +
 .../drm/i915/display/intel_display_types.h    |   9 +
 .../gpu/drm/i915/display/intel_dp_tunnel.c    | 642 ++++++++++++++++++
 .../gpu/drm/i915/display/intel_dp_tunnel.h    | 131 ++++
 8 files changed, 802 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/display/intel_dp_tunnel.c
 create mode 100644 drivers/gpu/drm/i915/display/intel_dp_tunnel.h

diff --git a/drivers/gpu/drm/i915/Kconfig b/drivers/gpu/drm/i915/Kconfig
index b5d6e3352071f..4636913c17868 100644
--- a/drivers/gpu/drm/i915/Kconfig
+++ b/drivers/gpu/drm/i915/Kconfig
@@ -155,6 +155,19 @@ config DRM_I915_PXP
 	  protected session and manage the status of the alive software session,
 	  as well as its life cycle.
 
+config DRM_I915_DP_TUNNEL
+	bool "Enable DP tunnel support"
+	depends on DRM_I915
+	select DRM_DISPLAY_DP_TUNNEL
+	default y
+	help
+	  Choose this option to detect DP tunnels and enable the Bandwidth
+	  Allocation mode for such tunnels. This allows using the maximum
+	  resolution allowed by the link BW on all displays sharing the
+	  link BW, for instance on a Thunderbolt link.
+
+	  If in doubt, say "Y".
+
 menu "drm/i915 Debugging"
 depends on DRM_I915
 depends on EXPERT
diff --git a/drivers/gpu/drm/i915/Kconfig.debug b/drivers/gpu/drm/i915/Kconfig.debug
index 5b7162076850c..bc18e2d9ea05d 100644
--- a/drivers/gpu/drm/i915/Kconfig.debug
+++ b/drivers/gpu/drm/i915/Kconfig.debug
@@ -28,6 +28,7 @@ config DRM_I915_DEBUG
 	select STACKDEPOT
 	select STACKTRACE
 	select DRM_DP_AUX_CHARDEV
+	select DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE if DRM_I915_DP_TUNNEL
 	select X86_MSR # used by igt/pm_rpm
 	select DRM_VGEM # used by igt/prime_vgem (dmabuf interop checks)
 	select DRM_DEBUG_MM if DRM=y
diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index c13f14edb5088..3ef6ed41e62b4 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -369,6 +369,9 @@ i915-y += \
 	display/vlv_dsi.o \
 	display/vlv_dsi_pll.o
 
+i915-$(CONFIG_DRM_I915_DP_TUNNEL) += \
+	display/intel_dp_tunnel.o
+
 i915-y += \
 	i915_perf.o
 
diff --git a/drivers/gpu/drm/i915/display/intel_atomic.c b/drivers/gpu/drm/i915/display/intel_atomic.c
index ec0d5168b5035..96ab37e158995 100644
--- a/drivers/gpu/drm/i915/display/intel_atomic.c
+++ b/drivers/gpu/drm/i915/display/intel_atomic.c
@@ -29,6 +29,7 @@
  * See intel_atomic_plane.c for the plane-specific atomic functionality.
  */
 
+#include <drm/display/drm_dp_tunnel.h>
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_fourcc.h>
@@ -38,6 +39,7 @@
 #include "intel_atomic.h"
 #include "intel_cdclk.h"
 #include "intel_display_types.h"
+#include "intel_dp_tunnel.h"
 #include "intel_global_state.h"
 #include "intel_hdcp.h"
 #include "intel_psr.h"
diff --git a/drivers/gpu/drm/i915/display/intel_display_core.h b/drivers/gpu/drm/i915/display/intel_display_core.h
index a90f1aa201be8..0993d25a0a686 100644
--- a/drivers/gpu/drm/i915/display/intel_display_core.h
+++ b/drivers/gpu/drm/i915/display/intel_display_core.h
@@ -522,6 +522,7 @@ struct intel_display {
 	} wq;
 
 	/* Grouping using named structs. Keep sorted. */
+	struct drm_dp_tunnel_mgr *dp_tunnel_mgr;
 	struct intel_audio audio;
 	struct intel_dpll dpll;
 	struct intel_fbc *fbc[I915_MAX_FBCS];
diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h
index ae2e8cff9d691..b79db78b27728 100644
--- a/drivers/gpu/drm/i915/display/intel_display_types.h
+++ b/drivers/gpu/drm/i915/display/intel_display_types.h
@@ -33,6 +33,7 @@
 
 #include <drm/display/drm_dp_dual_mode_helper.h>
 #include <drm/display/drm_dp_mst_helper.h>
+#include <drm/display/drm_dp_tunnel.h>
 #include <drm/display/drm_dsc.h>
 #include <drm/drm_atomic.h>
 #include <drm/drm_crtc.h>
@@ -677,6 +678,8 @@ struct intel_atomic_state {
 
 	struct intel_shared_dpll_state shared_dpll[I915_NUM_PLLS];
 
+	struct intel_dp_tunnel_inherited_state *dp_tunnel_state;
+
 	/*
 	 * Current watermarks can't be trusted during hardware readout, so
 	 * don't bother calculating intermediate watermarks.
@@ -1372,6 +1375,9 @@ struct intel_crtc_state {
 		struct drm_dsc_config config;
 	} dsc;
 
+	/* DP tunnel used for BW allocation. */
+	struct drm_dp_tunnel_ref dp_tunnel_ref;
+
 	/* HSW+ linetime watermarks */
 	u16 linetime;
 	u16 ips_linetime;
@@ -1775,6 +1781,9 @@ struct intel_dp {
 	/* connector directly attached - won't be use for modeset in mst world */
 	struct intel_connector *attached_connector;
 
+	struct drm_dp_tunnel *tunnel;
+	bool tunnel_suspended:1;
+
 	/* mst connector list */
 	struct intel_dp_mst_encoder *mst_encoders[I915_MAX_PIPES];
 	struct drm_dp_mst_topology_mgr mst_mgr;
diff --git a/drivers/gpu/drm/i915/display/intel_dp_tunnel.c b/drivers/gpu/drm/i915/display/intel_dp_tunnel.c
new file mode 100644
index 0000000000000..52dd0108a6c13
--- /dev/null
+++ b/drivers/gpu/drm/i915/display/intel_dp_tunnel.c
@@ -0,0 +1,642 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2023 Intel Corporation
+ */
+
+#include "i915_drv.h"
+
+#include <drm/display/drm_dp_tunnel.h>
+
+#include "intel_atomic.h"
+#include "intel_display_limits.h"
+#include "intel_display_types.h"
+#include "intel_dp.h"
+#include "intel_dp_link_training.h"
+#include "intel_dp_mst.h"
+#include "intel_dp_tunnel.h"
+#include "intel_link_bw.h"
+
+struct intel_dp_tunnel_inherited_state {
+	struct {
+		struct drm_dp_tunnel_ref tunnel_ref;
+	} tunnels[I915_MAX_PIPES];
+};
+
+static void destroy_tunnel(struct intel_dp *intel_dp)
+{
+	drm_dp_tunnel_destroy(intel_dp->tunnel);
+	intel_dp->tunnel = NULL;
+}
+
+void intel_dp_tunnel_disconnect(struct intel_dp *intel_dp)
+{
+	if (!intel_dp->tunnel)
+		return;
+
+	destroy_tunnel(intel_dp);
+}
+
+void intel_dp_tunnel_destroy(struct intel_dp *intel_dp)
+{
+	if (!intel_dp->tunnel)
+		return;
+
+	if (intel_dp_tunnel_bw_alloc_is_enabled(intel_dp))
+		drm_dp_tunnel_disable_bw_alloc(intel_dp->tunnel);
+
+	destroy_tunnel(intel_dp);
+}
+
+static int kbytes_to_mbits(int kbytes)
+{
+	return DIV_ROUND_UP(kbytes * 8, 1000);
+}
+
+static int get_current_link_bw(struct intel_dp *intel_dp,
+			       bool *below_dprx_bw)
+{
+	int rate = intel_dp_max_common_rate(intel_dp);
+	int lane_count = intel_dp_max_common_lane_count(intel_dp);
+	int bw;
+
+	bw = intel_dp_max_link_data_rate(intel_dp, rate, lane_count);
+	*below_dprx_bw = bw < drm_dp_max_dprx_data_rate(rate, lane_count);
+
+	return bw;
+}
+
+static int update_tunnel_state(struct intel_dp *intel_dp)
+{
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
+	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
+	bool old_bw_below_dprx;
+	bool new_bw_below_dprx;
+	int old_bw;
+	int new_bw;
+	int ret;
+
+	old_bw = get_current_link_bw(intel_dp, &old_bw_below_dprx);
+
+	ret = drm_dp_tunnel_update_state(intel_dp->tunnel);
+	if (ret < 0) {
+		drm_dbg_kms(&i915->drm,
+			    "[DPTUN %s][ENCODER:%d:%s] State update failed (err %pe)\n",
+			    drm_dp_tunnel_name(intel_dp->tunnel),
+			    encoder->base.base.id,
+			    encoder->base.name,
+			    ERR_PTR(ret));
+
+		return ret;
+	}
+
+	if (ret == 0 ||
+	    !drm_dp_tunnel_bw_alloc_is_enabled(intel_dp->tunnel))
+		return 0;
+
+	intel_dp_update_sink_caps(intel_dp);
+
+	new_bw = get_current_link_bw(intel_dp, &new_bw_below_dprx);
+
+	/* Suppress the notification if the mode list can't change due to the BW change. */
+	if (old_bw_below_dprx == new_bw_below_dprx &&
+	    !new_bw_below_dprx)
+		return 0;
+
+	drm_dbg_kms(&i915->drm,
+		    "[DPTUN %s][ENCODER:%d:%s] Notify users about BW change: %d -> %d\n",
+		    drm_dp_tunnel_name(intel_dp->tunnel),
+		    encoder->base.base.id,
+		    encoder->base.name,
+		    kbytes_to_mbits(old_bw),
+		    kbytes_to_mbits(new_bw));
+
+	return 1;
+}
+
+static int allocate_initial_tunnel_bw(struct intel_dp *intel_dp,
+				      struct drm_modeset_acquire_ctx *ctx)
+{
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
+	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
+	const struct intel_crtc *crtc;
+	int tunnel_bw = 0;
+	u8 pipe_mask;
+	int err;
+
+	err = intel_dp_get_active_pipes(intel_dp, ctx,
+					INTEL_DP_GET_PIPES_SYNC,
+					&pipe_mask);
+	if (err)
+		return err;
+
+	for_each_intel_crtc_in_pipe_mask(&i915->drm, crtc, pipe_mask) {
+		const struct intel_crtc_state *crtc_state =
+			to_intel_crtc_state(crtc->base.state);
+		int stream_bw = intel_dp_config_required_rate(crtc_state);
+
+		drm_dbg_kms(&i915->drm,
+			    "[DPTUN %s][ENCODER:%d:%s][CRTC:%d:%s] Initial BW for stream %d: %d/%d Mb/s\n",
+			    drm_dp_tunnel_name(intel_dp->tunnel),
+			    encoder->base.base.id,
+			    encoder->base.name,
+			    crtc->base.base.id,
+			    crtc->base.name,
+			    crtc->pipe,
+			    kbytes_to_mbits(stream_bw),
+			    kbytes_to_mbits(tunnel_bw));
+
+		tunnel_bw += stream_bw;
+	}
+
+	err = drm_dp_tunnel_alloc_bw(intel_dp->tunnel, tunnel_bw);
+	if (err) {
+		drm_dbg_kms(&i915->drm,
+			    "[DPTUN %s][ENCODER:%d:%s] Initial BW allocation failed (err %pe)\n",
+			    drm_dp_tunnel_name(intel_dp->tunnel),
+			    encoder->base.base.id,
+			    encoder->base.name,
+			    ERR_PTR(err));
+
+		return err;
+	}
+
+	return update_tunnel_state(intel_dp);
+}
+
+static int detect_new_tunnel(struct intel_dp *intel_dp, struct drm_modeset_acquire_ctx *ctx)
+{
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
+	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
+	struct drm_dp_tunnel *tunnel;
+	int ret;
+
+	tunnel = drm_dp_tunnel_detect(i915->display.dp_tunnel_mgr,
+					&intel_dp->aux);
+	if (IS_ERR(tunnel))
+		return PTR_ERR(tunnel);
+
+	intel_dp->tunnel = tunnel;
+
+	ret = drm_dp_tunnel_enable_bw_alloc(intel_dp->tunnel);
+	if (ret) {
+		if (ret == -EOPNOTSUPP)
+			return 0;
+
+		drm_dbg_kms(&i915->drm,
+			    "[DPTUN %s][ENCODER:%d:%s] Failed to enable BW allocation mode (ret %pe)\n",
+			    drm_dp_tunnel_name(intel_dp->tunnel),
+			    encoder->base.base.id,
+			    encoder->base.name,
+			    ERR_PTR(ret));
+
+		/* Keep the tunnel with BWA disabled */
+		return 0;
+	}
+
+	ret = allocate_initial_tunnel_bw(intel_dp, ctx);
+	if (ret < 0)
+		intel_dp_tunnel_destroy(intel_dp);
+
+	return ret;
+}
+
+int intel_dp_tunnel_detect(struct intel_dp *intel_dp, struct drm_modeset_acquire_ctx *ctx)
+{
+	int ret;
+
+	if (intel_dp_is_edp(intel_dp))
+		return 0;
+
+	if (intel_dp->tunnel) {
+		ret = update_tunnel_state(intel_dp);
+		if (ret >= 0)
+			return ret;
+
+		/* Try to recreate the tunnel after an update error. */
+		intel_dp_tunnel_destroy(intel_dp);
+	}
+
+	ret = detect_new_tunnel(intel_dp, ctx);
+	if (ret >= 0 || ret == -EDEADLK)
+		return ret;
+
+	return ret;
+}
+
+bool intel_dp_tunnel_bw_alloc_is_enabled(struct intel_dp *intel_dp)
+{
+	return intel_dp->tunnel &&
+		drm_dp_tunnel_bw_alloc_is_enabled(intel_dp->tunnel);
+}
+
+void intel_dp_tunnel_suspend(struct intel_dp *intel_dp)
+{
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
+	struct intel_connector *connector = intel_dp->attached_connector;
+	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
+
+	if (!intel_dp_tunnel_bw_alloc_is_enabled(intel_dp))
+		return;
+
+	drm_dbg_kms(&i915->drm, "[DPTUN %s][CONNECTOR:%d:%s][ENCODER:%d:%s] Suspend\n",
+		    drm_dp_tunnel_name(intel_dp->tunnel),
+		    connector->base.base.id, connector->base.name,
+		    encoder->base.base.id, encoder->base.name);
+
+	intel_dp->tunnel_suspended = true;
+}
+
+void intel_dp_tunnel_resume(struct intel_dp *intel_dp, bool dpcd_updated)
+{
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
+	struct intel_connector *connector = intel_dp->attached_connector;
+	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
+	u8 dpcd[DP_RECEIVER_CAP_SIZE];
+	int err = 0;
+
+	if (!intel_dp->tunnel_suspended)
+		return;
+
+	intel_dp->tunnel_suspended = false;
+
+	drm_dbg_kms(&i915->drm, "[DPTUN %s][CONNECTOR:%d:%s][ENCODER:%d:%s] Resume\n",
+		    drm_dp_tunnel_name(intel_dp->tunnel),
+		    connector->base.base.id, connector->base.name,
+		    encoder->base.base.id, encoder->base.name);
+
+	/* DPRX caps read required by tunnel detection */
+	if (!dpcd_updated)
+		err = intel_dp_read_dprx_caps(intel_dp, dpcd);
+
+	if (err)
+		drm_dp_tunnel_set_io_error(intel_dp->tunnel);
+	else
+		err = drm_dp_tunnel_enable_bw_alloc(intel_dp->tunnel);
+		/* TODO: allocate initial BW */
+
+	if (!err)
+		return;
+
+	drm_dbg_kms(&i915->drm,
+		    "[DPTUN %s][CONNECTOR:%d:%s][ENCODER:%d:%s] Tunnel can't be resumed, will drop and redect it (err %pe)\n",
+		    drm_dp_tunnel_name(intel_dp->tunnel),
+		    connector->base.base.id, connector->base.name,
+		    encoder->base.base.id, encoder->base.name,
+		    ERR_PTR(err));
+}
+
+static struct drm_dp_tunnel *
+get_inherited_tunnel_state(struct intel_atomic_state *state,
+			   const struct intel_crtc *crtc)
+{
+	if (!state->dp_tunnel_state)
+		return NULL;
+
+	return state->dp_tunnel_state->tunnels[crtc->pipe].tunnel_ref.tunnel;
+}
+
+static int
+add_inherited_tunnel_state(struct intel_atomic_state *state,
+			   struct drm_dp_tunnel *tunnel,
+			   const struct intel_crtc *crtc)
+{
+	struct drm_i915_private *i915 = to_i915(state->base.dev);
+	struct drm_dp_tunnel *old_tunnel;
+
+	old_tunnel = get_inherited_tunnel_state(state, crtc);
+	if (old_tunnel) {
+		drm_WARN_ON(&i915->drm, old_tunnel != tunnel);
+		return 0;
+	}
+
+	if (!state->dp_tunnel_state) {
+		state->dp_tunnel_state = kzalloc(sizeof(*state->dp_tunnel_state), GFP_KERNEL);
+		if (!state->dp_tunnel_state)
+			return -ENOMEM;
+	}
+
+	drm_dp_tunnel_ref_get(tunnel,
+			      &state->dp_tunnel_state->tunnels[crtc->pipe].tunnel_ref);
+
+	return 0;
+}
+
+static int check_inherited_tunnel_state(struct intel_atomic_state *state,
+					struct intel_dp *intel_dp,
+					const struct intel_digital_connector_state *old_conn_state)
+{
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
+	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
+	const struct intel_connector *connector =
+		to_intel_connector(old_conn_state->base.connector);
+	struct intel_crtc *old_crtc;
+	const struct intel_crtc_state *old_crtc_state;
+
+	/*
+	 * If a BWA tunnel gets detected only after the corresponding
+	 * connector got enabled already, either without a BWA tunnel or
+	 * with a different BWA tunnel that was removed meanwhile, the old
+	 * CRTC state won't contain the state of the current tunnel. Such an
+	 * inherited tunnel still has BW reserved for it, which needs to be
+	 * released, so add its state separately, only to this atomic state.
+	 */
+	if (!intel_dp_tunnel_bw_alloc_is_enabled(intel_dp))
+		return 0;
+
+	if (!old_conn_state->base.crtc)
+		return 0;
+
+	old_crtc = to_intel_crtc(old_conn_state->base.crtc);
+	old_crtc_state = intel_atomic_get_old_crtc_state(state, old_crtc);
+
+	if (!old_crtc_state->hw.active ||
+	    old_crtc_state->dp_tunnel_ref.tunnel == intel_dp->tunnel)
+		return 0;
+
+	drm_dbg_kms(&i915->drm,
+		    "[DPTUN %s][CONNECTOR:%d:%s][ENCODER:%d:%s][CRTC:%d:%s] Adding state for inherited tunnel %p\n",
+		    drm_dp_tunnel_name(intel_dp->tunnel),
+		    connector->base.base.id,
+		    connector->base.name,
+		    encoder->base.base.id,
+		    encoder->base.name,
+		    old_crtc->base.base.id,
+		    old_crtc->base.name,
+		    intel_dp->tunnel);
+
+	return add_inherited_tunnel_state(state, intel_dp->tunnel, old_crtc);
+}
+
+void intel_dp_tunnel_atomic_cleanup_inherited_state(struct intel_atomic_state *state)
+{
+	enum pipe pipe;
+
+	if (!state->dp_tunnel_state)
+		return;
+
+	for_each_pipe(to_i915(state->base.dev), pipe)
+		if (state->dp_tunnel_state->tunnels[pipe].tunnel_ref.tunnel)
+			drm_dp_tunnel_ref_put(&state->dp_tunnel_state->tunnels[pipe].tunnel_ref);
+
+	kfree(state->dp_tunnel_state);
+	state->dp_tunnel_state = NULL;
+}
+
+static int intel_dp_tunnel_atomic_add_group_state(struct intel_atomic_state *state,
+						  struct drm_dp_tunnel *tunnel)
+{
+	struct drm_i915_private *i915 = to_i915(state->base.dev);
+	u32 pipe_mask;
+	int err;
+
+	if (!tunnel)
+		return 0;
+
+	err = drm_dp_tunnel_atomic_get_group_streams_in_state(&state->base,
+							      tunnel, &pipe_mask);
+	if (err)
+		return err;
+
+	drm_WARN_ON(&i915->drm, pipe_mask & ~((1 << I915_MAX_PIPES) - 1));
+
+	return intel_modeset_pipes_in_mask_early(state, "DPTUN", pipe_mask);
+}
+
+int intel_dp_tunnel_atomic_add_state_for_crtc(struct intel_atomic_state *state,
+					      struct intel_crtc *crtc)
+{
+	const struct intel_crtc_state *new_crtc_state =
+		intel_atomic_get_new_crtc_state(state, crtc);
+	const struct drm_dp_tunnel_state *tunnel_state;
+	struct drm_dp_tunnel *tunnel = new_crtc_state->dp_tunnel_ref.tunnel;
+
+	if (!tunnel)
+		return 0;
+
+	tunnel_state = drm_dp_tunnel_atomic_get_state(&state->base, tunnel);
+	if (IS_ERR(tunnel_state))
+		return PTR_ERR(tunnel_state);
+
+	return 0;
+}
+
+static int check_group_state(struct intel_atomic_state *state,
+			     struct intel_dp *intel_dp,
+			     const struct intel_connector *connector,
+			     struct intel_crtc *crtc)
+{
+	struct drm_i915_private *i915 = to_i915(state->base.dev);
+	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
+	const struct intel_crtc_state *crtc_state;
+
+	crtc_state = intel_atomic_get_new_crtc_state(state, crtc);
+
+	if (!crtc_state->dp_tunnel_ref.tunnel)
+		return 0;
+
+	drm_dbg_kms(&i915->drm,
+		    "[DPTUN %s][CONNECTOR:%d:%s][ENCODER:%d:%s][CRTC:%d:%s] Adding group state for tunnel %p\n",
+		    drm_dp_tunnel_name(intel_dp->tunnel),
+		    connector->base.base.id,
+		    connector->base.name,
+		    encoder->base.base.id,
+		    encoder->base.name,
+		    crtc->base.base.id,
+		    crtc->base.name,
+		    intel_dp->tunnel);
+
+	return intel_dp_tunnel_atomic_add_group_state(state, crtc_state->dp_tunnel_ref.tunnel);
+}
+
+int intel_dp_tunnel_atomic_check_state(struct intel_atomic_state *state,
+				       struct intel_dp *intel_dp,
+				       struct intel_connector *connector)
+{
+	const struct intel_digital_connector_state *old_conn_state =
+		intel_atomic_get_old_connector_state(state, connector);
+	const struct intel_digital_connector_state *new_conn_state =
+		intel_atomic_get_new_connector_state(state, connector);
+	int err;
+
+	if (old_conn_state->base.crtc) {
+		err = check_group_state(state, intel_dp, connector,
+					to_intel_crtc(old_conn_state->base.crtc));
+		if (err)
+			return err;
+	}
+
+	if (new_conn_state->base.crtc &&
+	    new_conn_state->base.crtc != old_conn_state->base.crtc) {
+		err = check_group_state(state, intel_dp, connector,
+					to_intel_crtc(new_conn_state->base.crtc));
+		if (err)
+			return err;
+	}
+
+	return check_inherited_tunnel_state(state, intel_dp, old_conn_state);
+}
+
+void intel_dp_tunnel_atomic_compute_stream_bw(struct intel_atomic_state *state,
+					      struct intel_dp *intel_dp,
+					      const struct intel_connector *connector,
+					      struct intel_crtc_state *crtc_state)
+{
+	struct drm_i915_private *i915 = to_i915(state->base.dev);
+	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
+	const struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
+	int required_rate = intel_dp_config_required_rate(crtc_state);
+
+	if (!intel_dp_tunnel_bw_alloc_is_enabled(intel_dp))
+		return;
+
+	drm_dbg_kms(&i915->drm,
+		    "[DPTUN %s][CONNECTOR:%d:%s][ENCODER:%d:%s][CRTC:%d:%s] Stream %d required BW %d Mb/s\n",
+		    drm_dp_tunnel_name(intel_dp->tunnel),
+		    connector->base.base.id,
+		    connector->base.name,
+		    encoder->base.base.id,
+		    encoder->base.name,
+		    crtc->base.base.id,
+		    crtc->base.name,
+		    crtc->pipe,
+		    kbytes_to_mbits(required_rate));
+
+	drm_dp_tunnel_atomic_set_stream_bw(&state->base, intel_dp->tunnel,
+					   crtc->pipe, required_rate);
+
+	drm_dp_tunnel_ref_get(intel_dp->tunnel,
+			      &crtc_state->dp_tunnel_ref);
+}
+
+/**
+ * intel_dp_tunnel_atomic_check_link - Check the DP tunnel atomic state
+ * @state: intel atomic state
+ * @limits: link BW limits
+ *
+ * Check the link configuration for all DP tunnels in @state. If the
+ * configuration is invalid @limits will be updated if possible to
+ * reduce the total BW, after which the configuration for all CRTCs in
+ * @state must be recomputed with the updated @limits.
+ *
+ * Returns:
+ *   - 0 if the configuration is valid
+ *   - %-EAGAIN, if the configuration is invalid and @limits got updated
+ *     with fallback values with which the configuration of all CRTCs in
+ *     @state must be recomputed
+ *   - Other negative error, if the configuration is invalid without a
+ *     fallback possibility, or the check failed for another reason
+ */
+int intel_dp_tunnel_atomic_check_link(struct intel_atomic_state *state,
+				      struct intel_link_bw_limits *limits)
+{
+	u32 failed_stream_mask;
+	int err;
+
+	err = drm_dp_tunnel_atomic_check_stream_bws(&state->base,
+						    &failed_stream_mask);
+	if (err != -ENOSPC)
+		return err;
+
+	err = intel_link_bw_reduce_bpp(state, limits,
+				       failed_stream_mask, "DP tunnel link BW");
+
+	return err ? : -EAGAIN;
+}
+
+void intel_dp_tunnel_atomic_alloc_bw(struct intel_atomic_state *state,
+				     struct intel_encoder *encoder,
+				     const struct intel_crtc_state *new_crtc_state,
+				     const struct drm_connector_state *new_conn_state)
+{
+	struct drm_i915_private *i915 = to_i915(state->base.dev);
+	struct drm_dp_tunnel *tunnel = new_crtc_state->dp_tunnel_ref.tunnel;
+	const struct drm_dp_tunnel_state *new_tunnel_state;
+	int err;
+
+	if (!tunnel)
+		return;
+
+	new_tunnel_state = drm_dp_tunnel_atomic_get_new_state(&state->base, tunnel);
+
+	err = drm_dp_tunnel_alloc_bw(tunnel,
+				     drm_dp_tunnel_atomic_get_tunnel_bw(new_tunnel_state));
+	if (!err)
+		return;
+
+	if (!intel_digital_port_connected(encoder))
+		return;
+
+	drm_dbg_kms(&i915->drm,
+		    "[DPTUN %s][ENCODER:%d:%s] BW allocation failed on a connected sink (err %pe)\n",
+		    drm_dp_tunnel_name(tunnel),
+		    encoder->base.base.id,
+		    encoder->base.name,
+		    ERR_PTR(err));
+
+	intel_dp_queue_modeset_retry_for_link(state, encoder, new_crtc_state, new_conn_state);
+}
+
+void intel_dp_tunnel_atomic_free_bw(struct intel_atomic_state *state,
+				    struct intel_encoder *encoder,
+				    const struct intel_crtc_state *old_crtc_state,
+				    const struct drm_connector_state *old_conn_state)
+{
+	struct drm_i915_private *i915 = to_i915(state->base.dev);
+	struct intel_crtc *old_crtc = to_intel_crtc(old_crtc_state->uapi.crtc);
+	struct drm_dp_tunnel *tunnel;
+	int err;
+
+	tunnel = get_inherited_tunnel_state(state, old_crtc);
+	if (!tunnel)
+		tunnel = old_crtc_state->dp_tunnel_ref.tunnel;
+
+	if (!tunnel)
+		return;
+
+	err = drm_dp_tunnel_alloc_bw(tunnel, 0);
+	if (!err)
+		return;
+
+	if (!intel_digital_port_connected(encoder))
+		return;
+
+	drm_dbg_kms(&i915->drm,
+		    "[DPTUN %s][ENCODER:%d:%s] BW freeing failed on a connected sink (err %pe)\n",
+		    drm_dp_tunnel_name(tunnel),
+		    encoder->base.base.id,
+		    encoder->base.name,
+		    ERR_PTR(err));
+
+	intel_dp_queue_modeset_retry_for_link(state, encoder, old_crtc_state, old_conn_state);
+}
+
+int intel_dp_tunnel_mgr_init(struct drm_i915_private *i915)
+{
+	struct drm_dp_tunnel_mgr *tunnel_mgr;
+	struct drm_connector_list_iter connector_list_iter;
+	struct intel_connector *connector;
+	int dp_connectors = 0;
+
+	drm_connector_list_iter_begin(&i915->drm, &connector_list_iter);
+	for_each_intel_connector_iter(connector, &connector_list_iter) {
+		if (connector->base.connector_type != DRM_MODE_CONNECTOR_DisplayPort)
+			continue;
+
+		dp_connectors++;
+	}
+	drm_connector_list_iter_end(&connector_list_iter);
+
+	tunnel_mgr = drm_dp_tunnel_mgr_create(&i915->drm, dp_connectors);
+	if (IS_ERR(tunnel_mgr))
+		return PTR_ERR(tunnel_mgr);
+
+	i915->display.dp_tunnel_mgr = tunnel_mgr;
+
+	return 0;
+}
+
+void intel_dp_tunnel_mgr_cleanup(struct drm_i915_private *i915)
+{
+	drm_dp_tunnel_mgr_destroy(i915->display.dp_tunnel_mgr);
+	i915->display.dp_tunnel_mgr = NULL;
+}
diff --git a/drivers/gpu/drm/i915/display/intel_dp_tunnel.h b/drivers/gpu/drm/i915/display/intel_dp_tunnel.h
new file mode 100644
index 0000000000000..bedba3ba9ad8d
--- /dev/null
+++ b/drivers/gpu/drm/i915/display/intel_dp_tunnel.h
@@ -0,0 +1,131 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2023 Intel Corporation
+ */
+
+#ifndef __INTEL_DP_TUNNEL_H__
+#define __INTEL_DP_TUNNEL_H__
+
+#include <linux/errno.h>
+#include <linux/types.h>
+
+struct drm_i915_private;
+struct drm_connector_state;
+struct drm_modeset_acquire_ctx;
+
+struct intel_atomic_state;
+struct intel_connector;
+struct intel_crtc;
+struct intel_crtc_state;
+struct intel_dp;
+struct intel_encoder;
+struct intel_link_bw_limits;
+
+#if defined(CONFIG_DRM_I915_DP_TUNNEL) && defined(I915)
+
+int intel_dp_tunnel_detect(struct intel_dp *intel_dp, struct drm_modeset_acquire_ctx *ctx);
+void intel_dp_tunnel_disconnect(struct intel_dp *intel_dp);
+void intel_dp_tunnel_destroy(struct intel_dp *intel_dp);
+void intel_dp_tunnel_resume(struct intel_dp *intel_dp, bool dpcd_updated);
+void intel_dp_tunnel_suspend(struct intel_dp *intel_dp);
+
+bool intel_dp_tunnel_bw_alloc_is_enabled(struct intel_dp *intel_dp);
+
+void
+intel_dp_tunnel_atomic_cleanup_inherited_state(struct intel_atomic_state *state);
+
+void intel_dp_tunnel_atomic_compute_stream_bw(struct intel_atomic_state *state,
+					      struct intel_dp *intel_dp,
+					      const struct intel_connector *connector,
+					      struct intel_crtc_state *crtc_state);
+
+int intel_dp_tunnel_atomic_add_state_for_crtc(struct intel_atomic_state *state,
+					      struct intel_crtc *crtc);
+int intel_dp_tunnel_atomic_check_link(struct intel_atomic_state *state,
+				      struct intel_link_bw_limits *limits);
+int intel_dp_tunnel_atomic_check_state(struct intel_atomic_state *state,
+				       struct intel_dp *intel_dp,
+				       struct intel_connector *connector);
+void intel_dp_tunnel_atomic_alloc_bw(struct intel_atomic_state *state,
+				     struct intel_encoder *encoder,
+				     const struct intel_crtc_state *new_crtc_state,
+				     const struct drm_connector_state *new_conn_state);
+void intel_dp_tunnel_atomic_free_bw(struct intel_atomic_state *state,
+				    struct intel_encoder *encoder,
+				    const struct intel_crtc_state *old_crtc_state,
+				    const struct drm_connector_state *old_conn_state);
+
+int intel_dp_tunnel_mgr_init(struct drm_i915_private *i915);
+void intel_dp_tunnel_mgr_cleanup(struct drm_i915_private *i915);
+
+#else
+
+static inline int
+intel_dp_tunnel_detect(struct intel_dp *intel_dp, struct drm_modeset_acquire_ctx *ctx)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline void intel_dp_tunnel_disconnect(struct intel_dp *intel_dp) {}
+static inline void intel_dp_tunnel_destroy(struct intel_dp *intel_dp) {}
+static inline void intel_dp_tunnel_resume(struct intel_dp *intel_dp, bool dpcd_updated) {}
+static inline void intel_dp_tunnel_suspend(struct intel_dp *intel_dp) {}
+
+static inline bool intel_dp_tunnel_bw_alloc_is_enabled(struct intel_dp *intel_dp)
+{
+	return false;
+}
+
+static inline void
+intel_dp_tunnel_atomic_cleanup_inherited_state(struct intel_atomic_state *state) {}
+
+static inline void
+intel_dp_tunnel_atomic_compute_stream_bw(struct intel_atomic_state *state,
+					 struct intel_dp *intel_dp,
+					 const struct intel_connector *connector,
+					 struct intel_crtc_state *crtc_state) {}
+
+static inline int
+intel_dp_tunnel_atomic_add_state_for_crtc(struct intel_atomic_state *state,
+					  struct intel_crtc *crtc)
+{
+	return 0;
+}
+
+static inline int
+intel_dp_tunnel_atomic_check_link(struct intel_atomic_state *state,
+				  struct intel_link_bw_limits *limits)
+{
+	return 0;
+}
+
+static inline int
+intel_dp_tunnel_atomic_check_state(struct intel_atomic_state *state,
+				   struct intel_dp *intel_dp,
+				   struct intel_connector *connector)
+{
+	return 0;
+}
+
+static inline void
+intel_dp_tunnel_atomic_alloc_bw(struct intel_atomic_state *state,
+				struct intel_encoder *encoder,
+				const struct intel_crtc_state *new_crtc_state,
+				const struct drm_connector_state *new_conn_state) {}
+static inline void
+intel_dp_tunnel_atomic_free_bw(struct intel_atomic_state *state,
+			       struct intel_encoder *encoder,
+			       const struct intel_crtc_state *old_crtc_state,
+			       const struct drm_connector_state *old_conn_state) {}
+
+static inline int
+intel_dp_tunnel_mgr_init(struct drm_i915_private *i915)
+{
+	return 0;
+}
+
+static inline void intel_dp_tunnel_mgr_cleanup(struct drm_i915_private *i915) {}
+
+#endif /* CONFIG_DRM_I915_DP_TUNNEL */
+
+#endif /* __INTEL_DP_TUNNEL_H__ */
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 12/19] drm/i915/dp: Add DP tunnel atomic state and check BW limit
  2024-01-23 10:28 [PATCH 00/19] drm/i915: Add Display Port tunnel BW allocation support Imre Deak
                   ` (10 preceding siblings ...)
  2024-01-23 10:28 ` [PATCH 11/19] drm/i915/dp: Add support for DP tunnel BW allocation Imre Deak
@ 2024-01-23 10:28 ` Imre Deak
  2024-02-05 16:11   ` Ville Syrjälä
  2024-01-23 10:28 ` [PATCH 13/19] drm/i915/dp: Account for tunnel BW limit in intel_dp_max_link_data_rate() Imre Deak
                   ` (10 subsequent siblings)
  22 siblings, 1 reply; 61+ messages in thread
From: Imre Deak @ 2024-01-23 10:28 UTC (permalink / raw)
  To: intel-gfx; +Cc: dri-devel

Add the atomic state required during a modeset to enable the DP tunnel
BW allocation mode on links where such a tunnel was detected.
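
The tunnel reference carried in the CRTC state follows the usual
get/put pattern; in outline (a simplified sketch of the hunks below,
not a literal copy):

	/* duplicating a CRTC state: take an extra reference on its tunnel */
	if (crtc_state->dp_tunnel_ref.tunnel)
		drm_dp_tunnel_ref_get(old_crtc_state->dp_tunnel_ref.tunnel,
				      &crtc_state->dp_tunnel_ref);

	/* destroying a CRTC state: drop the reference it holds */
	if (crtc_state->dp_tunnel_ref.tunnel)
		drm_dp_tunnel_ref_put(&crtc_state->dp_tunnel_ref);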

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_atomic.c  |  8 ++++++++
 drivers/gpu/drm/i915/display/intel_display.c | 19 +++++++++++++++++++
 drivers/gpu/drm/i915/display/intel_link_bw.c |  5 +++++
 3 files changed, 32 insertions(+)

diff --git a/drivers/gpu/drm/i915/display/intel_atomic.c b/drivers/gpu/drm/i915/display/intel_atomic.c
index 96ab37e158995..4236740ede9ed 100644
--- a/drivers/gpu/drm/i915/display/intel_atomic.c
+++ b/drivers/gpu/drm/i915/display/intel_atomic.c
@@ -260,6 +260,10 @@ intel_crtc_duplicate_state(struct drm_crtc *crtc)
 	if (crtc_state->post_csc_lut)
 		drm_property_blob_get(crtc_state->post_csc_lut);
 
+	if (crtc_state->dp_tunnel_ref.tunnel)
+		drm_dp_tunnel_ref_get(old_crtc_state->dp_tunnel_ref.tunnel,
+					&crtc_state->dp_tunnel_ref);
+
 	crtc_state->update_pipe = false;
 	crtc_state->update_m_n = false;
 	crtc_state->update_lrr = false;
@@ -311,6 +315,8 @@ intel_crtc_destroy_state(struct drm_crtc *crtc,
 
 	__drm_atomic_helper_crtc_destroy_state(&crtc_state->uapi);
 	intel_crtc_free_hw_state(crtc_state);
+	if (crtc_state->dp_tunnel_ref.tunnel)
+		drm_dp_tunnel_ref_put(&crtc_state->dp_tunnel_ref);
 	kfree(crtc_state);
 }
 
@@ -346,6 +352,8 @@ void intel_atomic_state_clear(struct drm_atomic_state *s)
 	/* state->internal not reset on purpose */
 
 	state->dpll_set = state->modeset = false;
+
+	intel_dp_tunnel_atomic_cleanup_inherited_state(state);
 }
 
 struct intel_crtc_state *
diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index b9f985a5e705b..46b27a32c8640 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -33,6 +33,7 @@
 #include <linux/string_helpers.h>
 
 #include <drm/display/drm_dp_helper.h>
+#include <drm/display/drm_dp_tunnel.h>
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_atomic_uapi.h>
@@ -73,6 +74,7 @@
 #include "intel_dp.h"
 #include "intel_dp_link_training.h"
 #include "intel_dp_mst.h"
+#include "intel_dp_tunnel.h"
 #include "intel_dpll.h"
 #include "intel_dpll_mgr.h"
 #include "intel_dpt.h"
@@ -4490,6 +4492,8 @@ copy_bigjoiner_crtc_state_modeset(struct intel_atomic_state *state,
 	saved_state->crc_enabled = slave_crtc_state->crc_enabled;
 
 	intel_crtc_free_hw_state(slave_crtc_state);
+	if (slave_crtc_state->dp_tunnel_ref.tunnel)
+		drm_dp_tunnel_ref_put(&slave_crtc_state->dp_tunnel_ref);
 	memcpy(slave_crtc_state, saved_state, sizeof(*slave_crtc_state));
 	kfree(saved_state);
 
@@ -4505,6 +4509,10 @@ copy_bigjoiner_crtc_state_modeset(struct intel_atomic_state *state,
 		      &master_crtc_state->hw.adjusted_mode);
 	slave_crtc_state->hw.scaling_filter = master_crtc_state->hw.scaling_filter;
 
+	if (master_crtc_state->dp_tunnel_ref.tunnel)
+		drm_dp_tunnel_ref_get(master_crtc_state->dp_tunnel_ref.tunnel,
+					&slave_crtc_state->dp_tunnel_ref);
+
 	copy_bigjoiner_crtc_state_nomodeset(state, slave_crtc);
 
 	slave_crtc_state->uapi.mode_changed = master_crtc_state->uapi.mode_changed;
@@ -4533,6 +4541,13 @@ intel_crtc_prepare_cleared_state(struct intel_atomic_state *state,
 	/* free the old crtc_state->hw members */
 	intel_crtc_free_hw_state(crtc_state);
 
+	if (crtc_state->dp_tunnel_ref.tunnel) {
+		drm_dp_tunnel_atomic_set_stream_bw(&state->base,
+						   crtc_state->dp_tunnel_ref.tunnel,
+						   crtc->pipe, 0);
+		drm_dp_tunnel_ref_put(&crtc_state->dp_tunnel_ref);
+	}
+
 	/* FIXME: before the switch to atomic started, a new pipe_config was
 	 * kzalloc'd. Code that depends on any field being zero should be
 	 * fixed, so that the crtc_state can be safely duplicated. For now,
@@ -5374,6 +5389,10 @@ static int intel_modeset_pipe(struct intel_atomic_state *state,
 	if (ret)
 		return ret;
 
+	ret = intel_dp_tunnel_atomic_add_state_for_crtc(state, crtc);
+	if (ret)
+		return ret;
+
 	ret = intel_dp_mst_add_topology_state_for_crtc(state, crtc);
 	if (ret)
 		return ret;
diff --git a/drivers/gpu/drm/i915/display/intel_link_bw.c b/drivers/gpu/drm/i915/display/intel_link_bw.c
index 9c6d35a405a18..5b539ba996ddf 100644
--- a/drivers/gpu/drm/i915/display/intel_link_bw.c
+++ b/drivers/gpu/drm/i915/display/intel_link_bw.c
@@ -8,6 +8,7 @@
 #include "intel_atomic.h"
 #include "intel_display_types.h"
 #include "intel_dp_mst.h"
+#include "intel_dp_tunnel.h"
 #include "intel_fdi.h"
 #include "intel_link_bw.h"
 
@@ -149,6 +150,10 @@ static int check_all_link_config(struct intel_atomic_state *state,
 	if (ret)
 		return ret;
 
+	ret = intel_dp_tunnel_atomic_check_link(state, limits);
+	if (ret)
+		return ret;
+
 	ret = intel_fdi_atomic_check_link(state, limits);
 	if (ret)
 		return ret;
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 13/19] drm/i915/dp: Account for tunnel BW limit in intel_dp_max_link_data_rate()
  2024-01-23 10:28 [PATCH 00/19] drm/i915: Add Display Port tunnel BW allocation support Imre Deak
                   ` (11 preceding siblings ...)
  2024-01-23 10:28 ` [PATCH 12/19] drm/i915/dp: Add DP tunnel atomic state and check BW limit Imre Deak
@ 2024-01-23 10:28 ` Imre Deak
  2024-02-06 20:42   ` Shankar, Uma
  2024-01-23 10:28 ` [PATCH 14/19] drm/i915/dp: Compute DP tunnel BW during encoder state computation Imre Deak
                   ` (9 subsequent siblings)
  22 siblings, 1 reply; 61+ messages in thread
From: Imre Deak @ 2024-01-23 10:28 UTC (permalink / raw)
  To: intel-gfx; +Cc: dri-devel

Take any link BW limitation into account in
intel_dp_max_link_data_rate(). Such a limitation can be due to multiple
displays on (Thunderbolt) links with DP tunnels sharing the link BW.
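
For illustration, with made-up numbers (assumptions for this example
only, not values from the patch): if the DPRX link parameters would
allow ~2,000,000 kB/s but the BW manager has granted only ~1,200,000
kB/s to this tunnel, the function now returns the smaller value:

	/* Illustrative sketch, all numbers are assumed. */
	int dprx_bw = 2000000;	 /* kB/s allowed by the DPRX link parameters */
	int tunnel_bw = 1200000; /* kB/s currently available through the tunnel */
	int usable_bw = min(dprx_bw, tunnel_bw); /* -> 1200000 kB/s */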

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 32 +++++++++++++++++++++----
 1 file changed, 28 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 323475569ee7f..78dfe8be6031d 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -63,6 +63,7 @@
 #include "intel_dp_hdcp.h"
 #include "intel_dp_link_training.h"
 #include "intel_dp_mst.h"
+#include "intel_dp_tunnel.h"
 #include "intel_dpio_phy.h"
 #include "intel_dpll.h"
 #include "intel_fifo_underrun.h"
@@ -152,6 +153,22 @@ int intel_dp_link_symbol_clock(int rate)
 	return DIV_ROUND_CLOSEST(rate * 10, intel_dp_link_symbol_size(rate));
 }
 
+static int max_dprx_rate(struct intel_dp *intel_dp)
+{
+	if (intel_dp_tunnel_bw_alloc_is_enabled(intel_dp))
+		return drm_dp_tunnel_max_dprx_rate(intel_dp->tunnel);
+
+	return drm_dp_bw_code_to_link_rate(intel_dp->dpcd[DP_MAX_LINK_RATE]);
+}
+
+static int max_dprx_lane_count(struct intel_dp *intel_dp)
+{
+	if (intel_dp_tunnel_bw_alloc_is_enabled(intel_dp))
+		return drm_dp_tunnel_max_dprx_lane_count(intel_dp->tunnel);
+
+	return drm_dp_max_lane_count(intel_dp->dpcd);
+}
+
 static void intel_dp_set_default_sink_rates(struct intel_dp *intel_dp)
 {
 	intel_dp->sink_rates[0] = 162000;
@@ -180,7 +197,7 @@ static void intel_dp_set_dpcd_sink_rates(struct intel_dp *intel_dp)
 	/*
 	 * Sink rates for 8b/10b.
 	 */
-	max_rate = drm_dp_bw_code_to_link_rate(intel_dp->dpcd[DP_MAX_LINK_RATE]);
+	max_rate = max_dprx_rate(intel_dp);
 	max_lttpr_rate = drm_dp_lttpr_max_link_rate(intel_dp->lttpr_common_caps);
 	if (max_lttpr_rate)
 		max_rate = min(max_rate, max_lttpr_rate);
@@ -259,7 +276,7 @@ static void intel_dp_set_max_sink_lane_count(struct intel_dp *intel_dp)
 	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
 	struct intel_encoder *encoder = &intel_dig_port->base;
 
-	intel_dp->max_sink_lane_count = drm_dp_max_lane_count(intel_dp->dpcd);
+	intel_dp->max_sink_lane_count = max_dprx_lane_count(intel_dp);
 
 	switch (intel_dp->max_sink_lane_count) {
 	case 1:
@@ -389,14 +406,21 @@ int intel_dp_effective_data_rate(int pixel_clock, int bpp_x16,
  * @max_dprx_rate: Maximum data rate of the DPRX
  * @max_dprx_lanes: Maximum lane count of the DPRX
  *
- * Calculate the maximum data rate for the provided link parameters.
+ * Calculate the maximum data rate for the provided link parameters taking into
+ * account any BW limitations by a DP tunnel attached to @intel_dp.
  *
  * Returns the maximum data rate in kBps units.
  */
 int intel_dp_max_link_data_rate(struct intel_dp *intel_dp,
 				int max_dprx_rate, int max_dprx_lanes)
 {
-	return drm_dp_max_dprx_data_rate(max_dprx_rate, max_dprx_lanes);
+	int max_rate = drm_dp_max_dprx_data_rate(max_dprx_rate, max_dprx_lanes);
+
+	if (intel_dp_tunnel_bw_alloc_is_enabled(intel_dp))
+		max_rate = min(max_rate,
+			       drm_dp_tunnel_available_bw(intel_dp->tunnel));
+
+	return max_rate;
 }
 
 bool intel_dp_can_bigjoiner(struct intel_dp *intel_dp)
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 14/19] drm/i915/dp: Compute DP tunnel BW during encoder state computation
  2024-01-23 10:28 [PATCH 00/19] drm/i915: Add Display Port tunnel BW allocation support Imre Deak
                   ` (12 preceding siblings ...)
  2024-01-23 10:28 ` [PATCH 13/19] drm/i915/dp: Account for tunnel BW limit in intel_dp_max_link_data_rate() Imre Deak
@ 2024-01-23 10:28 ` Imre Deak
  2024-02-06 20:44   ` Shankar, Uma
  2024-02-06 23:25   ` Ville Syrjälä
  2024-01-23 10:28 ` [PATCH 15/19] drm/i915/dp: Allocate/free DP tunnel BW in the encoder enable/disable hooks Imre Deak
                   ` (8 subsequent siblings)
  22 siblings, 2 replies; 61+ messages in thread
From: Imre Deak @ 2024-01-23 10:28 UTC (permalink / raw)
  To: intel-gfx; +Cc: dri-devel

Compute the BW required through a DP tunnel on links where such a
tunnel was detected and add the corresponding atomic state during a
modeset.
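
In outline, the flow added here is (a simplified sketch; see the hunks
below and the helpers added earlier in this series):

	/* in the encoder's compute_config hook, once the pipe config is known */
	intel_dp_tunnel_atomic_compute_stream_bw(state, intel_dp, connector,
						 pipe_config);

	/*
	 * which stores the rate computed by intel_dp_config_required_rate()
	 * in the tunnel's atomic state for this pipe
	 */
	drm_dp_tunnel_atomic_set_stream_bw(&state->base, intel_dp->tunnel,
					   crtc->pipe, required_rate);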

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c     | 16 +++++++++++++---
 drivers/gpu/drm/i915/display/intel_dp_mst.c | 13 +++++++++++++
 2 files changed, 26 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 78dfe8be6031d..6968fdb7ffcdf 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -2880,6 +2880,7 @@ intel_dp_compute_config(struct intel_encoder *encoder,
 			struct drm_connector_state *conn_state)
 {
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
+	struct intel_atomic_state *state = to_intel_atomic_state(conn_state->state);
 	struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
 	struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
 	const struct drm_display_mode *fixed_mode;
@@ -2980,6 +2981,9 @@ intel_dp_compute_config(struct intel_encoder *encoder,
 	intel_dp_compute_vsc_sdp(intel_dp, pipe_config, conn_state);
 	intel_dp_compute_hdr_metadata_infoframe_sdp(intel_dp, pipe_config, conn_state);
 
+	intel_dp_tunnel_atomic_compute_stream_bw(state, intel_dp, connector,
+						 pipe_config);
+
 	return 0;
 }
 
@@ -6087,6 +6091,15 @@ static int intel_dp_connector_atomic_check(struct drm_connector *conn,
 			return ret;
 	}
 
+	if (!intel_connector_needs_modeset(state, conn))
+		return 0;
+
+	ret = intel_dp_tunnel_atomic_check_state(state,
+						 intel_dp,
+						 intel_conn);
+	if (ret)
+		return ret;
+
 	/*
 	 * We don't enable port sync on BDW due to missing w/as and
 	 * due to not having adjusted the modeset sequence appropriately.
@@ -6094,9 +6107,6 @@ static int intel_dp_connector_atomic_check(struct drm_connector *conn,
 	if (DISPLAY_VER(dev_priv) < 9)
 		return 0;
 
-	if (!intel_connector_needs_modeset(state, conn))
-		return 0;
-
 	if (conn->has_tile) {
 		ret = intel_modeset_tile_group(state, conn->tile_group->id);
 		if (ret)
diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
index 520393dc8b453..cbfab3173b9ef 100644
--- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
+++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
@@ -42,6 +42,7 @@
 #include "intel_dp.h"
 #include "intel_dp_hdcp.h"
 #include "intel_dp_mst.h"
+#include "intel_dp_tunnel.h"
 #include "intel_dpio_phy.h"
 #include "intel_hdcp.h"
 #include "intel_hotplug.h"
@@ -523,6 +524,7 @@ static int intel_dp_mst_compute_config(struct intel_encoder *encoder,
 				       struct drm_connector_state *conn_state)
 {
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
+	struct intel_atomic_state *state = to_intel_atomic_state(conn_state->state);
 	struct intel_dp_mst_encoder *intel_mst = enc_to_mst(encoder);
 	struct intel_dp *intel_dp = &intel_mst->primary->dp;
 	const struct intel_connector *connector =
@@ -619,6 +621,9 @@ static int intel_dp_mst_compute_config(struct intel_encoder *encoder,
 
 	intel_psr_compute_config(intel_dp, pipe_config, conn_state);
 
+	intel_dp_tunnel_atomic_compute_stream_bw(state, intel_dp, connector,
+						 pipe_config);
+
 	return 0;
 }
 
@@ -876,6 +881,14 @@ intel_dp_mst_atomic_check(struct drm_connector *connector,
 	if (ret)
 		return ret;
 
+	if (intel_connector_needs_modeset(state, connector)) {
+		ret = intel_dp_tunnel_atomic_check_state(state,
+							 intel_connector->mst_port,
+							 intel_connector);
+		if (ret)
+			return ret;
+	}
+
 	return drm_dp_atomic_release_time_slots(&state->base,
 						&intel_connector->mst_port->mst_mgr,
 						intel_connector->port);
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 15/19] drm/i915/dp: Allocate/free DP tunnel BW in the encoder enable/disable hooks
  2024-01-23 10:28 [PATCH 00/19] drm/i915: Add Display Port tunnel BW allocation support Imre Deak
                   ` (13 preceding siblings ...)
  2024-01-23 10:28 ` [PATCH 14/19] drm/i915/dp: Compute DP tunnel BW during encoder state computation Imre Deak
@ 2024-01-23 10:28 ` Imre Deak
  2024-02-06 20:45   ` Shankar, Uma
  2024-01-23 10:28 ` [PATCH 16/19] drm/i915/dp: Handle DP tunnel IRQs Imre Deak
                   ` (7 subsequent siblings)
  22 siblings, 1 reply; 61+ messages in thread
From: Imre Deak @ 2024-01-23 10:28 UTC (permalink / raw)
  To: intel-gfx; +Cc: dri-devel

Allocate and free the DP tunnel BW required by a stream while
enabling/disabling the stream during a modeset.
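
Note that on the tunnel API freeing is just an allocation of 0 kB/s; in
outline (a simplified sketch based on the helpers added earlier in this
series):

	/* enable path: request the BW the new state needs on this tunnel */
	err = drm_dp_tunnel_alloc_bw(tunnel,
				     drm_dp_tunnel_atomic_get_tunnel_bw(new_tunnel_state));

	/* disable path: release the stream's BW by requesting 0 */
	err = drm_dp_tunnel_alloc_bw(tunnel, 0);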

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/g4x_dp.c    | 28 ++++++++++++++++++++++++
 drivers/gpu/drm/i915/display/intel_ddi.c |  7 ++++++
 2 files changed, 35 insertions(+)

diff --git a/drivers/gpu/drm/i915/display/g4x_dp.c b/drivers/gpu/drm/i915/display/g4x_dp.c
index dfe0b07a122d1..1e498e1510adf 100644
--- a/drivers/gpu/drm/i915/display/g4x_dp.c
+++ b/drivers/gpu/drm/i915/display/g4x_dp.c
@@ -19,6 +19,7 @@
 #include "intel_dp.h"
 #include "intel_dp_aux.h"
 #include "intel_dp_link_training.h"
+#include "intel_dp_tunnel.h"
 #include "intel_dpio_phy.h"
 #include "intel_fifo_underrun.h"
 #include "intel_hdmi.h"
@@ -729,6 +730,24 @@ static void vlv_enable_dp(struct intel_atomic_state *state,
 	encoder->audio_enable(encoder, pipe_config, conn_state);
 }
 
+static void g4x_dp_pre_pll_enable(struct intel_atomic_state *state,
+				  struct intel_encoder *encoder,
+				  const struct intel_crtc_state *new_crtc_state,
+				  const struct drm_connector_state *new_conn_state)
+{
+	intel_dp_tunnel_atomic_alloc_bw(state, encoder,
+					new_crtc_state, new_conn_state);
+}
+
+static void g4x_dp_post_pll_disable(struct intel_atomic_state *state,
+				    struct intel_encoder *encoder,
+				    const struct intel_crtc_state *old_crtc_state,
+				    const struct drm_connector_state *old_conn_state)
+{
+	intel_dp_tunnel_atomic_free_bw(state, encoder,
+				       old_crtc_state, old_conn_state);
+}
+
 static void g4x_pre_enable_dp(struct intel_atomic_state *state,
 			      struct intel_encoder *encoder,
 			      const struct intel_crtc_state *pipe_config,
@@ -762,6 +781,8 @@ static void vlv_dp_pre_pll_enable(struct intel_atomic_state *state,
 	intel_dp_prepare(encoder, pipe_config);
 
 	vlv_phy_pre_pll_enable(encoder, pipe_config);
+
+	g4x_dp_pre_pll_enable(state, encoder, pipe_config, conn_state);
 }
 
 static void chv_pre_enable_dp(struct intel_atomic_state *state,
@@ -785,6 +806,8 @@ static void chv_dp_pre_pll_enable(struct intel_atomic_state *state,
 	intel_dp_prepare(encoder, pipe_config);
 
 	chv_phy_pre_pll_enable(encoder, pipe_config);
+
+	g4x_dp_pre_pll_enable(state, encoder, pipe_config, conn_state);
 }
 
 static void chv_dp_post_pll_disable(struct intel_atomic_state *state,
@@ -792,6 +815,8 @@ static void chv_dp_post_pll_disable(struct intel_atomic_state *state,
 				    const struct intel_crtc_state *old_crtc_state,
 				    const struct drm_connector_state *old_conn_state)
 {
+	g4x_dp_post_pll_disable(state, encoder, old_crtc_state, old_conn_state);
+
 	chv_phy_post_pll_disable(encoder, old_crtc_state);
 }
 
@@ -1349,11 +1374,14 @@ bool g4x_dp_init(struct drm_i915_private *dev_priv,
 		intel_encoder->enable = vlv_enable_dp;
 		intel_encoder->disable = vlv_disable_dp;
 		intel_encoder->post_disable = vlv_post_disable_dp;
+		intel_encoder->post_pll_disable = g4x_dp_post_pll_disable;
 	} else {
+		intel_encoder->pre_pll_enable = g4x_dp_pre_pll_enable;
 		intel_encoder->pre_enable = g4x_pre_enable_dp;
 		intel_encoder->enable = g4x_enable_dp;
 		intel_encoder->disable = g4x_disable_dp;
 		intel_encoder->post_disable = g4x_post_disable_dp;
+		intel_encoder->post_pll_disable = g4x_dp_post_pll_disable;
 	}
 	intel_encoder->audio_enable = g4x_dp_audio_enable;
 	intel_encoder->audio_disable = g4x_dp_audio_disable;
diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
index 922194b957be2..aa6e7da08fbce 100644
--- a/drivers/gpu/drm/i915/display/intel_ddi.c
+++ b/drivers/gpu/drm/i915/display/intel_ddi.c
@@ -54,6 +54,7 @@
 #include "intel_dp_aux.h"
 #include "intel_dp_link_training.h"
 #include "intel_dp_mst.h"
+#include "intel_dp_tunnel.h"
 #include "intel_dpio_phy.h"
 #include "intel_dsi.h"
 #include "intel_fdi.h"
@@ -3141,6 +3142,9 @@ static void intel_ddi_post_pll_disable(struct intel_atomic_state *state,
 
 	main_link_aux_power_domain_put(dig_port, old_crtc_state);
 
+	intel_dp_tunnel_atomic_free_bw(state, encoder,
+				       old_crtc_state, old_conn_state);
+
 	if (is_tc_port)
 		intel_tc_port_put_link(dig_port);
 }
@@ -3480,6 +3484,9 @@ intel_ddi_pre_pll_enable(struct intel_atomic_state *state,
 		intel_ddi_update_active_dpll(state, encoder, master_crtc);
 	}
 
+	intel_dp_tunnel_atomic_alloc_bw(state, encoder,
+					crtc_state, conn_state);
+
 	main_link_aux_power_domain_get(dig_port, crtc_state);
 
 	if (is_tc_port && !intel_tc_port_in_tbt_alt_mode(dig_port))
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 16/19] drm/i915/dp: Handle DP tunnel IRQs
  2024-01-23 10:28 [PATCH 00/19] drm/i915: Add Display Port tunnel BW allocation support Imre Deak
                   ` (14 preceding siblings ...)
  2024-01-23 10:28 ` [PATCH 15/19] drm/i915/dp: Allocate/free DP tunnel BW in the encoder enable/disable hooks Imre Deak
@ 2024-01-23 10:28 ` Imre Deak
  2024-01-23 10:28 ` [PATCH 17/19] drm/i915/dp: Call intel_dp_sync_state() always for DDI DP encoders Imre Deak
                   ` (6 subsequent siblings)
  22 siblings, 0 replies; 61+ messages in thread
From: Imre Deak @ 2024-01-23 10:28 UTC (permalink / raw)
  To: intel-gfx; +Cc: dri-devel

Handle the DP tunnel IRQs that a sink (or rather a BW management
component, such as the Thunderbolt Connection Manager) raises to signal
the completion of a BW request by the driver, or to signal any state
change related to the link BW.
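
For the SST case the handling boils down to reading the link service
IRQ vector and letting the tunnel manager act on the tunneling IRQ; a
condensed sketch of the hunk below (the actual code also acks the
vector afterwards):

	u8 val;

	if (drm_dp_dpcd_readb(&intel_dp->aux,
			      DP_LINK_SERVICE_IRQ_VECTOR_ESI0, &val) == 1 &&
	    (val & DP_TUNNELING_IRQ) &&
	    drm_dp_tunnel_handle_irq(i915->display.dp_tunnel_mgr, &intel_dp->aux))
		reprobe_needed = true;	/* reprobed from the hotplug work */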

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 37 +++++++++++++++++++------
 include/drm/display/drm_dp.h            |  1 +
 2 files changed, 29 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 6968fdb7ffcdf..8ebfb039000f6 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -4911,13 +4911,15 @@ static bool intel_dp_mst_link_status(struct intel_dp *intel_dp)
  * - %true if pending interrupts were serviced (or no interrupts were
  *   pending) w/o detecting an error condition.
  * - %false if an error condition - like AUX failure or a loss of link - is
- *   detected, which needs servicing from the hotplug work.
+ *   detected, or another condition - like a DP tunnel BW state change - needs
+ *   servicing from the hotplug work.
  */
 static bool
 intel_dp_check_mst_status(struct intel_dp *intel_dp)
 {
 	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
 	bool link_ok = true;
+	bool reprobe_needed = false;
 
 	drm_WARN_ON_ONCE(&i915->drm, intel_dp->active_mst_links < 0);
 
@@ -4944,6 +4946,13 @@ intel_dp_check_mst_status(struct intel_dp *intel_dp)
 
 		intel_dp_mst_hpd_irq(intel_dp, esi, ack);
 
+		if (esi[3] & DP_TUNNELING_IRQ) {
+			if (drm_dp_tunnel_handle_irq(i915->display.dp_tunnel_mgr,
+						     &intel_dp->aux))
+				reprobe_needed = true;
+			ack[3] |= DP_TUNNELING_IRQ;
+		}
+
 		if (!memchr_inv(ack, 0, sizeof(ack)))
 			break;
 
@@ -4954,7 +4963,7 @@ intel_dp_check_mst_status(struct intel_dp *intel_dp)
 			drm_dp_mst_hpd_irq_send_new_request(&intel_dp->mst_mgr);
 	}
 
-	return link_ok;
+	return link_ok && !reprobe_needed;
 }
 
 static void
@@ -5330,23 +5339,32 @@ static void intel_dp_check_device_service_irq(struct intel_dp *intel_dp)
 		drm_dbg_kms(&i915->drm, "Sink specific irq unhandled\n");
 }
 
-static void intel_dp_check_link_service_irq(struct intel_dp *intel_dp)
+static bool intel_dp_check_link_service_irq(struct intel_dp *intel_dp)
 {
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
+	bool reprobe_needed = false;
 	u8 val;
 
 	if (intel_dp->dpcd[DP_DPCD_REV] < 0x11)
-		return;
+		return false;
 
 	if (drm_dp_dpcd_readb(&intel_dp->aux,
 			      DP_LINK_SERVICE_IRQ_VECTOR_ESI0, &val) != 1 || !val)
-		return;
+		return false;
+
+	if ((val & DP_TUNNELING_IRQ) &&
+	    drm_dp_tunnel_handle_irq(i915->display.dp_tunnel_mgr,
+				     &intel_dp->aux))
+		reprobe_needed = true;
 
 	if (drm_dp_dpcd_writeb(&intel_dp->aux,
 			       DP_LINK_SERVICE_IRQ_VECTOR_ESI0, val) != 1)
-		return;
+		return reprobe_needed;
 
 	if (val & HDMI_LINK_STATUS_CHANGED)
 		intel_dp_handle_hdmi_link_status_change(intel_dp);
+
+	return reprobe_needed;
 }
 
 /*
@@ -5367,6 +5385,7 @@ intel_dp_short_pulse(struct intel_dp *intel_dp)
 {
 	struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
 	u8 old_sink_count = intel_dp->sink_count;
+	bool reprobe_needed = false;
 	bool ret;
 
 	/*
@@ -5389,7 +5408,7 @@ intel_dp_short_pulse(struct intel_dp *intel_dp)
 	}
 
 	intel_dp_check_device_service_irq(intel_dp);
-	intel_dp_check_link_service_irq(intel_dp);
+	reprobe_needed = intel_dp_check_link_service_irq(intel_dp);
 
 	/* Handle CEC interrupts, if any */
 	drm_dp_cec_irq(&intel_dp->aux);
@@ -5416,10 +5435,10 @@ intel_dp_short_pulse(struct intel_dp *intel_dp)
 		 * FIXME get rid of the ad-hoc phy test modeset code
 		 * and properly incorporate it into the normal modeset.
 		 */
-		return false;
+		reprobe_needed = true;
 	}
 
-	return true;
+	return !reprobe_needed;
 }
 
 /* XXX this is probably wrong for multiple downstream ports */
diff --git a/include/drm/display/drm_dp.h b/include/drm/display/drm_dp.h
index 8bfd5d007be8d..4891bd916d26a 100644
--- a/include/drm/display/drm_dp.h
+++ b/include/drm/display/drm_dp.h
@@ -1081,6 +1081,7 @@
 # define STREAM_STATUS_CHANGED               (1 << 2)
 # define HDMI_LINK_STATUS_CHANGED            (1 << 3)
 # define CONNECTED_OFF_ENTRY_REQUESTED       (1 << 4)
+# define DP_TUNNELING_IRQ                    (1 << 5)
 
 #define DP_PSR_ERROR_STATUS                 0x2006  /* XXX 1.2? */
 # define DP_PSR_LINK_CRC_ERROR              (1 << 0)
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 17/19] drm/i915/dp: Call intel_dp_sync_state() always for DDI DP encoders
  2024-01-23 10:28 [PATCH 00/19] drm/i915: Add Display Port tunnel BW allocation support Imre Deak
                   ` (15 preceding siblings ...)
  2024-01-23 10:28 ` [PATCH 16/19] drm/i915/dp: Handle DP tunnel IRQs Imre Deak
@ 2024-01-23 10:28 ` Imre Deak
  2024-02-06 20:46   ` Shankar, Uma
  2024-01-23 10:28 ` [PATCH 18/19] drm/i915/dp: Suspend/resume DP tunnels Imre Deak
                   ` (5 subsequent siblings)
  22 siblings, 1 reply; 61+ messages in thread
From: Imre Deak @ 2024-01-23 10:28 UTC (permalink / raw)
  To: intel-gfx; +Cc: dri-devel

A follow-up change will need to resume DP tunnels during system resume,
so call intel_dp_sync_state() always for DDI DP encoders (even if they
have no CRTC attached), so that this function can resume the tunnels
for all DP connectors.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_ddi.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
index aa6e7da08fbce..1e26e62b82d48 100644
--- a/drivers/gpu/drm/i915/display/intel_ddi.c
+++ b/drivers/gpu/drm/i915/display/intel_ddi.c
@@ -4131,7 +4131,7 @@ static void intel_ddi_sync_state(struct intel_encoder *encoder,
 		intel_tc_port_sanitize_mode(enc_to_dig_port(encoder),
 					    crtc_state);
 
-	if (crtc_state && intel_crtc_has_dp_encoder(crtc_state))
+	if (intel_encoder_is_dp(encoder))
 		intel_dp_sync_state(encoder, crtc_state);
 }
 
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 18/19] drm/i915/dp: Suspend/resume DP tunnels
  2024-01-23 10:28 [PATCH 00/19] drm/i915: Add Display Port tunnel BW allocation support Imre Deak
                   ` (16 preceding siblings ...)
  2024-01-23 10:28 ` [PATCH 17/19] drm/i915/dp: Call intel_dp_sync_state() always for DDI DP encoders Imre Deak
@ 2024-01-23 10:28 ` Imre Deak
  2024-01-31 16:18   ` Ville Syrjälä
  2024-01-23 10:28 ` [PATCH 19/19] drm/i915/dp: Enable DP tunnel BW allocation mode Imre Deak
                   ` (4 subsequent siblings)
  22 siblings, 1 reply; 61+ messages in thread
From: Imre Deak @ 2024-01-23 10:28 UTC (permalink / raw)
  To: intel-gfx; +Cc: dri-devel

Suspend and resume DP tunnels during system suspend/resume, disabling
the BW allocation mode during suspend and re-enabling it after resume.
This reflects the link's BW management component (the Thunderbolt CM)
disabling BWA during suspend. Before any BW requests the driver must
read the sink's DPRX capabilities, since the BW manager requires this
information and snoops for it on AUX, so ensure this read takes place.
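
A condensed sketch of the resume ordering, based on
intel_dp_tunnel_resume() added earlier in this series:

	/* refresh the DPRX caps first, unless detect already re-read DPCD */
	if (!dpcd_updated)
		err = intel_dp_read_dprx_caps(intel_dp, dpcd);

	if (err)
		drm_dp_tunnel_set_io_error(intel_dp->tunnel);
	else
		err = drm_dp_tunnel_enable_bw_alloc(intel_dp->tunnel);

	/* on any failure the tunnel is dropped and redetected later */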

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 8ebfb039000f6..bc138a54f8d7b 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -36,6 +36,7 @@
 #include <asm/byteorder.h>
 
 #include <drm/display/drm_dp_helper.h>
+#include <drm/display/drm_dp_tunnel.h>
 #include <drm/display/drm_dsc_helper.h>
 #include <drm/display/drm_hdmi_helper.h>
 #include <drm/drm_atomic_helper.h>
@@ -3320,18 +3321,21 @@ void intel_dp_sync_state(struct intel_encoder *encoder,
 			 const struct intel_crtc_state *crtc_state)
 {
 	struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
-
-	if (!crtc_state)
-		return;
+	bool dpcd_updated = false;
 
 	/*
 	 * Don't clobber DPCD if it's been already read out during output
 	 * setup (eDP) or detect.
 	 */
-	if (intel_dp->dpcd[DP_DPCD_REV] == 0)
+	if (crtc_state && intel_dp->dpcd[DP_DPCD_REV] == 0) {
 		intel_dp_get_dpcd(intel_dp);
+		dpcd_updated = true;
+	}
 
-	intel_dp_reset_max_link_params(intel_dp);
+	intel_dp_tunnel_resume(intel_dp, dpcd_updated);
+
+	if (crtc_state)
+		intel_dp_reset_max_link_params(intel_dp);
 }
 
 bool intel_dp_initial_fastset_check(struct intel_encoder *encoder,
@@ -5973,6 +5977,8 @@ void intel_dp_encoder_suspend(struct intel_encoder *intel_encoder)
 	struct intel_dp *intel_dp = enc_to_intel_dp(intel_encoder);
 
 	intel_pps_vdd_off_sync(intel_dp);
+
+	intel_dp_tunnel_suspend(intel_dp);
 }
 
 void intel_dp_encoder_shutdown(struct intel_encoder *intel_encoder)
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 19/19] drm/i915/dp: Enable DP tunnel BW allocation mode
  2024-01-23 10:28 [PATCH 00/19] drm/i915: Add Display Port tunnel BW allocation support Imre Deak
                   ` (17 preceding siblings ...)
  2024-01-23 10:28 ` [PATCH 18/19] drm/i915/dp: Suspend/resume DP tunnels Imre Deak
@ 2024-01-23 10:28 ` Imre Deak
  2024-01-23 18:52 ` ✗ Fi.CI.CHECKPATCH: warning for drm/i915: Add Display Port tunnel BW allocation support Patchwork
                   ` (3 subsequent siblings)
  22 siblings, 0 replies; 61+ messages in thread
From: Imre Deak @ 2024-01-23 10:28 UTC (permalink / raw)
  To: intel-gfx; +Cc: dri-devel

Detect DP tunnels and enable the BW allocation mode on them. Send a
hotplug notification to userspace in response to a BW change.
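
In outline, detection from intel_dp_detect() works as sketched below
(simplified from the hunks that follow):

	ret = intel_dp_tunnel_detect(intel_dp, ctx);
	if (ret == -EDEADLK)
		return ret;	/* modeset lock contention, let the caller retry */

	if (ret == 1)		/* the tunnel BW changed, the mode list may have too */
		intel_connector->base.epoch_counter++;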

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 .../drm/i915/display/intel_display_driver.c   | 20 +++++++++++++++----
 drivers/gpu/drm/i915/display/intel_dp.c       | 14 +++++++++++--
 2 files changed, 28 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display_driver.c b/drivers/gpu/drm/i915/display/intel_display_driver.c
index ecf9cb74734b6..62987b8427f7b 100644
--- a/drivers/gpu/drm/i915/display/intel_display_driver.c
+++ b/drivers/gpu/drm/i915/display/intel_display_driver.c
@@ -35,6 +35,7 @@
 #include "intel_dkl_phy.h"
 #include "intel_dmc.h"
 #include "intel_dp.h"
+#include "intel_dp_tunnel.h"
 #include "intel_dpll.h"
 #include "intel_dpll_mgr.h"
 #include "intel_fb.h"
@@ -435,10 +436,8 @@ int intel_display_driver_probe_nogem(struct drm_i915_private *i915)
 
 	for_each_pipe(i915, pipe) {
 		ret = intel_crtc_init(i915, pipe);
-		if (ret) {
-			intel_mode_config_cleanup(i915);
-			return ret;
-		}
+		if (ret)
+			goto err_mode_config;
 	}
 
 	intel_plane_possible_crtcs_init(i915);
@@ -460,6 +459,10 @@ int intel_display_driver_probe_nogem(struct drm_i915_private *i915)
 	intel_vga_disable(i915);
 	intel_setup_outputs(i915);
 
+	ret = intel_dp_tunnel_mgr_init(i915);
+	if (ret)
+		goto err_hdcp;
+
 	intel_display_driver_disable_user_access(i915);
 
 	drm_modeset_lock_all(dev);
@@ -482,6 +485,13 @@ int intel_display_driver_probe_nogem(struct drm_i915_private *i915)
 		ilk_wm_sanitize(i915);
 
 	return 0;
+
+err_hdcp:
+	intel_hdcp_component_fini(i915);
+err_mode_config:
+	intel_mode_config_cleanup(i915);
+
+	return ret;
 }
 
 /* part #3: call after gem init */
@@ -598,6 +608,8 @@ void intel_display_driver_remove_noirq(struct drm_i915_private *i915)
 
 	intel_mode_config_cleanup(i915);
 
+	intel_dp_tunnel_mgr_cleanup(i915);
+
 	intel_overlay_cleanup(i915);
 
 	intel_gmbus_teardown(i915);
diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index bc138a54f8d7b..6133266d78276 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -5752,6 +5752,7 @@ intel_dp_detect(struct drm_connector *connector,
 	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
 	struct intel_encoder *encoder = &dig_port->base;
 	enum drm_connector_status status;
+	int ret;
 
 	drm_dbg_kms(&dev_priv->drm, "[CONNECTOR:%d:%s]\n",
 		    connector->base.id, connector->name);
@@ -5787,9 +5788,18 @@ intel_dp_detect(struct drm_connector *connector,
 							intel_dp->is_mst);
 		}
 
+		intel_dp_tunnel_disconnect(intel_dp);
+
 		goto out;
 	}
 
+	ret = intel_dp_tunnel_detect(intel_dp, ctx);
+	if (ret == -EDEADLK)
+		return ret;
+
+	if (ret == 1)
+		intel_connector->base.epoch_counter++;
+
 	intel_dp_detect_dsc_caps(intel_dp, intel_connector);
 
 	intel_dp_configure_mst(intel_dp);
@@ -5820,8 +5830,6 @@ intel_dp_detect(struct drm_connector *connector,
 	 * with an IRQ_HPD, so force a link status check.
 	 */
 	if (!intel_dp_is_edp(intel_dp)) {
-		int ret;
-
 		ret = intel_dp_retrain_link(encoder, ctx);
 		if (ret)
 			return ret;
@@ -5961,6 +5969,8 @@ void intel_dp_encoder_flush_work(struct drm_encoder *encoder)
 
 	intel_dp_mst_encoder_cleanup(dig_port);
 
+	intel_dp_tunnel_destroy(intel_dp);
+
 	intel_pps_vdd_off_sync(intel_dp);
 
 	/*
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* ✗ Fi.CI.CHECKPATCH: warning for drm/i915: Add Display Port tunnel BW allocation support
  2024-01-23 10:28 [PATCH 00/19] drm/i915: Add Display Port tunnel BW allocation support Imre Deak
                   ` (18 preceding siblings ...)
  2024-01-23 10:28 ` [PATCH 19/19] drm/i915/dp: Enable DP tunnel BW allocation mode Imre Deak
@ 2024-01-23 18:52 ` Patchwork
  2024-01-23 18:52 ` ✗ Fi.CI.SPARSE: " Patchwork
                   ` (2 subsequent siblings)
  22 siblings, 0 replies; 61+ messages in thread
From: Patchwork @ 2024-01-23 18:52 UTC (permalink / raw)
  To: Imre Deak; +Cc: intel-gfx

== Series Details ==

Series: drm/i915: Add Display Port tunnel BW allocation support
URL   : https://patchwork.freedesktop.org/series/129082/
State : warning

== Summary ==

Error: dim checkpatch failed
26db77226283 drm/dp: Add drm_dp_max_dprx_data_rate()
12093a39bda8 drm/dp: Add support for DP tunneling
Traceback (most recent call last):
  File "scripts/spdxcheck.py", line 6, in <module>
    from ply import lex, yacc
ModuleNotFoundError: No module named 'ply'
Traceback (most recent call last):
  File "scripts/spdxcheck.py", line 6, in <module>
    from ply import lex, yacc
ModuleNotFoundError: No module named 'ply'
-:79: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#79: 
new file mode 100644

-:109: ERROR:COMPLEX_MACRO: Macros with complex values should be enclosed in parentheses
#109: FILE: drivers/gpu/drm/display/drm_dp_tunnel.c:26:
+#define for_each_new_group_in_state(__state, __new_group_state, __i) \
+	for ((__i) = 0; \
+	     (__i) < (__state)->num_private_objs; \
+	     (__i)++) \
+		for_each_if ((__state)->private_objs[__i].ptr && \
+			     is_dp_tunnel_private_obj((__state)->private_objs[__i].ptr) && \
+			     ((__new_group_state) = \
+				to_group_state((__state)->private_objs[__i].new_state), 1))

-:109: CHECK:MACRO_ARG_REUSE: Macro argument reuse '__state' - possible side-effects?
#109: FILE: drivers/gpu/drm/display/drm_dp_tunnel.c:26:
+#define for_each_new_group_in_state(__state, __new_group_state, __i) \
+	for ((__i) = 0; \
+	     (__i) < (__state)->num_private_objs; \
+	     (__i)++) \
+		for_each_if ((__state)->private_objs[__i].ptr && \
+			     is_dp_tunnel_private_obj((__state)->private_objs[__i].ptr) && \
+			     ((__new_group_state) = \
+				to_group_state((__state)->private_objs[__i].new_state), 1))

-:109: CHECK:MACRO_ARG_REUSE: Macro argument reuse '__i' - possible side-effects?
#109: FILE: drivers/gpu/drm/display/drm_dp_tunnel.c:26:
+#define for_each_new_group_in_state(__state, __new_group_state, __i) \
+	for ((__i) = 0; \
+	     (__i) < (__state)->num_private_objs; \
+	     (__i)++) \
+		for_each_if ((__state)->private_objs[__i].ptr && \
+			     is_dp_tunnel_private_obj((__state)->private_objs[__i].ptr) && \
+			     ((__new_group_state) = \
+				to_group_state((__state)->private_objs[__i].new_state), 1))

-:113: WARNING:SPACING: space prohibited between function name and open parenthesis '('
#113: FILE: drivers/gpu/drm/display/drm_dp_tunnel.c:30:
+		for_each_if ((__state)->private_objs[__i].ptr && \

-:118: ERROR:COMPLEX_MACRO: Macros with complex values should be enclosed in parentheses
#118: FILE: drivers/gpu/drm/display/drm_dp_tunnel.c:35:
+#define for_each_old_group_in_state(__state, __old_group_state, __i) \
+	for ((__i) = 0; \
+	     (__i) < (__state)->num_private_objs; \
+	     (__i)++) \
+		for_each_if ((__state)->private_objs[__i].ptr && \
+			     is_dp_tunnel_private_obj((__state)->private_objs[__i].ptr) && \
+			     ((__old_group_state) = \
+				to_group_state((__state)->private_objs[__i].old_state), 1))

-:118: CHECK:MACRO_ARG_REUSE: Macro argument reuse '__state' - possible side-effects?
#118: FILE: drivers/gpu/drm/display/drm_dp_tunnel.c:35:
+#define for_each_old_group_in_state(__state, __old_group_state, __i) \
+	for ((__i) = 0; \
+	     (__i) < (__state)->num_private_objs; \
+	     (__i)++) \
+		for_each_if ((__state)->private_objs[__i].ptr && \
+			     is_dp_tunnel_private_obj((__state)->private_objs[__i].ptr) && \
+			     ((__old_group_state) = \
+				to_group_state((__state)->private_objs[__i].old_state), 1))

-:118: CHECK:MACRO_ARG_REUSE: Macro argument reuse '__i' - possible side-effects?
#118: FILE: drivers/gpu/drm/display/drm_dp_tunnel.c:35:
+#define for_each_old_group_in_state(__state, __old_group_state, __i) \
+	for ((__i) = 0; \
+	     (__i) < (__state)->num_private_objs; \
+	     (__i)++) \
+		for_each_if ((__state)->private_objs[__i].ptr && \
+			     is_dp_tunnel_private_obj((__state)->private_objs[__i].ptr) && \
+			     ((__old_group_state) = \
+				to_group_state((__state)->private_objs[__i].old_state), 1))

-:122: WARNING:SPACING: space prohibited between function name and open parenthesis '('
#122: FILE: drivers/gpu/drm/display/drm_dp_tunnel.c:39:
+		for_each_if ((__state)->private_objs[__i].ptr && \

-:140: CHECK:MACRO_ARG_REUSE: Macro argument reuse '__bw' - possible side-effects?
#140: FILE: drivers/gpu/drm/display/drm_dp_tunnel.c:57:
+#define DPTUN_BW_ARG(__bw) ((__bw) < 0 ? (__bw) : kbytes_to_mbits(__bw))
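
For context on the MACRO_ARG_REUSE check above: an argument that appears more than once in the expansion is evaluated more than once, which matters if it has side effects. A standalone illustration follows; it uses a toy macro of the same shape, not the flagged DPTUN_BW_ARG() itself.

#include <stdio.h>

/* Same shape as the flagged macro: the argument appears three times
 * in the expansion, so it can be evaluated more than once. */
#define BW_ARG(bw) ((bw) < 0 ? (bw) : (bw) / 8)

static int calls;

static int next_bw(void)
{
	calls++;		/* side effect */
	return 1600;
}

int main(void)
{
	int mbits = BW_ARG(next_bw());	/* next_bw() runs twice, not once */

	printf("mbits=%d calls=%d\n", mbits, calls);	/* calls == 2 */
	return 0;
}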

-:142: CHECK:MACRO_ARG_REUSE: Macro argument reuse '__tunnel' - possible side-effects?
#142: FILE: drivers/gpu/drm/display/drm_dp_tunnel.c:59:
+#define __tun_prn(__tunnel, __level, __type, __fmt, ...) \
+	drm_##__level##__type((__tunnel)->group->mgr->dev, \
+			      "[DPTUN %s][%s] " __fmt, \
+			      drm_dp_tunnel_name(__tunnel), \
+			      (__tunnel)->aux->name, ## \
+			      __VA_ARGS__)

-:152: CHECK:MACRO_ARG_REUSE: Macro argument reuse '__tunnel' - possible side-effects?
#152: FILE: drivers/gpu/drm/display/drm_dp_tunnel.c:69:
+#define tun_dbg_stat(__tunnel, __err, __fmt, ...) do { \
+	if (__err) \
+		__tun_prn(__tunnel, dbg, _kms, __fmt " (Failed, err: %pe)\n", \
+			  ## __VA_ARGS__, ERR_PTR(__err)); \
+	else \
+		__tun_prn(__tunnel, dbg, _kms, __fmt " (Ok)\n", \
+			  ## __VA_ARGS__); \
+} while (0)

-:152: CHECK:MACRO_ARG_REUSE: Macro argument reuse '__err' - possible side-effects?
#152: FILE: drivers/gpu/drm/display/drm_dp_tunnel.c:69:
+#define tun_dbg_stat(__tunnel, __err, __fmt, ...) do { \
+	if (__err) \
+		__tun_prn(__tunnel, dbg, _kms, __fmt " (Failed, err: %pe)\n", \
+			  ## __VA_ARGS__, ERR_PTR(__err)); \
+	else \
+		__tun_prn(__tunnel, dbg, _kms, __fmt " (Ok)\n", \
+			  ## __VA_ARGS__); \
+} while (0)

-:152: CHECK:MACRO_ARG_REUSE: Macro argument reuse '__fmt' - possible side-effects?
#152: FILE: drivers/gpu/drm/display/drm_dp_tunnel.c:69:
+#define tun_dbg_stat(__tunnel, __err, __fmt, ...) do { \
+	if (__err) \
+		__tun_prn(__tunnel, dbg, _kms, __fmt " (Failed, err: %pe)\n", \
+			  ## __VA_ARGS__, ERR_PTR(__err)); \
+	else \
+		__tun_prn(__tunnel, dbg, _kms, __fmt " (Ok)\n", \
+			  ## __VA_ARGS__); \
+} while (0)

-:164: CHECK:MACRO_ARG_REUSE: Macro argument reuse '__group' - possible side-effects?
#164: FILE: drivers/gpu/drm/display/drm_dp_tunnel.c:81:
+#define tun_grp_dbg(__group, __fmt, ...) \
+	drm_dbg_kms((__group)->mgr->dev, \
+		    "[DPTUN %s] " __fmt, \
+		    drm_dp_tunnel_group_name(__group), ## \
+		    __VA_ARGS__)

-:172: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'start' - possible side-effects?
#172: FILE: drivers/gpu/drm/display/drm_dp_tunnel.c:89:
+#define __DPTUN_REG_RANGE(start, size) \
+	GENMASK_ULL(start + size - 1, start)

-:172: CHECK:MACRO_ARG_PRECEDENCE: Macro argument 'start' may be better as '(start)' to avoid precedence issues
#172: FILE: drivers/gpu/drm/display/drm_dp_tunnel.c:89:
+#define __DPTUN_REG_RANGE(start, size) \
+	GENMASK_ULL(start + size - 1, start)

-:172: CHECK:MACRO_ARG_PRECEDENCE: Macro argument 'size' may be better as '(size)' to avoid precedence issues
#172: FILE: drivers/gpu/drm/display/drm_dp_tunnel.c:89:
+#define __DPTUN_REG_RANGE(start, size) \
+	GENMASK_ULL(start + size - 1, start)
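
For context on the MACRO_ARG_PRECEDENCE checks above: without parentheses around each argument, an argument containing a lower-precedence operator regroups after expansion. A standalone illustration with a toy macro (not the kernel's GENMASK_ULL()):

#include <stdio.h>

/* Mirrors the flagged pattern: arguments pasted into an arithmetic
 * expression without their own parentheses. */
#define REG_RANGE_BAD(start, size)  (start + size - 1)
#define REG_RANGE_GOOD(start, size) ((start) + (size) - 1)

int main(void)
{
	/* A size expression using '<<', which binds looser than '+'. */
	printf("bad : %d\n", REG_RANGE_BAD(8, 2 << 1));  /* (8 + 2) << (1 - 1) == 10 */
	printf("good: %d\n", REG_RANGE_GOOD(8, 2 << 1)); /* 8 + (2 << 1) - 1 == 11 */
	return 0;
}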

-:290: CHECK:MACRO_ARG_REUSE: Macro argument reuse '__address' - possible side-effects?
#290: FILE: drivers/gpu/drm/display/drm_dp_tunnel.c:207:
+#define tunnel_reg_ptr(__regs, __address) ({ \
+	WARN_ON(!test_bit((__address) - DP_TUNNELING_BASE, dptun_info_regs)); \
+	&(__regs)->buf[bitmap_weight(dptun_info_regs, (__address) - DP_TUNNELING_BASE)]; \
+})

-:496: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#496: FILE: drivers/gpu/drm/display/drm_dp_tunnel.c:413:
+drm_dp_tunnel_get(struct drm_dp_tunnel *tunnel,
+		    struct ref_tracker **tracker)

-:505: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#505: FILE: drivers/gpu/drm/display/drm_dp_tunnel.c:422:
+void drm_dp_tunnel_put(struct drm_dp_tunnel *tunnel,
+			 struct ref_tracker **tracker)

-:799: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#799: FILE: drivers/gpu/drm/display/drm_dp_tunnel.c:716:
+drm_dp_tunnel_detect(struct drm_dp_tunnel_mgr *mgr,
+		       struct drm_dp_aux *aux)

-:1593: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#1593: FILE: drivers/gpu/drm/display/drm_dp_tunnel.c:1510:
+int drm_dp_tunnel_atomic_set_stream_bw(struct drm_atomic_state *state,
+					 struct drm_dp_tunnel *tunnel,

-:1656: CHECK:SPACING: spaces preferred around that '*' (ctx:ExV)
#1656: FILE: drivers/gpu/drm/display/drm_dp_tunnel.c:1573:
+		*stream_mask |= tunnel_state->stream_mask;
 		^

-:1927: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#1927: FILE: include/drm/display/drm_dp_tunnel.h:52:
+static inline void drm_dp_tunnel_ref_get(struct drm_dp_tunnel *tunnel,
+					   struct drm_dp_tunnel_ref *tunnel_ref)

-:2006: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#2006: FILE: include/drm/display/drm_dp_tunnel.h:131:
+static inline void drm_dp_tunnel_ref_get(struct drm_dp_tunnel *tunnel,
+					   struct drm_dp_tunnel_ref *tunnel_ref) {}

-:2142: CHECK:LINE_SPACING: Please don't use multiple blank lines
#2142: FILE: include/drm/display/drm_dp_tunnel.h:267:
+
+

total: 2 errors, 3 warnings, 22 checks, 2082 lines checked
aa5d25901902 drm/i915/dp: Add support to notify MST connectors to retry modesets
74febb19c412 drm/i915/dp: Use drm_dp_max_dprx_data_rate()
8b1fbc2599b0 drm/i915/dp: Factor out intel_dp_config_required_rate()
04bac33a820f drm/i915/dp: Export intel_dp_max_common_rate/lane_count()
41ef244183f8 drm/i915/dp: Factor out intel_dp_update_sink_caps()
26dd64f150f2 drm/i915/dp: Factor out intel_dp_read_dprx_caps()
11359d46e98d drm/i915/dp: Add intel_dp_max_link_data_rate()
23b909ed6da2 drm/i915/dp: Add way to get active pipes with syncing commits
cdb7da82eb46 drm/i915/dp: Add support for DP tunnel BW allocation
Traceback (most recent call last):
  File "scripts/spdxcheck.py", line 6, in <module>
    from ply import lex, yacc
ModuleNotFoundError: No module named 'ply'
Traceback (most recent call last):
  File "scripts/spdxcheck.py", line 6, in <module>
    from ply import lex, yacc
ModuleNotFoundError: No module named 'ply'
-:151: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#151: 
new file mode 100644

-:329: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#329: FILE: drivers/gpu/drm/i915/display/intel_dp_tunnel.c:174:
+	tunnel = drm_dp_tunnel_detect(i915->display.dp_tunnel_mgr,
+					&intel_dp->aux);

total: 0 errors, 1 warnings, 1 checks, 862 lines checked
03aa21905e64 drm/i915/dp: Add DP tunnel atomic state and check BW limit
-:21: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#21: FILE: drivers/gpu/drm/i915/display/intel_atomic.c:265:
+		drm_dp_tunnel_ref_get(old_crtc_state->dp_tunnel_ref.tunnel,
+					&crtc_state->dp_tunnel_ref);

-:79: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#79: FILE: drivers/gpu/drm/i915/display/intel_display.c:4514:
+		drm_dp_tunnel_ref_get(master_crtc_state->dp_tunnel_ref.tunnel,
+					&slave_crtc_state->dp_tunnel_ref);

total: 0 errors, 0 warnings, 2 checks, 98 lines checked
9b234c4960d8 drm/i915/dp: Account for tunnel BW limit in intel_dp_max_link_data_rate()
c63a6432a776 drm/i915/dp: Compute DP tunel BW during encoder state computation
73f090d93e1f drm/i915/dp: Allocate/free DP tunnel BW in the encoder enable/disable hooks
e45e83c0d541 drm/i915/dp: Handle DP tunnel IRQs
bd2c0c269408 drm/i915/dp: Call intel_dp_sync_state() always for DDI DP encoders
8adaefc393bf drm/i915/dp: Suspend/resume DP tunnels
1b5ed26d474a drm/i915/dp: Enable DP tunnel BW allocation mode



^ permalink raw reply	[flat|nested] 61+ messages in thread

* ✗ Fi.CI.SPARSE: warning for drm/i915: Add Display Port tunnel BW allocation support
  2024-01-23 10:28 [PATCH 00/19] drm/i915: Add Display Port tunnel BW allocation support Imre Deak
                   ` (19 preceding siblings ...)
  2024-01-23 18:52 ` ✗ Fi.CI.CHECKPATCH: warning for drm/i915: Add Display Port tunnel BW allocation support Patchwork
@ 2024-01-23 18:52 ` Patchwork
  2024-01-23 19:05 ` ✓ Fi.CI.BAT: success " Patchwork
  2024-01-24  3:31 ` ✓ Fi.CI.IGT: " Patchwork
  22 siblings, 0 replies; 61+ messages in thread
From: Patchwork @ 2024-01-23 18:52 UTC (permalink / raw)
  To: Imre Deak; +Cc: intel-gfx

== Series Details ==

Series: drm/i915: Add Display Port tunnel BW allocation support
URL   : https://patchwork.freedesktop.org/series/129082/
State : warning

== Summary ==

Error: dim sparse failed
Sparse version: v0.6.2
Fast mode used, each commit won't be checked separately.



^ permalink raw reply	[flat|nested] 61+ messages in thread

* ✓ Fi.CI.BAT: success for drm/i915: Add Display Port tunnel BW allocation support
  2024-01-23 10:28 [PATCH 00/19] drm/i915: Add Display Port tunnel BW allocation support Imre Deak
                   ` (20 preceding siblings ...)
  2024-01-23 18:52 ` ✗ Fi.CI.SPARSE: " Patchwork
@ 2024-01-23 19:05 ` Patchwork
  2024-01-24  3:31 ` ✓ Fi.CI.IGT: " Patchwork
  22 siblings, 0 replies; 61+ messages in thread
From: Patchwork @ 2024-01-23 19:05 UTC (permalink / raw)
  To: Imre Deak; +Cc: intel-gfx

[-- Attachment #1: Type: text/plain, Size: 8692 bytes --]

== Series Details ==

Series: drm/i915: Add Display Port tunnel BW allocation support
URL   : https://patchwork.freedesktop.org/series/129082/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_14166 -> Patchwork_129082v1
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/index.html

Participating hosts (38 -> 37)
------------------------------

  Additional (1): fi-pnv-d510 
  Missing    (2): bat-kbl-2 fi-snb-2520m 

Known issues
------------

  Here are the changes found in Patchwork_129082v1 that come from known issues:

### CI changes ###

#### Issues hit ####

  * boot:
    - bat-rpls-2:         [PASS][1] -> [FAIL][2] ([i915#10078])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/bat-rpls-2/boot.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/bat-rpls-2/boot.html

  

### IGT changes ###

#### Issues hit ####

  * igt@gem_lmem_swapping@basic:
    - fi-pnv-d510:        NOTRUN -> [SKIP][3] ([fdo#109271]) +31 other tests skip
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/fi-pnv-d510/igt@gem_lmem_swapping@basic.html

  * igt@gem_lmem_swapping@verify-random:
    - bat-mtlp-6:         NOTRUN -> [SKIP][4] ([i915#4613]) +3 other tests skip
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/bat-mtlp-6/igt@gem_lmem_swapping@verify-random.html

  * igt@i915_pm_rps@basic-api:
    - bat-mtlp-6:         NOTRUN -> [SKIP][5] ([i915#6621])
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/bat-mtlp-6/igt@i915_pm_rps@basic-api.html

  * igt@kms_addfb_basic@addfb25-x-tiled-legacy:
    - bat-mtlp-6:         NOTRUN -> [SKIP][6] ([i915#4212] / [i915#9792]) +8 other tests skip
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/bat-mtlp-6/igt@kms_addfb_basic@addfb25-x-tiled-legacy.html

  * igt@kms_addfb_basic@addfb25-y-tiled-small-legacy:
    - bat-mtlp-6:         NOTRUN -> [SKIP][7] ([i915#5190] / [i915#9792])
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/bat-mtlp-6/igt@kms_addfb_basic@addfb25-y-tiled-small-legacy.html

  * igt@kms_cursor_legacy@basic-flip-after-cursor-legacy:
    - bat-mtlp-6:         NOTRUN -> [SKIP][8] ([i915#9792]) +16 other tests skip
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/bat-mtlp-6/igt@kms_cursor_legacy@basic-flip-after-cursor-legacy.html

  * igt@kms_flip@basic-flip-vs-dpms:
    - bat-mtlp-6:         NOTRUN -> [SKIP][9] ([i915#3637] / [i915#9792]) +3 other tests skip
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/bat-mtlp-6/igt@kms_flip@basic-flip-vs-dpms.html

  * igt@kms_force_connector_basic@force-load-detect:
    - bat-mtlp-6:         NOTRUN -> [SKIP][10] ([fdo#109285] / [i915#9792])
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/bat-mtlp-6/igt@kms_force_connector_basic@force-load-detect.html

  * igt@kms_force_connector_basic@prune-stale-modes:
    - bat-mtlp-6:         NOTRUN -> [SKIP][11] ([i915#5274] / [i915#9792])
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/bat-mtlp-6/igt@kms_force_connector_basic@prune-stale-modes.html

  * igt@kms_frontbuffer_tracking@basic:
    - bat-mtlp-6:         NOTRUN -> [SKIP][12] ([i915#4342] / [i915#5354] / [i915#9792])
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/bat-mtlp-6/igt@kms_frontbuffer_tracking@basic.html

  * igt@kms_pm_backlight@basic-brightness:
    - bat-mtlp-6:         NOTRUN -> [SKIP][13] ([i915#5354] / [i915#9792])
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/bat-mtlp-6/igt@kms_pm_backlight@basic-brightness.html

  * igt@kms_setmode@basic-clone-single-crtc:
    - bat-mtlp-6:         NOTRUN -> [SKIP][14] ([i915#3555] / [i915#8809] / [i915#9792])
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/bat-mtlp-6/igt@kms_setmode@basic-clone-single-crtc.html

  * igt@prime_vgem@basic-fence-flip:
    - bat-mtlp-6:         NOTRUN -> [SKIP][15] ([i915#3708] / [i915#9792])
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/bat-mtlp-6/igt@prime_vgem@basic-fence-flip.html

  * igt@prime_vgem@basic-fence-mmap:
    - bat-mtlp-6:         NOTRUN -> [SKIP][16] ([i915#3708] / [i915#4077]) +1 other test skip
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/bat-mtlp-6/igt@prime_vgem@basic-fence-mmap.html

  * igt@prime_vgem@basic-write:
    - bat-mtlp-6:         NOTRUN -> [SKIP][17] ([i915#3708]) +2 other tests skip
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/bat-mtlp-6/igt@prime_vgem@basic-write.html

  
#### Possible fixes ####

  * igt@i915_hangman@error-state-basic:
    - bat-mtlp-6:         [ABORT][18] ([i915#9414]) -> [PASS][19]
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/bat-mtlp-6/igt@i915_hangman@error-state-basic.html
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/bat-mtlp-6/igt@i915_hangman@error-state-basic.html

  * igt@i915_selftest@live@gt_mocs:
    - bat-mtlp-8:         [DMESG-WARN][20] -> [PASS][21]
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/bat-mtlp-8/igt@i915_selftest@live@gt_mocs.html
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/bat-mtlp-8/igt@i915_selftest@live@gt_mocs.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109285]: https://bugs.freedesktop.org/show_bug.cgi?id=109285
  [i915#10078]: https://gitlab.freedesktop.org/drm/intel/issues/10078
  [i915#3555]: https://gitlab.freedesktop.org/drm/intel/issues/3555
  [i915#3637]: https://gitlab.freedesktop.org/drm/intel/issues/3637
  [i915#3708]: https://gitlab.freedesktop.org/drm/intel/issues/3708
  [i915#4077]: https://gitlab.freedesktop.org/drm/intel/issues/4077
  [i915#4212]: https://gitlab.freedesktop.org/drm/intel/issues/4212
  [i915#4342]: https://gitlab.freedesktop.org/drm/intel/issues/4342
  [i915#4613]: https://gitlab.freedesktop.org/drm/intel/issues/4613
  [i915#5190]: https://gitlab.freedesktop.org/drm/intel/issues/5190
  [i915#5274]: https://gitlab.freedesktop.org/drm/intel/issues/5274
  [i915#5354]: https://gitlab.freedesktop.org/drm/intel/issues/5354
  [i915#6621]: https://gitlab.freedesktop.org/drm/intel/issues/6621
  [i915#8809]: https://gitlab.freedesktop.org/drm/intel/issues/8809
  [i915#9414]: https://gitlab.freedesktop.org/drm/intel/issues/9414
  [i915#9673]: https://gitlab.freedesktop.org/drm/intel/issues/9673
  [i915#9732]: https://gitlab.freedesktop.org/drm/intel/issues/9732
  [i915#9792]: https://gitlab.freedesktop.org/drm/intel/issues/9792


Build changes
-------------

  * Linux: CI_DRM_14166 -> Patchwork_129082v1

  CI-20190529: 20190529
  CI_DRM_14166: fc6b7c6ee7d786e6ed48425a2ce0e674906e4e5c @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_7690: aa45298ff675abbe6bf8f04ae186e2388c35f03a @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  Patchwork_129082v1: fc6b7c6ee7d786e6ed48425a2ce0e674906e4e5c @ git://anongit.freedesktop.org/gfx-ci/linux


### Linux commits

edbffa35ef7e drm/i915/dp: Enable DP tunnel BW allocation mode
594d51dc763a drm/i915/dp: Suspend/resume DP tunnels
c72d9ee7488d drm/i915/dp: Call intel_dp_sync_state() always for DDI DP encoders
52e74ba21f1e drm/i915/dp: Handle DP tunnel IRQs
6d242799cb41 drm/i915/dp: Allocate/free DP tunnel BW in the encoder enable/disable hooks
eec4296ad64e drm/i915/dp: Compute DP tunel BW during encoder state computation
3473af06e679 drm/i915/dp: Account for tunnel BW limit in intel_dp_max_link_data_rate()
61c69346623a drm/i915/dp: Add DP tunnel atomic state and check BW limit
7c6506920c09 drm/i915/dp: Add support for DP tunnel BW allocation
c6c9d96984e9 drm/i915/dp: Add way to get active pipes with syncing commits
d134d73425f8 drm/i915/dp: Add intel_dp_max_link_data_rate()
1850fb5e9f86 drm/i915/dp: Factor out intel_dp_read_dprx_caps()
a60f27bb353f drm/i915/dp: Factor out intel_dp_update_sink_caps()
4447cd72b858 drm/i915/dp: Export intel_dp_max_common_rate/lane_count()
659428eb98e5 drm/i915/dp: Factor out intel_dp_config_required_rate()
03f1528d006b drm/i915/dp: Use drm_dp_max_dprx_data_rate()
f173ead2d3e4 drm/i915/dp: Add support to notify MST connectors to retry modesets
6b8db043f7f7 drm/dp: Add support for DP tunneling
61ff4de7836d drm/dp: Add drm_dp_max_dprx_data_rate()

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/index.html


^ permalink raw reply	[flat|nested] 61+ messages in thread

* ✓ Fi.CI.IGT: success for drm/i915: Add Display Port tunnel BW allocation support
  2024-01-23 10:28 [PATCH 00/19] drm/i915: Add Display Port tunnel BW allocation support Imre Deak
                   ` (21 preceding siblings ...)
  2024-01-23 19:05 ` ✓ Fi.CI.BAT: success " Patchwork
@ 2024-01-24  3:31 ` Patchwork
  22 siblings, 0 replies; 61+ messages in thread
From: Patchwork @ 2024-01-24  3:31 UTC (permalink / raw)
  To: Imre Deak; +Cc: intel-gfx

[-- Attachment #1: Type: text/plain, Size: 80956 bytes --]

== Series Details ==

Series: drm/i915: Add Display Port tunnel BW allocation support
URL   : https://patchwork.freedesktop.org/series/129082/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_14166_full -> Patchwork_129082v1_full
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/index.html

Participating hosts (8 -> 8)
------------------------------

  No changes in participating hosts

New tests
---------

  New tests have been introduced between CI_DRM_14166_full and Patchwork_129082v1_full:

### New IGT tests (27) ###

  * igt@gem_exec_schedule@noreorder-corked@bcs0:
    - Statuses : 4 pass(s)
    - Exec time: [1.10, 1.13] s

  * igt@gem_exec_schedule@noreorder-priority@bcs0:
    - Statuses : 5 pass(s)
    - Exec time: [1.11, 1.13] s

  * igt@gem_exec_schedule@noreorder@bcs0:
    - Statuses : 4 pass(s)
    - Exec time: [0.04, 0.05] s

  * igt@gem_exercise_blt@fast-copy-emit@linear-lmem0-lmem0-emit:
    - Statuses : 2 pass(s)
    - Exec time: [0.06, 0.09] s

  * igt@gem_exercise_blt@fast-copy-emit@linear-lmem0-smem-emit:
    - Statuses : 2 pass(s)
    - Exec time: [0.06, 0.09] s

  * igt@gem_exercise_blt@fast-copy-emit@linear-smem-lmem0-emit:
    - Statuses : 2 pass(s)
    - Exec time: [0.01] s

  * igt@gem_exercise_blt@fast-copy-emit@tile4-lmem0-lmem0-emit:
    - Statuses : 2 pass(s)
    - Exec time: [0.06, 0.10] s

  * igt@gem_exercise_blt@fast-copy-emit@tile4-lmem0-smem-emit:
    - Statuses : 2 pass(s)
    - Exec time: [0.06, 0.10] s

  * igt@gem_exercise_blt@fast-copy-emit@tile4-smem-lmem0-emit:
    - Statuses : 2 pass(s)
    - Exec time: [0.01] s

  * igt@gem_exercise_blt@fast-copy-emit@tile64-lmem0-lmem0-emit:
    - Statuses : 2 pass(s)
    - Exec time: [0.06, 0.10] s

  * igt@gem_exercise_blt@fast-copy-emit@tile64-lmem0-smem-emit:
    - Statuses : 2 pass(s)
    - Exec time: [0.06, 0.10] s

  * igt@gem_exercise_blt@fast-copy-emit@tile64-smem-lmem0-emit:
    - Statuses : 2 pass(s)
    - Exec time: [0.01] s

  * igt@gem_exercise_blt@fast-copy-emit@ymajor-lmem0-lmem0-emit:
    - Statuses : 1 pass(s)
    - Exec time: [0.06] s

  * igt@gem_exercise_blt@fast-copy-emit@ymajor-lmem0-smem-emit:
    - Statuses : 1 pass(s)
    - Exec time: [0.06] s

  * igt@gem_exercise_blt@fast-copy-emit@ymajor-smem-lmem0-emit:
    - Statuses : 1 pass(s)
    - Exec time: [0.01] s

  * igt@gem_exercise_blt@fast-copy-emit@ymajor-smem-smem-emit:
    - Statuses : 3 pass(s)
    - Exec time: [0.01, 0.02] s

  * igt@gem_exercise_blt@fast-copy@linear-lmem0-lmem0:
    - Statuses : 1 pass(s)
    - Exec time: [0.09] s

  * igt@gem_exercise_blt@fast-copy@linear-lmem0-smem:
    - Statuses : 1 pass(s)
    - Exec time: [0.10] s

  * igt@gem_exercise_blt@fast-copy@linear-smem-lmem0:
    - Statuses : 1 pass(s)
    - Exec time: [0.01] s

  * igt@gem_exercise_blt@fast-copy@tile4-lmem0-lmem0:
    - Statuses : 1 pass(s)
    - Exec time: [0.10] s

  * igt@gem_exercise_blt@fast-copy@tile4-lmem0-smem:
    - Statuses : 1 pass(s)
    - Exec time: [0.10] s

  * igt@gem_exercise_blt@fast-copy@tile4-smem-lmem0:
    - Statuses : 1 pass(s)
    - Exec time: [0.01] s

  * igt@gem_exercise_blt@fast-copy@tile64-lmem0-lmem0:
    - Statuses : 1 pass(s)
    - Exec time: [0.10] s

  * igt@gem_exercise_blt@fast-copy@tile64-lmem0-smem:
    - Statuses : 1 pass(s)
    - Exec time: [0.11] s

  * igt@gem_exercise_blt@fast-copy@tile64-smem-lmem0:
    - Statuses : 1 pass(s)
    - Exec time: [0.01] s

  * igt@gem_exercise_blt@fast-copy@yfmajor-smem-smem:
    - Statuses : 1 pass(s)
    - Exec time: [0.04] s

  * igt@gem_exercise_blt@fast-copy@ymajor-smem-smem:
    - Statuses : 3 pass(s)
    - Exec time: [0.02, 0.04] s

  

Known issues
------------

  Here are the changes found in Patchwork_129082v1_full that come from known issues:

### CI changes ###

#### Possible fixes ####

  * boot:
    - shard-rkl:          ([PASS][1], [PASS][2], [PASS][3], [PASS][4], [PASS][5], [FAIL][6], [PASS][7], [PASS][8], [PASS][9], [PASS][10], [PASS][11], [PASS][12], [PASS][13], [PASS][14], [PASS][15], [PASS][16], [PASS][17], [PASS][18], [PASS][19], [PASS][20], [PASS][21], [PASS][22], [PASS][23], [PASS][24]) ([i915#8293]) -> ([PASS][25], [PASS][26], [PASS][27], [PASS][28], [PASS][29], [PASS][30], [PASS][31], [PASS][32], [PASS][33], [PASS][34], [PASS][35], [PASS][36], [PASS][37], [PASS][38], [PASS][39], [PASS][40], [PASS][41], [PASS][42], [PASS][43], [PASS][44], [PASS][45], [PASS][46], [PASS][47], [PASS][48])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-7/boot.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-7/boot.html
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-7/boot.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-7/boot.html
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-6/boot.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-6/boot.html
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-6/boot.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-5/boot.html
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-5/boot.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-5/boot.html
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-5/boot.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-4/boot.html
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-4/boot.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-4/boot.html
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-4/boot.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-3/boot.html
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-2/boot.html
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-2/boot.html
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-2/boot.html
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-1/boot.html
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-1/boot.html
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-1/boot.html
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-1/boot.html
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-1/boot.html
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-7/boot.html
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-7/boot.html
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-7/boot.html
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-6/boot.html
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-6/boot.html
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-6/boot.html
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-6/boot.html
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-6/boot.html
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-5/boot.html
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-5/boot.html
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-5/boot.html
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-5/boot.html
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-4/boot.html
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-4/boot.html
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-4/boot.html
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-4/boot.html
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-4/boot.html
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-3/boot.html
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-2/boot.html
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-2/boot.html
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-1/boot.html
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-1/boot.html
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-1/boot.html
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-1/boot.html

  

### IGT changes ###

#### Issues hit ####

  * igt@api_intel_bb@object-reloc-keep-cache:
    - shard-rkl:          NOTRUN -> [SKIP][49] ([i915#8411])
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-2/igt@api_intel_bb@object-reloc-keep-cache.html

  * igt@api_intel_bb@object-reloc-purge-cache:
    - shard-dg2:          NOTRUN -> [SKIP][50] ([i915#8411]) +1 other test skip
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-10/igt@api_intel_bb@object-reloc-purge-cache.html

  * igt@device_reset@unbind-cold-reset-rebind:
    - shard-dg2:          NOTRUN -> [SKIP][51] ([i915#7701])
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-10/igt@device_reset@unbind-cold-reset-rebind.html

  * igt@drm_buddy@drm_buddy@drm_test_buddy_alloc_limit:
    - shard-dg2:          NOTRUN -> [DMESG-WARN][52] ([i915#10140])
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-5/igt@drm_buddy@drm_buddy@drm_test_buddy_alloc_limit.html

  * igt@drm_fdinfo@busy@rcs0:
    - shard-dg2:          NOTRUN -> [SKIP][53] ([i915#8414]) +11 other tests skip
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-5/igt@drm_fdinfo@busy@rcs0.html

  * igt@gem_basic@multigpu-create-close:
    - shard-rkl:          NOTRUN -> [SKIP][54] ([i915#7697]) +1 other test skip
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-3/igt@gem_basic@multigpu-create-close.html

  * igt@gem_ccs@suspend-resume:
    - shard-tglu:         NOTRUN -> [SKIP][55] ([i915#9323])
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@gem_ccs@suspend-resume.html

  * igt@gem_close_race@multigpu-basic-process:
    - shard-dg2:          NOTRUN -> [SKIP][56] ([i915#7697])
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-10/igt@gem_close_race@multigpu-basic-process.html

  * igt@gem_ctx_exec@basic-nohangcheck:
    - shard-rkl:          [PASS][57] -> [FAIL][58] ([i915#6268])
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-2/igt@gem_ctx_exec@basic-nohangcheck.html
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-7/igt@gem_ctx_exec@basic-nohangcheck.html
    - shard-tglu:         [PASS][59] -> [FAIL][60] ([i915#6268])
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-tglu-2/igt@gem_ctx_exec@basic-nohangcheck.html
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-7/igt@gem_ctx_exec@basic-nohangcheck.html

  * igt@gem_ctx_param@set-priority-not-supported:
    - shard-rkl:          NOTRUN -> [SKIP][61] ([fdo#109314])
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-3/igt@gem_ctx_param@set-priority-not-supported.html

  * igt@gem_ctx_sseu@invalid-args:
    - shard-tglu:         NOTRUN -> [SKIP][62] ([i915#280])
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@gem_ctx_sseu@invalid-args.html

  * igt@gem_eio@hibernate:
    - shard-tglu:         [PASS][63] -> [ABORT][64] ([i915#10030] / [i915#7975] / [i915#8213])
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-tglu-5/igt@gem_eio@hibernate.html
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-10/igt@gem_eio@hibernate.html

  * igt@gem_eio@kms:
    - shard-dg1:          [PASS][65] -> [FAIL][66] ([i915#5784])
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-dg1-18/igt@gem_eio@kms.html
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg1-15/igt@gem_eio@kms.html

  * igt@gem_exec_balancer@sliced:
    - shard-dg2:          NOTRUN -> [SKIP][67] ([i915#4812]) +2 other tests skip
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-6/igt@gem_exec_balancer@sliced.html

  * igt@gem_exec_capture@capture-recoverable:
    - shard-rkl:          NOTRUN -> [SKIP][68] ([i915#6344])
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-6/igt@gem_exec_capture@capture-recoverable.html

  * igt@gem_exec_endless@dispatch@vcs0:
    - shard-dg1:          [PASS][69] -> [TIMEOUT][70] ([i915#3778])
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-dg1-12/igt@gem_exec_endless@dispatch@vcs0.html
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg1-15/igt@gem_exec_endless@dispatch@vcs0.html

  * igt@gem_exec_fair@basic-none-rrul:
    - shard-dg2:          NOTRUN -> [SKIP][71] ([i915#3539] / [i915#4852]) +3 other tests skip
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-10/igt@gem_exec_fair@basic-none-rrul.html

  * igt@gem_exec_fair@basic-none-solo@rcs0:
    - shard-rkl:          [PASS][72] -> [FAIL][73] ([i915#2842])
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-2/igt@gem_exec_fair@basic-none-solo@rcs0.html
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-7/igt@gem_exec_fair@basic-none-solo@rcs0.html

  * igt@gem_exec_fair@basic-pace@rcs0:
    - shard-rkl:          NOTRUN -> [FAIL][74] ([i915#2876])
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-3/igt@gem_exec_fair@basic-pace@rcs0.html
    - shard-tglu:         [PASS][75] -> [FAIL][76] ([i915#2842])
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-tglu-8/igt@gem_exec_fair@basic-pace@rcs0.html
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-2/igt@gem_exec_fair@basic-pace@rcs0.html

  * igt@gem_exec_fair@basic-pace@vecs0:
    - shard-rkl:          NOTRUN -> [FAIL][77] ([i915#2842]) +2 other tests fail
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-3/igt@gem_exec_fair@basic-pace@vecs0.html

  * igt@gem_exec_params@rsvd2-dirt:
    - shard-dg2:          NOTRUN -> [SKIP][78] ([fdo#109283] / [i915#5107])
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-6/igt@gem_exec_params@rsvd2-dirt.html
    - shard-tglu:         NOTRUN -> [SKIP][79] ([fdo#109283])
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@gem_exec_params@rsvd2-dirt.html

  * igt@gem_exec_params@secure-non-master:
    - shard-dg2:          NOTRUN -> [SKIP][80] ([fdo#112283])
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-10/igt@gem_exec_params@secure-non-master.html

  * igt@gem_exec_params@secure-non-root:
    - shard-tglu:         NOTRUN -> [SKIP][81] ([fdo#112283])
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@gem_exec_params@secure-non-root.html

  * igt@gem_exec_reloc@basic-gtt-noreloc:
    - shard-dg1:          NOTRUN -> [SKIP][82] ([i915#3281])
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg1-19/igt@gem_exec_reloc@basic-gtt-noreloc.html

  * igt@gem_exec_reloc@basic-gtt-read:
    - shard-dg2:          NOTRUN -> [SKIP][83] ([i915#3281]) +10 other tests skip
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-10/igt@gem_exec_reloc@basic-gtt-read.html

  * igt@gem_exec_reloc@basic-gtt-wc-noreloc:
    - shard-rkl:          NOTRUN -> [SKIP][84] ([i915#3281]) +5 other tests skip
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-3/igt@gem_exec_reloc@basic-gtt-wc-noreloc.html

  * igt@gem_exec_suspend@basic-s4-devices@lmem0:
    - shard-dg2:          NOTRUN -> [ABORT][85] ([i915#7975] / [i915#8213])
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-6/igt@gem_exec_suspend@basic-s4-devices@lmem0.html

  * igt@gem_exec_suspend@basic-s4-devices@smem:
    - shard-rkl:          NOTRUN -> [ABORT][86] ([i915#7975] / [i915#8213])
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-2/igt@gem_exec_suspend@basic-s4-devices@smem.html

  * igt@gem_fence_thrash@bo-copy:
    - shard-dg2:          NOTRUN -> [SKIP][87] ([i915#4860])
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-5/igt@gem_fence_thrash@bo-copy.html
    - shard-dg1:          NOTRUN -> [SKIP][88] ([i915#4860])
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg1-19/igt@gem_fence_thrash@bo-copy.html

  * igt@gem_lmem_swapping@heavy-verify-random-ccs:
    - shard-tglu:         NOTRUN -> [SKIP][89] ([i915#4613]) +1 other test skip
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-5/igt@gem_lmem_swapping@heavy-verify-random-ccs.html

  * igt@gem_lmem_swapping@parallel-random-engines:
    - shard-glk:          NOTRUN -> [SKIP][90] ([fdo#109271] / [i915#4613])
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-glk8/igt@gem_lmem_swapping@parallel-random-engines.html

  * igt@gem_lmem_swapping@random-engines:
    - shard-rkl:          NOTRUN -> [SKIP][91] ([i915#4613]) +1 other test skip
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-3/igt@gem_lmem_swapping@random-engines.html

  * igt@gem_lmem_swapping@verify-random-ccs@lmem0:
    - shard-dg1:          NOTRUN -> [SKIP][92] ([i915#4565])
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg1-19/igt@gem_lmem_swapping@verify-random-ccs@lmem0.html

  * igt@gem_mmap_gtt@cpuset-big-copy-odd:
    - shard-dg2:          NOTRUN -> [SKIP][93] ([i915#4077]) +9 other tests skip
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-6/igt@gem_mmap_gtt@cpuset-big-copy-odd.html

  * igt@gem_mmap_wc@bad-size:
    - shard-dg2:          NOTRUN -> [SKIP][94] ([i915#4083]) +2 other tests skip
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-5/igt@gem_mmap_wc@bad-size.html
    - shard-dg1:          NOTRUN -> [SKIP][95] ([i915#4083])
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg1-19/igt@gem_mmap_wc@bad-size.html

  * igt@gem_partial_pwrite_pread@reads:
    - shard-dg2:          NOTRUN -> [SKIP][96] ([i915#3282]) +3 other tests skip
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-6/igt@gem_partial_pwrite_pread@reads.html

  * igt@gem_partial_pwrite_pread@writes-after-reads-uncached:
    - shard-rkl:          NOTRUN -> [SKIP][97] ([i915#3282]) +2 other tests skip
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-6/igt@gem_partial_pwrite_pread@writes-after-reads-uncached.html

  * igt@gem_pwrite@basic-exhaustion:
    - shard-glk:          NOTRUN -> [WARN][98] ([i915#2658])
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-glk8/igt@gem_pwrite@basic-exhaustion.html

  * igt@gem_pxp@create-valid-protected-context:
    - shard-rkl:          NOTRUN -> [SKIP][99] ([i915#4270]) +1 other test skip
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-3/igt@gem_pxp@create-valid-protected-context.html

  * igt@gem_pxp@dmabuf-shared-protected-dst-is-context-refcounted:
    - shard-dg2:          NOTRUN -> [SKIP][100] ([i915#4270]) +1 other test skip
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-6/igt@gem_pxp@dmabuf-shared-protected-dst-is-context-refcounted.html

  * igt@gem_pxp@protected-raw-src-copy-not-readible:
    - shard-tglu:         NOTRUN -> [SKIP][101] ([i915#4270])
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-5/igt@gem_pxp@protected-raw-src-copy-not-readible.html

  * igt@gem_set_tiling_vs_blt@untiled-to-tiled:
    - shard-dg2:          NOTRUN -> [SKIP][102] ([i915#4079])
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-10/igt@gem_set_tiling_vs_blt@untiled-to-tiled.html

  * igt@gem_softpin@evict-snoop-interruptible:
    - shard-tglu:         NOTRUN -> [SKIP][103] ([fdo#109312])
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-5/igt@gem_softpin@evict-snoop-interruptible.html

  * igt@gem_userptr_blits@create-destroy-unsync:
    - shard-rkl:          NOTRUN -> [SKIP][104] ([i915#3297]) +1 other test skip
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-6/igt@gem_userptr_blits@create-destroy-unsync.html

  * igt@gem_userptr_blits@dmabuf-unsync:
    - shard-dg2:          NOTRUN -> [SKIP][105] ([i915#3297]) +2 other tests skip
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-10/igt@gem_userptr_blits@dmabuf-unsync.html

  * igt@gem_userptr_blits@map-fixed-invalidate-busy:
    - shard-dg2:          NOTRUN -> [SKIP][106] ([i915#3297] / [i915#4880])
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-5/igt@gem_userptr_blits@map-fixed-invalidate-busy.html

  * igt@gem_userptr_blits@sd-probe:
    - shard-dg2:          NOTRUN -> [SKIP][107] ([i915#3297] / [i915#4958])
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-5/igt@gem_userptr_blits@sd-probe.html

  * igt@gen7_exec_parse@batch-without-end:
    - shard-rkl:          NOTRUN -> [SKIP][108] ([fdo#109289]) +1 other test skip
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-6/igt@gen7_exec_parse@batch-without-end.html

  * igt@gen9_exec_parse@bb-start-far:
    - shard-tglu:         NOTRUN -> [SKIP][109] ([i915#2527] / [i915#2856]) +1 other test skip
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@gen9_exec_parse@bb-start-far.html

  * igt@gen9_exec_parse@secure-batches:
    - shard-dg2:          NOTRUN -> [SKIP][110] ([i915#2856]) +2 other tests skip
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-10/igt@gen9_exec_parse@secure-batches.html

  * igt@gen9_exec_parse@valid-registers:
    - shard-rkl:          NOTRUN -> [SKIP][111] ([i915#2527]) +2 other tests skip
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-6/igt@gen9_exec_parse@valid-registers.html

  * igt@i915_module_load@load:
    - shard-tglu:         NOTRUN -> [SKIP][112] ([i915#6227])
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@i915_module_load@load.html

  * igt@i915_module_load@reload-with-fault-injection:
    - shard-dg2:          NOTRUN -> [WARN][113] ([i915#7356])
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-10/igt@i915_module_load@reload-with-fault-injection.html

  * igt@i915_pm_rc6_residency@media-rc6-accuracy:
    - shard-tglu:         NOTRUN -> [SKIP][114] ([fdo#109289])
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@i915_pm_rc6_residency@media-rc6-accuracy.html

  * igt@i915_pm_rc6_residency@rc6-idle@gt0-vcs0:
    - shard-dg1:          [PASS][115] -> [FAIL][116] ([i915#3591])
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-dg1-18/igt@i915_pm_rc6_residency@rc6-idle@gt0-vcs0.html
   [116]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg1-13/igt@i915_pm_rc6_residency@rc6-idle@gt0-vcs0.html

  * igt@i915_selftest@mock@memory_region:
    - shard-rkl:          NOTRUN -> [DMESG-WARN][117] ([i915#9311])
   [117]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-6/igt@i915_selftest@mock@memory_region.html

  * igt@kms_addfb_basic@basic-x-tiled-legacy:
    - shard-dg2:          NOTRUN -> [SKIP][118] ([i915#4212])
   [118]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-10/igt@kms_addfb_basic@basic-x-tiled-legacy.html

  * igt@kms_async_flips@async-flip-with-page-flip-events@pipe-a-hdmi-a-1-y-rc-ccs-cc:
    - shard-rkl:          NOTRUN -> [SKIP][119] ([i915#8709]) +3 other tests skip
   [119]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-4/igt@kms_async_flips@async-flip-with-page-flip-events@pipe-a-hdmi-a-1-y-rc-ccs-cc.html

  * igt@kms_async_flips@async-flip-with-page-flip-events@pipe-a-hdmi-a-3-y-rc-ccs:
    - shard-dg1:          NOTRUN -> [SKIP][120] ([i915#8709]) +7 other tests skip
   [120]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg1-13/igt@kms_async_flips@async-flip-with-page-flip-events@pipe-a-hdmi-a-3-y-rc-ccs.html

  * igt@kms_async_flips@async-flip-with-page-flip-events@pipe-d-hdmi-a-3-4-mc-ccs:
    - shard-dg2:          NOTRUN -> [SKIP][121] ([i915#8709]) +11 other tests skip
   [121]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-6/igt@kms_async_flips@async-flip-with-page-flip-events@pipe-d-hdmi-a-3-4-mc-ccs.html

  * igt@kms_atomic_transition@plane-all-modeset-transition-internal-panels:
    - shard-tglu:         NOTRUN -> [SKIP][122] ([i915#1769] / [i915#3555])
   [122]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@kms_atomic_transition@plane-all-modeset-transition-internal-panels.html

  * igt@kms_big_fb@4-tiled-64bpp-rotate-180:
    - shard-mtlp:         [PASS][123] -> [FAIL][124] ([i915#5138])
   [123]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-mtlp-1/igt@kms_big_fb@4-tiled-64bpp-rotate-180.html
   [124]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-mtlp-8/igt@kms_big_fb@4-tiled-64bpp-rotate-180.html

  * igt@kms_big_fb@4-tiled-8bpp-rotate-0:
    - shard-rkl:          NOTRUN -> [SKIP][125] ([i915#5286]) +3 other tests skip
   [125]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-2/igt@kms_big_fb@4-tiled-8bpp-rotate-0.html

  * igt@kms_big_fb@4-tiled-max-hw-stride-32bpp-rotate-180-async-flip:
    - shard-tglu:         NOTRUN -> [SKIP][126] ([fdo#111615] / [i915#5286]) +2 other tests skip
   [126]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@kms_big_fb@4-tiled-max-hw-stride-32bpp-rotate-180-async-flip.html

  * igt@kms_big_fb@linear-8bpp-rotate-90:
    - shard-dg2:          NOTRUN -> [SKIP][127] ([fdo#111614]) +2 other tests skip
   [127]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-5/igt@kms_big_fb@linear-8bpp-rotate-90.html

  * igt@kms_big_fb@x-tiled-8bpp-rotate-90:
    - shard-rkl:          NOTRUN -> [SKIP][128] ([fdo#111614] / [i915#3638])
   [128]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-6/igt@kms_big_fb@x-tiled-8bpp-rotate-90.html

  * igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-0-async-flip:
    - shard-tglu:         [PASS][129] -> [FAIL][130] ([i915#3743])
   [129]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-tglu-4/igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-0-async-flip.html
   [130]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-8/igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-0-async-flip.html

  * igt@kms_big_fb@y-tiled-64bpp-rotate-90:
    - shard-tglu:         NOTRUN -> [SKIP][131] ([fdo#111614]) +1 other test skip
   [131]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@kms_big_fb@y-tiled-64bpp-rotate-90.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0-hflip-async-flip:
    - shard-dg2:          NOTRUN -> [SKIP][132] ([i915#5190]) +13 other tests skip
   [132]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-6/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0-hflip-async-flip.html

  * igt@kms_big_fb@yf-tiled-64bpp-rotate-0:
    - shard-dg2:          NOTRUN -> [SKIP][133] ([i915#4538] / [i915#5190]) +2 other tests skip
   [133]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-5/igt@kms_big_fb@yf-tiled-64bpp-rotate-0.html

  * igt@kms_big_fb@yf-tiled-addfb-size-offset-overflow:
    - shard-rkl:          NOTRUN -> [SKIP][134] ([fdo#111615])
   [134]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-6/igt@kms_big_fb@yf-tiled-addfb-size-offset-overflow.html

  * igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-0-hflip:
    - shard-rkl:          NOTRUN -> [SKIP][135] ([fdo#110723]) +1 other test skip
   [135]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-2/igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-0-hflip.html

  * igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-180-async-flip:
    - shard-tglu:         NOTRUN -> [SKIP][136] ([fdo#111615]) +1 other test skip
   [136]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-180-async-flip.html

  * igt@kms_big_joiner@2x-modeset:
    - shard-dg2:          NOTRUN -> [SKIP][137] ([i915#2705])
   [137]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-5/igt@kms_big_joiner@2x-modeset.html

  * igt@kms_big_joiner@invalid-modeset:
    - shard-tglu:         NOTRUN -> [SKIP][138] ([i915#2705])
   [138]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@kms_big_joiner@invalid-modeset.html

  * igt@kms_ccs@pipe-b-bad-aux-stride-4-tiled-mtl-rc-ccs:
    - shard-tglu:         NOTRUN -> [SKIP][139] ([i915#5354] / [i915#6095]) +18 other tests skip
   [139]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@kms_ccs@pipe-b-bad-aux-stride-4-tiled-mtl-rc-ccs.html

  * igt@kms_ccs@pipe-b-bad-rotation-90-y-tiled-gen12-mc-ccs:
    - shard-dg2:          NOTRUN -> [SKIP][140] ([i915#5354]) +86 other tests skip
   [140]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-5/igt@kms_ccs@pipe-b-bad-rotation-90-y-tiled-gen12-mc-ccs.html

  * igt@kms_ccs@pipe-b-crc-primary-rotation-180-4-tiled-dg2-rc-ccs-cc:
    - shard-rkl:          NOTRUN -> [SKIP][141] ([i915#5354] / [i915#6095]) +12 other tests skip
   [141]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-6/igt@kms_ccs@pipe-b-crc-primary-rotation-180-4-tiled-dg2-rc-ccs-cc.html

  * igt@kms_ccs@pipe-c-bad-pixel-format-4-tiled-mtl-rc-ccs:
    - shard-dg1:          NOTRUN -> [SKIP][142] ([i915#5354] / [i915#6095])
   [142]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg1-19/igt@kms_ccs@pipe-c-bad-pixel-format-4-tiled-mtl-rc-ccs.html

  * igt@kms_ccs@pipe-d-crc-primary-rotation-180-4-tiled-dg2-rc-ccs-cc:
    - shard-rkl:          NOTRUN -> [SKIP][143] ([i915#5354]) +18 other tests skip
   [143]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-6/igt@kms_ccs@pipe-d-crc-primary-rotation-180-4-tiled-dg2-rc-ccs-cc.html

  * igt@kms_chamelium_audio@hdmi-audio-edid:
    - shard-dg1:          NOTRUN -> [SKIP][144] ([i915#7828])
   [144]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg1-19/igt@kms_chamelium_audio@hdmi-audio-edid.html

  * igt@kms_chamelium_color@ctm-blue-to-red:
    - shard-tglu:         NOTRUN -> [SKIP][145] ([fdo#111827])
   [145]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@kms_chamelium_color@ctm-blue-to-red.html

  * igt@kms_chamelium_color@degamma:
    - shard-dg2:          NOTRUN -> [SKIP][146] ([fdo#111827])
   [146]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-5/igt@kms_chamelium_color@degamma.html

  * igt@kms_chamelium_edid@hdmi-edid-change-during-suspend:
    - shard-rkl:          NOTRUN -> [SKIP][147] ([i915#7828]) +3 other tests skip
   [147]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-6/igt@kms_chamelium_edid@hdmi-edid-change-during-suspend.html

  * igt@kms_chamelium_frames@hdmi-crc-fast:
    - shard-dg2:          NOTRUN -> [SKIP][148] ([i915#7828]) +9 other tests skip
   [148]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-6/igt@kms_chamelium_frames@hdmi-crc-fast.html

  * igt@kms_chamelium_hpd@hdmi-hpd-for-each-pipe:
    - shard-tglu:         NOTRUN -> [SKIP][149] ([i915#7828]) +3 other tests skip
   [149]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@kms_chamelium_hpd@hdmi-hpd-for-each-pipe.html

  * igt@kms_content_protection@dp-mst-lic-type-0:
    - shard-dg2:          NOTRUN -> [SKIP][150] ([i915#3299])
   [150]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-10/igt@kms_content_protection@dp-mst-lic-type-0.html

  * igt@kms_content_protection@legacy:
    - shard-rkl:          NOTRUN -> [SKIP][151] ([i915#7118])
   [151]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-6/igt@kms_content_protection@legacy.html

  * igt@kms_content_protection@mei-interface:
    - shard-tglu:         NOTRUN -> [SKIP][152] ([i915#6944] / [i915#9424])
   [152]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@kms_content_protection@mei-interface.html

  * igt@kms_content_protection@srm:
    - shard-tglu:         NOTRUN -> [SKIP][153] ([i915#6944] / [i915#7116] / [i915#7118])
   [153]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@kms_content_protection@srm.html

  * igt@kms_cursor_crc@cursor-random-32x10:
    - shard-tglu:         NOTRUN -> [SKIP][154] ([i915#3555]) +3 other tests skip
   [154]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@kms_cursor_crc@cursor-random-32x10.html

  * igt@kms_cursor_crc@cursor-random-512x170:
    - shard-dg2:          NOTRUN -> [SKIP][155] ([i915#3359]) +2 other tests skip
   [155]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-5/igt@kms_cursor_crc@cursor-random-512x170.html

  * igt@kms_cursor_crc@cursor-rapid-movement-256x85:
    - shard-snb:          NOTRUN -> [SKIP][156] ([fdo#109271]) +73 other tests skip
   [156]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-snb2/igt@kms_cursor_crc@cursor-rapid-movement-256x85.html

  * igt@kms_cursor_crc@cursor-rapid-movement-512x170:
    - shard-rkl:          NOTRUN -> [SKIP][157] ([i915#3359])
   [157]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-3/igt@kms_cursor_crc@cursor-rapid-movement-512x170.html

  * igt@kms_cursor_crc@cursor-sliding-32x10:
    - shard-dg2:          NOTRUN -> [SKIP][158] ([i915#3555]) +6 other tests skip
   [158]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-5/igt@kms_cursor_crc@cursor-sliding-32x10.html
    - shard-dg1:          NOTRUN -> [SKIP][159] ([i915#3555])
   [159]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg1-19/igt@kms_cursor_crc@cursor-sliding-32x10.html

  * igt@kms_cursor_legacy@2x-long-flip-vs-cursor-atomic:
    - shard-tglu:         NOTRUN -> [SKIP][160] ([fdo#109274]) +2 other tests skip
   [160]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@kms_cursor_legacy@2x-long-flip-vs-cursor-atomic.html

  * igt@kms_cursor_legacy@cursora-vs-flipb-atomic-transitions-varying-size:
    - shard-rkl:          NOTRUN -> [SKIP][161] ([fdo#111825]) +5 other tests skip
   [161]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-6/igt@kms_cursor_legacy@cursora-vs-flipb-atomic-transitions-varying-size.html

  * igt@kms_cursor_legacy@cursorb-vs-flipa-atomic:
    - shard-dg2:          NOTRUN -> [SKIP][162] ([fdo#109274] / [i915#5354]) +4 other tests skip
   [162]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-6/igt@kms_cursor_legacy@cursorb-vs-flipa-atomic.html

  * igt@kms_cursor_legacy@modeset-atomic-cursor-hotspot:
    - shard-dg2:          NOTRUN -> [SKIP][163] ([i915#9067])
   [163]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-10/igt@kms_cursor_legacy@modeset-atomic-cursor-hotspot.html

  * igt@kms_cursor_legacy@short-busy-flip-before-cursor-toggle:
    - shard-dg2:          NOTRUN -> [SKIP][164] ([i915#4103] / [i915#4213]) +2 other tests skip
   [164]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-6/igt@kms_cursor_legacy@short-busy-flip-before-cursor-toggle.html
    - shard-rkl:          NOTRUN -> [SKIP][165] ([i915#4103])
   [165]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-2/igt@kms_cursor_legacy@short-busy-flip-before-cursor-toggle.html

  * igt@kms_dirtyfb@drrs-dirtyfb-ioctl:
    - shard-dg2:          NOTRUN -> [SKIP][166] ([i915#9833])
   [166]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-10/igt@kms_dirtyfb@drrs-dirtyfb-ioctl.html

  * igt@kms_dirtyfb@fbc-dirtyfb-ioctl@a-hdmi-a-3:
    - shard-dg2:          NOTRUN -> [SKIP][167] ([i915#9227])
   [167]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-5/igt@kms_dirtyfb@fbc-dirtyfb-ioctl@a-hdmi-a-3.html

  * igt@kms_dirtyfb@fbc-dirtyfb-ioctl@a-hdmi-a-4:
    - shard-dg1:          NOTRUN -> [SKIP][168] ([i915#9723])
   [168]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg1-19/igt@kms_dirtyfb@fbc-dirtyfb-ioctl@a-hdmi-a-4.html

  * igt@kms_display_modes@mst-extended-mode-negative:
    - shard-dg2:          NOTRUN -> [SKIP][169] ([i915#8588])
   [169]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-6/igt@kms_display_modes@mst-extended-mode-negative.html

  * igt@kms_dsc@dsc-fractional-bpp-with-bpc:
    - shard-rkl:          NOTRUN -> [SKIP][170] ([i915#3840])
   [170]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-3/igt@kms_dsc@dsc-fractional-bpp-with-bpc.html

  * igt@kms_dsc@dsc-with-bpc:
    - shard-tglu:         NOTRUN -> [SKIP][171] ([i915#3555] / [i915#3840])
   [171]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@kms_dsc@dsc-with-bpc.html

  * igt@kms_feature_discovery@display-2x:
    - shard-rkl:          NOTRUN -> [SKIP][172] ([i915#1839])
   [172]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-3/igt@kms_feature_discovery@display-2x.html

  * igt@kms_feature_discovery@display-3x:
    - shard-tglu:         NOTRUN -> [SKIP][173] ([i915#1839])
   [173]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-5/igt@kms_feature_discovery@display-3x.html

  * igt@kms_feature_discovery@dp-mst:
    - shard-dg2:          NOTRUN -> [SKIP][174] ([i915#9337])
   [174]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-5/igt@kms_feature_discovery@dp-mst.html

  * igt@kms_feature_discovery@psr2:
    - shard-dg2:          NOTRUN -> [SKIP][175] ([i915#658])
   [175]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-10/igt@kms_feature_discovery@psr2.html

  * igt@kms_fence_pin_leak:
    - shard-dg2:          NOTRUN -> [SKIP][176] ([i915#4881])
   [176]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-5/igt@kms_fence_pin_leak.html

  * igt@kms_flip@2x-flip-vs-expired-vblank-interruptible:
    - shard-dg2:          NOTRUN -> [SKIP][177] ([fdo#109274] / [fdo#111767])
   [177]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-5/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible.html

  * igt@kms_flip@2x-flip-vs-suspend-interruptible:
    - shard-dg2:          NOTRUN -> [SKIP][178] ([fdo#109274]) +2 other tests skip
   [178]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-6/igt@kms_flip@2x-flip-vs-suspend-interruptible.html

  * igt@kms_flip@2x-plain-flip-ts-check-interruptible:
    - shard-tglu:         NOTRUN -> [SKIP][179] ([fdo#109274] / [i915#3637]) +3 other tests skip
   [179]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-5/igt@kms_flip@2x-plain-flip-ts-check-interruptible.html

  * igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-downscaling@pipe-a-valid-mode:
    - shard-rkl:          NOTRUN -> [SKIP][180] ([i915#2672]) +1 other test skip
   [180]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-6/igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-downscaling@pipe-a-valid-mode.html

  * igt@kms_flip_scaled_crc@flip-32bpp-yftileccs-to-64bpp-yftile-downscaling@pipe-a-valid-mode:
    - shard-dg2:          NOTRUN -> [SKIP][181] ([i915#2672]) +2 other tests skip
   [181]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-6/igt@kms_flip_scaled_crc@flip-32bpp-yftileccs-to-64bpp-yftile-downscaling@pipe-a-valid-mode.html

  * igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs-upscaling@pipe-a-valid-mode:
    - shard-tglu:         NOTRUN -> [SKIP][182] ([i915#2587] / [i915#2672]) +1 other test skip
   [182]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs-upscaling@pipe-a-valid-mode.html

  * igt@kms_force_connector_basic@prune-stale-modes:
    - shard-dg2:          NOTRUN -> [SKIP][183] ([i915#5274])
   [183]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-5/igt@kms_force_connector_basic@prune-stale-modes.html

  * igt@kms_frontbuffer_tracking@fbc-1p-offscren-pri-shrfb-draw-mmap-cpu:
    - shard-dg2:          NOTRUN -> [FAIL][184] ([i915#6880])
   [184]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-6/igt@kms_frontbuffer_tracking@fbc-1p-offscren-pri-shrfb-draw-mmap-cpu.html

  * igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-pri-indfb-draw-mmap-gtt:
    - shard-snb:          [PASS][185] -> [SKIP][186] ([fdo#109271]) +9 other tests skip
   [185]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-snb7/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-pri-indfb-draw-mmap-gtt.html
   [186]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-snb2/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-pri-indfb-draw-mmap-gtt.html

  * igt@kms_frontbuffer_tracking@fbc-tiling-4:
    - shard-tglu:         NOTRUN -> [SKIP][187] ([i915#5439])
   [187]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@kms_frontbuffer_tracking@fbc-tiling-4.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-spr-indfb-draw-mmap-gtt:
    - shard-rkl:          NOTRUN -> [SKIP][188] ([i915#3023]) +12 other tests skip
   [188]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-3/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-spr-indfb-draw-mmap-gtt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-pri-shrfb-draw-blt:
    - shard-dg1:          NOTRUN -> [SKIP][189] ([fdo#111825]) +2 other tests skip
   [189]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg1-19/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-pri-shrfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-shrfb-fliptrack-mmap-gtt:
    - shard-dg2:          NOTRUN -> [SKIP][190] ([i915#8708]) +19 other tests skip
   [190]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-6/igt@kms_frontbuffer_tracking@fbcpsr-2p-shrfb-fliptrack-mmap-gtt.html

  * igt@kms_frontbuffer_tracking@pipe-fbc-rte:
    - shard-dg2:          NOTRUN -> [SKIP][191] ([i915#9766])
   [191]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-5/igt@kms_frontbuffer_tracking@pipe-fbc-rte.html

  * igt@kms_frontbuffer_tracking@plane-fbc-rte:
    - shard-dg2:          NOTRUN -> [SKIP][192] ([i915#10070])
   [192]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-6/igt@kms_frontbuffer_tracking@plane-fbc-rte.html

  * igt@kms_frontbuffer_tracking@psr-1p-offscren-pri-indfb-draw-render:
    - shard-tglu:         NOTRUN -> [SKIP][193] ([fdo#110189]) +5 other tests skip
   [193]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@kms_frontbuffer_tracking@psr-1p-offscren-pri-indfb-draw-render.html

  * igt@kms_frontbuffer_tracking@psr-1p-pri-indfb-multidraw:
    - shard-glk:          NOTRUN -> [SKIP][194] ([fdo#109271]) +47 other tests skip
   [194]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-glk8/igt@kms_frontbuffer_tracking@psr-1p-pri-indfb-multidraw.html

  * igt@kms_frontbuffer_tracking@psr-1p-primscrn-cur-indfb-move:
    - shard-dg2:          NOTRUN -> [SKIP][195] ([i915#3458]) +14 other tests skip
   [195]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-6/igt@kms_frontbuffer_tracking@psr-1p-primscrn-cur-indfb-move.html

  * igt@kms_frontbuffer_tracking@psr-2p-primscrn-pri-shrfb-draw-pwrite:
    - shard-rkl:          NOTRUN -> [SKIP][196] ([fdo#111825] / [i915#1825]) +19 other tests skip
   [196]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-2/igt@kms_frontbuffer_tracking@psr-2p-primscrn-pri-shrfb-draw-pwrite.html

  * igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-shrfb-draw-mmap-cpu:
    - shard-tglu:         NOTRUN -> [SKIP][197] ([fdo#109280]) +17 other tests skip
   [197]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-shrfb-draw-mmap-cpu.html

  * igt@kms_frontbuffer_tracking@psr-rgb101010-draw-blt:
    - shard-dg1:          NOTRUN -> [SKIP][198] ([i915#3458])
   [198]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg1-19/igt@kms_frontbuffer_tracking@psr-rgb101010-draw-blt.html

  * igt@kms_getfb@getfb-reject-ccs:
    - shard-dg2:          NOTRUN -> [SKIP][199] ([i915#6118])
   [199]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-10/igt@kms_getfb@getfb-reject-ccs.html

  * igt@kms_hdr@bpc-switch-dpms:
    - shard-dg2:          NOTRUN -> [SKIP][200] ([i915#3555] / [i915#8228])
   [200]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-5/igt@kms_hdr@bpc-switch-dpms.html

  * igt@kms_hdr@static-toggle-dpms:
    - shard-tglu:         NOTRUN -> [SKIP][201] ([i915#3555] / [i915#8228])
   [201]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-5/igt@kms_hdr@static-toggle-dpms.html

  * igt@kms_multipipe_modeset@basic-max-pipe-crc-check:
    - shard-dg2:          NOTRUN -> [SKIP][202] ([i915#4816])
   [202]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-10/igt@kms_multipipe_modeset@basic-max-pipe-crc-check.html

  * igt@kms_panel_fitting@atomic-fastset:
    - shard-rkl:          NOTRUN -> [SKIP][203] ([i915#6301])
   [203]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-3/igt@kms_panel_fitting@atomic-fastset.html

  * igt@kms_pipe_b_c_ivb@from-pipe-c-to-b-with-3-lanes:
    - shard-dg2:          NOTRUN -> [SKIP][204] ([fdo#109289]) +4 other tests skip
   [204]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-10/igt@kms_pipe_b_c_ivb@from-pipe-c-to-b-with-3-lanes.html

  * igt@kms_plane_multiple@tiling-4:
    - shard-rkl:          NOTRUN -> [SKIP][205] ([i915#3555]) +2 other tests skip
   [205]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-2/igt@kms_plane_multiple@tiling-4.html

  * igt@kms_plane_scaling@2x-scaler-multi-pipe:
    - shard-dg2:          NOTRUN -> [SKIP][206] ([fdo#109274] / [i915#5354] / [i915#9423])
   [206]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-10/igt@kms_plane_scaling@2x-scaler-multi-pipe.html

  * igt@kms_plane_scaling@plane-downscale-factor-0-25-with-pixel-format@pipe-b-hdmi-a-3:
    - shard-dg2:          NOTRUN -> [SKIP][207] ([i915#9423]) +3 other tests skip
   [207]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-5/igt@kms_plane_scaling@plane-downscale-factor-0-25-with-pixel-format@pipe-b-hdmi-a-3.html

  * igt@kms_plane_scaling@plane-downscale-factor-0-25-with-rotation@pipe-a-hdmi-a-4:
    - shard-dg1:          NOTRUN -> [SKIP][208] ([i915#9423]) +19 other tests skip
   [208]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg1-18/igt@kms_plane_scaling@plane-downscale-factor-0-25-with-rotation@pipe-a-hdmi-a-4.html

  * igt@kms_plane_scaling@plane-downscale-factor-0-5-with-rotation@pipe-b-hdmi-a-1:
    - shard-tglu:         NOTRUN -> [SKIP][209] ([i915#9423]) +3 other tests skip
   [209]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@kms_plane_scaling@plane-downscale-factor-0-5-with-rotation@pipe-b-hdmi-a-1.html

  * igt@kms_plane_scaling@plane-downscale-factor-0-75-with-rotation@pipe-a-hdmi-a-2:
    - shard-rkl:          NOTRUN -> [SKIP][210] ([i915#9423]) +1 other test skip
   [210]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-6/igt@kms_plane_scaling@plane-downscale-factor-0-75-with-rotation@pipe-a-hdmi-a-2.html

  * igt@kms_plane_scaling@planes-downscale-factor-0-25-unity-scaling@pipe-c-hdmi-a-3:
    - shard-dg1:          NOTRUN -> [SKIP][211] ([i915#5235]) +15 other tests skip
   [211]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg1-13/igt@kms_plane_scaling@planes-downscale-factor-0-25-unity-scaling@pipe-c-hdmi-a-3.html

  * igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-25@pipe-a-hdmi-a-2:
    - shard-rkl:          NOTRUN -> [SKIP][212] ([i915#5235]) +5 other tests skip
   [212]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-1/igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-25@pipe-a-hdmi-a-2.html

  * igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-25@pipe-a-hdmi-a-3:
    - shard-dg2:          NOTRUN -> [SKIP][213] ([i915#5235] / [i915#9423]) +11 other tests skip
   [213]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-5/igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-25@pipe-a-hdmi-a-3.html

  * igt@kms_pm_dc@dc3co-vpb-simulation:
    - shard-tglu:         NOTRUN -> [SKIP][214] ([i915#9685]) +1 other test skip
   [214]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@kms_pm_dc@dc3co-vpb-simulation.html

  * igt@kms_pm_dc@dc6-dpms:
    - shard-dg2:          NOTRUN -> [SKIP][215] ([i915#5978])
   [215]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-5/igt@kms_pm_dc@dc6-dpms.html
    - shard-dg1:          NOTRUN -> [SKIP][216] ([i915#3361])
   [216]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg1-19/igt@kms_pm_dc@dc6-dpms.html

  * igt@kms_pm_rpm@dpms-mode-unset-lpsp:
    - shard-rkl:          NOTRUN -> [SKIP][217] ([i915#9519])
   [217]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-3/igt@kms_pm_rpm@dpms-mode-unset-lpsp.html

  * igt@kms_pm_rpm@modeset-non-lpsp-stress:
    - shard-dg2:          [PASS][218] -> [SKIP][219] ([i915#9519])
   [218]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-dg2-2/igt@kms_pm_rpm@modeset-non-lpsp-stress.html
   [219]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-10/igt@kms_pm_rpm@modeset-non-lpsp-stress.html

  * igt@kms_pm_rpm@pc8-residency:
    - shard-rkl:          NOTRUN -> [SKIP][220] ([fdo#109293] / [fdo#109506])
   [220]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-3/igt@kms_pm_rpm@pc8-residency.html

  * igt@kms_prime@basic-modeset-hybrid:
    - shard-rkl:          NOTRUN -> [SKIP][221] ([i915#6524])
   [221]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-6/igt@kms_prime@basic-modeset-hybrid.html

  * igt@kms_psr2_su@page_flip-xrgb8888:
    - shard-dg2:          NOTRUN -> [SKIP][222] ([i915#9683]) +3 other tests skip
   [222]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-5/igt@kms_psr2_su@page_flip-xrgb8888.html
    - shard-dg1:          NOTRUN -> [SKIP][223] ([fdo#111068] / [i915#9683])
   [223]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg1-19/igt@kms_psr2_su@page_flip-xrgb8888.html

  * igt@kms_psr_stress_test@flip-primary-invalidate-overlay:
    - shard-dg2:          NOTRUN -> [SKIP][224] ([i915#9685])
   [224]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-10/igt@kms_psr_stress_test@flip-primary-invalidate-overlay.html

  * igt@kms_rotation_crc@primary-rotation-90:
    - shard-dg2:          NOTRUN -> [SKIP][225] ([i915#4235])
   [225]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-10/igt@kms_rotation_crc@primary-rotation-90.html

  * igt@kms_rotation_crc@primary-yf-tiled-reflect-x-0:
    - shard-tglu:         NOTRUN -> [SKIP][226] ([fdo#111615] / [i915#5289])
   [226]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-0.html

  * igt@kms_selftest@drm_format_helper@drm_format_helper_test-drm_test_fb_build_fourcc_list:
    - shard-rkl:          NOTRUN -> [DMESG-FAIL][227] ([i915#10143])
   [227]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-3/igt@kms_selftest@drm_format_helper@drm_format_helper_test-drm_test_fb_build_fourcc_list.html

  * igt@kms_selftest@drm_format_helper@drm_format_helper_test-drm_test_fb_clip_offset:
    - shard-rkl:          NOTRUN -> [DMESG-WARN][228] ([i915#10143])
   [228]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-3/igt@kms_selftest@drm_format_helper@drm_format_helper_test-drm_test_fb_clip_offset.html

  * igt@kms_selftest@drm_format_helper@drm_format_helper_test-drm_test_fb_swab:
    - shard-glk:          [PASS][229] -> [DMESG-WARN][230] ([i915#10143]) +1 other test dmesg-warn
   [229]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-glk9/igt@kms_selftest@drm_format_helper@drm_format_helper_test-drm_test_fb_swab.html
   [230]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-glk3/igt@kms_selftest@drm_format_helper@drm_format_helper_test-drm_test_fb_swab.html

  * igt@kms_selftest@drm_format_helper@drm_format_helper_test-drm_test_fb_xrgb8888_to_abgr8888:
    - shard-dg1:          [PASS][231] -> [DMESG-WARN][232] ([i915#10143])
   [231]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-dg1-18/igt@kms_selftest@drm_format_helper@drm_format_helper_test-drm_test_fb_xrgb8888_to_abgr8888.html
   [232]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg1-15/igt@kms_selftest@drm_format_helper@drm_format_helper_test-drm_test_fb_xrgb8888_to_abgr8888.html

  * igt@kms_setmode@basic@pipe-a-vga-1-pipe-b-hdmi-a-1:
    - shard-snb:          NOTRUN -> [FAIL][233] ([i915#5465]) +3 other tests fail
   [233]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-snb7/igt@kms_setmode@basic@pipe-a-vga-1-pipe-b-hdmi-a-1.html

  * igt@kms_universal_plane@cursor-fb-leak@pipe-a-hdmi-a-1:
    - shard-snb:          [PASS][234] -> [FAIL][235] ([i915#9196])
   [234]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-snb1/igt@kms_universal_plane@cursor-fb-leak@pipe-a-hdmi-a-1.html
   [235]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-snb5/igt@kms_universal_plane@cursor-fb-leak@pipe-a-hdmi-a-1.html

  * igt@kms_universal_plane@cursor-fb-leak@pipe-d-edp-1:
    - shard-mtlp:         [PASS][236] -> [FAIL][237] ([i915#9196]) +2 other tests fail
   [236]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-mtlp-5/igt@kms_universal_plane@cursor-fb-leak@pipe-d-edp-1.html
   [237]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-mtlp-7/igt@kms_universal_plane@cursor-fb-leak@pipe-d-edp-1.html

  * igt@kms_universal_plane@cursor-fb-leak@pipe-d-hdmi-a-1:
    - shard-tglu:         [PASS][238] -> [FAIL][239] ([i915#9196]) +1 other test fail
   [238]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-tglu-6/igt@kms_universal_plane@cursor-fb-leak@pipe-d-hdmi-a-1.html
   [239]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-7/igt@kms_universal_plane@cursor-fb-leak@pipe-d-hdmi-a-1.html

  * igt@kms_vrr@flip-basic-fastset:
    - shard-tglu:         NOTRUN -> [SKIP][240] ([i915#9906])
   [240]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-5/igt@kms_vrr@flip-basic-fastset.html

  * igt@kms_writeback@writeback-check-output:
    - shard-rkl:          NOTRUN -> [SKIP][241] ([i915#2437])
   [241]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-6/igt@kms_writeback@writeback-check-output.html

  * igt@kms_writeback@writeback-fb-id-xrgb2101010:
    - shard-dg2:          NOTRUN -> [SKIP][242] ([i915#2437] / [i915#9412])
   [242]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-5/igt@kms_writeback@writeback-fb-id-xrgb2101010.html

  * igt@perf@mi-rpc:
    - shard-dg2:          NOTRUN -> [SKIP][243] ([i915#2434])
   [243]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-5/igt@perf@mi-rpc.html

  * igt@perf_pmu@most-busy-check-all@rcs0:
    - shard-rkl:          [PASS][244] -> [FAIL][245] ([i915#4349])
   [244]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-7/igt@perf_pmu@most-busy-check-all@rcs0.html
   [245]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-5/igt@perf_pmu@most-busy-check-all@rcs0.html

  * igt@perf_pmu@rc6-all-gts:
    - shard-dg2:          NOTRUN -> [SKIP][246] ([i915#8516])
   [246]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-10/igt@perf_pmu@rc6-all-gts.html

  * igt@prime_udl:
    - shard-dg2:          NOTRUN -> [SKIP][247] ([fdo#109291])
   [247]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-5/igt@prime_udl.html

  * igt@prime_vgem@basic-fence-flip:
    - shard-dg2:          NOTRUN -> [SKIP][248] ([i915#3708])
   [248]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-6/igt@prime_vgem@basic-fence-flip.html

  * igt@prime_vgem@basic-fence-read:
    - shard-rkl:          NOTRUN -> [SKIP][249] ([fdo#109295] / [i915#3291] / [i915#3708])
   [249]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-3/igt@prime_vgem@basic-fence-read.html

  * igt@prime_vgem@basic-read:
    - shard-dg2:          NOTRUN -> [SKIP][250] ([i915#3291] / [i915#3708])
   [250]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-5/igt@prime_vgem@basic-read.html

  * igt@prime_vgem@coherency-gtt:
    - shard-tglu:         NOTRUN -> [SKIP][251] ([fdo#109295] / [fdo#111656])
   [251]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@prime_vgem@coherency-gtt.html

  * igt@sriov_basic@enable-vfs-autoprobe-off:
    - shard-dg2:          NOTRUN -> [SKIP][252] ([i915#9917])
   [252]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-10/igt@sriov_basic@enable-vfs-autoprobe-off.html

  * igt@sriov_basic@enable-vfs-bind-unbind-each:
    - shard-tglu:         NOTRUN -> [SKIP][253] ([i915#9917])
   [253]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-5/igt@sriov_basic@enable-vfs-bind-unbind-each.html

  * igt@syncobj_wait@invalid-wait-zero-handles:
    - shard-tglu:         NOTRUN -> [FAIL][254] ([i915#9779])
   [254]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@syncobj_wait@invalid-wait-zero-handles.html

  * igt@tools_test@sysfs_l3_parity:
    - shard-dg2:          NOTRUN -> [SKIP][255] ([i915#4818])
   [255]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-5/igt@tools_test@sysfs_l3_parity.html

  * igt@v3d/v3d_job_submission@threaded-job-submission:
    - shard-dg1:          NOTRUN -> [SKIP][256] ([i915#2575]) +1 other test skip
   [256]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg1-19/igt@v3d/v3d_job_submission@threaded-job-submission.html

  * igt@v3d/v3d_submit_cl@bad-bo:
    - shard-dg2:          NOTRUN -> [SKIP][257] ([i915#2575]) +11 other tests skip
   [257]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-10/igt@v3d/v3d_submit_cl@bad-bo.html

  * igt@v3d/v3d_submit_cl@bad-extension:
    - shard-tglu:         NOTRUN -> [SKIP][258] ([fdo#109315] / [i915#2575]) +5 other tests skip
   [258]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@v3d/v3d_submit_cl@bad-extension.html

  * igt@v3d/v3d_submit_cl@single-in-sync:
    - shard-rkl:          NOTRUN -> [SKIP][259] ([fdo#109315]) +7 other tests skip
   [259]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-3/igt@v3d/v3d_submit_cl@single-in-sync.html

  * igt@vc4/vc4_perfmon@create-perfmon-invalid-events:
    - shard-dg2:          NOTRUN -> [SKIP][260] ([i915#7711]) +8 other tests skip
   [260]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-6/igt@vc4/vc4_perfmon@create-perfmon-invalid-events.html

  * igt@vc4/vc4_tiling@get-bad-flags:
    - shard-rkl:          NOTRUN -> [SKIP][261] ([i915#7711]) +2 other tests skip
   [261]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-2/igt@vc4/vc4_tiling@get-bad-flags.html

  * igt@vc4/vc4_wait_seqno@bad-seqno-0ns:
    - shard-tglu:         NOTRUN -> [SKIP][262] ([i915#2575]) +2 other tests skip
   [262]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@vc4/vc4_wait_seqno@bad-seqno-0ns.html

  
#### Possible fixes ####

  * igt@drm_fdinfo@most-busy-check-all@rcs0:
    - shard-rkl:          [FAIL][263] ([i915#7742]) -> [PASS][264]
   [263]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-4/igt@drm_fdinfo@most-busy-check-all@rcs0.html
   [264]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-5/igt@drm_fdinfo@most-busy-check-all@rcs0.html

  * igt@gem_ctx_isolation@preservation-s3@vecs0:
    - shard-tglu:         [ABORT][265] -> [PASS][266]
   [265]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-tglu-9/igt@gem_ctx_isolation@preservation-s3@vecs0.html
   [266]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-6/igt@gem_ctx_isolation@preservation-s3@vecs0.html

  * igt@gem_eio@reset-stress:
    - shard-dg1:          [FAIL][267] ([i915#5784]) -> [PASS][268]
   [267]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-dg1-13/igt@gem_eio@reset-stress.html
   [268]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg1-16/igt@gem_eio@reset-stress.html

  * igt@gem_eio@suspend:
    - shard-tglu:         [ABORT][269] ([i915#10030]) -> [PASS][270]
   [269]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-tglu-9/igt@gem_eio@suspend.html
   [270]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-5/igt@gem_eio@suspend.html

  * igt@gem_exec_fair@basic-none@vecs0:
    - shard-rkl:          [FAIL][271] ([i915#2842]) -> [PASS][272] +2 other tests pass
   [271]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-7/igt@gem_exec_fair@basic-none@vecs0.html
   [272]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-4/igt@gem_exec_fair@basic-none@vecs0.html

  * igt@gem_exec_fair@basic-pace-solo@rcs0:
    - shard-tglu:         [FAIL][273] ([i915#2842]) -> [PASS][274] +1 other test pass
   [273]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-tglu-5/igt@gem_exec_fair@basic-pace-solo@rcs0.html
   [274]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-3/igt@gem_exec_fair@basic-pace-solo@rcs0.html

  * igt@i915_selftest@live@hangcheck:
    - shard-dg1:          [ABORT][275] ([i915#9413]) -> [PASS][276]
   [275]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-dg1-16/igt@i915_selftest@live@hangcheck.html
   [276]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg1-19/igt@i915_selftest@live@hangcheck.html

  * igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-0:
    - shard-mtlp:         [FAIL][277] ([i915#5138]) -> [PASS][278]
   [277]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-mtlp-3/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-0.html
   [278]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-mtlp-7/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-0.html

  * igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-180-async-flip:
    - shard-tglu:         [FAIL][279] ([i915#3743]) -> [PASS][280] +1 other test pass
   [279]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-tglu-2/igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-180-async-flip.html
   [280]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-4/igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-180-async-flip.html

  * igt@kms_frontbuffer_tracking@fbc-2p-primscrn-cur-indfb-draw-pwrite:
    - shard-snb:          [SKIP][281] ([fdo#109271]) -> [PASS][282] +4 other tests pass
   [281]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-snb2/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-cur-indfb-draw-pwrite.html
   [282]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-snb7/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-cur-indfb-draw-pwrite.html

  * igt@kms_pm_rpm@dpms-non-lpsp:
    - shard-rkl:          [SKIP][283] ([i915#9519]) -> [PASS][284]
   [283]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-5/igt@kms_pm_rpm@dpms-non-lpsp.html
   [284]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-1/igt@kms_pm_rpm@dpms-non-lpsp.html

  * igt@kms_rotation_crc@primary-rotation-270:
    - shard-rkl:          [ABORT][285] ([i915#8875]) -> [PASS][286]
   [285]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-6/igt@kms_rotation_crc@primary-rotation-270.html
   [286]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-3/igt@kms_rotation_crc@primary-rotation-270.html

  * igt@kms_selftest@drm_format_helper@drm_format_helper_test-drm_test_fb_clip_offset:
    - shard-glk:          [DMESG-WARN][287] ([i915#10143]) -> [PASS][288]
   [287]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-glk9/igt@kms_selftest@drm_format_helper@drm_format_helper_test-drm_test_fb_clip_offset.html
   [288]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-glk3/igt@kms_selftest@drm_format_helper@drm_format_helper_test-drm_test_fb_clip_offset.html

  * igt@perf_pmu@busy-double-start@ccs0:
    - shard-mtlp:         [FAIL][289] ([i915#4349]) -> [PASS][290]
   [289]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-mtlp-4/igt@perf_pmu@busy-double-start@ccs0.html
   [290]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-mtlp-2/igt@perf_pmu@busy-double-start@ccs0.html

  
#### Warnings ####

  * igt@i915_module_load@reload-with-fault-injection:
    - shard-mtlp:         [ABORT][291] ([i915#10131] / [i915#9820]) -> [ABORT][292] ([i915#10131] / [i915#9697])
   [291]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-mtlp-2/igt@i915_module_load@reload-with-fault-injection.html
   [292]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-mtlp-1/igt@i915_module_load@reload-with-fault-injection.html

  * igt@kms_content_protection@mei-interface:
    - shard-dg1:          [SKIP][293] ([i915#9424]) -> [SKIP][294] ([i915#9433])
   [293]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-dg1-17/igt@kms_content_protection@mei-interface.html
   [294]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg1-13/igt@kms_content_protection@mei-interface.html

  * igt@kms_content_protection@uevent:
    - shard-snb:          [SKIP][295] ([fdo#109271]) -> [INCOMPLETE][296] ([i915#8816])
   [295]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-snb2/igt@kms_content_protection@uevent.html
   [296]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-snb7/igt@kms_content_protection@uevent.html

  * igt@kms_fbcon_fbt@psr:
    - shard-rkl:          [SKIP][297] ([fdo#110189] / [i915#3955]) -> [SKIP][298] ([i915#3955])
   [297]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-2/igt@kms_fbcon_fbt@psr.html
   [298]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-7/igt@kms_fbcon_fbt@psr.html

  * igt@kms_pm_dc@dc9-dpms:
    - shard-rkl:          [SKIP][299] ([i915#4281]) -> [SKIP][300] ([i915#3361])
   [299]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-rkl-5/igt@kms_pm_dc@dc9-dpms.html
   [300]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-rkl-1/igt@kms_pm_dc@dc9-dpms.html

  * igt@kms_selftest@drm_format_helper@drm_format_helper_test-drm_test_fb_build_fourcc_list:
    - shard-tglu:         [FAIL][301] ([i915#10136]) -> [DMESG-FAIL][302] ([i915#10143])
   [301]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-tglu-8/igt@kms_selftest@drm_format_helper@drm_format_helper_test-drm_test_fb_build_fourcc_list.html
   [302]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-tglu-2/igt@kms_selftest@drm_format_helper@drm_format_helper_test-drm_test_fb_build_fourcc_list.html
    - shard-glk:          [DMESG-FAIL][303] ([i915#10143]) -> [FAIL][304] ([i915#10136])
   [303]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-glk9/igt@kms_selftest@drm_format_helper@drm_format_helper_test-drm_test_fb_build_fourcc_list.html
   [304]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-glk3/igt@kms_selftest@drm_format_helper@drm_format_helper_test-drm_test_fb_build_fourcc_list.html

  * igt@prime_mmap@test_aperture_limit@test_aperture_limit-smem:
    - shard-dg2:          [INCOMPLETE][305] ([i915#5493]) -> [CRASH][306] ([i915#9351])
   [305]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14166/shard-dg2-5/igt@prime_mmap@test_aperture_limit@test_aperture_limit-smem.html
   [306]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/shard-dg2-5/igt@prime_mmap@test_aperture_limit@test_aperture_limit-smem.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109274]: https://bugs.freedesktop.org/show_bug.cgi?id=109274
  [fdo#109280]: https://bugs.freedesktop.org/show_bug.cgi?id=109280
  [fdo#109283]: https://bugs.freedesktop.org/show_bug.cgi?id=109283
  [fdo#109289]: https://bugs.freedesktop.org/show_bug.cgi?id=109289
  [fdo#109291]: https://bugs.freedesktop.org/show_bug.cgi?id=109291
  [fdo#109293]: https://bugs.freedesktop.org/show_bug.cgi?id=109293
  [fdo#109295]: https://bugs.freedesktop.org/show_bug.cgi?id=109295
  [fdo#109312]: https://bugs.freedesktop.org/show_bug.cgi?id=109312
  [fdo#109314]: https://bugs.freedesktop.org/show_bug.cgi?id=109314
  [fdo#109315]: https://bugs.freedesktop.org/show_bug.cgi?id=109315
  [fdo#109506]: https://bugs.freedesktop.org/show_bug.cgi?id=109506
  [fdo#110189]: https://bugs.freedesktop.org/show_bug.cgi?id=110189
  [fdo#110723]: https://bugs.freedesktop.org/show_bug.cgi?id=110723
  [fdo#111068]: https://bugs.freedesktop.org/show_bug.cgi?id=111068
  [fdo#111614]: https://bugs.freedesktop.org/show_bug.cgi?id=111614
  [fdo#111615]: https://bugs.freedesktop.org/show_bug.cgi?id=111615
  [fdo#111656]: https://bugs.freedesktop.org/show_bug.cgi?id=111656
  [fdo#111767]: https://bugs.freedesktop.org/show_bug.cgi?id=111767
  [fdo#111825]: https://bugs.freedesktop.org/show_bug.cgi?id=111825
  [fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
  [fdo#112283]: https://bugs.freedesktop.org/show_bug.cgi?id=112283
  [i915#10030]: https://gitlab.freedesktop.org/drm/intel/issues/10030
  [i915#10070]: https://gitlab.freedesktop.org/drm/intel/issues/10070
  [i915#10131]: https://gitlab.freedesktop.org/drm/intel/issues/10131
  [i915#10136]: https://gitlab.freedesktop.org/drm/intel/issues/10136
  [i915#10137]: https://gitlab.freedesktop.org/drm/intel/issues/10137
  [i915#10140]: https://gitlab.freedesktop.org/drm/intel/issues/10140
  [i915#10143]: https://gitlab.freedesktop.org/drm/intel/issues/10143
  [i915#1769]: https://gitlab.freedesktop.org/drm/intel/issues/1769
  [i915#1825]: https://gitlab.freedesktop.org/drm/intel/issues/1825
  [i915#1839]: https://gitlab.freedesktop.org/drm/intel/issues/1839
  [i915#2434]: https://gitlab.freedesktop.org/drm/intel/issues/2434
  [i915#2437]: https://gitlab.freedesktop.org/drm/intel/issues/2437
  [i915#2527]: https://gitlab.freedesktop.org/drm/intel/issues/2527
  [i915#2575]: https://gitlab.freedesktop.org/drm/intel/issues/2575
  [i915#2587]: https://gitlab.freedesktop.org/drm/intel/issues/2587
  [i915#2658]: https://gitlab.freedesktop.org/drm/intel/issues/2658
  [i915#2672]: https://gitlab.freedesktop.org/drm/intel/issues/2672
  [i915#2705]: https://gitlab.freedesktop.org/drm/intel/issues/2705
  [i915#280]: https://gitlab.freedesktop.org/drm/intel/issues/280
  [i915#2842]: https://gitlab.freedesktop.org/drm/intel/issues/2842
  [i915#2856]: https://gitlab.freedesktop.org/drm/intel/issues/2856
  [i915#2876]: https://gitlab.freedesktop.org/drm/intel/issues/2876
  [i915#3023]: https://gitlab.freedesktop.org/drm/intel/issues/3023
  [i915#3281]: https://gitlab.freedesktop.org/drm/intel/issues/3281
  [i915#3282]: https://gitlab.freedesktop.org/drm/intel/issues/3282
  [i915#3291]: https://gitlab.freedesktop.org/drm/intel/issues/3291
  [i915#3297]: https://gitlab.freedesktop.org/drm/intel/issues/3297
  [i915#3299]: https://gitlab.freedesktop.org/drm/intel/issues/3299
  [i915#3359]: https://gitlab.freedesktop.org/drm/intel/issues/3359
  [i915#3361]: https://gitlab.freedesktop.org/drm/intel/issues/3361
  [i915#3458]: https://gitlab.freedesktop.org/drm/intel/issues/3458
  [i915#3539]: https://gitlab.freedesktop.org/drm/intel/issues/3539
  [i915#3555]: https://gitlab.freedesktop.org/drm/intel/issues/3555
  [i915#3591]: https://gitlab.freedesktop.org/drm/intel/issues/3591
  [i915#3637]: https://gitlab.freedesktop.org/drm/intel/issues/3637
  [i915#3638]: https://gitlab.freedesktop.org/drm/intel/issues/3638
  [i915#3708]: https://gitlab.freedesktop.org/drm/intel/issues/3708
  [i915#3743]: https://gitlab.freedesktop.org/drm/intel/issues/3743
  [i915#3778]: https://gitlab.freedesktop.org/drm/intel/issues/3778
  [i915#3840]: https://gitlab.freedesktop.org/drm/intel/issues/3840
  [i915#3955]: https://gitlab.freedesktop.org/drm/intel/issues/3955
  [i915#4077]: https://gitlab.freedesktop.org/drm/intel/issues/4077
  [i915#4079]: https://gitlab.freedesktop.org/drm/intel/issues/4079
  [i915#4083]: https://gitlab.freedesktop.org/drm/intel/issues/4083
  [i915#4103]: https://gitlab.freedesktop.org/drm/intel/issues/4103
  [i915#4212]: https://gitlab.freedesktop.org/drm/intel/issues/4212
  [i915#4213]: https://gitlab.freedesktop.org/drm/intel/issues/4213
  [i915#4235]: https://gitlab.freedesktop.org/drm/intel/issues/4235
  [i915#4270]: https://gitlab.freedesktop.org/drm/intel/issues/4270
  [i915#4281]: https://gitlab.freedesktop.org/drm/intel/issues/4281
  [i915#4349]: https://gitlab.freedesktop.org/drm/intel/issues/4349
  [i915#4538]: https://gitlab.freedesktop.org/drm/intel/issues/4538
  [i915#4565]: https://gitlab.freedesktop.org/drm/intel/issues/4565
  [i915#4613]: https://gitlab.freedesktop.org/drm/intel/issues/4613
  [i915#4812]: https://gitlab.freedesktop.org/drm/intel/issues/4812
  [i915#4816]: https://gitlab.freedesktop.org/drm/intel/issues/4816
  [i915#4818]: https://gitlab.freedesktop.org/drm/intel/issues/4818
  [i915#4852]: https://gitlab.freedesktop.org/drm/intel/issues/4852
  [i915#4860]: https://gitlab.freedesktop.org/drm/intel/issues/4860
  [i915#4880]: https://gitlab.freedesktop.org/drm/intel/issues/4880
  [i915#4881]: https://gitlab.freedesktop.org/drm/intel/issues/4881
  [i915#4958]: https://gitlab.freedesktop.org/drm/intel/issues/4958
  [i915#5107]: https://gitlab.freedesktop.org/drm/intel/issues/5107
  [i915#5138]: https://gitlab.freedesktop.org/drm/intel/issues/5138
  [i915#5190]: https://gitlab.freedesktop.org/drm/intel/issues/5190
  [i915#5235]: https://gitlab.freedesktop.org/drm/intel/issues/5235
  [i915#5274]: https://gitlab.freedesktop.org/drm/intel/issues/5274
  [i915#5286]: https://gitlab.freedesktop.org/drm/intel/issues/5286
  [i915#5289]: https://gitlab.freedesktop.org/drm/intel/issues/5289
  [i915#5354]: https://gitlab.freedesktop.org/drm/intel/issues/5354
  [i915#5439]: https://gitlab.freedesktop.org/drm/intel/issues/5439
  [i915#5465]: https://gitlab.freedesktop.org/drm/intel/issues/5465
  [i915#5493]: https://gitlab.freedesktop.org/drm/intel/issues/5493
  [i915#5784]: https://gitlab.freedesktop.org/drm/intel/issues/5784
  [i915#5978]: https://gitlab.freedesktop.org/drm/intel/issues/5978
  [i915#6095]: https://gitlab.freedesktop.org/drm/intel/issues/6095
  [i915#6118]: https://gitlab.freedesktop.org/drm/intel/issues/6118
  [i915#6227]: https://gitlab.freedesktop.org/drm/intel/issues/6227
  [i915#6268]: https://gitlab.freedesktop.org/drm/intel/issues/6268
  [i915#6301]: https://gitlab.freedesktop.org/drm/intel/issues/6301
  [i915#6344]: https://gitlab.freedesktop.org/drm/intel/issues/6344
  [i915#6524]: https://gitlab.freedesktop.org/drm/intel/issues/6524
  [i915#658]: https://gitlab.freedesktop.org/drm/intel/issues/658
  [i915#6880]: https://gitlab.freedesktop.org/drm/intel/issues/6880
  [i915#6944]: https://gitlab.freedesktop.org/drm/intel/issues/6944
  [i915#7116]: https://gitlab.freedesktop.org/drm/intel/issues/7116
  [i915#7118]: https://gitlab.freedesktop.org/drm/intel/issues/7118
  [i915#7356]: https://gitlab.freedesktop.org/drm/intel/issues/7356
  [i915#7697]: https://gitlab.freedesktop.org/drm/intel/issues/7697
  [i915#7701]: https://gitlab.freedesktop.org/drm/intel/issues/7701
  [i915#7711]: https://gitlab.freedesktop.org/drm/intel/issues/7711
  [i915#7742]: https://gitlab.freedesktop.org/drm/intel/issues/7742
  [i915#7828]: https://gitlab.freedesktop.org/drm/intel/issues/7828
  [i915#7975]: https://gitlab.freedesktop.org/drm/intel/issues/7975
  [i915#8213]: https://gitlab.freedesktop.org/drm/intel/issues/8213
  [i915#8228]: https://gitlab.freedesktop.org/drm/intel/issues/8228
  [i915#8293]: https://gitlab.freedesktop.org/drm/intel/issues/8293
  [i915#8411]: https://gitlab.freedesktop.org/drm/intel/issues/8411
  [i915#8414]: https://gitlab.freedesktop.org/drm/intel/issues/8414
  [i915#8516]: https://gitlab.freedesktop.org/drm/intel/issues/8516
  [i915#8588]: https://gitlab.freedesktop.org/drm/intel/issues/8588
  [i915#8708]: https://gitlab.freedesktop.org/drm/intel/issues/8708
  [i915#8709]: https://gitlab.freedesktop.org/drm/intel/issues/8709
  [i915#8816]: https://gitlab.freedesktop.org/drm/intel/issues/8816
  [i915#8875]: https://gitlab.freedesktop.org/drm/intel/issues/8875
  [i915#9067]: https://gitlab.freedesktop.org/drm/intel/issues/9067
  [i915#9196]: https://gitlab.freedesktop.org/drm/intel/issues/9196
  [i915#9227]: https://gitlab.freedesktop.org/drm/intel/issues/9227
  [i915#9311]: https://gitlab.freedesktop.org/drm/intel/issues/9311
  [i915#9323]: https://gitlab.freedesktop.org/drm/intel/issues/9323
  [i915#9337]: https://gitlab.freedesktop.org/drm/intel/issues/9337
  [i915#9351]: https://gitlab.freedesktop.org/drm/intel/issues/9351
  [i915#9412]: https://gitlab.freedesktop.org/drm/intel/issues/9412
  [i915#9413]: https://gitlab.freedesktop.org/drm/intel/issues/9413
  [i915#9423]: https://gitlab.freedesktop.org/drm/intel/issues/9423
  [i915#9424]: https://gitlab.freedesktop.org/drm/intel/issues/9424
  [i915#9433]: https://gitlab.freedesktop.org/drm/intel/issues/9433
  [i915#9519]: https://gitlab.freedesktop.org/drm/intel/issues/9519
  [i915#9683]: https://gitlab.freedesktop.org/drm/intel/issues/9683
  [i915#9685]: https://gitlab.freedesktop.org/drm/intel/issues/9685
  [i915#9697]: https://gitlab.freedesktop.org/drm/intel/issues/9697
  [i915#9723]: https://gitlab.freedesktop.org/drm/intel/issues/9723
  [i915#9732]: https://gitlab.freedesktop.org/drm/intel/issues/9732
  [i915#9766]: https://gitlab.freedesktop.org/drm/intel/issues/9766
  [i915#9779]: https://gitlab.freedesktop.org/drm/intel/issues/9779
  [i915#9820]: https://gitlab.freedesktop.org/drm/intel/issues/9820
  [i915#9833]: https://gitlab.freedesktop.org/drm/intel/issues/9833
  [i915#9906]: https://gitlab.freedesktop.org/drm/intel/issues/9906
  [i915#9917]: https://gitlab.freedesktop.org/drm/intel/issues/9917


Build changes
-------------

  * Linux: CI_DRM_14166 -> Patchwork_129082v1

  CI-20190529: 20190529
  CI_DRM_14166: fc6b7c6ee7d786e6ed48425a2ce0e674906e4e5c @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_7690: aa45298ff675abbe6bf8f04ae186e2388c35f03a @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  Patchwork_129082v1: fc6b7c6ee7d786e6ed48425a2ce0e674906e4e5c @ git://anongit.freedesktop.org/gfx-ci/linux

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_129082v1/index.html

[-- Attachment #2: Type: text/html, Size: 96741 bytes --]

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 01/19] drm/dp: Add drm_dp_max_dprx_data_rate()
  2024-01-23 10:28 ` [PATCH 01/19] drm/dp: Add drm_dp_max_dprx_data_rate() Imre Deak
@ 2024-01-26 11:36   ` Ville Syrjälä
  2024-01-26 13:28     ` Imre Deak
  0 siblings, 1 reply; 61+ messages in thread
From: Ville Syrjälä @ 2024-01-26 11:36 UTC (permalink / raw)
  To: Imre Deak; +Cc: intel-gfx, dri-devel

On Tue, Jan 23, 2024 at 12:28:32PM +0200, Imre Deak wrote:
> Copy intel_dp_max_data_rate() to DRM core. It will be needed by a
> follow-up DP tunnel patch to check the maximum rate the DPRX (sink)
> supports. Accordingly, use the drm_dp_max_dprx_data_rate() name for
> clarity. Later patches in this set will also switch i915 to calling
> the new DRM function instead of intel_dp_max_data_rate().
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/display/drm_dp_helper.c | 58 +++++++++++++++++++++++++
>  include/drm/display/drm_dp_helper.h     |  2 +
>  2 files changed, 60 insertions(+)
> 
> diff --git a/drivers/gpu/drm/display/drm_dp_helper.c b/drivers/gpu/drm/display/drm_dp_helper.c
> index b1ca3a1100dab..24911243d4d3a 100644
> --- a/drivers/gpu/drm/display/drm_dp_helper.c
> +++ b/drivers/gpu/drm/display/drm_dp_helper.c
> @@ -4058,3 +4058,61 @@ int drm_dp_bw_channel_coding_efficiency(bool is_uhbr)
>  		return 800000;
>  }
>  EXPORT_SYMBOL(drm_dp_bw_channel_coding_efficiency);
> +
> +/*
> + * Given a link rate and lanes, get the data bandwidth.
> + *
> + * Data bandwidth is the actual payload rate, which depends on the data
> + * bandwidth efficiency and the link rate.
> + *
> + * For 8b/10b channel encoding, SST and non-FEC, the data bandwidth efficiency
> + * is 80%. For example, for a 1.62 Gbps link, 1.62*10^9 bps * 0.80 * (1/8) =
> + * 162000 kBps. With 8-bit symbols, we have 162000 kHz symbol clock. Just by
> + * coincidence, the port clock in kHz matches the data bandwidth in kBps, and
> + * they equal the link bit rate in Gbps multiplied by 100000. (Note that this no
> + * longer holds for data bandwidth as soon as FEC or MST is taken into account!)
> + *
> + * For 128b/132b channel encoding, the data bandwidth efficiency is 96.71%. For
> + * example, for a 10 Gbps link, 10*10^9 bps * 0.9671 * (1/8) = 1208875
> + * kBps. With 32-bit symbols, we have 312500 kHz symbol clock. The value 1000000
> + * does not match the symbol clock, the port clock (not even if you think in
> + * terms of a byte clock), nor the data bandwidth. It only matches the link bit
> + * rate in units of 10000 bps.
> + *
> + * Note that protocol layers above the DPRX link level considered here can
> + * further limit the maximum data rate. Such layers are the MST topology (with
> + * limits on the link between the source and first branch device as well as on
> + * the whole MST path until the DPRX link) and (Thunderbolt) DP tunnels -
> + * which in turn can encapsulate an MST link with its own limit - with each
> + * SST or MST encapsulated tunnel sharing the BW of a tunnel group.
> + *
> + * TODO: Add support for querying the max data rate with the above limits as
> + * well.
> + *
> + * Returns the maximum data rate in kBps units.
> + */
> +int drm_dp_max_dprx_data_rate(int max_link_rate, int max_lanes)
> +{
> +	int ch_coding_efficiency =
> +		drm_dp_bw_channel_coding_efficiency(drm_dp_is_uhbr_rate(max_link_rate));
> +	int max_link_rate_kbps = max_link_rate * 10;

That x10 value seems rather pointless.
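
Just to illustrate what I mean (an untested sketch, not part of the
patch), the x10 could presumably be folded into the final divisor:

	/* untested: same value, without the intermediate kbps step */
	return DIV_ROUND_DOWN_ULL(mul_u32_u32(max_link_rate * max_lanes,
					      ch_coding_efficiency),
				  100000 * 8);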

> +
> +	/*
> +	 * UHBR rates always use 128b/132b channel encoding, and have
> +	 * 96.71% data bandwidth efficiency. Consider max_link_rate the
> +	 * link bit rate in units of 10000 bps.
> +	 */
> +	/*
> +	 * Lower than UHBR rates always use 8b/10b channel encoding, and have
> +	 * 80% data bandwidth efficiency for SST non-FEC. However, this turns
> +	 * out to be a nop by coincidence:
> +	 *
> +	 *	int max_link_rate_kbps = max_link_rate * 10;
> +	 *	max_link_rate_kbps = DIV_ROUND_DOWN_ULL(max_link_rate_kbps * 8, 10);
> +	 *	max_link_rate = max_link_rate_kbps / 8;
> +	 */

Not sure why we are repeating the nuts and bolts details in the
comments so much? Doesn't drm_dp_bw_channel_coding_efficiency()
explain all this already?

> +	return DIV_ROUND_DOWN_ULL(mul_u32_u32(max_link_rate_kbps * max_lanes,
> +					      ch_coding_efficiency),
> +				  1000000 * 8);
> +}
> +EXPORT_SYMBOL(drm_dp_max_dprx_data_rate);
> diff --git a/include/drm/display/drm_dp_helper.h b/include/drm/display/drm_dp_helper.h
> index 863b2e7add29e..454ae7517419a 100644
> --- a/include/drm/display/drm_dp_helper.h
> +++ b/include/drm/display/drm_dp_helper.h
> @@ -813,4 +813,6 @@ int drm_dp_bw_overhead(int lane_count, int hactive,
>  		       int bpp_x16, unsigned long flags);
>  int drm_dp_bw_channel_coding_efficiency(bool is_uhbr);
>  
> +int drm_dp_max_dprx_data_rate(int max_link_rate, int max_lanes);
> +
>  #endif /* _DRM_DP_HELPER_H_ */
> -- 
> 2.39.2

-- 
Ville Syrjälä
Intel

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 01/19] drm/dp: Add drm_dp_max_dprx_data_rate()
  2024-01-26 11:36   ` Ville Syrjälä
@ 2024-01-26 13:28     ` Imre Deak
  2024-02-06 20:23       ` Shankar, Uma
  0 siblings, 1 reply; 61+ messages in thread
From: Imre Deak @ 2024-01-26 13:28 UTC (permalink / raw)
  To: Ville Syrjälä; +Cc: intel-gfx, dri-devel

On Fri, Jan 26, 2024 at 01:36:02PM +0200, Ville Syrjälä wrote:
> On Tue, Jan 23, 2024 at 12:28:32PM +0200, Imre Deak wrote:
> > Copy intel_dp_max_data_rate() to DRM core. It will be needed by a
> > follow-up DP tunnel patch to check the maximum rate the DPRX (sink)
> > supports. Accordingly, use the drm_dp_max_dprx_data_rate() name for
> > clarity. Later patches in this set will also switch i915 to calling
> > the new DRM function instead of intel_dp_max_data_rate().
> > 
> > Signed-off-by: Imre Deak <imre.deak@intel.com>
> > ---
> >  drivers/gpu/drm/display/drm_dp_helper.c | 58 +++++++++++++++++++++++++
> >  include/drm/display/drm_dp_helper.h     |  2 +
> >  2 files changed, 60 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/display/drm_dp_helper.c b/drivers/gpu/drm/display/drm_dp_helper.c
> > index b1ca3a1100dab..24911243d4d3a 100644
> > --- a/drivers/gpu/drm/display/drm_dp_helper.c
> > +++ b/drivers/gpu/drm/display/drm_dp_helper.c
> > @@ -4058,3 +4058,61 @@ int drm_dp_bw_channel_coding_efficiency(bool is_uhbr)
> >  		return 800000;
> >  }
> >  EXPORT_SYMBOL(drm_dp_bw_channel_coding_efficiency);
> > +
> > +/*
> > + * Given a link rate and lanes, get the data bandwidth.
> > + *
> > + * Data bandwidth is the actual payload rate, which depends on the data
> > + * bandwidth efficiency and the link rate.
> > + *
> > + * For 8b/10b channel encoding, SST and non-FEC, the data bandwidth efficiency
> > + * is 80%. For example, for a 1.62 Gbps link, 1.62*10^9 bps * 0.80 * (1/8) =
> > + * 162000 kBps. With 8-bit symbols, we have 162000 kHz symbol clock. Just by
> > + * coincidence, the port clock in kHz matches the data bandwidth in kBps, and
> > + * they equal the link bit rate in Gbps multiplied by 100000. (Note that this no
> > + * longer holds for data bandwidth as soon as FEC or MST is taken into account!)
> > + *
> > + * For 128b/132b channel encoding, the data bandwidth efficiency is 96.71%. For
> > + * example, for a 10 Gbps link, 10*10^9 bps * 0.9671 * (1/8) = 1208875
> > + * kBps. With 32-bit symbols, we have 312500 kHz symbol clock. The value 1000000
> > + * does not match the symbol clock, the port clock (not even if you think in
> > + * terms of a byte clock), nor the data bandwidth. It only matches the link bit
> > + * rate in units of 10000 bps.
> > + *
> > + * Note that protocol layers above the DPRX link level considered here can
> > + * further limit the maximum data rate. Such layers are the MST topology (with
> > + * limits on the link between the source and first branch device as well as on
> > + * the whole MST path until the DPRX link) and (Thunderbolt) DP tunnels -
> > + * which in turn can encapsulate an MST link with its own limit - with each
> > + * SST or MST encapsulated tunnel sharing the BW of a tunnel group.
> > + *
> > + * TODO: Add support for querying the max data rate with the above limits as
> > + * well.
> > + *
> > + * Returns the maximum data rate in kBps units.
> > + */
> > +int drm_dp_max_dprx_data_rate(int max_link_rate, int max_lanes)
> > +{
> > +	int ch_coding_efficiency =
> > +		drm_dp_bw_channel_coding_efficiency(drm_dp_is_uhbr_rate(max_link_rate));
> > +	int max_link_rate_kbps = max_link_rate * 10;
> 
> That x10 value seems rather pointless.

I suppose the point was to make the units clearer, but that could be
clarified instead in max_link_rate's documentation, which is missing
atm.
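
FWIW, the arithmetic is simple enough to sanity check outside the
kernel. Below is a stand-alone user-space sketch of what
drm_dp_max_dprx_data_rate() computes, using plain C instead of
mul_u32_u32()/DIV_ROUND_DOWN_ULL(); the efficiency values are the ppm
numbers drm_dp_bw_channel_coding_efficiency() returns (800000 for
8b/10b, 967100 for 128b/132b), and the >= 1000000 UHBR check is assumed
to match drm_dp_is_uhbr_rate():

  #include <stdint.h>
  #include <stdio.h>

  static int max_dprx_data_rate(int max_link_rate, int max_lanes)
  {
          /* max_link_rate is in 10 kbps units, e.g. 810000 for HBR3 */
          uint64_t eff = max_link_rate >= 1000000 ? 967100 : 800000;
          uint64_t kbps = (uint64_t)max_link_rate * 10;

          /* kbps * lanes * efficiency [ppm] / (10^6 * 8 bits/byte) -> kBps */
          return kbps * max_lanes * eff / (1000000 * 8);
  }

  int main(void)
  {
          /* HBR3 x4:   8100000 kbps * 4 * 0.80   / 8 = 3240000 kBps */
          printf("%d\n", max_dprx_data_rate(810000, 4));
          /* UHBR10 x4: 10000000 kbps * 4 * 0.9671 / 8 = 4835500 kBps */
          printf("%d\n", max_dprx_data_rate(1000000, 4));

          return 0;
  }

Running it prints 3240000 and 4835500, consistent with the per-lane
examples in the comment above.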

> > +
> > +	/*
> > +	 * UHBR rates always use 128b/132b channel encoding, and have
> > +	 * 96.71% data bandwidth efficiency. Consider max_link_rate the
> > +	 * link bit rate in units of 10000 bps.
> > +	 */
> > +	/*
> > +	 * Lower than UHBR rates always use 8b/10b channel encoding, and have
> > +	 * 80% data bandwidth efficiency for SST non-FEC. However, this turns
> > +	 * out to be a nop by coincidence:
> > +	 *
> > +	 *	int max_link_rate_kbps = max_link_rate * 10;
> > +	 *	max_link_rate_kbps = DIV_ROUND_DOWN_ULL(max_link_rate_kbps * 8, 10);
> > +	 *	max_link_rate = max_link_rate_kbps / 8;
> > +	 */
> 
> Not sure why we are repeating the nuts and bolts details in the
> comments so much? Doesn't drm_dp_bw_channel_coding_efficiency()
> explain all this already?

I simply copied the function, but yes, in this context there is
duplication; thanks for reading through all that. Will consolidate both
the above and the bigger comment before the function with the existing
docs here.

> 
> > +	return DIV_ROUND_DOWN_ULL(mul_u32_u32(max_link_rate_kbps * max_lanes,
> > +					      ch_coding_efficiency),
> > +				  1000000 * 8);
> > +}
> > +EXPORT_SYMBOL(drm_dp_max_dprx_data_rate);
> > diff --git a/include/drm/display/drm_dp_helper.h b/include/drm/display/drm_dp_helper.h
> > index 863b2e7add29e..454ae7517419a 100644
> > --- a/include/drm/display/drm_dp_helper.h
> > +++ b/include/drm/display/drm_dp_helper.h
> > @@ -813,4 +813,6 @@ int drm_dp_bw_overhead(int lane_count, int hactive,
> >  		       int bpp_x16, unsigned long flags);
> >  int drm_dp_bw_channel_coding_efficiency(bool is_uhbr);
> >  
> > +int drm_dp_max_dprx_data_rate(int max_link_rate, int max_lanes);
> > +
> >  #endif /* _DRM_DP_HELPER_H_ */
> > -- 
> > 2.39.2
> 
> -- 
> Ville Syrjälä
> Intel

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/19] drm/i915/dp: Add support to notify MST connectors to retry modesets
  2024-01-23 10:28 ` [PATCH 03/19] drm/i915/dp: Add support to notify MST connectors to retry modesets Imre Deak
@ 2024-01-29 10:36   ` Hogander, Jouni
  2024-01-29 11:00     ` Imre Deak
  0 siblings, 1 reply; 61+ messages in thread
From: Hogander, Jouni @ 2024-01-29 10:36 UTC (permalink / raw)
  To: intel-gfx, Deak, Imre; +Cc: dri-devel

On Tue, 2024-01-23 at 12:28 +0200, Imre Deak wrote:
> On shared (Thunderbolt) links with DP tunnels, the modeset may need to
> be retried on all connectors on the link due to a link BW limitation
> arising only after the atomic check phase. To support this, add a
> helper function queuing a work to retry the modeset on a given port's
> connector and at the same time on any MST connector with streams
> through the same port. A follow-up change enabling the DP tunnel
> Bandwidth Allocation Mode will take this into use.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_display.c  |  5 +-
>  drivers/gpu/drm/i915/display/intel_dp.c       | 55
> ++++++++++++++++++-
>  drivers/gpu/drm/i915/display/intel_dp.h       |  8 +++
>  .../drm/i915/display/intel_dp_link_training.c |  3 +-
>  drivers/gpu/drm/i915/display/intel_dp_mst.c   |  2 +
>  5 files changed, 67 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_display.c
> b/drivers/gpu/drm/i915/display/intel_display.c
> index a92e959c8ac7b..0caebbb3e2dbb 100644
> --- a/drivers/gpu/drm/i915/display/intel_display.c
> +++ b/drivers/gpu/drm/i915/display/intel_display.c
> @@ -8060,8 +8060,9 @@ void intel_hpd_poll_fini(struct
> drm_i915_private *i915)
>         /* Kill all the work that may have been queued by hpd. */
>         drm_connector_list_iter_begin(&i915->drm, &conn_iter);
>         for_each_intel_connector_iter(connector, &conn_iter) {
> -               if (connector->modeset_retry_work.func)
> -                       cancel_work_sync(&connector-
> >modeset_retry_work);
> +               if (connector->modeset_retry_work.func &&
> +                   cancel_work_sync(&connector->modeset_retry_work))
> +                       drm_connector_put(&connector->base);
>                 if (connector->hdcp.shim) {
>                         cancel_delayed_work_sync(&connector-
> >hdcp.check_work);
>                         cancel_work_sync(&connector->hdcp.prop_work);
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c
> b/drivers/gpu/drm/i915/display/intel_dp.c
> index ab415f41924d7..4e36c2c39888e 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2837,6 +2837,50 @@ intel_dp_audio_compute_config(struct
> intel_encoder *encoder,
>                                         intel_dp_is_uhbr(pipe_config)
> ;
>  }
>  
> +void intel_dp_queue_modeset_retry_work(struct intel_connector
> *connector)
> +{
> +       struct drm_i915_private *i915 = to_i915(connector->base.dev);
> +
> +       drm_connector_get(&connector->base);
> +       if (!queue_work(i915->unordered_wq, &connector-
> >modeset_retry_work))
> +               drm_connector_put(&connector->base);
> +}
> +
> +void
> +intel_dp_queue_modeset_retry_for_link(struct intel_atomic_state
> *state,
> +                                     struct intel_encoder *encoder,
> +                                     const struct intel_crtc_state
> *crtc_state,
> +                                     const struct
> drm_connector_state *conn_state)
> +{
> +       struct drm_i915_private *i915 = to_i915(crtc_state-
> >uapi.crtc->dev);
> +       struct intel_connector *connector;
> +       struct intel_digital_connector_state *iter_conn_state;
> +       struct intel_dp *intel_dp;
> +       int i;
> +
> +       if (conn_state) {
> +               connector = to_intel_connector(conn_state-
> >connector);
> +               intel_dp_queue_modeset_retry_work(connector);
> +
> +               return;
> +       }
> +
> +       if (drm_WARN_ON(&i915->drm,
> +                       !intel_crtc_has_type(crtc_state,
> INTEL_OUTPUT_DP_MST)))
> +               return;
> +
> +       intel_dp = enc_to_intel_dp(encoder);
> +
> +       for_each_new_intel_connector_in_state(state, connector,
> iter_conn_state, i) {
> +               (void)iter_conn_state;

Checked the iter_conn_state->base.crtc documentation:

@crtc: CRTC to connect connector to, NULL if disabled.

Do we need to check if the connector is "disabled", or is that an
impossible scenario?

BR,

Jouni Högander

 
> +
> +               if (connector->mst_port != intel_dp)
> +                       continue;
> +
> +               intel_dp_queue_modeset_retry_work(connector);
> +       }
> +}
> +
>  int
>  intel_dp_compute_config(struct intel_encoder *encoder,
>                         struct intel_crtc_state *pipe_config,
> @@ -6436,6 +6480,14 @@ static void
> intel_dp_modeset_retry_work_fn(struct work_struct *work)
>         mutex_unlock(&connector->dev->mode_config.mutex);
>         /* Send Hotplug uevent so userspace can reprobe */
>         drm_kms_helper_connector_hotplug_event(connector);
> +
> +       drm_connector_put(connector);
> +}
> +
> +void intel_dp_init_modeset_retry_work(struct intel_connector
> *connector)
> +{
> +       INIT_WORK(&connector->modeset_retry_work,
> +                 intel_dp_modeset_retry_work_fn);
>  }
>  
>  bool
> @@ -6452,8 +6504,7 @@ intel_dp_init_connector(struct
> intel_digital_port *dig_port,
>         int type;
>  
>         /* Initialize the work for modeset in case of link train
> failure */
> -       INIT_WORK(&intel_connector->modeset_retry_work,
> -                 intel_dp_modeset_retry_work_fn);
> +       intel_dp_init_modeset_retry_work(intel_connector);
>  
>         if (drm_WARN(dev, dig_port->max_lanes < 1,
>                      "Not enough lanes (%d) for DP on
> [ENCODER:%d:%s]\n",
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.h
> b/drivers/gpu/drm/i915/display/intel_dp.h
> index 530cc97bc42f4..105c2086310db 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.h
> +++ b/drivers/gpu/drm/i915/display/intel_dp.h
> @@ -23,6 +23,8 @@ struct intel_digital_port;
>  struct intel_dp;
>  struct intel_encoder;
>  
> +struct work_struct;
> +
>  struct link_config_limits {
>         int min_rate, max_rate;
>         int min_lane_count, max_lane_count;
> @@ -43,6 +45,12 @@ void intel_dp_adjust_compliance_config(struct
> intel_dp *intel_dp,
>  bool intel_dp_limited_color_range(const struct intel_crtc_state
> *crtc_state,
>                                   const struct drm_connector_state
> *conn_state);
>  int intel_dp_min_bpp(enum intel_output_format output_format);
> +void intel_dp_init_modeset_retry_work(struct intel_connector
> *connector);
> +void intel_dp_queue_modeset_retry_work(struct intel_connector
> *connector);
> +void intel_dp_queue_modeset_retry_for_link(struct intel_atomic_state
> *state,
> +                                          struct intel_encoder
> *encoder,
> +                                          const struct
> intel_crtc_state *crtc_state,
> +                                          const struct
> drm_connector_state *conn_state);
>  bool intel_dp_init_connector(struct intel_digital_port *dig_port,
>                              struct intel_connector
> *intel_connector);
>  void intel_dp_set_link_params(struct intel_dp *intel_dp,
> diff --git a/drivers/gpu/drm/i915/display/intel_dp_link_training.c
> b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
> index 1abfafbbfa757..7b140cbf8dd31 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp_link_training.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
> @@ -1075,7 +1075,6 @@ static void
> intel_dp_schedule_fallback_link_training(struct intel_dp *intel_dp,
>                                                      const struct
> intel_crtc_state *crtc_state)
>  {
>         struct intel_connector *intel_connector = intel_dp-
> >attached_connector;
> -       struct drm_i915_private *i915 = dp_to_i915(intel_dp);
>  
>         if (!intel_digital_port_connected(&dp_to_dig_port(intel_dp)-
> >base)) {
>                 lt_dbg(intel_dp, DP_PHY_DPRX, "Link Training failed
> on disconnected sink.\n");
> @@ -1093,7 +1092,7 @@ static void
> intel_dp_schedule_fallback_link_training(struct intel_dp *intel_dp,
>         }
>  
>         /* Schedule a Hotplug Uevent to userspace to start modeset */
> -       queue_work(i915->unordered_wq, &intel_connector-
> >modeset_retry_work);
> +       intel_dp_queue_modeset_retry_work(intel_connector);
>  }
>  
>  /* Perform the link training on all LTTPRs and the DPRX on a link.
> */
> diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c
> b/drivers/gpu/drm/i915/display/intel_dp_mst.c
> index 5fa25a5a36b55..b15e43ebf138b 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
> @@ -1542,6 +1542,8 @@ static struct drm_connector
> *intel_dp_add_mst_connector(struct drm_dp_mst_topolo
>         intel_connector->port = port;
>         drm_dp_mst_get_port_malloc(port);
>  
> +       intel_dp_init_modeset_retry_work(intel_connector);
> +
>         intel_connector->dp.dsc_decompression_aux =
> drm_dp_mst_dsc_aux_for_port(port);
>         intel_dp_mst_read_decompression_port_dsc_caps(intel_dp,
> intel_connector);
>         intel_connector->dp.dsc_hblank_expansion_quirk =


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/19] drm/i915/dp: Add support to notify MST connectors to retry modesets
  2024-01-29 10:36   ` Hogander, Jouni
@ 2024-01-29 11:00     ` Imre Deak
  0 siblings, 0 replies; 61+ messages in thread
From: Imre Deak @ 2024-01-29 11:00 UTC (permalink / raw)
  To: Hogander, Jouni; +Cc: intel-gfx, dri-devel

On Mon, Jan 29, 2024 at 12:36:12PM +0200, Hogander, Jouni wrote:
> On Tue, 2024-01-23 at 12:28 +0200, Imre Deak wrote:
> > [...]
> > +void
> > +intel_dp_queue_modeset_retry_for_link(struct intel_atomic_state *state,
> > +                                     struct intel_encoder *encoder,
> > +                                     const struct intel_crtc_state *crtc_state,
> > +                                     const struct drm_connector_state *conn_state)
> > +{
> > +       struct drm_i915_private *i915 = to_i915(crtc_state->uapi.crtc->dev);
> > +       struct intel_connector *connector;
> > +       struct intel_digital_connector_state *iter_conn_state;
> > +       struct intel_dp *intel_dp;
> > +       int i;
> > +
> > +       if (conn_state) {
> > +               connector = to_intel_connector(conn_state->connector);
> > +               intel_dp_queue_modeset_retry_work(connector);
> > +
> > +               return;
> > +       }
> > +
> > +       if (drm_WARN_ON(&i915->drm,
> > +                       !intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DP_MST)))
> > +               return;
> > +
> > +       intel_dp = enc_to_intel_dp(encoder);
> > +
> > +       for_each_new_intel_connector_in_state(state, connector, iter_conn_state, i) {
> > +               (void)iter_conn_state;
> 
> Checked the iter_conn_state->base.crtc documentation:
> 
> @crtc: CRTC to connect connector to, NULL if disabled.
> 
> Do we need to check if the connector is "disabled", or is that an
> impossible scenario?

Yes, it does show whether the connector is disabled, and it would make
sense not to notify those. However, checking that would be racy, at
least during a non-blocking commit, but I think also in general, since
userspace could be in the middle of enabling this connector.

The point of the notification is that userspace re-checks the mode it
wants on each MST connector to be enabled. To prevent missing that
re-check on connectors with a pending enabling like the above, the
notification is simply sent to all the connectors in the MST topology.

> 
> BR,
> 
> Jouni Högander
> 
> 
> > +
> > +               if (connector->mst_port != intel_dp)
> > +                       continue;
> > +
> > +               intel_dp_queue_modeset_retry_work(connector);
> > +       }
> > +}
> > +
> >  int
> >  intel_dp_compute_config(struct intel_encoder *encoder,
> >                         struct intel_crtc_state *pipe_config,
> > @@ -6436,6 +6480,14 @@ static void
> > intel_dp_modeset_retry_work_fn(struct work_struct *work)
> >         mutex_unlock(&connector->dev->mode_config.mutex);
> >         /* Send Hotplug uevent so userspace can reprobe */
> >         drm_kms_helper_connector_hotplug_event(connector);
> > +
> > +       drm_connector_put(connector);
> > +}
> > +
> > +void intel_dp_init_modeset_retry_work(struct intel_connector
> > *connector)
> > +{
> > +       INIT_WORK(&connector->modeset_retry_work,
> > +                 intel_dp_modeset_retry_work_fn);
> >  }
> >
> >  bool
> > @@ -6452,8 +6504,7 @@ intel_dp_init_connector(struct
> > intel_digital_port *dig_port,
> >         int type;
> >
> >         /* Initialize the work for modeset in case of link train
> > failure */
> > -       INIT_WORK(&intel_connector->modeset_retry_work,
> > -                 intel_dp_modeset_retry_work_fn);
> > +       intel_dp_init_modeset_retry_work(intel_connector);
> >
> >         if (drm_WARN(dev, dig_port->max_lanes < 1,
> >                      "Not enough lanes (%d) for DP on
> > [ENCODER:%d:%s]\n",
> > diff --git a/drivers/gpu/drm/i915/display/intel_dp.h
> > b/drivers/gpu/drm/i915/display/intel_dp.h
> > index 530cc97bc42f4..105c2086310db 100644
> > --- a/drivers/gpu/drm/i915/display/intel_dp.h
> > +++ b/drivers/gpu/drm/i915/display/intel_dp.h
> > @@ -23,6 +23,8 @@ struct intel_digital_port;
> >  struct intel_dp;
> >  struct intel_encoder;
> >
> > +struct work_struct;
> > +
> >  struct link_config_limits {
> >         int min_rate, max_rate;
> >         int min_lane_count, max_lane_count;
> > @@ -43,6 +45,12 @@ void intel_dp_adjust_compliance_config(struct
> > intel_dp *intel_dp,
> >  bool intel_dp_limited_color_range(const struct intel_crtc_state
> > *crtc_state,
> >                                   const struct drm_connector_state
> > *conn_state);
> >  int intel_dp_min_bpp(enum intel_output_format output_format);
> > +void intel_dp_init_modeset_retry_work(struct intel_connector
> > *connector);
> > +void intel_dp_queue_modeset_retry_work(struct intel_connector
> > *connector);
> > +void intel_dp_queue_modeset_retry_for_link(struct intel_atomic_state
> > *state,
> > +                                          struct intel_encoder
> > *encoder,
> > +                                          const struct
> > intel_crtc_state *crtc_state,
> > +                                          const struct
> > drm_connector_state *conn_state);
> >  bool intel_dp_init_connector(struct intel_digital_port *dig_port,
> >                              struct intel_connector
> > *intel_connector);
> >  void intel_dp_set_link_params(struct intel_dp *intel_dp,
> > diff --git a/drivers/gpu/drm/i915/display/intel_dp_link_training.c
> > b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
> > index 1abfafbbfa757..7b140cbf8dd31 100644
> > --- a/drivers/gpu/drm/i915/display/intel_dp_link_training.c
> > +++ b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
> > @@ -1075,7 +1075,6 @@ static void
> > intel_dp_schedule_fallback_link_training(struct intel_dp *intel_dp,
> >                                                      const struct
> > intel_crtc_state *crtc_state)
> >  {
> >         struct intel_connector *intel_connector = intel_dp-
> > >attached_connector;
> > -       struct drm_i915_private *i915 = dp_to_i915(intel_dp);
> >
> >         if (!intel_digital_port_connected(&dp_to_dig_port(intel_dp)-
> > >base)) {
> >                 lt_dbg(intel_dp, DP_PHY_DPRX, "Link Training failed
> > on disconnected sink.\n");
> > @@ -1093,7 +1092,7 @@ static void
> > intel_dp_schedule_fallback_link_training(struct intel_dp *intel_dp,
> >         }
> >
> >         /* Schedule a Hotplug Uevent to userspace to start modeset */
> > -       queue_work(i915->unordered_wq, &intel_connector-
> > >modeset_retry_work);
> > +       intel_dp_queue_modeset_retry_work(intel_connector);
> >  }
> >
> >  /* Perform the link training on all LTTPRs and the DPRX on a link.
> > */
> > diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c
> > b/drivers/gpu/drm/i915/display/intel_dp_mst.c
> > index 5fa25a5a36b55..b15e43ebf138b 100644
> > --- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
> > +++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
> > @@ -1542,6 +1542,8 @@ static struct drm_connector
> > *intel_dp_add_mst_connector(struct drm_dp_mst_topolo
> >         intel_connector->port = port;
> >         drm_dp_mst_get_port_malloc(port);
> >
> > +       intel_dp_init_modeset_retry_work(intel_connector);
> > +
> >         intel_connector->dp.dsc_decompression_aux =
> > drm_dp_mst_dsc_aux_for_port(port);
> >         intel_dp_mst_read_decompression_port_dsc_caps(intel_dp,
> > intel_connector);
> >         intel_connector->dp.dsc_hblank_expansion_quirk =
> 

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 02/19] drm/dp: Add support for DP tunneling
  2024-01-23 10:28 ` [PATCH 02/19] drm/dp: Add support for DP tunneling Imre Deak
@ 2024-01-31 12:50   ` Hogander, Jouni
  2024-01-31 13:58     ` Imre Deak
  2024-01-31 16:09   ` Ville Syrjälä
  2024-02-07 20:02   ` Ville Syrjälä
  2 siblings, 1 reply; 61+ messages in thread
From: Hogander, Jouni @ 2024-01-31 12:50 UTC (permalink / raw)
  To: intel-gfx, Deak, Imre; +Cc: dri-devel

On Tue, 2024-01-23 at 12:28 +0200, Imre Deak wrote:
> Add support for DisplayPort tunneling. For now this includes support
> for the Bandwidth Allocation Mode, leaving Panel Replay support for
> later.
> 
> BWA allows using displays that share the same (Thunderbolt) link with
> their maximum resolution. Atm, this may not be possible due to the
> coarse granularity of partitioning the link BW among the displays on
> the link: the BW allocation policy is in a SW/FW/HW component on the
> link (on Thunderbolt it's the SW or FW Connection Manager), independent
> of the driver. This policy will set the DPRX maximum rate and lane
> count DPCD registers the GFX driver will see (0x00000, 0x00001,
> 0x02200, 0x02201) based on the available link BW.
> 
> The granularity of the current BW allocation policy is coarse, based
> on the required link rate in the 1.62Gbps..8.1Gbps range, and it may
> prevent using higher resolutions altogether: the display connected
> first will get a share of the link BW which corresponds to its full
> DPRX capability (regardless of the actual mode it uses). A subsequent
> display connected will only get the remaining BW, which could be well
> below its full capability.
> 
> BWA solves the above coarse granularity (reducing it to a
> 250Mbps..1Gbps range) and first-come/first-served issues by letting
> the driver request the BW for each display on a link which reflects
> the actual modes the displays use.
> 
> This patch adds the DRM core helper functions, while a follow-up
> change in the patchset takes them into use in the i915 driver.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/display/Kconfig         |   17 +
>  drivers/gpu/drm/display/Makefile        |    2 +
>  drivers/gpu/drm/display/drm_dp_tunnel.c | 1715
> +++++++++++++++++++++++
>  include/drm/display/drm_dp.h            |   60 +
>  include/drm/display/drm_dp_tunnel.h     |  270 ++++
>  5 files changed, 2064 insertions(+)
>  create mode 100644 drivers/gpu/drm/display/drm_dp_tunnel.c
>  create mode 100644 include/drm/display/drm_dp_tunnel.h
> 
> diff --git a/drivers/gpu/drm/display/Kconfig
> b/drivers/gpu/drm/display/Kconfig
> index 09712b88a5b83..b024a84b94c1c 100644
> --- a/drivers/gpu/drm/display/Kconfig
> +++ b/drivers/gpu/drm/display/Kconfig
> @@ -17,6 +17,23 @@ config DRM_DISPLAY_DP_HELPER
>         help
>           DRM display helpers for DisplayPort.
>  
> +config DRM_DISPLAY_DP_TUNNEL
> +       bool
> +       select DRM_DISPLAY_DP_HELPER
> +       help
> +         Enable support for DisplayPort tunnels.
> +
> +config DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> +       bool "Enable debugging the DP tunnel state"
> +       depends on REF_TRACKER
> +       depends on DRM_DISPLAY_DP_TUNNEL
> +       depends on DEBUG_KERNEL
> +       depends on EXPERT
> +       help
> +         Enables debugging the DP tunnel manager's status.
> +
> +         If in doubt, say "N".
> +
>  config DRM_DISPLAY_HDCP_HELPER
>         bool
>         depends on DRM_DISPLAY_HELPER
> diff --git a/drivers/gpu/drm/display/Makefile
> b/drivers/gpu/drm/display/Makefile
> index 17ac4a1006a80..7ca61333c6696 100644
> --- a/drivers/gpu/drm/display/Makefile
> +++ b/drivers/gpu/drm/display/Makefile
> @@ -8,6 +8,8 @@ drm_display_helper-$(CONFIG_DRM_DISPLAY_DP_HELPER) +=
> \
>         drm_dp_helper.o \
>         drm_dp_mst_topology.o \
>         drm_dsc_helper.o
> +drm_display_helper-$(CONFIG_DRM_DISPLAY_DP_TUNNEL) += \
> +       drm_dp_tunnel.o
>  drm_display_helper-$(CONFIG_DRM_DISPLAY_HDCP_HELPER) +=
> drm_hdcp_helper.o
>  drm_display_helper-$(CONFIG_DRM_DISPLAY_HDMI_HELPER) += \
>         drm_hdmi_helper.o \
> diff --git a/drivers/gpu/drm/display/drm_dp_tunnel.c
> b/drivers/gpu/drm/display/drm_dp_tunnel.c
> new file mode 100644
> index 0000000000000..58f6330db7d9d
> --- /dev/null
> +++ b/drivers/gpu/drm/display/drm_dp_tunnel.c
> @@ -0,0 +1,1715 @@
> +// SPDX-License-Identifier: MIT
> +/*
> + * Copyright © 2023 Intel Corporation
> + */
> +
> +#include <linux/ref_tracker.h>
> +#include <linux/types.h>
> +
> +#include <drm/drm_atomic_state_helper.h>
> +
> +#include <drm/drm_atomic.h>
> +#include <drm/drm_print.h>
> +#include <drm/display/drm_dp.h>
> +#include <drm/display/drm_dp_helper.h>
> +#include <drm/display/drm_dp_tunnel.h>
> +
> +#define to_group(__private_obj) \
> +       container_of(__private_obj, struct drm_dp_tunnel_group, base)
> +
> +#define to_group_state(__private_state) \
> +       container_of(__private_state, struct
> drm_dp_tunnel_group_state, base)
> +
> +#define is_dp_tunnel_private_obj(__obj) \
> +       ((__obj)->funcs == &tunnel_group_funcs)
> +
> +#define for_each_new_group_in_state(__state, __new_group_state, __i)
> \
> +       for ((__i) = 0; \
> +            (__i) < (__state)->num_private_objs; \
> +            (__i)++) \
> +               for_each_if ((__state)->private_objs[__i].ptr && \
> +                            is_dp_tunnel_private_obj((__state)-
> >private_objs[__i].ptr) && \
> +                            ((__new_group_state) = \
> +                               to_group_state((__state)-
> >private_objs[__i].new_state), 1))
> +
> +#define for_each_old_group_in_state(__state, __old_group_state, __i)
> \
> +       for ((__i) = 0; \
> +            (__i) < (__state)->num_private_objs; \
> +            (__i)++) \
> +               for_each_if ((__state)->private_objs[__i].ptr && \
> +                            is_dp_tunnel_private_obj((__state)-
> >private_objs[__i].ptr) && \
> +                            ((__old_group_state) = \
> +                               to_group_state((__state)-
> >private_objs[__i].old_state), 1))
> +
> +#define for_each_tunnel_in_group(__group, __tunnel) \
> +       list_for_each_entry(__tunnel, &(__group)->tunnels, node)
> +
> +#define for_each_tunnel_state(__group_state, __tunnel_state) \
> +       list_for_each_entry(__tunnel_state, &(__group_state)-
> >tunnel_states, node)
> +
> +#define for_each_tunnel_state_safe(__group_state, __tunnel_state,
> __tunnel_state_tmp) \
> +       list_for_each_entry_safe(__tunnel_state, __tunnel_state_tmp,
> \
> +                                &(__group_state)->tunnel_states,
> node)
> +
> +#define kbytes_to_mbits(__kbytes) \
> +       DIV_ROUND_UP((__kbytes) * 8, 1000)
> +
> +#define DPTUN_BW_ARG(__bw) ((__bw) < 0 ? (__bw) :
> kbytes_to_mbits(__bw))
> +
> +#define __tun_prn(__tunnel, __level, __type, __fmt, ...) \
> +       drm_##__level##__type((__tunnel)->group->mgr->dev, \
> +                             "[DPTUN %s][%s] " __fmt, \
> +                             drm_dp_tunnel_name(__tunnel), \
> +                             (__tunnel)->aux->name, ## \
> +                             __VA_ARGS__)
> +
> +#define tun_dbg(__tunnel, __fmt, ...) \
> +       __tun_prn(__tunnel, dbg, _kms, __fmt, ## __VA_ARGS__)
> +
> +#define tun_dbg_stat(__tunnel, __err, __fmt, ...) do { \
> +       if (__err) \
> +               __tun_prn(__tunnel, dbg, _kms, __fmt " (Failed, err:
> %pe)\n", \
> +                         ## __VA_ARGS__, ERR_PTR(__err)); \
> +       else \
> +               __tun_prn(__tunnel, dbg, _kms, __fmt " (Ok)\n", \
> +                         ## __VA_ARGS__); \
> +} while (0)
> +
> +#define tun_dbg_atomic(__tunnel, __fmt, ...) \
> +       __tun_prn(__tunnel, dbg, _atomic, __fmt, ## __VA_ARGS__)
> +
> +#define tun_grp_dbg(__group, __fmt, ...) \
> +       drm_dbg_kms((__group)->mgr->dev, \
> +                   "[DPTUN %s] " __fmt, \
> +                   drm_dp_tunnel_group_name(__group), ## \
> +                   __VA_ARGS__)
> +
> +#define DP_TUNNELING_BASE DP_TUNNELING_OUI
> +
> +#define __DPTUN_REG_RANGE(start, size) \
> +       GENMASK_ULL(start + size - 1, start)
> +
> +#define DPTUN_REG_RANGE(addr, size) \
> +       __DPTUN_REG_RANGE((addr) - DP_TUNNELING_BASE, size)
> +
> +#define DPTUN_REG(addr) DPTUN_REG_RANGE(addr, 1)
> +
> +#define DPTUN_INFO_REG_MASK ( \
> +       DPTUN_REG_RANGE(DP_TUNNELING_OUI, DP_TUNNELING_OUI_BYTES) | \
> +       DPTUN_REG_RANGE(DP_TUNNELING_DEV_ID,
> DP_TUNNELING_DEV_ID_BYTES) | \
> +       DPTUN_REG(DP_TUNNELING_HW_REV) | \
> +       DPTUN_REG(DP_TUNNELING_SW_REV_MAJOR) | \
> +       DPTUN_REG(DP_TUNNELING_SW_REV_MINOR) | \
> +       DPTUN_REG(DP_TUNNELING_CAPABILITIES) | \
> +       DPTUN_REG(DP_IN_ADAPTER_INFO) | \
> +       DPTUN_REG(DP_USB4_DRIVER_ID) | \
> +       DPTUN_REG(DP_USB4_DRIVER_BW_CAPABILITY) | \
> +       DPTUN_REG(DP_IN_ADAPTER_TUNNEL_INFORMATION) | \
> +       DPTUN_REG(DP_BW_GRANULARITY) | \
> +       DPTUN_REG(DP_ESTIMATED_BW) | \
> +       DPTUN_REG(DP_ALLOCATED_BW) | \
> +       DPTUN_REG(DP_TUNNELING_MAX_LINK_RATE) | \
> +       DPTUN_REG(DP_TUNNELING_MAX_LANE_COUNT) | \
> +       DPTUN_REG(DP_DPTX_BW_ALLOCATION_MODE_CONTROL))
> +
> +static const DECLARE_BITMAP(dptun_info_regs, 64) = {
> +       DPTUN_INFO_REG_MASK & -1UL,
> +#if BITS_PER_LONG == 32
> +       DPTUN_INFO_REG_MASK >> 32,
> +#endif
> +};
> +
> +struct drm_dp_tunnel_regs {
> +       u8 buf[HWEIGHT64(DPTUN_INFO_REG_MASK)];
> +};
> +
> +struct drm_dp_tunnel_group;
> +
> +struct drm_dp_tunnel {
> +       struct drm_dp_tunnel_group *group;
> +
> +       struct list_head node;
> +
> +       struct kref kref;
> +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> +       struct ref_tracker *tracker;
> +#endif
> +       struct drm_dp_aux *aux;
> +       char name[8];
> +
> +       int bw_granularity;
> +       int estimated_bw;
> +       int allocated_bw;
> +
> +       int max_dprx_rate;
> +       u8 max_dprx_lane_count;
> +
> +       u8 adapter_id;
> +
> +       bool bw_alloc_supported:1;
> +       bool bw_alloc_enabled:1;
> +       bool has_io_error:1;
> +       bool destroyed:1;
> +};
> +
> +struct drm_dp_tunnel_group_state;
> +
> +struct drm_dp_tunnel_state {
> +       struct drm_dp_tunnel_group_state *group_state;
> +
> +       struct drm_dp_tunnel_ref tunnel_ref;
> +
> +       struct list_head node;
> +
> +       u32 stream_mask;

I'm wondering if drm_dp_tunnel_state can really contain several streams,
and what kind of scenario that would be? From the i915 point of view I
would understand it as several pipes being routed to the DP tunnel. Is
it the bigjoiner case?

BR,

Jouni Högander

> +       int *stream_bw;
> +};
> +
> +struct drm_dp_tunnel_group_state {
> +       struct drm_private_state base;
> +
> +       struct list_head tunnel_states;
> +};
> +
> +struct drm_dp_tunnel_group {
> +       struct drm_private_obj base;
> +       struct drm_dp_tunnel_mgr *mgr;
> +
> +       struct list_head tunnels;
> +
> +       int available_bw;       /* available BW including the
> allocated_bw of all tunnels */
> +       int drv_group_id;
> +
> +       char name[8];
> +
> +       bool active:1;
> +};
> +
> +struct drm_dp_tunnel_mgr {
> +       struct drm_device *dev;
> +
> +       int group_count;
> +       struct drm_dp_tunnel_group *groups;
> +       wait_queue_head_t bw_req_queue;
> +
> +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> +       struct ref_tracker_dir ref_tracker;
> +#endif
> +};
> +
> +static int next_reg_area(int *offset)
> +{
> +       *offset = find_next_bit(dptun_info_regs, 64, *offset);
> +
> +       return find_next_zero_bit(dptun_info_regs, 64, *offset + 1) -
> *offset;
> +}
> +
> +#define tunnel_reg_ptr(__regs, __address) ({ \
> +       WARN_ON(!test_bit((__address) - DP_TUNNELING_BASE,
> dptun_info_regs)); \
> +       &(__regs)->buf[bitmap_weight(dptun_info_regs, (__address) -
> DP_TUNNELING_BASE)]; \
> +})
> +
> +static int read_tunnel_regs(struct drm_dp_aux *aux, struct
> drm_dp_tunnel_regs *regs)
> +{
> +       int offset = 0;
> +       int len;
> +
> +       while ((len = next_reg_area(&offset))) {
> +               int address = DP_TUNNELING_BASE + offset;
> +
> +               if (drm_dp_dpcd_read(aux, address,
> tunnel_reg_ptr(regs, address), len) < 0)
> +                       return -EIO;
> +
> +               offset += len;
> +       }
> +
> +       return 0;
> +}
> +
> +static u8 tunnel_reg(const struct drm_dp_tunnel_regs *regs, int
> address)
> +{
> +       return *tunnel_reg_ptr(regs, address);
> +}
> +
> +static int tunnel_reg_drv_group_id(const struct drm_dp_tunnel_regs
> *regs)
> +{
> +       int drv_id = tunnel_reg(regs, DP_USB4_DRIVER_ID) &
> DP_USB4_DRIVER_ID_MASK;
> +       int group_id = tunnel_reg(regs,
> DP_IN_ADAPTER_TUNNEL_INFORMATION) & DP_GROUP_ID_MASK;
> +
> +       if (!group_id)
> +               return 0;
> +
> +       return (drv_id << DP_GROUP_ID_BITS) | group_id;
> +}
> +
> +/* Return granularity in kB/s units */
> +static int tunnel_reg_bw_granularity(const struct drm_dp_tunnel_regs
> *regs)
> +{
> +       int gr = tunnel_reg(regs, DP_BW_GRANULARITY) &
> DP_BW_GRANULARITY_MASK;
> +
> +       WARN_ON(gr > 2);
> +
> +       return (250000 << gr) / 8;
> +}
> +
> +static int tunnel_reg_max_dprx_rate(const struct drm_dp_tunnel_regs
> *regs)
> +{
> +       u8 bw_code = tunnel_reg(regs, DP_TUNNELING_MAX_LINK_RATE);
> +
> +       return drm_dp_bw_code_to_link_rate(bw_code);
> +}
> +
> +static int tunnel_reg_max_dprx_lane_count(const struct
> drm_dp_tunnel_regs *regs)
> +{
> +       u8 lane_count = tunnel_reg(regs, DP_TUNNELING_MAX_LANE_COUNT)
> &
> +                       DP_TUNNELING_MAX_LANE_COUNT_MASK;
> +
> +       return lane_count;
> +}
> +
> +static bool tunnel_reg_bw_alloc_supported(const struct
> drm_dp_tunnel_regs *regs)
> +{
> +       u8 cap_mask = DP_TUNNELING_SUPPORT |
> DP_IN_BW_ALLOCATION_MODE_SUPPORT;
> +
> +       if ((tunnel_reg(regs, DP_TUNNELING_CAPABILITIES) & cap_mask)
> != cap_mask)
> +               return false;
> +
> +       return tunnel_reg(regs, DP_USB4_DRIVER_BW_CAPABILITY) &
> +              DP_USB4_DRIVER_BW_ALLOCATION_MODE_SUPPORT;
> +}
> +
> +static bool tunnel_reg_bw_alloc_enabled(const struct
> drm_dp_tunnel_regs *regs)
> +{
> +       return tunnel_reg(regs, DP_DPTX_BW_ALLOCATION_MODE_CONTROL) &
> +               DP_DISPLAY_DRIVER_BW_ALLOCATION_MODE_ENABLE;
> +}
> +
> +static int tunnel_group_drv_id(int drv_group_id)
> +{
> +       return drv_group_id >> DP_GROUP_ID_BITS;
> +}
> +
> +static int tunnel_group_id(int drv_group_id)
> +{
> +       return drv_group_id & DP_GROUP_ID_MASK;
> +}
> +
> +const char *drm_dp_tunnel_name(const struct drm_dp_tunnel *tunnel)
> +{
> +       return tunnel->name;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_name);
> +
> +static const char *drm_dp_tunnel_group_name(const struct
> drm_dp_tunnel_group *group)
> +{
> +       return group->name;
> +}
> +
> +static struct drm_dp_tunnel_group *
> +lookup_or_alloc_group(struct drm_dp_tunnel_mgr *mgr, int
> drv_group_id)
> +{
> +       struct drm_dp_tunnel_group *group = NULL;
> +       int i;
> +
> +       for (i = 0; i < mgr->group_count; i++) {
> +               /*
> +                * A tunnel group with 0 group ID shouldn't have more
> than one
> +                * tunnels.
> +                */
> +               if (tunnel_group_id(drv_group_id) &&
> +                   mgr->groups[i].drv_group_id == drv_group_id)
> +                       return &mgr->groups[i];
> +
> +               if (!group && !mgr->groups[i].active)
> +                       group = &mgr->groups[i];
> +       }
> +
> +       if (!group) {
> +               drm_dbg_kms(mgr->dev,
> +                           "DPTUN: Can't allocate more tunnel
> groups\n");
> +               return NULL;
> +       }
> +
> +       group->drv_group_id = drv_group_id;
> +       group->active = true;
> +
> +       snprintf(group->name, sizeof(group->name), "%d:%d:*",
> +                tunnel_group_drv_id(drv_group_id) & ((1 <<
> DP_GROUP_ID_BITS) - 1),
> +                tunnel_group_id(drv_group_id) & ((1 <<
> DP_USB4_DRIVER_ID_BITS) - 1));
> +
> +       return group;
> +}
> +
> +static void free_group(struct drm_dp_tunnel_group *group)
> +{
> +       struct drm_dp_tunnel_mgr *mgr = group->mgr;
> +
> +       if (drm_WARN_ON(mgr->dev, !list_empty(&group->tunnels)))
> +               return;
> +
> +       group->drv_group_id = 0;
> +       group->available_bw = -1;
> +       group->active = false;
> +}
> +
> +static struct drm_dp_tunnel *
> +tunnel_get(struct drm_dp_tunnel *tunnel)
> +{
> +       kref_get(&tunnel->kref);
> +
> +       return tunnel;
> +}
> +
> +static void free_tunnel(struct kref *kref)
> +{
> +       struct drm_dp_tunnel *tunnel = container_of(kref,
> typeof(*tunnel), kref);
> +       struct drm_dp_tunnel_group *group = tunnel->group;
> +
> +       list_del(&tunnel->node);
> +       if (list_empty(&group->tunnels))
> +               free_group(group);
> +
> +       kfree(tunnel);
> +}
> +
> +static void tunnel_put(struct drm_dp_tunnel *tunnel)
> +{
> +       kref_put(&tunnel->kref, free_tunnel);
> +}
> +
> +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> +static void track_tunnel_ref(struct drm_dp_tunnel *tunnel,
> +                            struct ref_tracker **tracker)
> +{
> +       ref_tracker_alloc(&tunnel->group->mgr->ref_tracker,
> +                         tracker, GFP_KERNEL);
> +}
> +
> +static void untrack_tunnel_ref(struct drm_dp_tunnel *tunnel,
> +                              struct ref_tracker **tracker)
> +{
> +       ref_tracker_free(&tunnel->group->mgr->ref_tracker,
> +                        tracker);
> +}
> +
> +struct drm_dp_tunnel *
> +drm_dp_tunnel_get_untracked(struct drm_dp_tunnel *tunnel)
> +{
> +       track_tunnel_ref(tunnel, NULL);
> +
> +       return tunnel_get(tunnel);
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_get_untracked);
> +
> +void drm_dp_tunnel_put_untracked(struct drm_dp_tunnel *tunnel)
> +{
> +       tunnel_put(tunnel);
> +       untrack_tunnel_ref(tunnel, NULL);
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_put_untracked);
> +
> +struct drm_dp_tunnel *
> +drm_dp_tunnel_get(struct drm_dp_tunnel *tunnel,
> +                   struct ref_tracker **tracker)
> +{
> +       track_tunnel_ref(tunnel, tracker);
> +
> +       return tunnel_get(tunnel);
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_get);
> +
> +void drm_dp_tunnel_put(struct drm_dp_tunnel *tunnel,
> +                        struct ref_tracker **tracker)
> +{
> +       untrack_tunnel_ref(tunnel, tracker);
> +       tunnel_put(tunnel);
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_put);
> +#else
> +#define track_tunnel_ref(tunnel, tracker) do {} while (0)
> +#define untrack_tunnel_ref(tunnel, tracker) do {} while (0)
> +
> +struct drm_dp_tunnel *
> +drm_dp_tunnel_get_untracked(struct drm_dp_tunnel *tunnel)
> +{
> +       return tunnel_get(tunnel);
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_get_untracked);
> +
> +void drm_dp_tunnel_put_untracked(struct drm_dp_tunnel *tunnel)
> +{
> +       tunnel_put(tunnel);
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_put_untracked);
> +#endif
> +
> +static bool add_tunnel_to_group(struct drm_dp_tunnel_mgr *mgr,
> +                               int drv_group_id,
> +                               struct drm_dp_tunnel *tunnel)
> +{
> +       struct drm_dp_tunnel_group *group =
> +               lookup_or_alloc_group(mgr, drv_group_id);
> +
> +       if (!group)
> +               return false;
> +
> +       tunnel->group = group;
> +       list_add(&tunnel->node, &group->tunnels);
> +
> +       return true;
> +}
> +
> +static struct drm_dp_tunnel *
> +create_tunnel(struct drm_dp_tunnel_mgr *mgr,
> +             struct drm_dp_aux *aux,
> +             const struct drm_dp_tunnel_regs *regs)
> +{
> +       int drv_group_id = tunnel_reg_drv_group_id(regs);
> +       struct drm_dp_tunnel *tunnel;
> +
> +       tunnel = kzalloc(sizeof(*tunnel), GFP_KERNEL);
> +       if (!tunnel)
> +               return NULL;
> +
> +       INIT_LIST_HEAD(&tunnel->node);
> +
> +       kref_init(&tunnel->kref);
> +
> +       tunnel->aux = aux;
> +
> +       tunnel->adapter_id = tunnel_reg(regs, DP_IN_ADAPTER_INFO) &
> DP_IN_ADAPTER_NUMBER_MASK;
> +
> +       snprintf(tunnel->name, sizeof(tunnel->name), "%d:%d:%d",
> +                tunnel_group_drv_id(drv_group_id) & ((1 <<
> DP_GROUP_ID_BITS) - 1),
> +                tunnel_group_id(drv_group_id) & ((1 <<
> DP_USB4_DRIVER_ID_BITS) - 1),
> +                tunnel->adapter_id & ((1 <<
> DP_IN_ADAPTER_NUMBER_BITS) - 1));
> +
> +       tunnel->bw_granularity = tunnel_reg_bw_granularity(regs);
> +       tunnel->allocated_bw = tunnel_reg(regs, DP_ALLOCATED_BW) *
> +                              tunnel->bw_granularity;
> +
> +       tunnel->bw_alloc_supported =
> tunnel_reg_bw_alloc_supported(regs);
> +       tunnel->bw_alloc_enabled = tunnel_reg_bw_alloc_enabled(regs);
> +
> +       if (!add_tunnel_to_group(mgr, drv_group_id, tunnel)) {
> +               kfree(tunnel);
> +
> +               return NULL;
> +       }
> +
> +       track_tunnel_ref(tunnel, &tunnel->tracker);
> +
> +       return tunnel;
> +}
> +
> +static void destroy_tunnel(struct drm_dp_tunnel *tunnel)
> +{
> +       untrack_tunnel_ref(tunnel, &tunnel->tracker);
> +       tunnel_put(tunnel);
> +}
> +
> +void drm_dp_tunnel_set_io_error(struct drm_dp_tunnel *tunnel)
> +{
> +       tunnel->has_io_error = true;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_set_io_error);
> +
> +static char yes_no_chr(int val)
> +{
> +       return val ? 'Y' : 'N';
> +}
> +
> +#define SKIP_DPRX_CAPS_CHECK           BIT(0)
> +#define ALLOW_ALLOCATED_BW_CHANGE      BIT(1)
> +
> +static bool tunnel_regs_are_valid(struct drm_dp_tunnel_mgr *mgr,
> +                                 const struct drm_dp_tunnel_regs
> *regs,
> +                                 unsigned int flags)
> +{
> +       int drv_group_id = tunnel_reg_drv_group_id(regs);
> +       bool check_dprx = !(flags & SKIP_DPRX_CAPS_CHECK);
> +       bool ret = true;
> +
> +       if (!tunnel_reg_bw_alloc_supported(regs)) {
> +               if (tunnel_group_id(drv_group_id)) {
> +                       drm_dbg_kms(mgr->dev,
> +                                   "DPTUN: A non-zero group ID is
> only allowed with BWA support\n");
> +                       ret = false;
> +               }
> +
> +               if (tunnel_reg(regs, DP_ALLOCATED_BW)) {
> +                       drm_dbg_kms(mgr->dev,
> +                                   "DPTUN: BW is allocated without
> BWA support\n");
> +                       ret = false;
> +               }
> +
> +               return ret;
> +       }
> +
> +       if (!tunnel_group_id(drv_group_id)) {
> +               drm_dbg_kms(mgr->dev,
> +                           "DPTUN: BWA support requires a non-zero
> group ID\n");
> +               ret = false;
> +       }
> +
> +       if (check_dprx &&
> hweight8(tunnel_reg_max_dprx_lane_count(regs)) != 1) {
> +               drm_dbg_kms(mgr->dev,
> +                           "DPTUN: Invalid DPRX lane count: %d\n",
> +                           tunnel_reg_max_dprx_lane_count(regs));
> +
> +               ret = false;
> +       }
> +
> +       if (check_dprx && !tunnel_reg_max_dprx_rate(regs)) {
> +               drm_dbg_kms(mgr->dev,
> +                           "DPTUN: DPRX rate is 0\n");
> +
> +               ret = false;
> +       }
> +
> +       if (tunnel_reg(regs, DP_ALLOCATED_BW) > tunnel_reg(regs,
> DP_ESTIMATED_BW)) {
> +               drm_dbg_kms(mgr->dev,
> +                           "DPTUN: Allocated BW %d > estimated BW %d
> Mb/s\n",
> +                           DPTUN_BW_ARG(tunnel_reg(regs,
> DP_ALLOCATED_BW) *
> +                                       
> tunnel_reg_bw_granularity(regs)),
> +                           DPTUN_BW_ARG(tunnel_reg(regs,
> DP_ESTIMATED_BW) *
> +                                       
> tunnel_reg_bw_granularity(regs)));
> +
> +               ret = false;
> +       }
> +
> +       return ret;
> +}
> +
> +static bool tunnel_info_changes_are_valid(struct drm_dp_tunnel
> *tunnel,
> +                                         const struct
> drm_dp_tunnel_regs *regs,
> +                                         unsigned int flags)
> +{
> +       int new_drv_group_id = tunnel_reg_drv_group_id(regs);
> +       bool ret = true;
> +
> +       if (tunnel->bw_alloc_supported !=
> tunnel_reg_bw_alloc_supported(regs)) {
> +               tun_dbg(tunnel,
> +                       "BW alloc support has changed %c -> %c\n",
> +                       yes_no_chr(tunnel->bw_alloc_supported),
> +                       yes_no_chr(tunnel_reg_bw_alloc_supported(regs
> )));
> +
> +               ret = false;
> +       }
> +
> +       if (tunnel->group->drv_group_id != new_drv_group_id) {
> +               tun_dbg(tunnel,
> +                       "Driver/group ID has changed %d:%d:* ->
> %d:%d:*\n",
> +                       tunnel_group_drv_id(tunnel->group-
> >drv_group_id),
> +                       tunnel_group_id(tunnel->group->drv_group_id),
> +                       tunnel_group_drv_id(new_drv_group_id),
> +                       tunnel_group_id(new_drv_group_id));
> +
> +               ret = false;
> +       }
> +
> +       if (!tunnel->bw_alloc_supported)
> +               return ret;
> +
> +       if (tunnel->bw_granularity !=
> tunnel_reg_bw_granularity(regs)) {
> +               tun_dbg(tunnel,
> +                       "BW granularity has changed: %d -> %d
> Mb/s\n",
> +                       DPTUN_BW_ARG(tunnel->bw_granularity),
> +                       DPTUN_BW_ARG(tunnel_reg_bw_granularity(regs))
> );
> +
> +               ret = false;
> +       }
> +
> +       /*
> +        * On some devices at least the BW alloc mode enabled status
> is always
> +        * reported as 0, so skip checking that here.
> +        */
> +
> +       if (!(flags & ALLOW_ALLOCATED_BW_CHANGE) &&
> +           tunnel->allocated_bw !=
> +           tunnel_reg(regs, DP_ALLOCATED_BW) * tunnel-
> >bw_granularity) {
> +               tun_dbg(tunnel,
> +                       "Allocated BW has changed: %d -> %d Mb/s\n",
> +                       DPTUN_BW_ARG(tunnel->allocated_bw),
> +                       DPTUN_BW_ARG(tunnel_reg(regs,
> DP_ALLOCATED_BW) * tunnel->bw_granularity));
> +
> +               ret = false;
> +       }
> +
> +       return ret;
> +}
> +
> +static int
> +read_and_verify_tunnel_regs(struct drm_dp_tunnel *tunnel,
> +                           struct drm_dp_tunnel_regs *regs,
> +                           unsigned int flags)
> +{
> +       int err;
> +
> +       err = read_tunnel_regs(tunnel->aux, regs);
> +       if (err < 0) {
> +               drm_dp_tunnel_set_io_error(tunnel);
> +
> +               return err;
> +       }
> +
> +       if (!tunnel_regs_are_valid(tunnel->group->mgr, regs, flags))
> +               return -EINVAL;
> +
> +       if (!tunnel_info_changes_are_valid(tunnel, regs, flags))
> +               return -EINVAL;
> +
> +       return 0;
> +}
> +
> +static bool update_dprx_caps(struct drm_dp_tunnel *tunnel, const
> struct drm_dp_tunnel_regs *regs)
> +{
> +       bool changed = false;
> +
> +       if (tunnel_reg_max_dprx_rate(regs) != tunnel->max_dprx_rate)
> {
> +               tunnel->max_dprx_rate =
> tunnel_reg_max_dprx_rate(regs);
> +               changed = true;
> +       }
> +
> +       if (tunnel_reg_max_dprx_lane_count(regs) != tunnel-
> >max_dprx_lane_count) {
> +               tunnel->max_dprx_lane_count =
> tunnel_reg_max_dprx_lane_count(regs);
> +               changed = true;
> +       }
> +
> +       return changed;
> +}
> +
> +static int dev_id_len(const u8 *dev_id, int max_len)
> +{
> +       while (max_len && dev_id[max_len - 1] == '\0')
> +               max_len--;
> +
> +       return max_len;
> +}
> +
> +static int get_max_dprx_bw(const struct drm_dp_tunnel *tunnel)
> +{
> +       int bw = drm_dp_max_dprx_data_rate(tunnel->max_dprx_rate,
> +                                          tunnel-
> >max_dprx_lane_count);
> +
> +       return min(roundup(bw, tunnel->bw_granularity),
> +                  MAX_DP_REQUEST_BW * tunnel->bw_granularity);
> +}
> +
> +static int get_max_tunnel_bw(const struct drm_dp_tunnel *tunnel)
> +{
> +       return min(get_max_dprx_bw(tunnel), tunnel->group-
> >available_bw);
> +}
> +
> +/**
> + * drm_dp_tunnel_detect - Detect DP tunnel on the link
> + * @mgr: Tunnel manager
> + * @aux: DP AUX on which the tunnel will be detected
> + *
> + * Detect if there is any DP tunnel on the link and add it to the
> tunnel
> + * group's tunnel list.
> + *
> + * Returns 0 on success, negative error code on failure.
> + */
> +struct drm_dp_tunnel *
> +drm_dp_tunnel_detect(struct drm_dp_tunnel_mgr *mgr,
> +                      struct drm_dp_aux *aux)
> +{
> +       struct drm_dp_tunnel_regs regs;
> +       struct drm_dp_tunnel *tunnel;
> +       int err;
> +
> +       err = read_tunnel_regs(aux, &regs);
> +       if (err)
> +               return ERR_PTR(err);
> +
> +       if (!(tunnel_reg(&regs, DP_TUNNELING_CAPABILITIES) &
> +             DP_TUNNELING_SUPPORT))
> +               return ERR_PTR(-ENODEV);
> +
> +       /* The DPRX caps are valid only after enabling BW alloc mode.
> */
> +       if (!tunnel_regs_are_valid(mgr, &regs, SKIP_DPRX_CAPS_CHECK))
> +               return ERR_PTR(-EINVAL);
> +
> +       tunnel = create_tunnel(mgr, aux, &regs);
> +       if (!tunnel)
> +               return ERR_PTR(-ENOMEM);
> +
> +       tun_dbg(tunnel,
> +               "OUI:%*phD DevID:%*pE Rev-HW:%d.%d SW:%d.%d PR-Sup:%c
> BWA-Sup:%c BWA-En:%c\n",
> +               DP_TUNNELING_OUI_BYTES,
> +                       tunnel_reg_ptr(&regs, DP_TUNNELING_OUI),
> +               dev_id_len(tunnel_reg_ptr(&regs,
> DP_TUNNELING_DEV_ID), DP_TUNNELING_DEV_ID_BYTES),
> +                       tunnel_reg_ptr(&regs, DP_TUNNELING_DEV_ID),
> +               (tunnel_reg(&regs, DP_TUNNELING_HW_REV) &
> DP_TUNNELING_HW_REV_MAJOR_MASK) >>
> +                       DP_TUNNELING_HW_REV_MAJOR_SHIFT,
> +               (tunnel_reg(&regs, DP_TUNNELING_HW_REV) &
> DP_TUNNELING_HW_REV_MINOR_MASK) >>
> +                       DP_TUNNELING_HW_REV_MINOR_SHIFT,
> +               tunnel_reg(&regs, DP_TUNNELING_SW_REV_MAJOR),
> +               tunnel_reg(&regs, DP_TUNNELING_SW_REV_MINOR),
> +               yes_no_chr(tunnel_reg(&regs,
> DP_TUNNELING_CAPABILITIES) &
> +                          DP_PANEL_REPLAY_OPTIMIZATION_SUPPORT),
> +               yes_no_chr(tunnel->bw_alloc_supported),
> +               yes_no_chr(tunnel->bw_alloc_enabled));
> +
> +       return tunnel;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_detect);
> +
> +/**
> + * drm_dp_tunnel_destroy - Destroy tunnel object
> + * @tunnel: Tunnel object
> + *
> + * Remove the tunnel from the tunnel topology and destroy it.
> + */
> +int drm_dp_tunnel_destroy(struct drm_dp_tunnel *tunnel)
> +{
> +       if (drm_WARN_ON(tunnel->group->mgr->dev, tunnel->destroyed))
> +               return -ENODEV;
> +
> +       tun_dbg(tunnel, "destroying\n");
> +
> +       tunnel->destroyed = true;
> +       destroy_tunnel(tunnel);
> +
> +       return 0;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_destroy);
> +
> +static int check_tunnel(const struct drm_dp_tunnel *tunnel)
> +{
> +       if (tunnel->destroyed)
> +               return -ENODEV;
> +
> +       if (tunnel->has_io_error)
> +               return -EIO;
> +
> +       return 0;
> +}
> +
> +static int group_allocated_bw(struct drm_dp_tunnel_group *group)
> +{
> +       struct drm_dp_tunnel *tunnel;
> +       int group_allocated_bw = 0;
> +
> +       for_each_tunnel_in_group(group, tunnel) {
> +               if (check_tunnel(tunnel) == 0 &&
> +                   tunnel->bw_alloc_enabled)
> +                       group_allocated_bw += tunnel->allocated_bw;
> +       }
> +
> +       return group_allocated_bw;
> +}
> +
> +static int calc_group_available_bw(const struct drm_dp_tunnel
> *tunnel)
> +{
> +       return group_allocated_bw(tunnel->group) -
> +              tunnel->allocated_bw +
> +              tunnel->estimated_bw;
> +}
> +
> +static int update_group_available_bw(struct drm_dp_tunnel *tunnel,
> +                                    const struct drm_dp_tunnel_regs
> *regs)
> +{
> +       struct drm_dp_tunnel *tunnel_iter;
> +       int group_available_bw;
> +       bool changed;
> +
> +       tunnel->estimated_bw = tunnel_reg(regs, DP_ESTIMATED_BW) *
> tunnel->bw_granularity;
> +
> +       if (calc_group_available_bw(tunnel) == tunnel->group-
> >available_bw)
> +               return 0;
> +
> +       for_each_tunnel_in_group(tunnel->group, tunnel_iter) {
> +               int err;
> +
> +               if (tunnel_iter == tunnel)
> +                       continue;
> +
> +               if (check_tunnel(tunnel_iter) != 0 ||
> +                   !tunnel_iter->bw_alloc_enabled)
> +                       continue;
> +
> +               err = drm_dp_dpcd_probe(tunnel_iter->aux,
> DP_DPCD_REV);
> +               if (err) {
> +                       tun_dbg(tunnel_iter,
> +                               "Probe failed, assume disconnected
> (err %pe)\n",
> +                               ERR_PTR(err));
> +                       drm_dp_tunnel_set_io_error(tunnel_iter);
> +               }
> +       }
> +
> +       group_available_bw = calc_group_available_bw(tunnel);
> +
> +       tun_dbg(tunnel, "Updated group available BW: %d->%d\n",
> +               DPTUN_BW_ARG(tunnel->group->available_bw),
> +               DPTUN_BW_ARG(group_available_bw));
> +
> +       changed = tunnel->group->available_bw != group_available_bw;
> +
> +       tunnel->group->available_bw = group_available_bw;
> +
> +       return changed ? 1 : 0;
> +}
> +
> +static int set_bw_alloc_mode(struct drm_dp_tunnel *tunnel, bool
> enable)
> +{
> +       u8 mask = DP_DISPLAY_DRIVER_BW_ALLOCATION_MODE_ENABLE |
> DP_UNMASK_BW_ALLOCATION_IRQ;
> +       u8 val;
> +
> +       if (drm_dp_dpcd_readb(tunnel->aux,
> DP_DPTX_BW_ALLOCATION_MODE_CONTROL, &val) < 0)
> +               goto out_err;
> +
> +       if (enable)
> +               val |= mask;
> +       else
> +               val &= ~mask;
> +
> +       if (drm_dp_dpcd_writeb(tunnel->aux,
> DP_DPTX_BW_ALLOCATION_MODE_CONTROL, val) < 0)
> +               goto out_err;
> +
> +       tunnel->bw_alloc_enabled = enable;
> +
> +       return 0;
> +
> +out_err:
> +       drm_dp_tunnel_set_io_error(tunnel);
> +
> +       return -EIO;
> +}
> +
> +/**
> + * drm_dp_tunnel_enable_bw_alloc: Enable DP tunnel BW allocation
> mode
> + * @tunnel: Tunnel object
> + *
> + * Enable the DP tunnel BW allocation mode on @tunnel if it supports
> it.
> + *
> + * Returns 0 in case of success, negative error code otherwise.
> + */
> +int drm_dp_tunnel_enable_bw_alloc(struct drm_dp_tunnel *tunnel)
> +{
> +       struct drm_dp_tunnel_regs regs;
> +       int err = check_tunnel(tunnel);
> +
> +       if (err)
> +               return err;
> +
> +       if (!tunnel->bw_alloc_supported)
> +               return -EOPNOTSUPP;
> +
> +       if (!tunnel_group_id(tunnel->group->drv_group_id))
> +               return -EINVAL;
> +
> +       err = set_bw_alloc_mode(tunnel, true);
> +       if (err)
> +               goto out;
> +
> +       err = read_and_verify_tunnel_regs(tunnel, &regs, 0);
> +       if (err) {
> +               set_bw_alloc_mode(tunnel, false);
> +
> +               goto out;
> +       }
> +
> +       if (!tunnel->max_dprx_rate)
> +               update_dprx_caps(tunnel, &regs);
> +
> +       if (tunnel->group->available_bw == -1) {
> +               err = update_group_available_bw(tunnel, &regs);
> +               if (err > 0)
> +                       err = 0;
> +       }
> +out:
> +       tun_dbg_stat(tunnel, err,
> +                    "Enabling BW alloc mode: DPRX:%dx%d Group
> alloc:%d/%d Mb/s",
> +                    tunnel->max_dprx_rate / 100, tunnel-
> >max_dprx_lane_count,
> +                    DPTUN_BW_ARG(group_allocated_bw(tunnel->group)),
> +                    DPTUN_BW_ARG(tunnel->group->available_bw));
> +
> +       return err;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_enable_bw_alloc);
> +
> +/**
> + * drm_dp_tunnel_disable_bw_alloc: Disable DP tunnel BW allocation
> mode
> + * @tunnel: Tunnel object
> + *
> + * Disable the DP tunnel BW allocation mode on @tunnel.
> + *
> + * Returns 0 in case of success, negative error code otherwise.
> + */
> +int drm_dp_tunnel_disable_bw_alloc(struct drm_dp_tunnel *tunnel)
> +{
> +       int err = check_tunnel(tunnel);
> +
> +       if (err)
> +               return err;
> +
> +       err = set_bw_alloc_mode(tunnel, false);
> +
> +       tun_dbg_stat(tunnel, err, "Disabling BW alloc mode");
> +
> +       return err;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_disable_bw_alloc);
> +
> +bool drm_dp_tunnel_bw_alloc_is_enabled(const struct drm_dp_tunnel
> *tunnel)
> +{
> +       return tunnel->bw_alloc_enabled;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_bw_alloc_is_enabled);
> +
> +static int bw_req_complete(struct drm_dp_aux *aux, bool
> *status_changed)
> +{
> +       u8 bw_req_mask = DP_BW_REQUEST_SUCCEEDED |
> DP_BW_REQUEST_FAILED;
> +       u8 status_change_mask = DP_BW_ALLOCATION_CAPABILITY_CHANGED |
> DP_ESTIMATED_BW_CHANGED;
> +       u8 val;
> +
> +       if (drm_dp_dpcd_readb(aux, DP_TUNNELING_STATUS, &val) < 0)
> +               return -EIO;
> +
> +       *status_changed = val & status_change_mask;
> +
> +       val &= bw_req_mask;
> +
> +       if (!val)
> +               return -EAGAIN;
> +
> +       if (drm_dp_dpcd_writeb(aux, DP_TUNNELING_STATUS, val) < 0)
> +               return -EIO;
> +
> +       return val == DP_BW_REQUEST_SUCCEEDED ? 0 : -ENOSPC;
> +}
> +
> +static int allocate_tunnel_bw(struct drm_dp_tunnel *tunnel, int bw)
> +{
> +       struct drm_dp_tunnel_mgr *mgr = tunnel->group->mgr;
> +       int request_bw = DIV_ROUND_UP(bw, tunnel->bw_granularity);
> +       unsigned long wait_expires;
> +       DEFINE_WAIT(wait);
> +       int err;
> +
> +       /* Atomic check should prevent the following. */
> +       if (drm_WARN_ON(mgr->dev, request_bw > MAX_DP_REQUEST_BW)) {
> +               err = -EINVAL;
> +               goto out;
> +       }
> +
> +       if (drm_dp_dpcd_writeb(tunnel->aux, DP_REQUEST_BW,
> request_bw) < 0) {
> +               err = -EIO;
> +               goto out;
> +       }
> +
> +       wait_expires = jiffies + msecs_to_jiffies(3000);
> +
> +       for (;;) {
> +               bool status_changed;
> +
> +               err = bw_req_complete(tunnel->aux, &status_changed);
> +               if (err != -EAGAIN)
> +                       break;
> +
> +               if (status_changed) {
> +                       struct drm_dp_tunnel_regs regs;
> +
> +                       err = read_and_verify_tunnel_regs(tunnel,
> &regs,
> +                                                        
> ALLOW_ALLOCATED_BW_CHANGE);
> +                       if (err)
> +                               break;
> +               }
> +
> +               if (time_after(jiffies, wait_expires)) {
> +                       err = -ETIMEDOUT;
> +                       break;
> +               }
> +
> +               prepare_to_wait(&mgr->bw_req_queue, &wait,
> TASK_UNINTERRUPTIBLE);
> +               schedule_timeout(msecs_to_jiffies(200));
> +       }
> +
> +       finish_wait(&mgr->bw_req_queue, &wait);
> +
> +       if (err)
> +               goto out;
> +
> +       tunnel->allocated_bw = request_bw * tunnel->bw_granularity;
> +
> +out:
> +       tun_dbg_stat(tunnel, err, "Allocating %d/%d Mb/s for tunnel:
> Group alloc:%d/%d Mb/s",
> +                    DPTUN_BW_ARG(request_bw * tunnel-
> >bw_granularity),
> +                    DPTUN_BW_ARG(get_max_tunnel_bw(tunnel)),
> +                    DPTUN_BW_ARG(group_allocated_bw(tunnel->group)),
> +                    DPTUN_BW_ARG(tunnel->group->available_bw));
> +
> +       if (err == -EIO)
> +               drm_dp_tunnel_set_io_error(tunnel);
> +
> +       return err;
> +}
> +
> +int drm_dp_tunnel_alloc_bw(struct drm_dp_tunnel *tunnel, int bw)
> +{
> +       int err = check_tunnel(tunnel);
> +
> +       if (err)
> +               return err;
> +
> +       return allocate_tunnel_bw(tunnel, bw);
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_alloc_bw);
> +
> +static int check_and_clear_status_change(struct drm_dp_tunnel
> *tunnel)
> +{
> +       u8 mask = DP_BW_ALLOCATION_CAPABILITY_CHANGED |
> DP_ESTIMATED_BW_CHANGED;
> +       u8 val;
> +
> +       if (drm_dp_dpcd_readb(tunnel->aux, DP_TUNNELING_STATUS, &val)
> < 0)
> +               goto out_err;
> +
> +       val &= mask;
> +
> +       if (val) {
> +               if (drm_dp_dpcd_writeb(tunnel->aux,
> DP_TUNNELING_STATUS, val) < 0)
> +                       goto out_err;
> +
> +               return 1;
> +       }
> +
> +       if (!drm_dp_tunnel_bw_alloc_is_enabled(tunnel))
> +               return 0;
> +
> +       /*
> +        * Check for estimated BW changes explicitly to account for
> lost
> +        * BW change notifications.
> +        */
> +       if (drm_dp_dpcd_readb(tunnel->aux, DP_ESTIMATED_BW, &val) <
> 0)
> +               goto out_err;
> +
> +       if (val * tunnel->bw_granularity != tunnel->estimated_bw)
> +               return 1;
> +
> +       return 0;
> +
> +out_err:
> +       drm_dp_tunnel_set_io_error(tunnel);
> +
> +       return -EIO;
> +}
> +
> +/**
> + * drm_dp_tunnel_update_state - Update DP tunnel SW state with the HW
> state
> + * @tunnel: Tunnel object
> + *
> + * Update the SW state of @tunnel with the HW state.
> + *
> + * Returns 0 if the state has not changed, 1 if it has changed and
> got updated
> + * successfully and a negative error code otherwise.
> + */
> +int drm_dp_tunnel_update_state(struct drm_dp_tunnel *tunnel)
> +{
> +       struct drm_dp_tunnel_regs regs;
> +       bool changed = false;
> +       int ret = check_tunnel(tunnel);
> +
> +       if (ret < 0)
> +               return ret;
> +
> +       ret = check_and_clear_status_change(tunnel);
> +       if (ret < 0)
> +               goto out;
> +
> +       if (!ret)
> +               return 0;
> +
> +       ret = read_and_verify_tunnel_regs(tunnel, &regs, 0);
> +       if (ret)
> +               goto out;
> +
> +       if (update_dprx_caps(tunnel, &regs))
> +               changed = true;
> +
> +       ret = update_group_available_bw(tunnel, &regs);
> +       if (ret == 1)
> +               changed = true;
> +
> +out:
> +       tun_dbg_stat(tunnel, ret < 0 ? ret : 0,
> +                    "State update: Changed:%c DPRX:%dx%d Tunnel
> alloc:%d/%d Group alloc:%d/%d Mb/s",
> +                    yes_no_chr(changed),
> +                    tunnel->max_dprx_rate / 100, tunnel-
> >max_dprx_lane_count,
> +                    DPTUN_BW_ARG(tunnel->allocated_bw),
> +                    DPTUN_BW_ARG(get_max_tunnel_bw(tunnel)),
> +                    DPTUN_BW_ARG(group_allocated_bw(tunnel->group)),
> +                    DPTUN_BW_ARG(tunnel->group->available_bw));
> +
> +       if (ret < 0)
> +               return ret;
> +
> +       if (changed)
> +               return 1;
> +
> +       return 0;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_update_state);
> +
> +/*
> + * Returns 0 if no re-probe is needed, 1 if a re-probe is needed,
> + * a negative error code otherwise.
> + */
> +int drm_dp_tunnel_handle_irq(struct drm_dp_tunnel_mgr *mgr, struct
> drm_dp_aux *aux)
> +{
> +       u8 val;
> +
> +       if (drm_dp_dpcd_readb(aux, DP_TUNNELING_STATUS, &val) < 0)
> +               return -EIO;
> +
> +       if (val & (DP_BW_REQUEST_SUCCEEDED | DP_BW_REQUEST_FAILED))
> +               wake_up_all(&mgr->bw_req_queue);
> +
> +       if (val & (DP_BW_ALLOCATION_CAPABILITY_CHANGED |
> DP_ESTIMATED_BW_CHANGED))
> +               return 1;
> +
> +       return 0;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_handle_irq);
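For reference, a sketch of how a driver's HPD/IRQ path could chain the two
calls above (illustrative only, not part of this patch; what the driver does
on a state change is driver specific):

    /* Illustrative sketch: service a tunnel IRQ and refresh the SW state. */
    static void example_handle_tunnel_irq(struct drm_dp_tunnel_mgr *mgr,
                                          struct drm_dp_tunnel *tunnel,
                                          struct drm_dp_aux *aux)
    {
            if (drm_dp_tunnel_handle_irq(mgr, aux) != 1)
                    return; /* nothing changed, or an I/O error occurred */

            if (drm_dp_tunnel_update_state(tunnel) == 1) {
                    /* DPRX caps or group BW changed: schedule a modeset/retrain here. */
            }
    }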
> +
> +/**
> + * drm_dp_tunnel_max_dprx_rate - Query the maximum rate of the
> tunnel's DPRX
> + * @tunnel: Tunnel object
> + *
> + * The function is used to query the maximum link rate of the DPRX
> connected
> + * to @tunnel. Note that this rate will not be limited by the BW
> limit of the
> + * tunnel, as opposed to the standard and extended DP_MAX_LINK_RATE
> DPCD
> + * registers.
> + *
> + * Returns the maximum link rate in 10 kbit/s units.
> + */
> +int drm_dp_tunnel_max_dprx_rate(const struct drm_dp_tunnel *tunnel)
> +{
> +       return tunnel->max_dprx_rate;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_max_dprx_rate);
> +
> +/**
> + * drm_dp_tunnel_max_dprx_lane_count - Query the maximum lane count
> of the tunnel's DPRX
> + * @tunnel: Tunnel object
> + *
> + * The function is used to query the maximum lane count of the DPRX
> connected
> + * to @tunnel. Note that this lane count will not be limited by the
> BW limit of
> + * the tunnel, as opposed to the standard and extended
> DP_MAX_LANE_COUNT DPCD
> + * registers.
> + *
> + * Returns the maximum lane count.
> + */
> +int drm_dp_tunnel_max_dprx_lane_count(const struct drm_dp_tunnel
> *tunnel)
> +{
> +       return tunnel->max_dprx_lane_count;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_max_dprx_lane_count);
> +
> +/**
> + * drm_dp_tunnel_available_bw - Query the estimated total available
> BW of the tunnel
> + * @tunnel: Tunnel object
> + *
> + * This function is used to query the estimated total available BW
> of the
> + * tunnel. This includes the currently allocated and free BW for all
> the
> + * tunnels in @tunnel's group. The available BW is valid only after
> the BW
> + * allocation mode has been enabled for the tunnel and its state got
> updated
> + * calling drm_dp_tunnel_update_state().
> + *
> + * Returns the @tunnel group's estimated total available bandwidth
> in kB/s
> + * units, or -1 if the available BW isn't valid (the BW allocation
> mode is
> + * not enabled or the tunnel's state hasn't been updated).
> + */
> +int drm_dp_tunnel_available_bw(const struct drm_dp_tunnel *tunnel)
> +{
> +       return tunnel->group->available_bw;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_available_bw);
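Putting the three queries above together, a driver can estimate the usable
tunnel BW as the minimum of the DPRX-limited data rate and the group's
available BW, mirroring get_max_dprx_bw()/get_max_tunnel_bw() in this file.
An illustrative sketch (not part of the patch; drm_dp_max_dprx_data_rate() is
the helper added in patch 01/19 and is assumed to return the same kB/s units
as the group BW):

    /* Illustrative sketch: usable tunnel BW estimate in kB/s. */
    static int example_max_tunnel_bw(const struct drm_dp_tunnel *tunnel)
    {
            int dprx_bw = drm_dp_max_dprx_data_rate(drm_dp_tunnel_max_dprx_rate(tunnel),
                                                    drm_dp_tunnel_max_dprx_lane_count(tunnel));
            int avail_bw = drm_dp_tunnel_available_bw(tunnel);

            /* available_bw is -1 until BW alloc mode is enabled and the state updated */
            return avail_bw < 0 ? dprx_bw : min(dprx_bw, avail_bw);
    }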
> +
> +static struct drm_dp_tunnel_group_state *
> +drm_dp_tunnel_atomic_get_group_state(struct drm_atomic_state *state,
> +                                    const struct drm_dp_tunnel
> *tunnel)
> +{
> +       return (struct drm_dp_tunnel_group_state *)
> +               drm_atomic_get_private_obj_state(state,
> +                                                &tunnel->group-
> >base);
> +}
> +
> +static struct drm_dp_tunnel_state *
> +add_tunnel_state(struct drm_dp_tunnel_group_state *group_state,
> +                struct drm_dp_tunnel *tunnel)
> +{
> +       struct drm_dp_tunnel_state *tunnel_state;
> +
> +       tun_dbg_atomic(tunnel,
> +                      "Adding state for tunnel %p to group state
> %p\n",
> +                      tunnel, group_state);
> +
> +       tunnel_state = kzalloc(sizeof(*tunnel_state), GFP_KERNEL);
> +       if (!tunnel_state)
> +               return NULL;
> +
> +       tunnel_state->group_state = group_state;
> +
> +       drm_dp_tunnel_ref_get(tunnel, &tunnel_state->tunnel_ref);
> +
> +       INIT_LIST_HEAD(&tunnel_state->node);
> +       list_add(&tunnel_state->node, &group_state->tunnel_states);
> +
> +       return tunnel_state;
> +}
> +
> +void drm_dp_tunnel_atomic_clear_state(struct drm_dp_tunnel_state
> *tunnel_state)
> +{
> +       tun_dbg_atomic(tunnel_state->tunnel_ref.tunnel,
> +                      "Clearing state for tunnel %p\n",
> +                      tunnel_state->tunnel_ref.tunnel);
> +
> +       list_del(&tunnel_state->node);
> +
> +       kfree(tunnel_state->stream_bw);
> +       drm_dp_tunnel_ref_put(&tunnel_state->tunnel_ref);
> +
> +       kfree(tunnel_state);
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_atomic_clear_state);
> +
> +static void clear_tunnel_group_state(struct
> drm_dp_tunnel_group_state *group_state)
> +{
> +       struct drm_dp_tunnel_state *tunnel_state;
> +       struct drm_dp_tunnel_state *tunnel_state_tmp;
> +
> +       for_each_tunnel_state_safe(group_state, tunnel_state,
> tunnel_state_tmp)
> +               drm_dp_tunnel_atomic_clear_state(tunnel_state);
> +}
> +
> +static struct drm_dp_tunnel_state *
> +get_tunnel_state(struct drm_dp_tunnel_group_state *group_state,
> +                const struct drm_dp_tunnel *tunnel)
> +{
> +       struct drm_dp_tunnel_state *tunnel_state;
> +
> +       for_each_tunnel_state(group_state, tunnel_state)
> +               if (tunnel_state->tunnel_ref.tunnel == tunnel)
> +                       return tunnel_state;
> +
> +       return NULL;
> +}
> +
> +static struct drm_dp_tunnel_state *
> +get_or_add_tunnel_state(struct drm_dp_tunnel_group_state
> *group_state,
> +                       struct drm_dp_tunnel *tunnel)
> +{
> +       struct drm_dp_tunnel_state *tunnel_state;
> +
> +       tunnel_state = get_tunnel_state(group_state, tunnel);
> +       if (tunnel_state)
> +               return tunnel_state;
> +
> +       return add_tunnel_state(group_state, tunnel);
> +}
> +
> +static struct drm_private_state *
> +tunnel_group_duplicate_state(struct drm_private_obj *obj)
> +{
> +       struct drm_dp_tunnel_group_state *group_state =
> to_group_state(obj->state);
> +       struct drm_dp_tunnel_state *tunnel_state;
> +
> +       group_state = kzalloc(sizeof(*group_state), GFP_KERNEL);
> +       if (!group_state)
> +               return NULL;
> +
> +       INIT_LIST_HEAD(&group_state->tunnel_states);
> +
> +       __drm_atomic_helper_private_obj_duplicate_state(obj,
> &group_state->base);
> +
> +       for_each_tunnel_state(to_group_state(obj->state),
> tunnel_state) {
> +               struct drm_dp_tunnel_state *new_tunnel_state;
> +
> +               new_tunnel_state =
> get_or_add_tunnel_state(group_state,
> +                                                         
> tunnel_state->tunnel_ref.tunnel);
> +               if (!new_tunnel_state)
> +                       goto out_free_state;
> +
> +               new_tunnel_state->stream_mask = tunnel_state-
> >stream_mask;
> +               new_tunnel_state->stream_bw = kmemdup(tunnel_state-
> >stream_bw,
> +                                                    
> sizeof(*tunnel_state->stream_bw) *
> +                                                       hweight32(tun
> nel_state->stream_mask),
> +                                                     GFP_KERNEL);
> +
> +               if (!new_tunnel_state->stream_bw)
> +                       goto out_free_state;
> +       }
> +
> +       return &group_state->base;
> +
> +out_free_state:
> +       clear_tunnel_group_state(group_state);
> +       kfree(group_state);
> +
> +       return NULL;
> +}
> +
> +static void tunnel_group_destroy_state(struct drm_private_obj *obj,
> struct drm_private_state *state)
> +{
> +       struct drm_dp_tunnel_group_state *group_state =
> to_group_state(state);
> +
> +       clear_tunnel_group_state(group_state);
> +       kfree(group_state);
> +}
> +
> +static const struct drm_private_state_funcs tunnel_group_funcs = {
> +       .atomic_duplicate_state = tunnel_group_duplicate_state,
> +       .atomic_destroy_state = tunnel_group_destroy_state,
> +};
> +
> +struct drm_dp_tunnel_state *
> +drm_dp_tunnel_atomic_get_state(struct drm_atomic_state *state,
> +                              struct drm_dp_tunnel *tunnel)
> +{
> +       struct drm_dp_tunnel_group_state *group_state =
> +               drm_dp_tunnel_atomic_get_group_state(state, tunnel);
> +       struct drm_dp_tunnel_state *tunnel_state;
> +
> +       if (IS_ERR(group_state))
> +               return ERR_CAST(group_state);
> +
> +       tunnel_state = get_or_add_tunnel_state(group_state, tunnel);
> +       if (!tunnel_state)
> +               return ERR_PTR(-ENOMEM);
> +
> +       return tunnel_state;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_state);
> +
> +struct drm_dp_tunnel_state *
> +drm_dp_tunnel_atomic_get_new_state(struct drm_atomic_state *state,
> +                                  const struct drm_dp_tunnel
> *tunnel)
> +{
> +       struct drm_dp_tunnel_group_state *new_group_state;
> +       int i;
> +
> +       for_each_new_group_in_state(state, new_group_state, i)
> +               if (to_group(new_group_state->base.obj) == tunnel-
> >group)
> +                       return get_tunnel_state(new_group_state,
> tunnel);
> +
> +       return NULL;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_new_state);
> +
> +static bool init_group(struct drm_dp_tunnel_mgr *mgr, struct
> drm_dp_tunnel_group *group)
> +{
> +       struct drm_dp_tunnel_group_state *group_state =
> kzalloc(sizeof(*group_state), GFP_KERNEL);
> +
> +       if (!group_state)
> +               return false;
> +
> +       INIT_LIST_HEAD(&group_state->tunnel_states);
> +
> +       group->mgr = mgr;
> +       group->available_bw = -1;
> +       INIT_LIST_HEAD(&group->tunnels);
> +
> +       drm_atomic_private_obj_init(mgr->dev, &group->base,
> &group_state->base,
> +                                   &tunnel_group_funcs);
> +
> +       return true;
> +}
> +
> +static void cleanup_group(struct drm_dp_tunnel_group *group)
> +{
> +       drm_atomic_private_obj_fini(&group->base);
> +}
> +
> +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> +static void check_unique_stream_ids(const struct
> drm_dp_tunnel_group_state *group_state)
> +{
> +       const struct drm_dp_tunnel_state *tunnel_state;
> +       u32 stream_mask = 0;
> +
> +       for_each_tunnel_state(group_state, tunnel_state) {
> +               drm_WARN(to_group(group_state->base.obj)->mgr->dev,
> +                        tunnel_state->stream_mask & stream_mask,
> +                        "[DPTUN %s]: conflicting stream IDs %x (IDs
> in other tunnels %x)\n",
> +                        tunnel_state->tunnel_ref.tunnel->name,
> +                        tunnel_state->stream_mask,
> +                        stream_mask);
> +
> +               stream_mask |= tunnel_state->stream_mask;
> +       }
> +}
> +#else
> +static void check_unique_stream_ids(const struct
> drm_dp_tunnel_group_state *group_state)
> +{
> +}
> +#endif
> +
> +static int stream_id_to_idx(u32 stream_mask, u8 stream_id)
> +{
> +       return hweight32(stream_mask & (BIT(stream_id) - 1));
> +}
> +
> +static int resize_bw_array(struct drm_dp_tunnel_state *tunnel_state,
> +                          unsigned long old_mask, unsigned long
> new_mask)
> +{
> +       unsigned long move_mask = old_mask & new_mask;
> +       int *new_bws = NULL;
> +       int id;
> +
> +       WARN_ON(!new_mask);
> +
> +       if (old_mask == new_mask)
> +               return 0;
> +
> +       new_bws = kcalloc(hweight32(new_mask), sizeof(*new_bws),
> GFP_KERNEL);
> +       if (!new_bws)
> +               return -ENOMEM;
> +
> +       for_each_set_bit(id, &move_mask, BITS_PER_TYPE(move_mask))
> +               new_bws[stream_id_to_idx(new_mask, id)] =
> +                       tunnel_state-
> >stream_bw[stream_id_to_idx(old_mask, id)];
> +
> +       kfree(tunnel_state->stream_bw);
> +       tunnel_state->stream_bw = new_bws;
> +       tunnel_state->stream_mask = new_mask;
> +
> +       return 0;
> +}
> +
> +static int set_stream_bw(struct drm_dp_tunnel_state *tunnel_state,
> +                        u8 stream_id, int bw)
> +{
> +       int err;
> +
> +       err = resize_bw_array(tunnel_state,
> +                             tunnel_state->stream_mask,
> +                             tunnel_state->stream_mask |
> BIT(stream_id));
> +       if (err)
> +               return err;
> +
> +       tunnel_state->stream_bw[stream_id_to_idx(tunnel_state-
> >stream_mask, stream_id)] = bw;
> +
> +       return 0;
> +}
> +
> +static int clear_stream_bw(struct drm_dp_tunnel_state *tunnel_state,
> +                          u8 stream_id)
> +{
> +       if (!(tunnel_state->stream_mask & ~BIT(stream_id))) {
> +               drm_dp_tunnel_atomic_clear_state(tunnel_state);
> +               return 0;
> +       }
> +
> +       return resize_bw_array(tunnel_state,
> +                              tunnel_state->stream_mask,
> +                              tunnel_state->stream_mask &
> ~BIT(stream_id));
> +}
> +
> +int drm_dp_tunnel_atomic_set_stream_bw(struct drm_atomic_state
> *state,
> +                                        struct drm_dp_tunnel
> *tunnel,
> +                                        u8 stream_id, int bw)
> +{
> +       struct drm_dp_tunnel_group_state *new_group_state =
> +               drm_dp_tunnel_atomic_get_group_state(state, tunnel);
> +       struct drm_dp_tunnel_state *tunnel_state;
> +       int err;
> +
> +       if (IS_ERR(new_group_state))
> +               return PTR_ERR(new_group_state);
> +
> +       if (drm_WARN_ON(tunnel->group->mgr->dev,
> +                       stream_id > BITS_PER_TYPE(tunnel_state-
> >stream_mask)))
> +               return -EINVAL;
> +
> +       tun_dbg(tunnel,
> +               "Setting %d Mb/s for stream %d\n",
> +               DPTUN_BW_ARG(bw), stream_id);
> +
> +       if (bw == 0) {
> +               tunnel_state = get_tunnel_state(new_group_state,
> tunnel);
> +               if (!tunnel_state)
> +                       return 0;
> +
> +               return clear_stream_bw(tunnel_state, stream_id);
> +       }
> +
> +       tunnel_state = get_or_add_tunnel_state(new_group_state,
> tunnel);
> +       if (drm_WARN_ON(state->dev, !tunnel_state))
> +               return -EINVAL;
> +
> +       err = set_stream_bw(tunnel_state, stream_id, bw);
> +       if (err)
> +               return err;
> +
> +       check_unique_stream_ids(new_group_state);
> +
> +       return 0;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_atomic_set_stream_bw);
> +
> +int drm_dp_tunnel_atomic_get_tunnel_bw(const struct
> drm_dp_tunnel_state *tunnel_state)
> +{
> +       int tunnel_bw = 0;
> +       int i;
> +
> +       for (i = 0; i < hweight32(tunnel_state->stream_mask); i++)
> +               tunnel_bw += tunnel_state->stream_bw[i];
> +
> +       return tunnel_bw;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_tunnel_bw);
> +
> +int drm_dp_tunnel_atomic_get_group_streams_in_state(struct
> drm_atomic_state *state,
> +                                                   const struct
> drm_dp_tunnel *tunnel,
> +                                                   u32 *stream_mask)
> +{
> +       struct drm_dp_tunnel_group_state *group_state =
> +               drm_dp_tunnel_atomic_get_group_state(state, tunnel);
> +       struct drm_dp_tunnel_state *tunnel_state;
> +
> +       if (IS_ERR(group_state))
> +               return PTR_ERR(group_state);
> +
> +       *stream_mask = 0;
> +       for_each_tunnel_state(group_state, tunnel_state)
> +               *stream_mask |= tunnel_state->stream_mask;
> +
> +       return 0;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_group_streams_in_state);
> +
> +static int
> +drm_dp_tunnel_atomic_check_group_bw(struct drm_dp_tunnel_group_state
> *new_group_state,
> +                                   u32 *failed_stream_mask)
> +{
> +       struct drm_dp_tunnel_group *group = to_group(new_group_state-
> >base.obj);
> +       struct drm_dp_tunnel_state *new_tunnel_state;
> +       u32 group_stream_mask = 0;
> +       int group_bw = 0;
> +
> +       for_each_tunnel_state(new_group_state, new_tunnel_state) {
> +               struct drm_dp_tunnel *tunnel = new_tunnel_state-
> >tunnel_ref.tunnel;
> +               int max_dprx_bw = get_max_dprx_bw(tunnel);
> +               int tunnel_bw =
> drm_dp_tunnel_atomic_get_tunnel_bw(new_tunnel_state);
> +
> +               tun_dbg(tunnel,
> +                       "%sRequired %d/%d Mb/s total for tunnel.\n",
> +                       tunnel_bw > max_dprx_bw ? "Not enough BW: " :
> "",
> +                       DPTUN_BW_ARG(tunnel_bw),
> +                       DPTUN_BW_ARG(max_dprx_bw));
> +
> +               if (tunnel_bw > max_dprx_bw) {
> +                       *failed_stream_mask = new_tunnel_state-
> >stream_mask;
> +                       return -ENOSPC;
> +               }
> +
> +               group_bw += min(roundup(tunnel_bw, tunnel-
> >bw_granularity),
> +                               max_dprx_bw);
> +               group_stream_mask |= new_tunnel_state->stream_mask;
> +       }
> +
> +       tun_grp_dbg(group,
> +                   "%sRequired %d/%d Mb/s total for tunnel
> group.\n",
> +                   group_bw > group->available_bw ? "Not enough BW:
> " : "",
> +                   DPTUN_BW_ARG(group_bw),
> +                   DPTUN_BW_ARG(group->available_bw));
> +
> +       if (group_bw > group->available_bw) {
> +               *failed_stream_mask = group_stream_mask;
> +               return -ENOSPC;
> +       }
> +
> +       return 0;
> +}
> +
> +int drm_dp_tunnel_atomic_check_stream_bws(struct drm_atomic_state
> *state,
> +                                         u32 *failed_stream_mask)
> +{
> +       struct drm_dp_tunnel_group_state *new_group_state;
> +       int i;
> +
> +       for_each_new_group_in_state(state, new_group_state, i) {
> +               int ret;
> +
> +               ret =
> drm_dp_tunnel_atomic_check_group_bw(new_group_state,
> +                                                        
> failed_stream_mask);
> +               if (ret)
> +                       return ret;
> +       }
> +
> +       return 0;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_atomic_check_stream_bws);
> +
> +static void destroy_mgr(struct drm_dp_tunnel_mgr *mgr)
> +{
> +       int i;
> +
> +       for (i = 0; i < mgr->group_count; i++) {
> +               cleanup_group(&mgr->groups[i]);
> +               drm_WARN_ON(mgr->dev, !list_empty(&mgr-
> >groups[i].tunnels));
> +       }
> +
> +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> +       ref_tracker_dir_exit(&mgr->ref_tracker);
> +#endif
> +
> +       kfree(mgr->groups);
> +       kfree(mgr);
> +}
> +
> +/**
> + * drm_dp_tunnel_mgr_create - Create a DP tunnel manager
> + * @dev: DRM device
> + * @max_group_count: Maximum number of tunnel groups
> + *
> + * Creates a DP tunnel manager.
> + *
> + * Returns a pointer to the tunnel manager if created successfully
> or NULL in
> + * case of an error.
> + */
> +struct drm_dp_tunnel_mgr *
> +drm_dp_tunnel_mgr_create(struct drm_device *dev, int
> max_group_count)
> +{
> +       struct drm_dp_tunnel_mgr *mgr = kzalloc(sizeof(*mgr),
> GFP_KERNEL);
> +       int i;
> +
> +       if (!mgr)
> +               return NULL;
> +
> +       mgr->dev = dev;
> +       init_waitqueue_head(&mgr->bw_req_queue);
> +
> +       mgr->groups = kcalloc(max_group_count, sizeof(*mgr->groups),
> GFP_KERNEL);
> +       if (!mgr->groups) {
> +               kfree(mgr);
> +
> +               return NULL;
> +       }
> +
> +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> +       ref_tracker_dir_init(&mgr->ref_tracker, 16, "dptun");
> +#endif
> +
> +       for (i = 0; i < max_group_count; i++) {
> +               if (!init_group(mgr, &mgr->groups[i])) {
> +                       destroy_mgr(mgr);
> +
> +                       return NULL;
> +               }
> +
> +               mgr->group_count++;
> +       }
> +
> +       return mgr;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_mgr_create);
> +
> +/**
> + * drm_dp_tunnel_mgr_destroy - Destroy DP tunnel manager
> + * @mgr: Tunnel manager object
> + *
> + * Destroy the tunnel manager.
> + */
> +void drm_dp_tunnel_mgr_destroy(struct drm_dp_tunnel_mgr *mgr)
> +{
> +       destroy_mgr(mgr);
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_mgr_destroy);
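For completeness, an illustrative init-time sketch tying the manager and
tunnel detection together (not part of the patch; num_dp_ports and the aux
channel are driver provided):

    /* Illustrative sketch: create the manager once per device, then detect
     * a tunnel on a given AUX channel.
     */
    static struct drm_dp_tunnel *
    example_init_and_detect(struct drm_device *dev, int num_dp_ports,
                            struct drm_dp_aux *aux,
                            struct drm_dp_tunnel_mgr **mgr_out)
    {
            struct drm_dp_tunnel_mgr *mgr;

            mgr = drm_dp_tunnel_mgr_create(dev, num_dp_ports);
            if (!mgr)
                    return ERR_PTR(-ENOMEM);

            *mgr_out = mgr;

            /* Returns an ERR_PTR (e.g. -ENODEV) if there is no tunnel on the link. */
            return drm_dp_tunnel_detect(mgr, aux);
    }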
> diff --git a/include/drm/display/drm_dp.h
> b/include/drm/display/drm_dp.h
> index 281afff6ee4e5..8bfd5d007be8d 100644
> --- a/include/drm/display/drm_dp.h
> +++ b/include/drm/display/drm_dp.h
> @@ -1382,6 +1382,66 @@
>  #define DP_HDCP_2_2_REG_STREAM_TYPE_OFFSET     0x69494
>  #define DP_HDCP_2_2_REG_DBG_OFFSET             0x69518
>  
> +/* DP-tunneling */
> +#define DP_TUNNELING_OUI                               0xe0000
> +#define  DP_TUNNELING_OUI_BYTES                                3
> +
> +#define DP_TUNNELING_DEV_ID                            0xe0003
> +#define  DP_TUNNELING_DEV_ID_BYTES                     6
> +
> +#define DP_TUNNELING_HW_REV                            0xe0009
> +#define  DP_TUNNELING_HW_REV_MAJOR_SHIFT               4
> +#define  DP_TUNNELING_HW_REV_MAJOR_MASK                        (0xf
> << DP_TUNNELING_HW_REV_MAJOR_SHIFT)
> +#define  DP_TUNNELING_HW_REV_MINOR_SHIFT               0
> +#define  DP_TUNNELING_HW_REV_MINOR_MASK                        (0xf
> << DP_TUNNELING_HW_REV_MINOR_SHIFT)
> +
> +#define DP_TUNNELING_SW_REV_MAJOR                      0xe000a
> +#define DP_TUNNELING_SW_REV_MINOR                      0xe000b
> +
> +#define DP_TUNNELING_CAPABILITIES                      0xe000d
> +#define  DP_IN_BW_ALLOCATION_MODE_SUPPORT              (1 << 7)
> +#define  DP_PANEL_REPLAY_OPTIMIZATION_SUPPORT          (1 << 6)
> +#define  DP_TUNNELING_SUPPORT                          (1 << 0)
> +
> +#define DP_IN_ADAPTER_INFO                             0xe000e
> +#define  DP_IN_ADAPTER_NUMBER_BITS                     7
> +#define  DP_IN_ADAPTER_NUMBER_MASK                     ((1 <<
> DP_IN_ADAPTER_NUMBER_BITS) - 1)
> +
> +#define DP_USB4_DRIVER_ID                              0xe000f
> +#define  DP_USB4_DRIVER_ID_BITS                                4
> +#define  DP_USB4_DRIVER_ID_MASK                                ((1
> << DP_USB4_DRIVER_ID_BITS) - 1)
> +
> +#define DP_USB4_DRIVER_BW_CAPABILITY                   0xe0020
> +#define  DP_USB4_DRIVER_BW_ALLOCATION_MODE_SUPPORT     (1 << 7)
> +
> +#define DP_IN_ADAPTER_TUNNEL_INFORMATION               0xe0021
> +#define  DP_GROUP_ID_BITS                              3
> +#define  DP_GROUP_ID_MASK                              ((1 <<
> DP_GROUP_ID_BITS) - 1)
> +
> +#define DP_BW_GRANULARITY                              0xe0022
> +#define  DP_BW_GRANULARITY_MASK                                0x3
> +
> +#define DP_ESTIMATED_BW                                0xe0023
> +#define DP_ALLOCATED_BW                                0xe0024
> +
> +#define DP_TUNNELING_STATUS                            0xe0025
> +#define  DP_BW_ALLOCATION_CAPABILITY_CHANGED           (1 << 3)
> +#define  DP_ESTIMATED_BW_CHANGED                       (1 << 2)
> +#define  DP_BW_REQUEST_SUCCEEDED                       (1 << 1)
> +#define  DP_BW_REQUEST_FAILED                          (1 << 0)
> +
> +#define DP_TUNNELING_MAX_LINK_RATE                     0xe0028
> +
> +#define DP_TUNNELING_MAX_LANE_COUNT                    0xe0029
> +#define  DP_TUNNELING_MAX_LANE_COUNT_MASK              0x1f
> +
> +#define DP_DPTX_BW_ALLOCATION_MODE_CONTROL             0xe0030
> +#define  DP_DISPLAY_DRIVER_BW_ALLOCATION_MODE_ENABLE   (1 << 7)
> +#define  DP_UNMASK_BW_ALLOCATION_IRQ                   (1 << 6)
> +
> +#define DP_REQUEST_BW                                  0xe0031
> +#define  MAX_DP_REQUEST_BW                             255
> +
>  /* LTTPR: Link Training (LT)-tunable PHY Repeaters */
>  #define DP_LT_TUNABLE_PHY_REPEATER_FIELD_DATA_STRUCTURE_REV 0xf0000
> /* 1.3 */
>  #define DP_MAX_LINK_RATE_PHY_REPEATER                      0xf0001
> /* 1.4a */
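As a worked example of the register units above (illustrative, not part of
the patch): DP_ESTIMATED_BW, DP_ALLOCATED_BW and DP_REQUEST_BW are all in
units of the granularity advertised in DP_BW_GRANULARITY, decoded in the
patch as (250000 << code) / 8 kB/s; with a granularity code of 1
(0.5 Gb/s = 62500 kB/s), requesting 8 units corresponds to 4 Gb/s:

    /* Illustrative conversion from a BW value in kB/s to DP_REQUEST_BW units. */
    static u8 example_bw_to_request_units(int bw_kBps, int granularity_code)
    {
            int granularity_kBps = (250000 << granularity_code) / 8;
            int req = DIV_ROUND_UP(bw_kBps, granularity_kBps);

            return min(req, MAX_DP_REQUEST_BW);
    }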
> diff --git a/include/drm/display/drm_dp_tunnel.h
> b/include/drm/display/drm_dp_tunnel.h
> new file mode 100644
> index 0000000000000..f6449b1b4e6e9
> --- /dev/null
> +++ b/include/drm/display/drm_dp_tunnel.h
> @@ -0,0 +1,270 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright © 2023 Intel Corporation
> + */
> +
> +#ifndef __DRM_DP_TUNNEL_H__
> +#define __DRM_DP_TUNNEL_H__
> +
> +#include <linux/err.h>
> +#include <linux/errno.h>
> +#include <linux/types.h>
> +
> +struct drm_dp_aux;
> +
> +struct drm_device;
> +
> +struct drm_atomic_state;
> +struct drm_dp_tunnel_mgr;
> +struct drm_dp_tunnel_state;
> +
> +struct ref_tracker;
> +
> +struct drm_dp_tunnel_ref {
> +       struct drm_dp_tunnel *tunnel;
> +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> +       struct ref_tracker *tracker;
> +#endif
> +};
> +
> +#ifdef CONFIG_DRM_DISPLAY_DP_TUNNEL
> +
> +struct drm_dp_tunnel *
> +drm_dp_tunnel_get_untracked(struct drm_dp_tunnel *tunnel);
> +void drm_dp_tunnel_put_untracked(struct drm_dp_tunnel *tunnel);
> +
> +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> +struct drm_dp_tunnel *
> +drm_dp_tunnel_get(struct drm_dp_tunnel *tunnel, struct ref_tracker
> **tracker);
> +
> +void
> +drm_dp_tunnel_put(struct drm_dp_tunnel *tunnel, struct ref_tracker
> **tracker);
> +#else
> +#define drm_dp_tunnel_get(tunnel, tracker) \
> +       drm_dp_tunnel_get_untracked(tunnel)
> +
> +#define drm_dp_tunnel_put(tunnel, tracker) \
> +       drm_dp_tunnel_put_untracked(tunnel)
> +
> +#endif
> +
> +static inline void drm_dp_tunnel_ref_get(struct drm_dp_tunnel
> *tunnel,
> +                                          struct drm_dp_tunnel_ref
> *tunnel_ref)
> +{
> +       tunnel_ref->tunnel = drm_dp_tunnel_get(tunnel, &tunnel_ref-
> >tracker);
> +}
> +
> +static inline void drm_dp_tunnel_ref_put(struct drm_dp_tunnel_ref
> *tunnel_ref)
> +{
> +       drm_dp_tunnel_put(tunnel_ref->tunnel, &tunnel_ref->tracker);
> +}
> +
> +struct drm_dp_tunnel *
> +drm_dp_tunnel_detect(struct drm_dp_tunnel_mgr *mgr,
> +                    struct drm_dp_aux *aux);
> +int drm_dp_tunnel_destroy(struct drm_dp_tunnel *tunnel);
> +
> +int drm_dp_tunnel_enable_bw_alloc(struct drm_dp_tunnel *tunnel);
> +int drm_dp_tunnel_disable_bw_alloc(struct drm_dp_tunnel *tunnel);
> +bool drm_dp_tunnel_bw_alloc_is_enabled(const struct drm_dp_tunnel
> *tunnel);
> +int drm_dp_tunnel_alloc_bw(struct drm_dp_tunnel *tunnel, int bw);
> +int drm_dp_tunnel_check_state(struct drm_dp_tunnel *tunnel);
> +int drm_dp_tunnel_update_state(struct drm_dp_tunnel *tunnel);
> +
> +void drm_dp_tunnel_set_io_error(struct drm_dp_tunnel *tunnel);
> +
> +int drm_dp_tunnel_handle_irq(struct drm_dp_tunnel_mgr *mgr,
> +                            struct drm_dp_aux *aux);
> +
> +int drm_dp_tunnel_max_dprx_rate(const struct drm_dp_tunnel *tunnel);
> +int drm_dp_tunnel_max_dprx_lane_count(const struct drm_dp_tunnel
> *tunnel);
> +int drm_dp_tunnel_available_bw(const struct drm_dp_tunnel *tunnel);
> +
> +const char *drm_dp_tunnel_name(const struct drm_dp_tunnel *tunnel);
> +
> +struct drm_dp_tunnel_state *
> +drm_dp_tunnel_atomic_get_state(struct drm_atomic_state *state,
> +                              struct drm_dp_tunnel *tunnel);
> +struct drm_dp_tunnel_state *
> +drm_dp_tunnel_atomic_get_new_state(struct drm_atomic_state *state,
> +                                  const struct drm_dp_tunnel
> *tunnel);
> +
> +void drm_dp_tunnel_atomic_clear_state(struct drm_dp_tunnel_state
> *tunnel_state);
> +
> +int drm_dp_tunnel_atomic_set_stream_bw(struct drm_atomic_state
> *state,
> +                                      struct drm_dp_tunnel *tunnel,
> +                                      u8 stream_id, int bw);
> +int drm_dp_tunnel_atomic_get_group_streams_in_state(struct
> drm_atomic_state *state,
> +                                                   const struct
> drm_dp_tunnel *tunnel,
> +                                                   u32
> *stream_mask);
> +
> +int drm_dp_tunnel_atomic_check_stream_bws(struct drm_atomic_state
> *state,
> +                                         u32 *failed_stream_mask);
> +
> +int drm_dp_tunnel_atomic_get_tunnel_bw(const struct
> drm_dp_tunnel_state *tunnel_state);
> +
> +struct drm_dp_tunnel_mgr *
> +drm_dp_tunnel_mgr_create(struct drm_device *dev, int
> max_group_count);
> +void drm_dp_tunnel_mgr_destroy(struct drm_dp_tunnel_mgr *mgr);
> +
> +#else
> +
> +static inline struct drm_dp_tunnel *
> +drm_dp_tunnel_get_untracked(struct drm_dp_tunnel *tunnel)
> +{
> +       return NULL;
> +}
> +
> +static inline void
> +drm_dp_tunnel_put_untracked(struct drm_dp_tunnel *tunnel) {}
> +
> +static inline struct drm_dp_tunnel *
> +drm_dp_tunnel_get(struct drm_dp_tunnel *tunnel, struct ref_tracker
> **tracker)
> +{
> +       return NULL;
> +}
> +
> +static inline void
> +drm_dp_tunnel_put(struct drm_dp_tunnel *tunnel, struct ref_tracker
> **tracker) {}
> +
> +static inline void drm_dp_tunnel_ref_get(struct drm_dp_tunnel
> *tunnel,
> +                                          struct drm_dp_tunnel_ref
> *tunnel_ref) {}
> +static inline void drm_dp_tunnel_ref_put(struct drm_dp_tunnel_ref
> *tunnel_ref) {}
> +
> +static inline struct drm_dp_tunnel *
> +drm_dp_tunnel_detect(struct drm_dp_tunnel_mgr *mgr,
> +                    struct drm_dp_aux *aux)
> +{
> +       return ERR_PTR(-EOPNOTSUPP);
> +}
> +
> +static inline int
> +drm_dp_tunnel_destroy(struct drm_dp_tunnel *tunnel)
> +{
> +       return 0;
> +}
> +
> +static inline int drm_dp_tunnel_enable_bw_alloc(struct drm_dp_tunnel
> *tunnel)
> +{
> +       return -EOPNOTSUPP;
> +}
> +
> +static inline int drm_dp_tunnel_disable_bw_alloc(struct
> drm_dp_tunnel *tunnel)
> +{
> +       return -EOPNOTSUPP;
> +}
> +
> +static inline bool drm_dp_tunnel_bw_alloc_is_enabled(const struct
> drm_dp_tunnel *tunnel)
> +{
> +       return false;
> +}
> +
> +static inline int
> +drm_dp_tunnel_alloc_bw(struct drm_dp_tunnel *tunnel, int bw)
> +{
> +       return -EOPNOTSUPP;
> +}
> +
> +static inline int
> +drm_dp_tunnel_check_state(struct drm_dp_tunnel *tunnel)
> +{
> +       return -EOPNOTSUPP;
> +}
> +
> +static inline int
> +drm_dp_tunnel_update_state(struct drm_dp_tunnel *tunnel)
> +{
> +       return -EOPNOTSUPP;
> +}
> +
> +static inline void drm_dp_tunnel_set_io_error(struct drm_dp_tunnel
> *tunnel) {}
> +static inline int
> +drm_dp_tunnel_handle_irq(struct drm_dp_tunnel_mgr *mgr,
> +                        struct drm_dp_aux *aux)
> +{
> +       return -EOPNOTSUPP;
> +}
> +
> +static inline int
> +drm_dp_tunnel_max_dprx_rate(const struct drm_dp_tunnel *tunnel)
> +{
> +       return 0;
> +}
> +
> +static inline int
> +drm_dp_tunnel_max_dprx_lane_count(const struct drm_dp_tunnel
> *tunnel)
> +{
> +       return 0;
> +}
> +
> +static inline int
> +drm_dp_tunnel_available_bw(const struct drm_dp_tunnel *tunnel)
> +{
> +       return -1;
> +}
> +
> +static inline const char *
> +drm_dp_tunnel_name(const struct drm_dp_tunnel *tunnel)
> +{
> +       return NULL;
> +}
> +
> +static inline struct drm_dp_tunnel_state *
> +drm_dp_tunnel_atomic_get_state(struct drm_atomic_state *state,
> +                              struct drm_dp_tunnel *tunnel)
> +{
> +       return ERR_PTR(-EOPNOTSUPP);
> +}
> +
> +static inline struct drm_dp_tunnel_state *
> +drm_dp_tunnel_atomic_get_new_state(struct drm_atomic_state *state,
> +                                  const struct drm_dp_tunnel
> *tunnel)
> +{
> +       return ERR_PTR(-EOPNOTSUPP);
> +}
> +
> +static inline void
> +drm_dp_tunnel_atomic_clear_state(struct drm_dp_tunnel_state
> *tunnel_state) {}
> +
> +static inline int
> +drm_dp_tunnel_atomic_set_stream_bw(struct drm_atomic_state *state,
> +                                  struct drm_dp_tunnel *tunnel,
> +                                  u8 stream_id, int bw)
> +{
> +       return -EOPNOTSUPP;
> +}
> +
> +static inline int
> +drm_dp_tunnel_atomic_get_group_streams_in_state(struct
> drm_atomic_state *state,
> +                                               const struct
> drm_dp_tunnel *tunnel,
> +                                               u32 *stream_mask)
> +{
> +       return -EOPNOTSUPP;
> +}
> +
> +static inline int
> +drm_dp_tunnel_atomic_check_stream_bws(struct drm_atomic_state
> *state,
> +                                     u32 *failed_stream_mask)
> +{
> +       return -EOPNOTSUPP;
> +}
> +
> +static inline int
> +drm_dp_tunnel_atomic_get_tunnel_bw(const struct drm_dp_tunnel_state
> *tunnel_state)
> +{
> +       return 0;
> +}
> +
> +static inline struct drm_dp_tunnel_mgr *
> +drm_dp_tunnel_mgr_create(struct drm_device *dev, int
> max_group_count)
> +{
> +       return ERR_PTR(-EOPNOTSUPP);
> +}
> +
> +static inline
> +void drm_dp_tunnel_mgr_destroy(struct drm_dp_tunnel_mgr *mgr) {}
> +
> +#endif /* CONFIG_DRM_DISPLAY_DP_TUNNEL */
> +
> +#endif /* __DRM_DP_TUNNEL_H__ */



* Re: [PATCH 02/19] drm/dp: Add support for DP tunneling
  2024-01-31 12:50   ` Hogander, Jouni
@ 2024-01-31 13:58     ` Imre Deak
  0 siblings, 0 replies; 61+ messages in thread
From: Imre Deak @ 2024-01-31 13:58 UTC (permalink / raw)
  To: Hogander, Jouni; +Cc: intel-gfx, dri-devel

On Wed, Jan 31, 2024 at 02:50:16PM +0200, Hogander, Jouni wrote:
> [...]
> > +
> > +struct drm_dp_tunnel_group;
> > +
> > +struct drm_dp_tunnel {
> > +       struct drm_dp_tunnel_group *group;
> > +
> > +       struct list_head node;
> > +
> > +       struct kref kref;
> > +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> > +       struct ref_tracker *tracker;
> > +#endif
> > +       struct drm_dp_aux *aux;
> > +       char name[8];
> > +
> > +       int bw_granularity;
> > +       int estimated_bw;
> > +       int allocated_bw;
> > +
> > +       int max_dprx_rate;
> > +       u8 max_dprx_lane_count;
> > +
> > +       u8 adapter_id;
> > +
> > +       bool bw_alloc_supported:1;
> > +       bool bw_alloc_enabled:1;
> > +       bool has_io_error:1;
> > +       bool destroyed:1;
> > +};
> > +
> > +struct drm_dp_tunnel_group_state;
> > +
> > +struct drm_dp_tunnel_state {
> > +       struct drm_dp_tunnel_group_state *group_state;
> > +
> > +       struct drm_dp_tunnel_ref tunnel_ref;
> > +
> > +       struct list_head node;
> > +
> > +       u32 stream_mask;
> 
> I'm wondering if drm_dp_tunnel_state can really contain several streams
> and what kind of scenario that would be. From the i915 point of view I
> would understand that several pipes are routed to a DP tunnel.

Yes, multiple pipes can go through the same tunnel; the use case for that
is MST with multiple streams. The "stream" term is only an abstraction
that could map to a different physical thing in various drivers, but for
i915 it just means pipes. Not 100% sure that's the best mapping, since in
the bigjoiner case there would be multiple pipes, but possibly (in the
SST case) only one stream from the tunneling POV.

> Is it bigjoiner case?

IIUC in that (SST) case the streams would already be joined before going
to the TBT DP_IN adapter, so that's only one stream in stream_mask above
(unless it's MST + bigjoiner, where you could have 2 MST/DP tunnel
streams, each consisting of 2 pipes).
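
To make the mapping concrete, an illustrative atomic-check sketch (not part
of the series) where the stream ID is simply the pipe index and
stream_bw_kBps is whatever BW the driver computed for that pipe:

    /* Illustrative sketch: record one stream's BW in the tunnel atomic state. */
    static int example_set_stream_bw(struct drm_atomic_state *state,
                                     struct drm_dp_tunnel *tunnel,
                                     u8 pipe, int stream_bw_kBps)
    {
            int err;

            err = drm_dp_tunnel_atomic_set_stream_bw(state, tunnel, pipe,
                                                     stream_bw_kBps);
            if (err)
                    return err;

            /* Once all enabled streams are added, the per-tunnel and per-group
             * limits can be validated with drm_dp_tunnel_atomic_check_stream_bws().
             */
            return 0;
    }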

> BR,
> 
> Jouni Högander
> 
> > +       int *stream_bw;
> > +};
> > +
> > +struct drm_dp_tunnel_group_state {
> > +       struct drm_private_state base;
> > +
> > +       struct list_head tunnel_states;
> > +};
> > +
> > +struct drm_dp_tunnel_group {
> > +       struct drm_private_obj base;
> > +       struct drm_dp_tunnel_mgr *mgr;
> > +
> > +       struct list_head tunnels;
> > +
> > +       int available_bw;       /* available BW including the
> > allocated_bw of all tunnels */
> > +       int drv_group_id;
> > +
> > +       char name[8];
> > +
> > +       bool active:1;
> > +};
> > +
> > +struct drm_dp_tunnel_mgr {
> > +       struct drm_device *dev;
> > +
> > +       int group_count;
> > +       struct drm_dp_tunnel_group *groups;
> > +       wait_queue_head_t bw_req_queue;
> > +
> > +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> > +       struct ref_tracker_dir ref_tracker;
> > +#endif
> > +};
> > +
> > +static int next_reg_area(int *offset)
> > +{
> > +       *offset = find_next_bit(dptun_info_regs, 64, *offset);
> > +
> > +       return find_next_zero_bit(dptun_info_regs, 64, *offset + 1) -
> > *offset;
> > +}
> > +
> > +#define tunnel_reg_ptr(__regs, __address) ({ \
> > +       WARN_ON(!test_bit((__address) - DP_TUNNELING_BASE,
> > dptun_info_regs)); \
> > +       &(__regs)->buf[bitmap_weight(dptun_info_regs, (__address) -
> > DP_TUNNELING_BASE)]; \
> > +})
> > +
> > +static int read_tunnel_regs(struct drm_dp_aux *aux, struct
> > drm_dp_tunnel_regs *regs)
> > +{
> > +       int offset = 0;
> > +       int len;
> > +
> > +       while ((len = next_reg_area(&offset))) {
> > +               int address = DP_TUNNELING_BASE + offset;
> > +
> > +               if (drm_dp_dpcd_read(aux, address,
> > tunnel_reg_ptr(regs, address), len) < 0)
> > +                       return -EIO;
> > +
> > +               offset += len;
> > +       }
> > +
> > +       return 0;
> > +}
> > +
> > +static u8 tunnel_reg(const struct drm_dp_tunnel_regs *regs, int
> > address)
> > +{
> > +       return *tunnel_reg_ptr(regs, address);
> > +}
> > +
> > +static int tunnel_reg_drv_group_id(const struct drm_dp_tunnel_regs
> > *regs)
> > +{
> > +       int drv_id = tunnel_reg(regs, DP_USB4_DRIVER_ID) &
> > DP_USB4_DRIVER_ID_MASK;
> > +       int group_id = tunnel_reg(regs,
> > DP_IN_ADAPTER_TUNNEL_INFORMATION) & DP_GROUP_ID_MASK;
> > +
> > +       if (!group_id)
> > +               return 0;
> > +
> > +       return (drv_id << DP_GROUP_ID_BITS) | group_id;
> > +}
> > +
> > +/* Return granularity in kB/s units */
> > +static int tunnel_reg_bw_granularity(const struct drm_dp_tunnel_regs
> > *regs)
> > +{
> > +       int gr = tunnel_reg(regs, DP_BW_GRANULARITY) &
> > DP_BW_GRANULARITY_MASK;
> > +
> > +       WARN_ON(gr > 2);
> > +
> > +       return (250000 << gr) / 8;
> > +}
> > +
> > +static int tunnel_reg_max_dprx_rate(const struct drm_dp_tunnel_regs
> > *regs)
> > +{
> > +       u8 bw_code = tunnel_reg(regs, DP_TUNNELING_MAX_LINK_RATE);
> > +
> > +       return drm_dp_bw_code_to_link_rate(bw_code);
> > +}
> > +
> > +static int tunnel_reg_max_dprx_lane_count(const struct
> > drm_dp_tunnel_regs *regs)
> > +{
> > +       u8 lane_count = tunnel_reg(regs, DP_TUNNELING_MAX_LANE_COUNT)
> > &
> > +                       DP_TUNNELING_MAX_LANE_COUNT_MASK;
> > +
> > +       return lane_count;
> > +}
> > +
> > +static bool tunnel_reg_bw_alloc_supported(const struct
> > drm_dp_tunnel_regs *regs)
> > +{
> > +       u8 cap_mask = DP_TUNNELING_SUPPORT |
> > DP_IN_BW_ALLOCATION_MODE_SUPPORT;
> > +
> > +       if ((tunnel_reg(regs, DP_TUNNELING_CAPABILITIES) & cap_mask)
> > != cap_mask)
> > +               return false;
> > +
> > +       return tunnel_reg(regs, DP_USB4_DRIVER_BW_CAPABILITY) &
> > +              DP_USB4_DRIVER_BW_ALLOCATION_MODE_SUPPORT;
> > +}
> > +
> > +static bool tunnel_reg_bw_alloc_enabled(const struct
> > drm_dp_tunnel_regs *regs)
> > +{
> > +       return tunnel_reg(regs, DP_DPTX_BW_ALLOCATION_MODE_CONTROL) &
> > +               DP_DISPLAY_DRIVER_BW_ALLOCATION_MODE_ENABLE;
> > +}
> > +
> > +static int tunnel_group_drv_id(int drv_group_id)
> > +{
> > +       return drv_group_id >> DP_GROUP_ID_BITS;
> > +}
> > +
> > +static int tunnel_group_id(int drv_group_id)
> > +{
> > +       return drv_group_id & DP_GROUP_ID_MASK;
> > +}
> > +
> > +const char *drm_dp_tunnel_name(const struct drm_dp_tunnel *tunnel)
> > +{
> > +       return tunnel->name;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_name);
> > +
> > +static const char *drm_dp_tunnel_group_name(const struct
> > drm_dp_tunnel_group *group)
> > +{
> > +       return group->name;
> > +}
> > +
> > +static struct drm_dp_tunnel_group *
> > +lookup_or_alloc_group(struct drm_dp_tunnel_mgr *mgr, int
> > drv_group_id)
> > +{
> > +       struct drm_dp_tunnel_group *group = NULL;
> > +       int i;
> > +
> > +       for (i = 0; i < mgr->group_count; i++) {
> > +               /*
> > +                * A tunnel group with 0 group ID shouldn't have more
> > +                * than one tunnel.
> > +                */
> > +               if (tunnel_group_id(drv_group_id) &&
> > +                   mgr->groups[i].drv_group_id == drv_group_id)
> > +                       return &mgr->groups[i];
> > +
> > +               if (!group && !mgr->groups[i].active)
> > +                       group = &mgr->groups[i];
> > +       }
> > +
> > +       if (!group) {
> > +               drm_dbg_kms(mgr->dev,
> > +                           "DPTUN: Can't allocate more tunnel
> > groups\n");
> > +               return NULL;
> > +       }
> > +
> > +       group->drv_group_id = drv_group_id;
> > +       group->active = true;
> > +
> > +       snprintf(group->name, sizeof(group->name), "%d:%d:*",
> > +                tunnel_group_drv_id(drv_group_id) & ((1 <<
> > DP_GROUP_ID_BITS) - 1),
> > +                tunnel_group_id(drv_group_id) & ((1 <<
> > DP_USB4_DRIVER_ID_BITS) - 1));
> > +
> > +       return group;
> > +}
> > +
> > +static void free_group(struct drm_dp_tunnel_group *group)
> > +{
> > +       struct drm_dp_tunnel_mgr *mgr = group->mgr;
> > +
> > +       if (drm_WARN_ON(mgr->dev, !list_empty(&group->tunnels)))
> > +               return;
> > +
> > +       group->drv_group_id = 0;
> > +       group->available_bw = -1;
> > +       group->active = false;
> > +}
> > +
> > +static struct drm_dp_tunnel *
> > +tunnel_get(struct drm_dp_tunnel *tunnel)
> > +{
> > +       kref_get(&tunnel->kref);
> > +
> > +       return tunnel;
> > +}
> > +
> > +static void free_tunnel(struct kref *kref)
> > +{
> > +       struct drm_dp_tunnel *tunnel = container_of(kref,
> > typeof(*tunnel), kref);
> > +       struct drm_dp_tunnel_group *group = tunnel->group;
> > +
> > +       list_del(&tunnel->node);
> > +       if (list_empty(&group->tunnels))
> > +               free_group(group);
> > +
> > +       kfree(tunnel);
> > +}
> > +
> > +static void tunnel_put(struct drm_dp_tunnel *tunnel)
> > +{
> > +       kref_put(&tunnel->kref, free_tunnel);
> > +}
> > +
> > +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> > +static void track_tunnel_ref(struct drm_dp_tunnel *tunnel,
> > +                            struct ref_tracker **tracker)
> > +{
> > +       ref_tracker_alloc(&tunnel->group->mgr->ref_tracker,
> > +                         tracker, GFP_KERNEL);
> > +}
> > +
> > +static void untrack_tunnel_ref(struct drm_dp_tunnel *tunnel,
> > +                              struct ref_tracker **tracker)
> > +{
> > +       ref_tracker_free(&tunnel->group->mgr->ref_tracker,
> > +                        tracker);
> > +}
> > +
> > +struct drm_dp_tunnel *
> > +drm_dp_tunnel_get_untracked(struct drm_dp_tunnel *tunnel)
> > +{
> > +       track_tunnel_ref(tunnel, NULL);
> > +
> > +       return tunnel_get(tunnel);
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_get_untracked);
> > +
> > +void drm_dp_tunnel_put_untracked(struct drm_dp_tunnel *tunnel)
> > +{
> > +       tunnel_put(tunnel);
> > +       untrack_tunnel_ref(tunnel, NULL);
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_put_untracked);
> > +
> > +struct drm_dp_tunnel *
> > +drm_dp_tunnel_get(struct drm_dp_tunnel *tunnel,
> > +                   struct ref_tracker **tracker)
> > +{
> > +       track_tunnel_ref(tunnel, tracker);
> > +
> > +       return tunnel_get(tunnel);
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_get);
> > +
> > +void drm_dp_tunnel_put(struct drm_dp_tunnel *tunnel,
> > +                        struct ref_tracker **tracker)
> > +{
> > +       untrack_tunnel_ref(tunnel, tracker);
> > +       tunnel_put(tunnel);
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_put);
> > +#else
> > +#define track_tunnel_ref(tunnel, tracker) do {} while (0)
> > +#define untrack_tunnel_ref(tunnel, tracker) do {} while (0)
> > +
> > +struct drm_dp_tunnel *
> > +drm_dp_tunnel_get_untracked(struct drm_dp_tunnel *tunnel)
> > +{
> > +       return tunnel_get(tunnel);
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_get_untracked);
> > +
> > +void drm_dp_tunnel_put_untracked(struct drm_dp_tunnel *tunnel)
> > +{
> > +       tunnel_put(tunnel);
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_put_untracked);
> > +#endif
> > +
> > +static bool add_tunnel_to_group(struct drm_dp_tunnel_mgr *mgr,
> > +                               int drv_group_id,
> > +                               struct drm_dp_tunnel *tunnel)
> > +{
> > +       struct drm_dp_tunnel_group *group =
> > +               lookup_or_alloc_group(mgr, drv_group_id);
> > +
> > +       if (!group)
> > +               return false;
> > +
> > +       tunnel->group = group;
> > +       list_add(&tunnel->node, &group->tunnels);
> > +
> > +       return true;
> > +}
> > +
> > +static struct drm_dp_tunnel *
> > +create_tunnel(struct drm_dp_tunnel_mgr *mgr,
> > +             struct drm_dp_aux *aux,
> > +             const struct drm_dp_tunnel_regs *regs)
> > +{
> > +       int drv_group_id = tunnel_reg_drv_group_id(regs);
> > +       struct drm_dp_tunnel *tunnel;
> > +
> > +       tunnel = kzalloc(sizeof(*tunnel), GFP_KERNEL);
> > +       if (!tunnel)
> > +               return NULL;
> > +
> > +       INIT_LIST_HEAD(&tunnel->node);
> > +
> > +       kref_init(&tunnel->kref);
> > +
> > +       tunnel->aux = aux;
> > +
> > +       tunnel->adapter_id = tunnel_reg(regs, DP_IN_ADAPTER_INFO) &
> > DP_IN_ADAPTER_NUMBER_MASK;
> > +
> > +       snprintf(tunnel->name, sizeof(tunnel->name), "%d:%d:%d",
> > +                tunnel_group_drv_id(drv_group_id) & ((1 <<
> > DP_GROUP_ID_BITS) - 1),
> > +                tunnel_group_id(drv_group_id) & ((1 <<
> > DP_USB4_DRIVER_ID_BITS) - 1),
> > +                tunnel->adapter_id & ((1 <<
> > DP_IN_ADAPTER_NUMBER_BITS) - 1));
> > +
> > +       tunnel->bw_granularity = tunnel_reg_bw_granularity(regs);
> > +       tunnel->allocated_bw = tunnel_reg(regs, DP_ALLOCATED_BW) *
> > +                              tunnel->bw_granularity;
> > +
> > +       tunnel->bw_alloc_supported =
> > tunnel_reg_bw_alloc_supported(regs);
> > +       tunnel->bw_alloc_enabled = tunnel_reg_bw_alloc_enabled(regs);
> > +
> > +       if (!add_tunnel_to_group(mgr, drv_group_id, tunnel)) {
> > +               kfree(tunnel);
> > +
> > +               return NULL;
> > +       }
> > +
> > +       track_tunnel_ref(tunnel, &tunnel->tracker);
> > +
> > +       return tunnel;
> > +}
> > +
> > +static void destroy_tunnel(struct drm_dp_tunnel *tunnel)
> > +{
> > +       untrack_tunnel_ref(tunnel, &tunnel->tracker);
> > +       tunnel_put(tunnel);
> > +}
> > +
> > +void drm_dp_tunnel_set_io_error(struct drm_dp_tunnel *tunnel)
> > +{
> > +       tunnel->has_io_error = true;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_set_io_error);
> > +
> > +static char yes_no_chr(int val)
> > +{
> > +       return val ? 'Y' : 'N';
> > +}
> > +
> > +#define SKIP_DPRX_CAPS_CHECK           BIT(0)
> > +#define ALLOW_ALLOCATED_BW_CHANGE      BIT(1)
> > +
> > +static bool tunnel_regs_are_valid(struct drm_dp_tunnel_mgr *mgr,
> > +                                 const struct drm_dp_tunnel_regs
> > *regs,
> > +                                 unsigned int flags)
> > +{
> > +       int drv_group_id = tunnel_reg_drv_group_id(regs);
> > +       bool check_dprx = !(flags & SKIP_DPRX_CAPS_CHECK);
> > +       bool ret = true;
> > +
> > +       if (!tunnel_reg_bw_alloc_supported(regs)) {
> > +               if (tunnel_group_id(drv_group_id)) {
> > +                       drm_dbg_kms(mgr->dev,
> > +                                   "DPTUN: A non-zero group ID is
> > only allowed with BWA support\n");
> > +                       ret = false;
> > +               }
> > +
> > +               if (tunnel_reg(regs, DP_ALLOCATED_BW)) {
> > +                       drm_dbg_kms(mgr->dev,
> > +                                   "DPTUN: BW is allocated without
> > BWA support\n");
> > +                       ret = false;
> > +               }
> > +
> > +               return ret;
> > +       }
> > +
> > +       if (!tunnel_group_id(drv_group_id)) {
> > +               drm_dbg_kms(mgr->dev,
> > +                           "DPTUN: BWA support requires a non-zero
> > group ID\n");
> > +               ret = false;
> > +       }
> > +
> > +       if (check_dprx &&
> > hweight8(tunnel_reg_max_dprx_lane_count(regs)) != 1) {
> > +               drm_dbg_kms(mgr->dev,
> > +                           "DPTUN: Invalid DPRX lane count: %d\n",
> > +                           tunnel_reg_max_dprx_lane_count(regs));
> > +
> > +               ret = false;
> > +       }
> > +
> > +       if (check_dprx && !tunnel_reg_max_dprx_rate(regs)) {
> > +               drm_dbg_kms(mgr->dev,
> > +                           "DPTUN: DPRX rate is 0\n");
> > +
> > +               ret = false;
> > +       }
> > +
> > +       if (tunnel_reg(regs, DP_ALLOCATED_BW) > tunnel_reg(regs,
> > DP_ESTIMATED_BW)) {
> > +               drm_dbg_kms(mgr->dev,
> > +                           "DPTUN: Allocated BW %d > estimated BW %d
> > Mb/s\n",
> > +                           DPTUN_BW_ARG(tunnel_reg(regs,
> > DP_ALLOCATED_BW) *
> > +
> > tunnel_reg_bw_granularity(regs)),
> > +                           DPTUN_BW_ARG(tunnel_reg(regs,
> > DP_ESTIMATED_BW) *
> > +
> > tunnel_reg_bw_granularity(regs)));
> > +
> > +               ret = false;
> > +       }
> > +
> > +       return ret;
> > +}
> > +
> > +static bool tunnel_info_changes_are_valid(struct drm_dp_tunnel
> > *tunnel,
> > +                                         const struct
> > drm_dp_tunnel_regs *regs,
> > +                                         unsigned int flags)
> > +{
> > +       int new_drv_group_id = tunnel_reg_drv_group_id(regs);
> > +       bool ret = true;
> > +
> > +       if (tunnel->bw_alloc_supported !=
> > tunnel_reg_bw_alloc_supported(regs)) {
> > +               tun_dbg(tunnel,
> > +                       "BW alloc support has changed %c -> %c\n",
> > +                       yes_no_chr(tunnel->bw_alloc_supported),
> > +                       yes_no_chr(tunnel_reg_bw_alloc_supported(regs
> > )));
> > +
> > +               ret = false;
> > +       }
> > +
> > +       if (tunnel->group->drv_group_id != new_drv_group_id) {
> > +               tun_dbg(tunnel,
> > +                       "Driver/group ID has changed %d:%d:* ->
> > %d:%d:*\n",
> > +                       tunnel_group_drv_id(tunnel->group-
> > >drv_group_id),
> > +                       tunnel_group_id(tunnel->group->drv_group_id),
> > +                       tunnel_group_drv_id(new_drv_group_id),
> > +                       tunnel_group_id(new_drv_group_id));
> > +
> > +               ret = false;
> > +       }
> > +
> > +       if (!tunnel->bw_alloc_supported)
> > +               return ret;
> > +
> > +       if (tunnel->bw_granularity !=
> > tunnel_reg_bw_granularity(regs)) {
> > +               tun_dbg(tunnel,
> > +                       "BW granularity has changed: %d -> %d
> > Mb/s\n",
> > +                       DPTUN_BW_ARG(tunnel->bw_granularity),
> > +                       DPTUN_BW_ARG(tunnel_reg_bw_granularity(regs))
> > );
> > +
> > +               ret = false;
> > +       }
> > +
> > +       /*
> > +        * On some devices at least the BW alloc mode enabled status
> > is always
> > +        * reported as 0, so skip checking that here.
> > +        */
> > +
> > +       if (!(flags & ALLOW_ALLOCATED_BW_CHANGE) &&
> > +           tunnel->allocated_bw !=
> > +           tunnel_reg(regs, DP_ALLOCATED_BW) * tunnel-
> > >bw_granularity) {
> > +               tun_dbg(tunnel,
> > +                       "Allocated BW has changed: %d -> %d Mb/s\n",
> > +                       DPTUN_BW_ARG(tunnel->allocated_bw),
> > +                       DPTUN_BW_ARG(tunnel_reg(regs,
> > DP_ALLOCATED_BW) * tunnel->bw_granularity));
> > +
> > +               ret = false;
> > +       }
> > +
> > +       return ret;
> > +}
> > +
> > +static int
> > +read_and_verify_tunnel_regs(struct drm_dp_tunnel *tunnel,
> > +                           struct drm_dp_tunnel_regs *regs,
> > +                           unsigned int flags)
> > +{
> > +       int err;
> > +
> > +       err = read_tunnel_regs(tunnel->aux, regs);
> > +       if (err < 0) {
> > +               drm_dp_tunnel_set_io_error(tunnel);
> > +
> > +               return err;
> > +       }
> > +
> > +       if (!tunnel_regs_are_valid(tunnel->group->mgr, regs, flags))
> > +               return -EINVAL;
> > +
> > +       if (!tunnel_info_changes_are_valid(tunnel, regs, flags))
> > +               return -EINVAL;
> > +
> > +       return 0;
> > +}
> > +
> > +static bool update_dprx_caps(struct drm_dp_tunnel *tunnel, const
> > struct drm_dp_tunnel_regs *regs)
> > +{
> > +       bool changed = false;
> > +
> > +       if (tunnel_reg_max_dprx_rate(regs) != tunnel->max_dprx_rate)
> > {
> > +               tunnel->max_dprx_rate =
> > tunnel_reg_max_dprx_rate(regs);
> > +               changed = true;
> > +       }
> > +
> > +       if (tunnel_reg_max_dprx_lane_count(regs) != tunnel-
> > >max_dprx_lane_count) {
> > +               tunnel->max_dprx_lane_count =
> > tunnel_reg_max_dprx_lane_count(regs);
> > +               changed = true;
> > +       }
> > +
> > +       return changed;
> > +}
> > +
> > +static int dev_id_len(const u8 *dev_id, int max_len)
> > +{
> > +       while (max_len && dev_id[max_len - 1] == '\0')
> > +               max_len--;
> > +
> > +       return max_len;
> > +}
> > +
> > +static int get_max_dprx_bw(const struct drm_dp_tunnel *tunnel)
> > +{
> > +       int bw = drm_dp_max_dprx_data_rate(tunnel->max_dprx_rate,
> > +                                          tunnel-
> > >max_dprx_lane_count);
> > +
> > +       return min(roundup(bw, tunnel->bw_granularity),
> > +                  MAX_DP_REQUEST_BW * tunnel->bw_granularity);
> > +}
> > +
> > +static int get_max_tunnel_bw(const struct drm_dp_tunnel *tunnel)
> > +{
> > +       return min(get_max_dprx_bw(tunnel), tunnel->group-
> > >available_bw);
> > +}
> > +
> > +/**
> > + * drm_dp_tunnel_detect - Detect DP tunnel on the link
> > + * @mgr: Tunnel manager
> > + * @aux: DP AUX on which the tunnel will be detected
> > + *
> > + * Detect if there is any DP tunnel on the link and add it to the
> > tunnel
> > + * group's tunnel list.
> > + *
> > + * Returns a pointer to the tunnel, or an ERR_PTR() value on failure.
> > + */
> > +struct drm_dp_tunnel *
> > +drm_dp_tunnel_detect(struct drm_dp_tunnel_mgr *mgr,
> > +                      struct drm_dp_aux *aux)
> > +{
> > +       struct drm_dp_tunnel_regs regs;
> > +       struct drm_dp_tunnel *tunnel;
> > +       int err;
> > +
> > +       err = read_tunnel_regs(aux, &regs);
> > +       if (err)
> > +               return ERR_PTR(err);
> > +
> > +       if (!(tunnel_reg(&regs, DP_TUNNELING_CAPABILITIES) &
> > +             DP_TUNNELING_SUPPORT))
> > +               return ERR_PTR(-ENODEV);
> > +
> > +       /* The DPRX caps are valid only after enabling BW alloc mode.
> > */
> > +       if (!tunnel_regs_are_valid(mgr, &regs, SKIP_DPRX_CAPS_CHECK))
> > +               return ERR_PTR(-EINVAL);
> > +
> > +       tunnel = create_tunnel(mgr, aux, &regs);
> > +       if (!tunnel)
> > +               return ERR_PTR(-ENOMEM);
> > +
> > +       tun_dbg(tunnel,
> > +               "OUI:%*phD DevID:%*pE Rev-HW:%d.%d SW:%d.%d PR-Sup:%c
> > BWA-Sup:%c BWA-En:%c\n",
> > +               DP_TUNNELING_OUI_BYTES,
> > +                       tunnel_reg_ptr(&regs, DP_TUNNELING_OUI),
> > +               dev_id_len(tunnel_reg_ptr(&regs,
> > DP_TUNNELING_DEV_ID), DP_TUNNELING_DEV_ID_BYTES),
> > +                       tunnel_reg_ptr(&regs, DP_TUNNELING_DEV_ID),
> > +               (tunnel_reg(&regs, DP_TUNNELING_HW_REV) &
> > DP_TUNNELING_HW_REV_MAJOR_MASK) >>
> > +                       DP_TUNNELING_HW_REV_MAJOR_SHIFT,
> > +               (tunnel_reg(&regs, DP_TUNNELING_HW_REV) &
> > DP_TUNNELING_HW_REV_MINOR_MASK) >>
> > +                       DP_TUNNELING_HW_REV_MINOR_SHIFT,
> > +               tunnel_reg(&regs, DP_TUNNELING_SW_REV_MAJOR),
> > +               tunnel_reg(&regs, DP_TUNNELING_SW_REV_MINOR),
> > +               yes_no_chr(tunnel_reg(&regs,
> > DP_TUNNELING_CAPABILITIES) &
> > +                          DP_PANEL_REPLAY_OPTIMIZATION_SUPPORT),
> > +               yes_no_chr(tunnel->bw_alloc_supported),
> > +               yes_no_chr(tunnel->bw_alloc_enabled));
> > +
> > +       return tunnel;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_detect);
> > +
> > +/**
> > + * drm_dp_tunnel_destroy - Destroy tunnel object
> > + * @tunnel: Tunnel object
> > + *
> > + * Remove the tunnel from the tunnel topology and destroy it.
> > + */
> > +int drm_dp_tunnel_destroy(struct drm_dp_tunnel *tunnel)
> > +{
> > +       if (drm_WARN_ON(tunnel->group->mgr->dev, tunnel->destroyed))
> > +               return -ENODEV;
> > +
> > +       tun_dbg(tunnel, "destroying\n");
> > +
> > +       tunnel->destroyed = true;
> > +       destroy_tunnel(tunnel);
> > +
> > +       return 0;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_destroy);
> > +
> > +static int check_tunnel(const struct drm_dp_tunnel *tunnel)
> > +{
> > +       if (tunnel->destroyed)
> > +               return -ENODEV;
> > +
> > +       if (tunnel->has_io_error)
> > +               return -EIO;
> > +
> > +       return 0;
> > +}
> > +
> > +static int group_allocated_bw(struct drm_dp_tunnel_group *group)
> > +{
> > +       struct drm_dp_tunnel *tunnel;
> > +       int group_allocated_bw = 0;
> > +
> > +       for_each_tunnel_in_group(group, tunnel) {
> > +               if (check_tunnel(tunnel) == 0 &&
> > +                   tunnel->bw_alloc_enabled)
> > +                       group_allocated_bw += tunnel->allocated_bw;
> > +       }
> > +
> > +       return group_allocated_bw;
> > +}
> > +
> > +static int calc_group_available_bw(const struct drm_dp_tunnel
> > *tunnel)
> > +{
> > +       return group_allocated_bw(tunnel->group) -
> > +              tunnel->allocated_bw +
> > +              tunnel->estimated_bw;
> > +}
> > +
> > +static int update_group_available_bw(struct drm_dp_tunnel *tunnel,
> > +                                    const struct drm_dp_tunnel_regs
> > *regs)
> > +{
> > +       struct drm_dp_tunnel *tunnel_iter;
> > +       int group_available_bw;
> > +       bool changed;
> > +
> > +       tunnel->estimated_bw = tunnel_reg(regs, DP_ESTIMATED_BW) *
> > tunnel->bw_granularity;
> > +
> > +       if (calc_group_available_bw(tunnel) == tunnel->group-
> > >available_bw)
> > +               return 0;
> > +
> > +       for_each_tunnel_in_group(tunnel->group, tunnel_iter) {
> > +               int err;
> > +
> > +               if (tunnel_iter == tunnel)
> > +                       continue;
> > +
> > +               if (check_tunnel(tunnel_iter) != 0 ||
> > +                   !tunnel_iter->bw_alloc_enabled)
> > +                       continue;
> > +
> > +               err = drm_dp_dpcd_probe(tunnel_iter->aux,
> > DP_DPCD_REV);
> > +               if (err) {
> > +                       tun_dbg(tunnel_iter,
> > +                               "Probe failed, assume disconnected
> > (err %pe)\n",
> > +                               ERR_PTR(err));
> > +                       drm_dp_tunnel_set_io_error(tunnel_iter);
> > +               }
> > +       }
> > +
> > +       group_available_bw = calc_group_available_bw(tunnel);
> > +
> > +       tun_dbg(tunnel, "Updated group available BW: %d->%d\n",
> > +               DPTUN_BW_ARG(tunnel->group->available_bw),
> > +               DPTUN_BW_ARG(group_available_bw));
> > +
> > +       changed = tunnel->group->available_bw != group_available_bw;
> > +
> > +       tunnel->group->available_bw = group_available_bw;
> > +
> > +       return changed ? 1 : 0;
> > +}
> > +
> > +static int set_bw_alloc_mode(struct drm_dp_tunnel *tunnel, bool
> > enable)
> > +{
> > +       u8 mask = DP_DISPLAY_DRIVER_BW_ALLOCATION_MODE_ENABLE |
> > DP_UNMASK_BW_ALLOCATION_IRQ;
> > +       u8 val;
> > +
> > +       if (drm_dp_dpcd_readb(tunnel->aux,
> > DP_DPTX_BW_ALLOCATION_MODE_CONTROL, &val) < 0)
> > +               goto out_err;
> > +
> > +       if (enable)
> > +               val |= mask;
> > +       else
> > +               val &= ~mask;
> > +
> > +       if (drm_dp_dpcd_writeb(tunnel->aux,
> > DP_DPTX_BW_ALLOCATION_MODE_CONTROL, val) < 0)
> > +               goto out_err;
> > +
> > +       tunnel->bw_alloc_enabled = enable;
> > +
> > +       return 0;
> > +
> > +out_err:
> > +       drm_dp_tunnel_set_io_error(tunnel);
> > +
> > +       return -EIO;
> > +}
> > +
> > +/**
> > + * drm_dp_tunnel_enable_bw_alloc - Enable DP tunnel BW allocation
> > mode
> > + * @tunnel: Tunnel object
> > + *
> > + * Enable the DP tunnel BW allocation mode on @tunnel if it supports
> > it.
> > + *
> > + * Returns 0 in case of success, negative error code otherwise.
> > + */
> > +int drm_dp_tunnel_enable_bw_alloc(struct drm_dp_tunnel *tunnel)
> > +{
> > +       struct drm_dp_tunnel_regs regs;
> > +       int err = check_tunnel(tunnel);
> > +
> > +       if (err)
> > +               return err;
> > +
> > +       if (!tunnel->bw_alloc_supported)
> > +               return -EOPNOTSUPP;
> > +
> > +       if (!tunnel_group_id(tunnel->group->drv_group_id))
> > +               return -EINVAL;
> > +
> > +       err = set_bw_alloc_mode(tunnel, true);
> > +       if (err)
> > +               goto out;
> > +
> > +       err = read_and_verify_tunnel_regs(tunnel, &regs, 0);
> > +       if (err) {
> > +               set_bw_alloc_mode(tunnel, false);
> > +
> > +               goto out;
> > +       }
> > +
> > +       if (!tunnel->max_dprx_rate)
> > +               update_dprx_caps(tunnel, &regs);
> > +
> > +       if (tunnel->group->available_bw == -1) {
> > +               err = update_group_available_bw(tunnel, &regs);
> > +               if (err > 0)
> > +                       err = 0;
> > +       }
> > +out:
> > +       tun_dbg_stat(tunnel, err,
> > +                    "Enabling BW alloc mode: DPRX:%dx%d Group
> > alloc:%d/%d Mb/s",
> > +                    tunnel->max_dprx_rate / 100, tunnel-
> > >max_dprx_lane_count,
> > +                    DPTUN_BW_ARG(group_allocated_bw(tunnel->group)),
> > +                    DPTUN_BW_ARG(tunnel->group->available_bw));
> > +
> > +       return err;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_enable_bw_alloc);
> > +
> > +/**
> > + * drm_dp_tunnel_disable_bw_alloc - Disable DP tunnel BW allocation
> > mode
> > + * @tunnel: Tunnel object
> > + *
> > + * Disable the DP tunnel BW allocation mode on @tunnel.
> > + *
> > + * Returns 0 in case of success, negative error code otherwise.
> > + */
> > +int drm_dp_tunnel_disable_bw_alloc(struct drm_dp_tunnel *tunnel)
> > +{
> > +       int err = check_tunnel(tunnel);
> > +
> > +       if (err)
> > +               return err;
> > +
> > +       err = set_bw_alloc_mode(tunnel, false);
> > +
> > +       tun_dbg_stat(tunnel, err, "Disabling BW alloc mode");
> > +
> > +       return err;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_disable_bw_alloc);
> > +
> > +bool drm_dp_tunnel_bw_alloc_is_enabled(const struct drm_dp_tunnel
> > *tunnel)
> > +{
> > +       return tunnel->bw_alloc_enabled;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_bw_alloc_is_enabled);
> > +
> > +static int bw_req_complete(struct drm_dp_aux *aux, bool
> > *status_changed)
> > +{
> > +       u8 bw_req_mask = DP_BW_REQUEST_SUCCEEDED |
> > DP_BW_REQUEST_FAILED;
> > +       u8 status_change_mask = DP_BW_ALLOCATION_CAPABILITY_CHANGED |
> > DP_ESTIMATED_BW_CHANGED;
> > +       u8 val;
> > +
> > +       if (drm_dp_dpcd_readb(aux, DP_TUNNELING_STATUS, &val) < 0)
> > +               return -EIO;
> > +
> > +       *status_changed = val & status_change_mask;
> > +
> > +       val &= bw_req_mask;
> > +
> > +       if (!val)
> > +               return -EAGAIN;
> > +
> > +       if (drm_dp_dpcd_writeb(aux, DP_TUNNELING_STATUS, val) < 0)
> > +               return -EIO;
> > +
> > +       return val == DP_BW_REQUEST_SUCCEEDED ? 0 : -ENOSPC;
> > +}
> > +
> > +static int allocate_tunnel_bw(struct drm_dp_tunnel *tunnel, int bw)
> > +{
> > +       struct drm_dp_tunnel_mgr *mgr = tunnel->group->mgr;
> > +       int request_bw = DIV_ROUND_UP(bw, tunnel->bw_granularity);
> > +       unsigned long wait_expires;
> > +       DEFINE_WAIT(wait);
> > +       int err;
> > +
> > +       /* Atomic check should prevent the following. */
> > +       if (drm_WARN_ON(mgr->dev, request_bw > MAX_DP_REQUEST_BW)) {
> > +               err = -EINVAL;
> > +               goto out;
> > +       }
> > +
> > +       if (drm_dp_dpcd_writeb(tunnel->aux, DP_REQUEST_BW,
> > request_bw) < 0) {
> > +               err = -EIO;
> > +               goto out;
> > +       }
> > +
> > +       wait_expires = jiffies + msecs_to_jiffies(3000);
> > +
> > +       for (;;) {
> > +               bool status_changed;
> > +
> > +               err = bw_req_complete(tunnel->aux, &status_changed);
> > +               if (err != -EAGAIN)
> > +                       break;
> > +
> > +               if (status_changed) {
> > +                       struct drm_dp_tunnel_regs regs;
> > +
> > +                       err = read_and_verify_tunnel_regs(tunnel,
> > &regs,
> > +
> > ALLOW_ALLOCATED_BW_CHANGE);
> > +                       if (err)
> > +                               break;
> > +               }
> > +
> > +               if (time_after(jiffies, wait_expires)) {
> > +                       err = -ETIMEDOUT;
> > +                       break;
> > +               }
> > +
> > +               prepare_to_wait(&mgr->bw_req_queue, &wait,
> > TASK_UNINTERRUPTIBLE);
> > +               schedule_timeout(msecs_to_jiffies(200));
> > +       }
> > +
> > +       finish_wait(&mgr->bw_req_queue, &wait);
> > +
> > +       if (err)
> > +               goto out;
> > +
> > +       tunnel->allocated_bw = request_bw * tunnel->bw_granularity;
> > +
> > +out:
> > +       tun_dbg_stat(tunnel, err, "Allocating %d/%d Mb/s for tunnel:
> > Group alloc:%d/%d Mb/s",
> > +                    DPTUN_BW_ARG(request_bw * tunnel-
> > >bw_granularity),
> > +                    DPTUN_BW_ARG(get_max_tunnel_bw(tunnel)),
> > +                    DPTUN_BW_ARG(group_allocated_bw(tunnel->group)),
> > +                    DPTUN_BW_ARG(tunnel->group->available_bw));
> > +
> > +       if (err == -EIO)
> > +               drm_dp_tunnel_set_io_error(tunnel);
> > +
> > +       return err;
> > +}
> > +
> > +int drm_dp_tunnel_alloc_bw(struct drm_dp_tunnel *tunnel, int bw)
> > +{
> > +       int err = check_tunnel(tunnel);
> > +
> > +       if (err)
> > +               return err;
> > +
> > +       return allocate_tunnel_bw(tunnel, bw);
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_alloc_bw);
> > +
> > +static int check_and_clear_status_change(struct drm_dp_tunnel
> > *tunnel)
> > +{
> > +       u8 mask = DP_BW_ALLOCATION_CAPABILITY_CHANGED |
> > DP_ESTIMATED_BW_CHANGED;
> > +       u8 val;
> > +
> > +       if (drm_dp_dpcd_readb(tunnel->aux, DP_TUNNELING_STATUS, &val)
> > < 0)
> > +               goto out_err;
> > +
> > +       val &= mask;
> > +
> > +       if (val) {
> > +               if (drm_dp_dpcd_writeb(tunnel->aux,
> > DP_TUNNELING_STATUS, val) < 0)
> > +                       goto out_err;
> > +
> > +               return 1;
> > +       }
> > +
> > +       if (!drm_dp_tunnel_bw_alloc_is_enabled(tunnel))
> > +               return 0;
> > +
> > +       /*
> > +        * Check for estimated BW changes explicitly to account for
> > lost
> > +        * BW change notifications.
> > +        */
> > +       if (drm_dp_dpcd_readb(tunnel->aux, DP_ESTIMATED_BW, &val) <
> > 0)
> > +               goto out_err;
> > +
> > +       if (val * tunnel->bw_granularity != tunnel->estimated_bw)
> > +               return 1;
> > +
> > +       return 0;
> > +
> > +out_err:
> > +       drm_dp_tunnel_set_io_error(tunnel);
> > +
> > +       return -EIO;
> > +}
> > +
> > +/**
> > + * drm_dp_tunnel_update_state - Update DP tunnel SW state with the HW
> > state
> > + * @tunnel: Tunnel object
> > + *
> > + * Update the SW state of @tunnel with the HW state.
> > + *
> > + * Returns 0 if the state has not changed, 1 if it has changed and
> > got updated
> > + * successfully and a negative error code otherwise.
> > + */
> > +int drm_dp_tunnel_update_state(struct drm_dp_tunnel *tunnel)
> > +{
> > +       struct drm_dp_tunnel_regs regs;
> > +       bool changed = false;
> > +       int ret = check_tunnel(tunnel);
> > +
> > +       if (ret < 0)
> > +               return ret;
> > +
> > +       ret = check_and_clear_status_change(tunnel);
> > +       if (ret < 0)
> > +               goto out;
> > +
> > +       if (!ret)
> > +               return 0;
> > +
> > +       ret = read_and_verify_tunnel_regs(tunnel, &regs, 0);
> > +       if (ret)
> > +               goto out;
> > +
> > +       if (update_dprx_caps(tunnel, &regs))
> > +               changed = true;
> > +
> > +       ret = update_group_available_bw(tunnel, &regs);
> > +       if (ret == 1)
> > +               changed = true;
> > +
> > +out:
> > +       tun_dbg_stat(tunnel, ret < 0 ? ret : 0,
> > +                    "State update: Changed:%c DPRX:%dx%d Tunnel
> > alloc:%d/%d Group alloc:%d/%d Mb/s",
> > +                    yes_no_chr(changed),
> > +                    tunnel->max_dprx_rate / 100, tunnel-
> > >max_dprx_lane_count,
> > +                    DPTUN_BW_ARG(tunnel->allocated_bw),
> > +                    DPTUN_BW_ARG(get_max_tunnel_bw(tunnel)),
> > +                    DPTUN_BW_ARG(group_allocated_bw(tunnel->group)),
> > +                    DPTUN_BW_ARG(tunnel->group->available_bw));
> > +
> > +       if (ret < 0)
> > +               return ret;
> > +
> > +       if (changed)
> > +               return 1;
> > +
> > +       return 0;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_update_state);
> > +
> > +/*
> > + * Returns 0 if no re-probe is needed, 1 if a re-probe is needed,
> > + * a negative error code otherwise.
> > + */
> > +int drm_dp_tunnel_handle_irq(struct drm_dp_tunnel_mgr *mgr, struct
> > drm_dp_aux *aux)
> > +{
> > +       u8 val;
> > +
> > +       if (drm_dp_dpcd_readb(aux, DP_TUNNELING_STATUS, &val) < 0)
> > +               return -EIO;
> > +
> > +       if (val & (DP_BW_REQUEST_SUCCEEDED | DP_BW_REQUEST_FAILED))
> > +               wake_up_all(&mgr->bw_req_queue);
> > +
> > +       if (val & (DP_BW_ALLOCATION_CAPABILITY_CHANGED |
> > DP_ESTIMATED_BW_CHANGED))
> > +               return 1;
> > +
> > +       return 0;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_handle_irq);
> > +
> > +/**
> > + * drm_dp_tunnel_max_dprx_rate - Query the maximum rate of the
> > tunnel's DPRX
> > + * @tunnel: Tunnel object
> > + *
> > + * The function is used to query the maximum link rate of the DPRX
> > connected
> > + * to @tunnel. Note that this rate will not be limited by the BW
> > limit of the
> > + * tunnel, as opposed to the standard and extended DP_MAX_LINK_RATE
> > DPCD
> > + * registers.
> > + *
> > + * Returns the maximum link rate in 10 kbit/s units.
> > + */
> > +int drm_dp_tunnel_max_dprx_rate(const struct drm_dp_tunnel *tunnel)
> > +{
> > +       return tunnel->max_dprx_rate;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_max_dprx_rate);
> > +
> > +/**
> > + * drm_dp_tunnel_max_dprx_lane_count - Query the maximum lane count
> > of the tunnel's DPRX
> > + * @tunnel: Tunnel object
> > + *
> > + * The function is used to query the maximum lane count of the DPRX
> > connected
> > + * to @tunnel. Note that this lane count will not be limited by the
> > BW limit of
> > + * the tunnel, as opposed to the standard and extended
> > DP_MAX_LANE_COUNT DPCD
> > + * registers.
> > + *
> > + * Returns the maximum lane count.
> > + */
> > +int drm_dp_tunnel_max_dprx_lane_count(const struct drm_dp_tunnel
> > *tunnel)
> > +{
> > +       return tunnel->max_dprx_lane_count;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_max_dprx_lane_count);
> > +
> > +/**
> > + * drm_dp_tunnel_available_bw - Query the estimated total available
> > BW of the tunnel
> > + * @tunnel: Tunnel object
> > + *
> > + * This function is used to query the estimated total available BW
> > of the
> > + * tunnel. This includes the currently allocated and free BW for all
> > the
> > + * tunnels in @tunnel's group. The available BW is valid only after
> > the BW
> > + * allocation mode has been enabled for the tunnel and its state has
> > + * been updated by calling drm_dp_tunnel_update_state().
> > + *
> > + * Returns the @tunnel group's estimated total available bandwidth
> > in kB/s
> > + * units, or -1 if the available BW isn't valid (the BW allocation
> > mode is
> > + * not enabled or the tunnel's state hasn't been updated).
> > + */
> > +int drm_dp_tunnel_available_bw(const struct drm_dp_tunnel *tunnel)
> > +{
> > +       return tunnel->group->available_bw;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_available_bw);
> > +
> > +static struct drm_dp_tunnel_group_state *
> > +drm_dp_tunnel_atomic_get_group_state(struct drm_atomic_state *state,
> > +                                    const struct drm_dp_tunnel
> > *tunnel)
> > +{
> > +       return (struct drm_dp_tunnel_group_state *)
> > +               drm_atomic_get_private_obj_state(state,
> > +                                                &tunnel->group-
> > >base);
> > +}
> > +
> > +static struct drm_dp_tunnel_state *
> > +add_tunnel_state(struct drm_dp_tunnel_group_state *group_state,
> > +                struct drm_dp_tunnel *tunnel)
> > +{
> > +       struct drm_dp_tunnel_state *tunnel_state;
> > +
> > +       tun_dbg_atomic(tunnel,
> > +                      "Adding state for tunnel %p to group state
> > %p\n",
> > +                      tunnel, group_state);
> > +
> > +       tunnel_state = kzalloc(sizeof(*tunnel_state), GFP_KERNEL);
> > +       if (!tunnel_state)
> > +               return NULL;
> > +
> > +       tunnel_state->group_state = group_state;
> > +
> > +       drm_dp_tunnel_ref_get(tunnel, &tunnel_state->tunnel_ref);
> > +
> > +       INIT_LIST_HEAD(&tunnel_state->node);
> > +       list_add(&tunnel_state->node, &group_state->tunnel_states);
> > +
> > +       return tunnel_state;
> > +}
> > +
> > +void drm_dp_tunnel_atomic_clear_state(struct drm_dp_tunnel_state
> > *tunnel_state)
> > +{
> > +       tun_dbg_atomic(tunnel_state->tunnel_ref.tunnel,
> > +                      "Clearing state for tunnel %p\n",
> > +                      tunnel_state->tunnel_ref.tunnel);
> > +
> > +       list_del(&tunnel_state->node);
> > +
> > +       kfree(tunnel_state->stream_bw);
> > +       drm_dp_tunnel_ref_put(&tunnel_state->tunnel_ref);
> > +
> > +       kfree(tunnel_state);
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_atomic_clear_state);
> > +
> > +static void clear_tunnel_group_state(struct
> > drm_dp_tunnel_group_state *group_state)
> > +{
> > +       struct drm_dp_tunnel_state *tunnel_state;
> > +       struct drm_dp_tunnel_state *tunnel_state_tmp;
> > +
> > +       for_each_tunnel_state_safe(group_state, tunnel_state,
> > tunnel_state_tmp)
> > +               drm_dp_tunnel_atomic_clear_state(tunnel_state);
> > +}
> > +
> > +static struct drm_dp_tunnel_state *
> > +get_tunnel_state(struct drm_dp_tunnel_group_state *group_state,
> > +                const struct drm_dp_tunnel *tunnel)
> > +{
> > +       struct drm_dp_tunnel_state *tunnel_state;
> > +
> > +       for_each_tunnel_state(group_state, tunnel_state)
> > +               if (tunnel_state->tunnel_ref.tunnel == tunnel)
> > +                       return tunnel_state;
> > +
> > +       return NULL;
> > +}
> > +
> > +static struct drm_dp_tunnel_state *
> > +get_or_add_tunnel_state(struct drm_dp_tunnel_group_state
> > *group_state,
> > +                       struct drm_dp_tunnel *tunnel)
> > +{
> > +       struct drm_dp_tunnel_state *tunnel_state;
> > +
> > +       tunnel_state = get_tunnel_state(group_state, tunnel);
> > +       if (tunnel_state)
> > +               return tunnel_state;
> > +
> > +       return add_tunnel_state(group_state, tunnel);
> > +}
> > +
> > +static struct drm_private_state *
> > +tunnel_group_duplicate_state(struct drm_private_obj *obj)
> > +{
> > +       struct drm_dp_tunnel_group_state *group_state =
> > to_group_state(obj->state);
> > +       struct drm_dp_tunnel_state *tunnel_state;
> > +
> > +       group_state = kzalloc(sizeof(*group_state), GFP_KERNEL);
> > +       if (!group_state)
> > +               return NULL;
> > +
> > +       INIT_LIST_HEAD(&group_state->tunnel_states);
> > +
> > +       __drm_atomic_helper_private_obj_duplicate_state(obj,
> > &group_state->base);
> > +
> > +       for_each_tunnel_state(to_group_state(obj->state),
> > tunnel_state) {
> > +               struct drm_dp_tunnel_state *new_tunnel_state;
> > +
> > +               new_tunnel_state =
> > get_or_add_tunnel_state(group_state,
> > +
> > tunnel_state->tunnel_ref.tunnel);
> > +               if (!new_tunnel_state)
> > +                       goto out_free_state;
> > +
> > +               new_tunnel_state->stream_mask = tunnel_state-
> > >stream_mask;
> > +               new_tunnel_state->stream_bw = kmemdup(tunnel_state-
> > >stream_bw,
> > +
> > sizeof(*tunnel_state->stream_bw) *
> > +                                                       hweight32(tun
> > nel_state->stream_mask),
> > +                                                     GFP_KERNEL);
> > +
> > +               if (!new_tunnel_state->stream_bw)
> > +                       goto out_free_state;
> > +       }
> > +
> > +       return &group_state->base;
> > +
> > +out_free_state:
> > +       clear_tunnel_group_state(group_state);
> > +       kfree(group_state);
> > +
> > +       return NULL;
> > +}
> > +
> > +static void tunnel_group_destroy_state(struct drm_private_obj *obj,
> > struct drm_private_state *state)
> > +{
> > +       struct drm_dp_tunnel_group_state *group_state =
> > to_group_state(state);
> > +
> > +       clear_tunnel_group_state(group_state);
> > +       kfree(group_state);
> > +}
> > +
> > +static const struct drm_private_state_funcs tunnel_group_funcs = {
> > +       .atomic_duplicate_state = tunnel_group_duplicate_state,
> > +       .atomic_destroy_state = tunnel_group_destroy_state,
> > +};
> > +
> > +struct drm_dp_tunnel_state *
> > +drm_dp_tunnel_atomic_get_state(struct drm_atomic_state *state,
> > +                              struct drm_dp_tunnel *tunnel)
> > +{
> > +       struct drm_dp_tunnel_group_state *group_state =
> > +               drm_dp_tunnel_atomic_get_group_state(state, tunnel);
> > +       struct drm_dp_tunnel_state *tunnel_state;
> > +
> > +       if (IS_ERR(group_state))
> > +               return ERR_CAST(group_state);
> > +
> > +       tunnel_state = get_or_add_tunnel_state(group_state, tunnel);
> > +       if (!tunnel_state)
> > +               return ERR_PTR(-ENOMEM);
> > +
> > +       return tunnel_state;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_state);
> > +
> > +struct drm_dp_tunnel_state *
> > +drm_dp_tunnel_atomic_get_new_state(struct drm_atomic_state *state,
> > +                                  const struct drm_dp_tunnel
> > *tunnel)
> > +{
> > +       struct drm_dp_tunnel_group_state *new_group_state;
> > +       int i;
> > +
> > +       for_each_new_group_in_state(state, new_group_state, i)
> > +               if (to_group(new_group_state->base.obj) == tunnel-
> > >group)
> > +                       return get_tunnel_state(new_group_state,
> > tunnel);
> > +
> > +       return NULL;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_new_state);
> > +
> > +static bool init_group(struct drm_dp_tunnel_mgr *mgr, struct
> > drm_dp_tunnel_group *group)
> > +{
> > +       struct drm_dp_tunnel_group_state *group_state =
> > kzalloc(sizeof(*group_state), GFP_KERNEL);
> > +
> > +       if (!group_state)
> > +               return false;
> > +
> > +       INIT_LIST_HEAD(&group_state->tunnel_states);
> > +
> > +       group->mgr = mgr;
> > +       group->available_bw = -1;
> > +       INIT_LIST_HEAD(&group->tunnels);
> > +
> > +       drm_atomic_private_obj_init(mgr->dev, &group->base,
> > &group_state->base,
> > +                                   &tunnel_group_funcs);
> > +
> > +       return true;
> > +}
> > +
> > +static void cleanup_group(struct drm_dp_tunnel_group *group)
> > +{
> > +       drm_atomic_private_obj_fini(&group->base);
> > +}
> > +
> > +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> > +static void check_unique_stream_ids(const struct
> > drm_dp_tunnel_group_state *group_state)
> > +{
> > +       const struct drm_dp_tunnel_state *tunnel_state;
> > +       u32 stream_mask = 0;
> > +
> > +       for_each_tunnel_state(group_state, tunnel_state) {
> > +               drm_WARN(to_group(group_state->base.obj)->mgr->dev,
> > +                        tunnel_state->stream_mask & stream_mask,
> > +                        "[DPTUN %s]: conflicting stream IDs %x (IDs
> > in other tunnels %x)\n",
> > +                        tunnel_state->tunnel_ref.tunnel->name,
> > +                        tunnel_state->stream_mask,
> > +                        stream_mask);
> > +
> > +               stream_mask |= tunnel_state->stream_mask;
> > +       }
> > +}
> > +#else
> > +static void check_unique_stream_ids(const struct
> > drm_dp_tunnel_group_state *group_state)
> > +{
> > +}
> > +#endif
> > +
> > +static int stream_id_to_idx(u32 stream_mask, u8 stream_id)
> > +{
> > +       return hweight32(stream_mask & (BIT(stream_id) - 1));
> > +}
> > +
> > +static int resize_bw_array(struct drm_dp_tunnel_state *tunnel_state,
> > +                          unsigned long old_mask, unsigned long
> > new_mask)
> > +{
> > +       unsigned long move_mask = old_mask & new_mask;
> > +       int *new_bws = NULL;
> > +       int id;
> > +
> > +       WARN_ON(!new_mask);
> > +
> > +       if (old_mask == new_mask)
> > +               return 0;
> > +
> > +       new_bws = kcalloc(hweight32(new_mask), sizeof(*new_bws),
> > GFP_KERNEL);
> > +       if (!new_bws)
> > +               return -ENOMEM;
> > +
> > +       for_each_set_bit(id, &move_mask, BITS_PER_TYPE(move_mask))
> > +               new_bws[stream_id_to_idx(new_mask, id)] =
> > +                       tunnel_state-
> > >stream_bw[stream_id_to_idx(old_mask, id)];
> > +
> > +       kfree(tunnel_state->stream_bw);
> > +       tunnel_state->stream_bw = new_bws;
> > +       tunnel_state->stream_mask = new_mask;
> > +
> > +       return 0;
> > +}
> > +
> > +static int set_stream_bw(struct drm_dp_tunnel_state *tunnel_state,
> > +                        u8 stream_id, int bw)
> > +{
> > +       int err;
> > +
> > +       err = resize_bw_array(tunnel_state,
> > +                             tunnel_state->stream_mask,
> > +                             tunnel_state->stream_mask |
> > BIT(stream_id));
> > +       if (err)
> > +               return err;
> > +
> > +       tunnel_state->stream_bw[stream_id_to_idx(tunnel_state-
> > >stream_mask, stream_id)] = bw;
> > +
> > +       return 0;
> > +}
> > +
> > +static int clear_stream_bw(struct drm_dp_tunnel_state *tunnel_state,
> > +                          u8 stream_id)
> > +{
> > +       if (!(tunnel_state->stream_mask & ~BIT(stream_id))) {
> > +               drm_dp_tunnel_atomic_clear_state(tunnel_state);
> > +               return 0;
> > +       }
> > +
> > +       return resize_bw_array(tunnel_state,
> > +                              tunnel_state->stream_mask,
> > +                              tunnel_state->stream_mask &
> > ~BIT(stream_id));
> > +}
> > +
> > +int drm_dp_tunnel_atomic_set_stream_bw(struct drm_atomic_state
> > *state,
> > +                                        struct drm_dp_tunnel
> > *tunnel,
> > +                                        u8 stream_id, int bw)
> > +{
> > +       struct drm_dp_tunnel_group_state *new_group_state =
> > +               drm_dp_tunnel_atomic_get_group_state(state, tunnel);
> > +       struct drm_dp_tunnel_state *tunnel_state;
> > +       int err;
> > +
> > +       if (drm_WARN_ON(tunnel->group->mgr->dev,
> > +                       stream_id > BITS_PER_TYPE(tunnel_state-
> > >stream_mask)))
> > +               return -EINVAL;
> > +
> > +       tun_dbg(tunnel,
> > +               "Setting %d Mb/s for stream %d\n",
> > +               DPTUN_BW_ARG(bw), stream_id);
> > +
> > +       if (bw == 0) {
> > +               tunnel_state = get_tunnel_state(new_group_state,
> > tunnel);
> > +               if (!tunnel_state)
> > +                       return 0;
> > +
> > +               return clear_stream_bw(tunnel_state, stream_id);
> > +       }
> > +
> > +       tunnel_state = get_or_add_tunnel_state(new_group_state,
> > tunnel);
> > +       if (drm_WARN_ON(state->dev, !tunnel_state))
> > +               return -EINVAL;
> > +
> > +       err = set_stream_bw(tunnel_state, stream_id, bw);
> > +       if (err)
> > +               return err;
> > +
> > +       check_unique_stream_ids(new_group_state);
> > +
> > +       return 0;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_atomic_set_stream_bw);
> > +
> > +int drm_dp_tunnel_atomic_get_tunnel_bw(const struct
> > drm_dp_tunnel_state *tunnel_state)
> > +{
> > +       int tunnel_bw = 0;
> > +       int i;
> > +
> > +       for (i = 0; i < hweight32(tunnel_state->stream_mask); i++)
> > +               tunnel_bw += tunnel_state->stream_bw[i];
> > +
> > +       return tunnel_bw;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_tunnel_bw);
> > +
> > +int drm_dp_tunnel_atomic_get_group_streams_in_state(struct
> > drm_atomic_state *state,
> > +                                                   const struct
> > drm_dp_tunnel *tunnel,
> > +                                                   u32 *stream_mask)
> > +{
> > +       struct drm_dp_tunnel_group_state *group_state =
> > +               drm_dp_tunnel_atomic_get_group_state(state, tunnel);
> > +       struct drm_dp_tunnel_state *tunnel_state;
> > +
> > +       if (IS_ERR(group_state))
> > +               return PTR_ERR(group_state);
> > +
> > +       *stream_mask = 0;
> > +       for_each_tunnel_state(group_state, tunnel_state)
> > +               *stream_mask |= tunnel_state->stream_mask;
> > +
> > +       return 0;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_group_streams_in_state);
> > +
> > +static int
> > +drm_dp_tunnel_atomic_check_group_bw(struct drm_dp_tunnel_group_state
> > *new_group_state,
> > +                                   u32 *failed_stream_mask)
> > +{
> > +       struct drm_dp_tunnel_group *group = to_group(new_group_state-
> > >base.obj);
> > +       struct drm_dp_tunnel_state *new_tunnel_state;
> > +       u32 group_stream_mask = 0;
> > +       int group_bw = 0;
> > +
> > +       for_each_tunnel_state(new_group_state, new_tunnel_state) {
> > +               struct drm_dp_tunnel *tunnel = new_tunnel_state-
> > >tunnel_ref.tunnel;
> > +               int max_dprx_bw = get_max_dprx_bw(tunnel);
> > +               int tunnel_bw =
> > drm_dp_tunnel_atomic_get_tunnel_bw(new_tunnel_state);
> > +
> > +               tun_dbg(tunnel,
> > +                       "%sRequired %d/%d Mb/s total for tunnel.\n",
> > +                       tunnel_bw > max_dprx_bw ? "Not enough BW: " :
> > "",
> > +                       DPTUN_BW_ARG(tunnel_bw),
> > +                       DPTUN_BW_ARG(max_dprx_bw));
> > +
> > +               if (tunnel_bw > max_dprx_bw) {
> > +                       *failed_stream_mask = new_tunnel_state-
> > >stream_mask;
> > +                       return -ENOSPC;
> > +               }
> > +
> > +               group_bw += min(roundup(tunnel_bw, tunnel-
> > >bw_granularity),
> > +                               max_dprx_bw);
> > +               group_stream_mask |= new_tunnel_state->stream_mask;
> > +       }
> > +
> > +       tun_grp_dbg(group,
> > +                   "%sRequired %d/%d Mb/s total for tunnel
> > group.\n",
> > +                   group_bw > group->available_bw ? "Not enough BW:
> > " : "",
> > +                   DPTUN_BW_ARG(group_bw),
> > +                   DPTUN_BW_ARG(group->available_bw));
> > +
> > +       if (group_bw > group->available_bw) {
> > +               *failed_stream_mask = group_stream_mask;
> > +               return -ENOSPC;
> > +       }
> > +
> > +       return 0;
> > +}
> > +
> > +int drm_dp_tunnel_atomic_check_stream_bws(struct drm_atomic_state
> > *state,
> > +                                         u32 *failed_stream_mask)
> > +{
> > +       struct drm_dp_tunnel_group_state *new_group_state;
> > +       int i;
> > +
> > +       for_each_new_group_in_state(state, new_group_state, i) {
> > +               int ret;
> > +
> > +               ret =
> > drm_dp_tunnel_atomic_check_group_bw(new_group_state,
> > +
> > failed_stream_mask);
> > +               if (ret)
> > +                       return ret;
> > +       }
> > +
> > +       return 0;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_atomic_check_stream_bws);
> > +
> > +static void destroy_mgr(struct drm_dp_tunnel_mgr *mgr)
> > +{
> > +       int i;
> > +
> > +       for (i = 0; i < mgr->group_count; i++) {
> > +               cleanup_group(&mgr->groups[i]);
> > +               drm_WARN_ON(mgr->dev, !list_empty(&mgr-
> > >groups[i].tunnels));
> > +       }
> > +
> > +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> > +       ref_tracker_dir_exit(&mgr->ref_tracker);
> > +#endif
> > +
> > +       kfree(mgr->groups);
> > +       kfree(mgr);
> > +}
> > +
> > +/**
> > + * drm_dp_tunnel_mgr_create - Create a DP tunnel manager
> > + * @dev: DRM device object
> > + * @max_group_count: Maximum number of tunnel groups
> > + *
> > + * Creates a DP tunnel manager.
> > + *
> > + * Returns a pointer to the tunnel manager if created successfully
> > or NULL in
> > + * case of an error.
> > + */
> > +struct drm_dp_tunnel_mgr *
> > +drm_dp_tunnel_mgr_create(struct drm_device *dev, int
> > max_group_count)
> > +{
> > +       struct drm_dp_tunnel_mgr *mgr = kzalloc(sizeof(*mgr),
> > GFP_KERNEL);
> > +       int i;
> > +
> > +       if (!mgr)
> > +               return NULL;
> > +
> > +       mgr->dev = dev;
> > +       init_waitqueue_head(&mgr->bw_req_queue);
> > +
> > +       mgr->groups = kcalloc(max_group_count, sizeof(*mgr->groups),
> > GFP_KERNEL);
> > +       if (!mgr->groups) {
> > +               kfree(mgr);
> > +
> > +               return NULL;
> > +       }
> > +
> > +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> > +       ref_tracker_dir_init(&mgr->ref_tracker, 16, "dptun");
> > +#endif
> > +
> > +       for (i = 0; i < max_group_count; i++) {
> > +               if (!init_group(mgr, &mgr->groups[i])) {
> > +                       destroy_mgr(mgr);
> > +
> > +                       return NULL;
> > +               }
> > +
> > +               mgr->group_count++;
> > +       }
> > +
> > +       return mgr;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_mgr_create);
> > +
> > +/**
> > + * drm_dp_tunnel_mgr_destroy - Destroy DP tunnel manager
> > + * @mgr: Tunnel manager object
> > + *
> > + * Destroy the tunnel manager.
> > + */
> > +void drm_dp_tunnel_mgr_destroy(struct drm_dp_tunnel_mgr *mgr)
> > +{
> > +       destroy_mgr(mgr);
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_mgr_destroy);
> > diff --git a/include/drm/display/drm_dp.h
> > b/include/drm/display/drm_dp.h
> > index 281afff6ee4e5..8bfd5d007be8d 100644
> > --- a/include/drm/display/drm_dp.h
> > +++ b/include/drm/display/drm_dp.h
> > @@ -1382,6 +1382,66 @@
> >  #define DP_HDCP_2_2_REG_STREAM_TYPE_OFFSET     0x69494
> >  #define DP_HDCP_2_2_REG_DBG_OFFSET             0x69518
> >
> > +/* DP-tunneling */
> > +#define DP_TUNNELING_OUI                               0xe0000
> > +#define  DP_TUNNELING_OUI_BYTES                                3
> > +
> > +#define DP_TUNNELING_DEV_ID                            0xe0003
> > +#define  DP_TUNNELING_DEV_ID_BYTES                     6
> > +
> > +#define DP_TUNNELING_HW_REV                            0xe0009
> > +#define  DP_TUNNELING_HW_REV_MAJOR_SHIFT               4
> > +#define  DP_TUNNELING_HW_REV_MAJOR_MASK                        (0xf
> > << DP_TUNNELING_HW_REV_MAJOR_SHIFT)
> > +#define  DP_TUNNELING_HW_REV_MINOR_SHIFT               0
> > +#define  DP_TUNNELING_HW_REV_MINOR_MASK                        (0xf
> > << DP_TUNNELING_HW_REV_MINOR_SHIFT)
> > +
> > +#define DP_TUNNELING_SW_REV_MAJOR                      0xe000a
> > +#define DP_TUNNELING_SW_REV_MINOR                      0xe000b
> > +
> > +#define DP_TUNNELING_CAPABILITIES                      0xe000d
> > +#define  DP_IN_BW_ALLOCATION_MODE_SUPPORT              (1 << 7)
> > +#define  DP_PANEL_REPLAY_OPTIMIZATION_SUPPORT          (1 << 6)
> > +#define  DP_TUNNELING_SUPPORT                          (1 << 0)
> > +
> > +#define DP_IN_ADAPTER_INFO                             0xe000e
> > +#define  DP_IN_ADAPTER_NUMBER_BITS                     7
> > +#define  DP_IN_ADAPTER_NUMBER_MASK                     ((1 <<
> > DP_IN_ADAPTER_NUMBER_BITS) - 1)
> > +
> > +#define DP_USB4_DRIVER_ID                              0xe000f
> > +#define  DP_USB4_DRIVER_ID_BITS                                4
> > +#define  DP_USB4_DRIVER_ID_MASK                                ((1
> > << DP_USB4_DRIVER_ID_BITS) - 1)
> > +
> > +#define DP_USB4_DRIVER_BW_CAPABILITY                   0xe0020
> > +#define  DP_USB4_DRIVER_BW_ALLOCATION_MODE_SUPPORT     (1 << 7)
> > +
> > +#define DP_IN_ADAPTER_TUNNEL_INFORMATION               0xe0021
> > +#define  DP_GROUP_ID_BITS                              3
> > +#define  DP_GROUP_ID_MASK                              ((1 <<
> > DP_GROUP_ID_BITS) - 1)
> > +
> > +#define DP_BW_GRANULARITY                              0xe0022
> > +#define  DP_BW_GRANULARITY_MASK                                0x3
> > +
> > +#define
> > DP_ESTIMATED_BW                                        0xe0023
> > +#define
> > DP_ALLOCATED_BW                                        0xe0024
> > +
> > +#define DP_TUNNELING_STATUS                            0xe0025
> > +#define  DP_BW_ALLOCATION_CAPABILITY_CHANGED           (1 << 3)
> > +#define  DP_ESTIMATED_BW_CHANGED                       (1 << 2)
> > +#define  DP_BW_REQUEST_SUCCEEDED                       (1 << 1)
> > +#define  DP_BW_REQUEST_FAILED                          (1 << 0)
> > +
> > +#define DP_TUNNELING_MAX_LINK_RATE                     0xe0028
> > +
> > +#define DP_TUNNELING_MAX_LANE_COUNT                    0xe0029
> > +#define  DP_TUNNELING_MAX_LANE_COUNT_MASK              0x1f
> > +
> > +#define DP_DPTX_BW_ALLOCATION_MODE_CONTROL             0xe0030
> > +#define  DP_DISPLAY_DRIVER_BW_ALLOCATION_MODE_ENABLE   (1 << 7)
> > +#define  DP_UNMASK_BW_ALLOCATION_IRQ                   (1 << 6)
> > +
> > +#define DP_REQUEST_BW                                  0xe0031
> > +#define  MAX_DP_REQUEST_BW                             255
> > +
> >  /* LTTPR: Link Training (LT)-tunable PHY Repeaters */
> >  #define DP_LT_TUNABLE_PHY_REPEATER_FIELD_DATA_STRUCTURE_REV 0xf0000
> > /* 1.3 */
> >  #define DP_MAX_LINK_RATE_PHY_REPEATER                      0xf0001
> > /* 1.4a */
> > diff --git a/include/drm/display/drm_dp_tunnel.h
> > b/include/drm/display/drm_dp_tunnel.h
> > new file mode 100644
> > index 0000000000000..f6449b1b4e6e9
> > --- /dev/null
> > +++ b/include/drm/display/drm_dp_tunnel.h
> > @@ -0,0 +1,270 @@
> > +/* SPDX-License-Identifier: MIT */
> > +/*
> > + * Copyright © 2023 Intel Corporation
> > + */
> > +
> > +#ifndef __DRM_DP_TUNNEL_H__
> > +#define __DRM_DP_TUNNEL_H__
> > +
> > +#include <linux/err.h>
> > +#include <linux/errno.h>
> > +#include <linux/types.h>
> > +
> > +struct drm_dp_aux;
> > +
> > +struct drm_device;
> > +
> > +struct drm_atomic_state;
> > +struct drm_dp_tunnel_mgr;
> > +struct drm_dp_tunnel_state;
> > +
> > +struct ref_tracker;
> > +
> > +struct drm_dp_tunnel_ref {
> > +       struct drm_dp_tunnel *tunnel;
> > +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> > +       struct ref_tracker *tracker;
> > +#endif
> > +};
> > +
> > +#ifdef CONFIG_DRM_DISPLAY_DP_TUNNEL
> > +
> > +struct drm_dp_tunnel *
> > +drm_dp_tunnel_get_untracked(struct drm_dp_tunnel *tunnel);
> > +void drm_dp_tunnel_put_untracked(struct drm_dp_tunnel *tunnel);
> > +
> > +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> > +struct drm_dp_tunnel *
> > +drm_dp_tunnel_get(struct drm_dp_tunnel *tunnel, struct ref_tracker
> > **tracker);
> > +
> > +void
> > +drm_dp_tunnel_put(struct drm_dp_tunnel *tunnel, struct ref_tracker
> > **tracker);
> > +#else
> > +#define drm_dp_tunnel_get(tunnel, tracker) \
> > +       drm_dp_tunnel_get_untracked(tunnel)
> > +
> > +#define drm_dp_tunnel_put(tunnel, tracker) \
> > +       drm_dp_tunnel_put_untracked(tunnel)
> > +
> > +#endif
> > +
> > +static inline void drm_dp_tunnel_ref_get(struct drm_dp_tunnel
> > *tunnel,
> > +                                          struct drm_dp_tunnel_ref
> > *tunnel_ref)
> > +{
> > +       tunnel_ref->tunnel = drm_dp_tunnel_get(tunnel, &tunnel_ref-
> > >tracker);
> > +}
> > +
> > +static inline void drm_dp_tunnel_ref_put(struct drm_dp_tunnel_ref
> > *tunnel_ref)
> > +{
> > +       drm_dp_tunnel_put(tunnel_ref->tunnel, &tunnel_ref->tracker);
> > +}
> > +
> > +struct drm_dp_tunnel *
> > +drm_dp_tunnel_detect(struct drm_dp_tunnel_mgr *mgr,
> > +                    struct drm_dp_aux *aux);
> > +int drm_dp_tunnel_destroy(struct drm_dp_tunnel *tunnel);
> > +
> > +int drm_dp_tunnel_enable_bw_alloc(struct drm_dp_tunnel *tunnel);
> > +int drm_dp_tunnel_disable_bw_alloc(struct drm_dp_tunnel *tunnel);
> > +bool drm_dp_tunnel_bw_alloc_is_enabled(const struct drm_dp_tunnel
> > *tunnel);
> > +int drm_dp_tunnel_alloc_bw(struct drm_dp_tunnel *tunnel, int bw);
> > +int drm_dp_tunnel_check_state(struct drm_dp_tunnel *tunnel);
> > +int drm_dp_tunnel_update_state(struct drm_dp_tunnel *tunnel);
> > +
> > +void drm_dp_tunnel_set_io_error(struct drm_dp_tunnel *tunnel);
> > +
> > +int drm_dp_tunnel_handle_irq(struct drm_dp_tunnel_mgr *mgr,
> > +                            struct drm_dp_aux *aux);
> > +
> > +int drm_dp_tunnel_max_dprx_rate(const struct drm_dp_tunnel *tunnel);
> > +int drm_dp_tunnel_max_dprx_lane_count(const struct drm_dp_tunnel
> > *tunnel);
> > +int drm_dp_tunnel_available_bw(const struct drm_dp_tunnel *tunnel);
> > +
> > +const char *drm_dp_tunnel_name(const struct drm_dp_tunnel *tunnel);
> > +
> > +struct drm_dp_tunnel_state *
> > +drm_dp_tunnel_atomic_get_state(struct drm_atomic_state *state,
> > +                              struct drm_dp_tunnel *tunnel);
> > +struct drm_dp_tunnel_state *
> > +drm_dp_tunnel_atomic_get_new_state(struct drm_atomic_state *state,
> > +                                  const struct drm_dp_tunnel
> > *tunnel);
> > +
> > +void drm_dp_tunnel_atomic_clear_state(struct drm_dp_tunnel_state
> > *tunnel_state);
> > +
> > +int drm_dp_tunnel_atomic_set_stream_bw(struct drm_atomic_state
> > *state,
> > +                                      struct drm_dp_tunnel *tunnel,
> > +                                      u8 stream_id, int bw);
> > +int drm_dp_tunnel_atomic_get_group_streams_in_state(struct
> > drm_atomic_state *state,
> > +                                                   const struct
> > drm_dp_tunnel *tunnel,
> > +                                                   u32
> > *stream_mask);
> > +
> > +int drm_dp_tunnel_atomic_check_stream_bws(struct drm_atomic_state
> > *state,
> > +                                         u32 *failed_stream_mask);
> > +
> > +int drm_dp_tunnel_atomic_get_tunnel_bw(const struct
> > drm_dp_tunnel_state *tunnel_state);
> > +
> > +struct drm_dp_tunnel_mgr *
> > +drm_dp_tunnel_mgr_create(struct drm_device *dev, int
> > max_group_count);
> > +void drm_dp_tunnel_mgr_destroy(struct drm_dp_tunnel_mgr *mgr);
> > +
> > +#else
> > +
> > +static inline struct drm_dp_tunnel *
> > +drm_dp_tunnel_get_untracked(struct drm_dp_tunnel *tunnel)
> > +{
> > +       return NULL;
> > +}
> > +
> > +static inline void
> > +drm_dp_tunnel_put_untracked(struct drm_dp_tunnel *tunnel) {}
> > +
> > +static inline struct drm_dp_tunnel *
> > +drm_dp_tunnel_get(struct drm_dp_tunnel *tunnel, struct ref_tracker
> > **tracker)
> > +{
> > +       return NULL;
> > +}
> > +
> > +static inline void
> > +drm_dp_tunnel_put(struct drm_dp_tunnel *tunnel, struct ref_tracker
> > **tracker) {}
> > +
> > +static inline void drm_dp_tunnel_ref_get(struct drm_dp_tunnel
> > *tunnel,
> > +                                          struct drm_dp_tunnel_ref
> > *tunnel_ref) {}
> > +static inline void drm_dp_tunnel_ref_put(struct drm_dp_tunnel_ref
> > *tunnel_ref) {}
> > +
> > +static inline struct drm_dp_tunnel *
> > +drm_dp_tunnel_detect(struct drm_dp_tunnel_mgr *mgr,
> > +                    struct drm_dp_aux *aux)
> > +{
> > +       return ERR_PTR(-EOPNOTSUPP);
> > +}
> > +
> > +static inline int
> > +drm_dp_tunnel_destroy(struct drm_dp_tunnel *tunnel)
> > +{
> > +       return 0;
> > +}
> > +
> > +static inline int drm_dp_tunnel_enable_bw_alloc(struct drm_dp_tunnel
> > *tunnel)
> > +{
> > +       return -EOPNOTSUPP;
> > +}
> > +
> > +static inline int drm_dp_tunnel_disable_bw_alloc(struct
> > drm_dp_tunnel *tunnel)
> > +{
> > +       return -EOPNOTSUPP;
> > +}
> > +
> > +static inline bool drm_dp_tunnel_bw_alloc_is_enabled(const struct
> > drm_dp_tunnel *tunnel)
> > +{
> > +       return false;
> > +}
> > +
> > +static inline int
> > +drm_dp_tunnel_alloc_bw(struct drm_dp_tunnel *tunnel, int bw)
> > +{
> > +       return -EOPNOTSUPP;
> > +}
> > +
> > +static inline int
> > +drm_dp_tunnel_check_state(struct drm_dp_tunnel *tunnel)
> > +{
> > +       return -EOPNOTSUPP;
> > +}
> > +
> > +static inline int
> > +drm_dp_tunnel_update_state(struct drm_dp_tunnel *tunnel)
> > +{
> > +       return -EOPNOTSUPP;
> > +}
> > +
> > +static inline void drm_dp_tunnel_set_io_error(struct drm_dp_tunnel
> > *tunnel) {}
> > +static inline int
> > +drm_dp_tunnel_handle_irq(struct drm_dp_tunnel_mgr *mgr,
> > +                        struct drm_dp_aux *aux)
> > +{
> > +       return -EOPNOTSUPP;
> > +}
> > +
> > +static inline int
> > +drm_dp_tunnel_max_dprx_rate(const struct drm_dp_tunnel *tunnel)
> > +{
> > +       return 0;
> > +}
> > +
> > +static inline int
> > +drm_dp_tunnel_max_dprx_lane_count(const struct drm_dp_tunnel
> > *tunnel)
> > +{
> > +       return 0;
> > +}
> > +
> > +static inline int
> > +drm_dp_tunnel_available_bw(const struct drm_dp_tunnel *tunnel)
> > +{
> > +       return -1;
> > +}
> > +
> > +static inline const char *
> > +drm_dp_tunnel_name(const struct drm_dp_tunnel *tunnel)
> > +{
> > +       return NULL;
> > +}
> > +
> > +static inline struct drm_dp_tunnel_state *
> > +drm_dp_tunnel_atomic_get_state(struct drm_atomic_state *state,
> > +                              struct drm_dp_tunnel *tunnel)
> > +{
> > +       return ERR_PTR(-EOPNOTSUPP);
> > +}
> > +
> > +static inline struct drm_dp_tunnel_state *
> > +drm_dp_tunnel_atomic_get_new_state(struct drm_atomic_state *state,
> > +                                  const struct drm_dp_tunnel
> > *tunnel)
> > +{
> > +       return ERR_PTR(-EOPNOTSUPP);
> > +}
> > +
> > +static inline void
> > +drm_dp_tunnel_atomic_clear_state(struct drm_dp_tunnel_state
> > *tunnel_state) {}
> > +
> > +static inline int
> > +drm_dp_tunnel_atomic_set_stream_bw(struct drm_atomic_state *state,
> > +                                  struct drm_dp_tunnel *tunnel,
> > +                                  u8 stream_id, int bw)
> > +{
> > +       return -EOPNOTSUPP;
> > +}
> > +
> > +static inline int
> > +drm_dp_tunnel_atomic_get_group_streams_in_state(struct
> > drm_atomic_state *state,
> > +                                               const struct
> > drm_dp_tunnel *tunnel,
> > +                                               u32 *stream_mask)
> > +{
> > +       return -EOPNOTSUPP;
> > +}
> > +
> > +static inline int
> > +drm_dp_tunnel_atomic_check_stream_bws(struct drm_atomic_state
> > *state,
> > +                                     u32 *failed_stream_mask)
> > +{
> > +       return -EOPNOTSUPP;
> > +}
> > +
> > +static inline int
> > +drm_dp_tunnel_atomic_get_tunnel_bw(const struct drm_dp_tunnel_state
> > *tunnel_state)
> > +{
> > +       return 0;
> > +}
> > +
> > +static inline struct drm_dp_tunnel_mgr *
> > +drm_dp_tunnel_mgr_create(struct drm_device *dev, int
> > max_group_count)
> > +{
> > +       return ERR_PTR(-EOPNOTSUPP);
> > +}
> > +
> > +static inline
> > +void drm_dp_tunnel_mgr_destroy(struct drm_dp_tunnel_mgr *mgr) {}
> > +
> > +
> > +#endif /* CONFIG_DRM_DISPLAY_DP_TUNNEL */
> > +
> > +#endif /* __DRM_DP_TUNNEL_H__ */
> 

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 02/19] drm/dp: Add support for DP tunneling
  2024-01-23 10:28 ` [PATCH 02/19] drm/dp: Add support for DP tunneling Imre Deak
  2024-01-31 12:50   ` Hogander, Jouni
@ 2024-01-31 16:09   ` Ville Syrjälä
  2024-01-31 18:49     ` Imre Deak
  2024-02-07 20:02   ` Ville Syrjälä
  2 siblings, 1 reply; 61+ messages in thread
From: Ville Syrjälä @ 2024-01-31 16:09 UTC (permalink / raw)
  To: Imre Deak; +Cc: intel-gfx, dri-devel

On Tue, Jan 23, 2024 at 12:28:33PM +0200, Imre Deak wrote:
> Add support for DisplayPort tunneling. For now this includes the
> support for Bandwidth Allocation Mode, leaving adding Panel Replay
> support for later.
> 
> BWA allows using displays that share the same (Thunderbolt) link at
> their maximum resolution. Atm, this may not be possible due to the
> coarse granularity of partitioning the link BW among the displays on the
> link: the BW allocation policy is in a SW/FW/HW component on the link
> (on Thunderbolt it's the SW or FW Connection Manager), independent of
> the driver. This policy will set the DPRX maximum rate and lane count
> DPCD registers the GFX driver will see (0x00000, 0x00001, 0x02200,
> 0x02201) based on the available link BW.
> 
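For reference, those are the registers a driver already samples via the
standard DPCD capability read, so whatever share of the link the
Connection Manager grants is simply what those reads report back. A
minimal sketch with the existing DPCD helpers (error handling omitted):

	u8 dpcd[DP_RECEIVER_CAP_SIZE];
	int max_rate;

	/* Covers 0x00000.. and, if present, the extended caps at 0x02200.. */
	if (drm_dp_read_dpcd_caps(aux, dpcd) == 0)
		max_rate = drm_dp_bw_code_to_link_rate(dpcd[DP_MAX_LINK_RATE]);
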
> The granularity of the current BW allocation policy is coarse, based on
> the required link rate in the 1.62Gbps..8.1Gbps range, and it may prevent
> using higher resolutions altogether: the display connected first will
> get a share of the link BW which corresponds to its full DPRX capability
> (regardless of the actual mode it uses). A subsequent display connected
> will only get the remaining BW, which could be well below its full
> capability.
> 
> BWA solves the above coarse granularity (reducing it to a 250Mbps..1Gbps
> range) and first-come/first-served issues by letting the driver request
> the BW for each display on a link which reflects the actual modes the
> displays use.
> 
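As a rough worked example of the finer granularity (the numbers below are
only illustrative; the rounding is the same DIV_ROUND_UP() from
<linux/math.h> that the patch's allocate_tunnel_bw() uses):

	int granularity = 250;	/* Mb/s per requested unit (finest step) */
	int required = 2700;	/* Mb/s the display's actual mode needs */
	int request = DIV_ROUND_UP(required, granularity);	/* = 11 units */

	/*
	 * 11 * 250 Mb/s = 2750 Mb/s gets reserved for this display, instead
	 * of the old policy setting aside BW for its full DPRX capability
	 * (e.g. 8.1 Gbps x 4 lanes) regardless of the mode in use.
	 */
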
> This patch adds the DRM core helper functions, while a follow-up change
> in the patchset takes them into use in the i915 driver.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/display/Kconfig         |   17 +
>  drivers/gpu/drm/display/Makefile        |    2 +
>  drivers/gpu/drm/display/drm_dp_tunnel.c | 1715 +++++++++++++++++++++++
>  include/drm/display/drm_dp.h            |   60 +
>  include/drm/display/drm_dp_tunnel.h     |  270 ++++
>  5 files changed, 2064 insertions(+)
>  create mode 100644 drivers/gpu/drm/display/drm_dp_tunnel.c
>  create mode 100644 include/drm/display/drm_dp_tunnel.h
> 
> diff --git a/drivers/gpu/drm/display/Kconfig b/drivers/gpu/drm/display/Kconfig
> index 09712b88a5b83..b024a84b94c1c 100644
> --- a/drivers/gpu/drm/display/Kconfig
> +++ b/drivers/gpu/drm/display/Kconfig
> @@ -17,6 +17,23 @@ config DRM_DISPLAY_DP_HELPER
>  	help
>  	  DRM display helpers for DisplayPort.
>  
> +config DRM_DISPLAY_DP_TUNNEL
> +	bool
> +	select DRM_DISPLAY_DP_HELPER
> +	help
> +	  Enable support for DisplayPort tunnels.
> +
> +config DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> +	bool "Enable debugging the DP tunnel state"
> +	depends on REF_TRACKER
> +	depends on DRM_DISPLAY_DP_TUNNEL
> +	depends on DEBUG_KERNEL
> +	depends on EXPERT
> +	help
> +	  Enables debugging the DP tunnel manager's status.
> +
> +	  If in doubt, say "N".

It's not exactly clear what a "DP tunnel" is.
Shouldn't thunderbolt be mentioned here somewhere?

> +
>  config DRM_DISPLAY_HDCP_HELPER
>  	bool
>  	depends on DRM_DISPLAY_HELPER
> diff --git a/drivers/gpu/drm/display/Makefile b/drivers/gpu/drm/display/Makefile
> index 17ac4a1006a80..7ca61333c6696 100644
> --- a/drivers/gpu/drm/display/Makefile
> +++ b/drivers/gpu/drm/display/Makefile
> @@ -8,6 +8,8 @@ drm_display_helper-$(CONFIG_DRM_DISPLAY_DP_HELPER) += \
>  	drm_dp_helper.o \
>  	drm_dp_mst_topology.o \
>  	drm_dsc_helper.o
> +drm_display_helper-$(CONFIG_DRM_DISPLAY_DP_TUNNEL) += \
> +	drm_dp_tunnel.o
>  drm_display_helper-$(CONFIG_DRM_DISPLAY_HDCP_HELPER) += drm_hdcp_helper.o
>  drm_display_helper-$(CONFIG_DRM_DISPLAY_HDMI_HELPER) += \
>  	drm_hdmi_helper.o \
> diff --git a/drivers/gpu/drm/display/drm_dp_tunnel.c b/drivers/gpu/drm/display/drm_dp_tunnel.c
> new file mode 100644
> index 0000000000000..58f6330db7d9d
> --- /dev/null
> +++ b/drivers/gpu/drm/display/drm_dp_tunnel.c
> @@ -0,0 +1,1715 @@
> +// SPDX-License-Identifier: MIT
> +/*
> + * Copyright © 2023 Intel Corporation
> + */
> +
> +#include <linux/ref_tracker.h>
> +#include <linux/types.h>
> +
> +#include <drm/drm_atomic_state_helper.h>
> +
> +#include <drm/drm_atomic.h>
> +#include <drm/drm_print.h>
> +#include <drm/display/drm_dp.h>
> +#include <drm/display/drm_dp_helper.h>
> +#include <drm/display/drm_dp_tunnel.h>
> +
> +#define to_group(__private_obj) \
> +	container_of(__private_obj, struct drm_dp_tunnel_group, base)
> +
> +#define to_group_state(__private_state) \
> +	container_of(__private_state, struct drm_dp_tunnel_group_state, base)
> +
> +#define is_dp_tunnel_private_obj(__obj) \
> +	((__obj)->funcs == &tunnel_group_funcs)
> +
> +#define for_each_new_group_in_state(__state, __new_group_state, __i) \
> +	for ((__i) = 0; \
> +	     (__i) < (__state)->num_private_objs; \
> +	     (__i)++) \
> +		for_each_if ((__state)->private_objs[__i].ptr && \
> +			     is_dp_tunnel_private_obj((__state)->private_objs[__i].ptr) && \
> +			     ((__new_group_state) = \
> +				to_group_state((__state)->private_objs[__i].new_state), 1))
> +
> +#define for_each_old_group_in_state(__state, __old_group_state, __i) \
> +	for ((__i) = 0; \
> +	     (__i) < (__state)->num_private_objs; \
> +	     (__i)++) \
> +		for_each_if ((__state)->private_objs[__i].ptr && \
> +			     is_dp_tunnel_private_obj((__state)->private_objs[__i].ptr) && \
> +			     ((__old_group_state) = \
> +				to_group_state((__state)->private_objs[__i].old_state), 1))
> +
> +#define for_each_tunnel_in_group(__group, __tunnel) \
> +	list_for_each_entry(__tunnel, &(__group)->tunnels, node)
> +
> +#define for_each_tunnel_state(__group_state, __tunnel_state) \
> +	list_for_each_entry(__tunnel_state, &(__group_state)->tunnel_states, node)
> +
> +#define for_each_tunnel_state_safe(__group_state, __tunnel_state, __tunnel_state_tmp) \
> +	list_for_each_entry_safe(__tunnel_state, __tunnel_state_tmp, \
> +				 &(__group_state)->tunnel_states, node)
> +
> +#define kbytes_to_mbits(__kbytes) \
> +	DIV_ROUND_UP((__kbytes) * 8, 1000)
> +
> +#define DPTUN_BW_ARG(__bw) ((__bw) < 0 ? (__bw) : kbytes_to_mbits(__bw))
> +
> +#define __tun_prn(__tunnel, __level, __type, __fmt, ...) \
> +	drm_##__level##__type((__tunnel)->group->mgr->dev, \
> +			      "[DPTUN %s][%s] " __fmt, \
> +			      drm_dp_tunnel_name(__tunnel), \
> +			      (__tunnel)->aux->name, ## \
> +			      __VA_ARGS__)
> +
> +#define tun_dbg(__tunnel, __fmt, ...) \
> +	__tun_prn(__tunnel, dbg, _kms, __fmt, ## __VA_ARGS__)
> +
> +#define tun_dbg_stat(__tunnel, __err, __fmt, ...) do { \
> +	if (__err) \
> +		__tun_prn(__tunnel, dbg, _kms, __fmt " (Failed, err: %pe)\n", \
> +			  ## __VA_ARGS__, ERR_PTR(__err)); \
> +	else \
> +		__tun_prn(__tunnel, dbg, _kms, __fmt " (Ok)\n", \
> +			  ## __VA_ARGS__); \
> +} while (0)
> +
> +#define tun_dbg_atomic(__tunnel, __fmt, ...) \
> +	__tun_prn(__tunnel, dbg, _atomic, __fmt, ## __VA_ARGS__)
> +
> +#define tun_grp_dbg(__group, __fmt, ...) \
> +	drm_dbg_kms((__group)->mgr->dev, \
> +		    "[DPTUN %s] " __fmt, \
> +		    drm_dp_tunnel_group_name(__group), ## \
> +		    __VA_ARGS__)
> +
> +#define DP_TUNNELING_BASE DP_TUNNELING_OUI
> +
> +#define __DPTUN_REG_RANGE(start, size) \
> +	GENMASK_ULL(start + size - 1, start)
> +
> +#define DPTUN_REG_RANGE(addr, size) \
> +	__DPTUN_REG_RANGE((addr) - DP_TUNNELING_BASE, size)
> +
> +#define DPTUN_REG(addr) DPTUN_REG_RANGE(addr, 1)
> +
> +#define DPTUN_INFO_REG_MASK ( \
> +	DPTUN_REG_RANGE(DP_TUNNELING_OUI, DP_TUNNELING_OUI_BYTES) | \
> +	DPTUN_REG_RANGE(DP_TUNNELING_DEV_ID, DP_TUNNELING_DEV_ID_BYTES) | \
> +	DPTUN_REG(DP_TUNNELING_HW_REV) | \
> +	DPTUN_REG(DP_TUNNELING_SW_REV_MAJOR) | \
> +	DPTUN_REG(DP_TUNNELING_SW_REV_MINOR) | \
> +	DPTUN_REG(DP_TUNNELING_CAPABILITIES) | \
> +	DPTUN_REG(DP_IN_ADAPTER_INFO) | \
> +	DPTUN_REG(DP_USB4_DRIVER_ID) | \
> +	DPTUN_REG(DP_USB4_DRIVER_BW_CAPABILITY) | \
> +	DPTUN_REG(DP_IN_ADAPTER_TUNNEL_INFORMATION) | \
> +	DPTUN_REG(DP_BW_GRANULARITY) | \
> +	DPTUN_REG(DP_ESTIMATED_BW) | \
> +	DPTUN_REG(DP_ALLOCATED_BW) | \
> +	DPTUN_REG(DP_TUNNELING_MAX_LINK_RATE) | \
> +	DPTUN_REG(DP_TUNNELING_MAX_LANE_COUNT) | \
> +	DPTUN_REG(DP_DPTX_BW_ALLOCATION_MODE_CONTROL))
> +
> +static const DECLARE_BITMAP(dptun_info_regs, 64) = {
> +	DPTUN_INFO_REG_MASK & -1UL,
> +#if BITS_PER_LONG == 32
> +	DPTUN_INFO_REG_MASK >> 32,
> +#endif
> +};
> +
> +struct drm_dp_tunnel_regs {
> +	u8 buf[HWEIGHT64(DPTUN_INFO_REG_MASK)];
> +};

That seems to be some kind of thing to allow us to store
the values for non-consecutive DPCD registers in a
contiguous non-sparse array? How much memory are we
actually saving here as opposed to just using the
full sized array?

Wasn't really expecting this kind of thing in here...
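(Back-of-envelope, based on the register list added to drm_dp.h below: the
packed buffer holds 3 (OUI) + 6 (DEV_ID) + 14 single-byte registers = 23
bytes, whereas a flat array spanning DP_TUNNELING_OUI..
DP_DPTX_BW_ALLOCATION_MODE_CONTROL would be 0x31 = 49 bytes, so the saving
is roughly 26 bytes per register snapshot.)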

> +
> +struct drm_dp_tunnel_group;
> +
> +struct drm_dp_tunnel {
> +	struct drm_dp_tunnel_group *group;
> +
> +	struct list_head node;
> +
> +	struct kref kref;
> +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> +	struct ref_tracker *tracker;
> +#endif
> +	struct drm_dp_aux *aux;
> +	char name[8];
> +
> +	int bw_granularity;
> +	int estimated_bw;
> +	int allocated_bw;
> +
> +	int max_dprx_rate;
> +	u8 max_dprx_lane_count;
> +
> +	u8 adapter_id;
> +
> +	bool bw_alloc_supported:1;
> +	bool bw_alloc_enabled:1;
> +	bool has_io_error:1;
> +	bool destroyed:1;
> +};
> +
> +struct drm_dp_tunnel_group_state;
> +
> +struct drm_dp_tunnel_state {
> +	struct drm_dp_tunnel_group_state *group_state;
> +
> +	struct drm_dp_tunnel_ref tunnel_ref;
> +
> +	struct list_head node;
> +
> +	u32 stream_mask;
> +	int *stream_bw;
> +};
> +
> +struct drm_dp_tunnel_group_state {
> +	struct drm_private_state base;
> +
> +	struct list_head tunnel_states;
> +};
> +
> +struct drm_dp_tunnel_group {
> +	struct drm_private_obj base;
> +	struct drm_dp_tunnel_mgr *mgr;
> +
> +	struct list_head tunnels;
> +
> +	int available_bw;	/* available BW including the allocated_bw of all tunnels */
> +	int drv_group_id;
> +
> +	char name[8];
> +
> +	bool active:1;
> +};
> +
> +struct drm_dp_tunnel_mgr {
> +	struct drm_device *dev;
> +
> +	int group_count;
> +	struct drm_dp_tunnel_group *groups;
> +	wait_queue_head_t bw_req_queue;
> +
> +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> +	struct ref_tracker_dir ref_tracker;
> +#endif
> +};
> +
> +static int next_reg_area(int *offset)
> +{
> +	*offset = find_next_bit(dptun_info_regs, 64, *offset);
> +
> +	return find_next_zero_bit(dptun_info_regs, 64, *offset + 1) - *offset;
> +}
> +
> +#define tunnel_reg_ptr(__regs, __address) ({ \
> +	WARN_ON(!test_bit((__address) - DP_TUNNELING_BASE, dptun_info_regs)); \
> +	&(__regs)->buf[bitmap_weight(dptun_info_regs, (__address) - DP_TUNNELING_BASE)]; \
> +})
> +
> +static int read_tunnel_regs(struct drm_dp_aux *aux, struct drm_dp_tunnel_regs *regs)
> +{
> +	int offset = 0;
> +	int len;
> +
> +	while ((len = next_reg_area(&offset))) {
> +		int address = DP_TUNNELING_BASE + offset;
> +
> +		if (drm_dp_dpcd_read(aux, address, tunnel_reg_ptr(regs, address), len) < 0)
> +			return -EIO;
> +
> +		offset += len;
> +	}
> +
> +	return 0;
> +}
> +
> +static u8 tunnel_reg(const struct drm_dp_tunnel_regs *regs, int address)
> +{
> +	return *tunnel_reg_ptr(regs, address);
> +}
> +
> +static int tunnel_reg_drv_group_id(const struct drm_dp_tunnel_regs *regs)
> +{
> +	int drv_id = tunnel_reg(regs, DP_USB4_DRIVER_ID) & DP_USB4_DRIVER_ID_MASK;
> +	int group_id = tunnel_reg(regs, DP_IN_ADAPTER_TUNNEL_INFORMATION) & DP_GROUP_ID_MASK;

Maybe these things should be u8/etc. everywhere? Would at least
indicate that I don't need to look for where negative values
are handled...
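(FWIW the combined value is only DP_USB4_DRIVER_ID_BITS + DP_GROUP_ID_BITS
= 7 bits, so a u8 would indeed fit.)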

> +
> +	if (!group_id)
> +		return 0;
> +
> +	return (drv_id << DP_GROUP_ID_BITS) | group_id;
> +}
> +
> +/* Return granularity in kB/s units */
> +static int tunnel_reg_bw_granularity(const struct drm_dp_tunnel_regs *regs)
> +{
> +	int gr = tunnel_reg(regs, DP_BW_GRANULARITY) & DP_BW_GRANULARITY_MASK;
> +
> +	WARN_ON(gr > 2);
> +
> +	return (250000 << gr) / 8;
> +}
> +
> +static int tunnel_reg_max_dprx_rate(const struct drm_dp_tunnel_regs *regs)
> +{
> +	u8 bw_code = tunnel_reg(regs, DP_TUNNELING_MAX_LINK_RATE);
> +
> +	return drm_dp_bw_code_to_link_rate(bw_code);
> +}
> +
> +static int tunnel_reg_max_dprx_lane_count(const struct drm_dp_tunnel_regs *regs)
> +{
> +	u8 lane_count = tunnel_reg(regs, DP_TUNNELING_MAX_LANE_COUNT) &
> +			DP_TUNNELING_MAX_LANE_COUNT_MASK;
> +
> +	return lane_count;
> +}
> +
> +static bool tunnel_reg_bw_alloc_supported(const struct drm_dp_tunnel_regs *regs)
> +{
> +	u8 cap_mask = DP_TUNNELING_SUPPORT | DP_IN_BW_ALLOCATION_MODE_SUPPORT;
> +
> +	if ((tunnel_reg(regs, DP_TUNNELING_CAPABILITIES) & cap_mask) != cap_mask)
> +		return false;
> +
> +	return tunnel_reg(regs, DP_USB4_DRIVER_BW_CAPABILITY) &
> +	       DP_USB4_DRIVER_BW_ALLOCATION_MODE_SUPPORT;
> +}
> +
> +static bool tunnel_reg_bw_alloc_enabled(const struct drm_dp_tunnel_regs *regs)
> +{
> +	return tunnel_reg(regs, DP_DPTX_BW_ALLOCATION_MODE_CONTROL) &
> +		DP_DISPLAY_DRIVER_BW_ALLOCATION_MODE_ENABLE;
> +}
> +
> +static int tunnel_group_drv_id(int drv_group_id)
> +{
> +	return drv_group_id >> DP_GROUP_ID_BITS;
> +}
> +
> +static int tunnel_group_id(int drv_group_id)
> +{
> +	return drv_group_id & DP_GROUP_ID_MASK;
> +}
> +
> +const char *drm_dp_tunnel_name(const struct drm_dp_tunnel *tunnel)
> +{
> +	return tunnel->name;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_name);
> +
> +static const char *drm_dp_tunnel_group_name(const struct drm_dp_tunnel_group *group)
> +{
> +	return group->name;
> +}
> +
> +static struct drm_dp_tunnel_group *
> +lookup_or_alloc_group(struct drm_dp_tunnel_mgr *mgr, int drv_group_id)
> +{
> +	struct drm_dp_tunnel_group *group = NULL;
> +	int i;
> +
> +	for (i = 0; i < mgr->group_count; i++) {
> +		/*
> +		 * A tunnel group with group ID 0 shouldn't have more than
> +		 * one tunnel.
> +		 */
> +		if (tunnel_group_id(drv_group_id) &&
> +		    mgr->groups[i].drv_group_id == drv_group_id)
> +			return &mgr->groups[i];
> +
> +		if (!group && !mgr->groups[i].active)
> +			group = &mgr->groups[i];
> +	}
> +
> +	if (!group) {
> +		drm_dbg_kms(mgr->dev,
> +			    "DPTUN: Can't allocate more tunnel groups\n");
> +		return NULL;
> +	}
> +
> +	group->drv_group_id = drv_group_id;
> +	group->active = true;
> +
> +	snprintf(group->name, sizeof(group->name), "%d:%d:*",

What does the '*' indicate?

> +		 tunnel_group_drv_id(drv_group_id) & ((1 << DP_GROUP_ID_BITS) - 1),
> +		 tunnel_group_id(drv_group_id) & ((1 << DP_USB4_DRIVER_ID_BITS) - 1));
> +
> +	return group;
> +}
> +
> +static void free_group(struct drm_dp_tunnel_group *group)
> +{
> +	struct drm_dp_tunnel_mgr *mgr = group->mgr;
> +
> +	if (drm_WARN_ON(mgr->dev, !list_empty(&group->tunnels)))
> +		return;
> +
> +	group->drv_group_id = 0;
> +	group->available_bw = -1;
> +	group->active = false;
> +}
> +
> +static struct drm_dp_tunnel *
> +tunnel_get(struct drm_dp_tunnel *tunnel)
> +{
> +	kref_get(&tunnel->kref);
> +
> +	return tunnel;
> +}
> +
> +static void free_tunnel(struct kref *kref)
> +{
> +	struct drm_dp_tunnel *tunnel = container_of(kref, typeof(*tunnel), kref);
> +	struct drm_dp_tunnel_group *group = tunnel->group;
> +
> +	list_del(&tunnel->node);
> +	if (list_empty(&group->tunnels))
> +		free_group(group);
> +
> +	kfree(tunnel);
> +}
> +
> +static void tunnel_put(struct drm_dp_tunnel *tunnel)
> +{
> +	kref_put(&tunnel->kref, free_tunnel);
> +}
> +
> +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> +static void track_tunnel_ref(struct drm_dp_tunnel *tunnel,
> +			     struct ref_tracker **tracker)
> +{
> +	ref_tracker_alloc(&tunnel->group->mgr->ref_tracker,
> +			  tracker, GFP_KERNEL);
> +}
> +
> +static void untrack_tunnel_ref(struct drm_dp_tunnel *tunnel,
> +			       struct ref_tracker **tracker)
> +{
> +	ref_tracker_free(&tunnel->group->mgr->ref_tracker,
> +			 tracker);
> +}
> +
> +struct drm_dp_tunnel *
> +drm_dp_tunnel_get_untracked(struct drm_dp_tunnel *tunnel)
> +{
> +	track_tunnel_ref(tunnel, NULL);
> +
> +	return tunnel_get(tunnel);
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_get_untracked);

Why do these exist?

> +
> +void drm_dp_tunnel_put_untracked(struct drm_dp_tunnel *tunnel)
> +{
> +	tunnel_put(tunnel);
> +	untrack_tunnel_ref(tunnel, NULL);
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_put_untracked);
> +
> +struct drm_dp_tunnel *
> +drm_dp_tunnel_get(struct drm_dp_tunnel *tunnel,
> +		    struct ref_tracker **tracker)
> +{
> +	track_tunnel_ref(tunnel, tracker);
> +
> +	return tunnel_get(tunnel);
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_get);
> +
> +void drm_dp_tunnel_put(struct drm_dp_tunnel *tunnel,
> +			 struct ref_tracker **tracker)
> +{
> +	untrack_tunnel_ref(tunnel, tracker);
> +	tunnel_put(tunnel);
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_put);
> +#else
> +#define track_tunnel_ref(tunnel, tracker) do {} while (0)
> +#define untrack_tunnel_ref(tunnel, tracker) do {} while (0)
> +
> +struct drm_dp_tunnel *
> +drm_dp_tunnel_get_untracked(struct drm_dp_tunnel *tunnel)
> +{
> +	return tunnel_get(tunnel);
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_get_untracked);
> +
> +void drm_dp_tunnel_put_untracked(struct drm_dp_tunnel *tunnel)
> +{
> +	tunnel_put(tunnel);
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_put_untracked);
> +#endif
> +
> +static bool add_tunnel_to_group(struct drm_dp_tunnel_mgr *mgr,
> +				int drv_group_id,
> +				struct drm_dp_tunnel *tunnel)
> +{
> +	struct drm_dp_tunnel_group *group =
> +		lookup_or_alloc_group(mgr, drv_group_id);
> +
> +	if (!group)
> +		return false;
> +
> +	tunnel->group = group;
> +	list_add(&tunnel->node, &group->tunnels);
> +
> +	return true;
> +}
> +
> +static struct drm_dp_tunnel *
> +create_tunnel(struct drm_dp_tunnel_mgr *mgr,
> +	      struct drm_dp_aux *aux,
> +	      const struct drm_dp_tunnel_regs *regs)
> +{
> +	int drv_group_id = tunnel_reg_drv_group_id(regs);
> +	struct drm_dp_tunnel *tunnel;
> +
> +	tunnel = kzalloc(sizeof(*tunnel), GFP_KERNEL);
> +	if (!tunnel)
> +		return NULL;
> +
> +	INIT_LIST_HEAD(&tunnel->node);
> +
> +	kref_init(&tunnel->kref);
> +
> +	tunnel->aux = aux;
> +
> +	tunnel->adapter_id = tunnel_reg(regs, DP_IN_ADAPTER_INFO) & DP_IN_ADAPTER_NUMBER_MASK;
> +
> +	snprintf(tunnel->name, sizeof(tunnel->name), "%d:%d:%d",
> +		 tunnel_group_drv_id(drv_group_id) & ((1 << DP_GROUP_ID_BITS) - 1),
> +		 tunnel_group_id(drv_group_id) & ((1 << DP_USB4_DRIVER_ID_BITS) - 1),
> +		 tunnel->adapter_id & ((1 << DP_IN_ADAPTER_NUMBER_BITS) - 1));
> +
> +	tunnel->bw_granularity = tunnel_reg_bw_granularity(regs);
> +	tunnel->allocated_bw = tunnel_reg(regs, DP_ALLOCATED_BW) *
> +			       tunnel->bw_granularity;
> +
> +	tunnel->bw_alloc_supported = tunnel_reg_bw_alloc_supported(regs);
> +	tunnel->bw_alloc_enabled = tunnel_reg_bw_alloc_enabled(regs);
> +
> +	if (!add_tunnel_to_group(mgr, drv_group_id, tunnel)) {
> +		kfree(tunnel);
> +
> +		return NULL;
> +	}
> +
> +	track_tunnel_ref(tunnel, &tunnel->tracker);
> +
> +	return tunnel;
> +}
> +
> +static void destroy_tunnel(struct drm_dp_tunnel *tunnel)
> +{
> +	untrack_tunnel_ref(tunnel, &tunnel->tracker);
> +	tunnel_put(tunnel);
> +}
> +
> +void drm_dp_tunnel_set_io_error(struct drm_dp_tunnel *tunnel)
> +{
> +	tunnel->has_io_error = true;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_set_io_error);
> +
> +static char yes_no_chr(int val)
> +{
> +	return val ? 'Y' : 'N';
> +}
> +
> +#define SKIP_DPRX_CAPS_CHECK		BIT(0)
> +#define ALLOW_ALLOCATED_BW_CHANGE	BIT(1)
> +
> +static bool tunnel_regs_are_valid(struct drm_dp_tunnel_mgr *mgr,
> +				  const struct drm_dp_tunnel_regs *regs,
> +				  unsigned int flags)
> +{
> +	int drv_group_id = tunnel_reg_drv_group_id(regs);
> +	bool check_dprx = !(flags & SKIP_DPRX_CAPS_CHECK);
> +	bool ret = true;
> +
> +	if (!tunnel_reg_bw_alloc_supported(regs)) {
> +		if (tunnel_group_id(drv_group_id)) {
> +			drm_dbg_kms(mgr->dev,
> +				    "DPTUN: A non-zero group ID is only allowed with BWA support\n");
> +			ret = false;
> +		}
> +
> +		if (tunnel_reg(regs, DP_ALLOCATED_BW)) {
> +			drm_dbg_kms(mgr->dev,
> +				    "DPTUN: BW is allocated without BWA support\n");
> +			ret = false;
> +		}
> +
> +		return ret;
> +	}
> +
> +	if (!tunnel_group_id(drv_group_id)) {
> +		drm_dbg_kms(mgr->dev,
> +			    "DPTUN: BWA support requires a non-zero group ID\n");
> +		ret = false;
> +	}
> +
> +	if (check_dprx && hweight8(tunnel_reg_max_dprx_lane_count(regs)) != 1) {
> +		drm_dbg_kms(mgr->dev,
> +			    "DPTUN: Invalid DPRX lane count: %d\n",
> +			    tunnel_reg_max_dprx_lane_count(regs));
> +
> +		ret = false;
> +	}
> +
> +	if (check_dprx && !tunnel_reg_max_dprx_rate(regs)) {
> +		drm_dbg_kms(mgr->dev,
> +			    "DPTUN: DPRX rate is 0\n");
> +
> +		ret = false;
> +	}
> +
> +	if (tunnel_reg(regs, DP_ALLOCATED_BW) > tunnel_reg(regs, DP_ESTIMATED_BW)) {
> +		drm_dbg_kms(mgr->dev,
> +			    "DPTUN: Allocated BW %d > estimated BW %d Mb/s\n",
> +			    DPTUN_BW_ARG(tunnel_reg(regs, DP_ALLOCATED_BW) *
> +					 tunnel_reg_bw_granularity(regs)),
> +			    DPTUN_BW_ARG(tunnel_reg(regs, DP_ESTIMATED_BW) *
> +					 tunnel_reg_bw_granularity(regs)));
> +
> +		ret = false;
> +	}
> +
> +	return ret;
> +}
> +
> +static bool tunnel_info_changes_are_valid(struct drm_dp_tunnel *tunnel,
> +					  const struct drm_dp_tunnel_regs *regs,
> +					  unsigned int flags)
> +{
> +	int new_drv_group_id = tunnel_reg_drv_group_id(regs);
> +	bool ret = true;
> +
> +	if (tunnel->bw_alloc_supported != tunnel_reg_bw_alloc_supported(regs)) {
> +		tun_dbg(tunnel,
> +			"BW alloc support has changed %c -> %c\n",
> +			yes_no_chr(tunnel->bw_alloc_supported),
> +			yes_no_chr(tunnel_reg_bw_alloc_supported(regs)));
> +
> +		ret = false;
> +	}
> +
> +	if (tunnel->group->drv_group_id != new_drv_group_id) {
> +		tun_dbg(tunnel,
> +			"Driver/group ID has changed %d:%d:* -> %d:%d:*\n",
> +			tunnel_group_drv_id(tunnel->group->drv_group_id),
> +			tunnel_group_id(tunnel->group->drv_group_id),
> +			tunnel_group_drv_id(new_drv_group_id),
> +			tunnel_group_id(new_drv_group_id));
> +
> +		ret = false;
> +	}
> +
> +	if (!tunnel->bw_alloc_supported)
> +		return ret;
> +
> +	if (tunnel->bw_granularity != tunnel_reg_bw_granularity(regs)) {
> +		tun_dbg(tunnel,
> +			"BW granularity has changed: %d -> %d Mb/s\n",
> +			DPTUN_BW_ARG(tunnel->bw_granularity),
> +			DPTUN_BW_ARG(tunnel_reg_bw_granularity(regs)));
> +
> +		ret = false;
> +	}
> +
> +	/*
> +	 * On some devices at least the BW alloc mode enabled status is always
> +	 * reported as 0, so skip checking that here.
> +	 */
> +
> +	if (!(flags & ALLOW_ALLOCATED_BW_CHANGE) &&
> +	    tunnel->allocated_bw !=
> +	    tunnel_reg(regs, DP_ALLOCATED_BW) * tunnel->bw_granularity) {
> +		tun_dbg(tunnel,
> +			"Allocated BW has changed: %d -> %d Mb/s\n",
> +			DPTUN_BW_ARG(tunnel->allocated_bw),
> +			DPTUN_BW_ARG(tunnel_reg(regs, DP_ALLOCATED_BW) * tunnel->bw_granularity));
> +
> +		ret = false;
> +	}
> +
> +	return ret;
> +}
> +
> +static int
> +read_and_verify_tunnel_regs(struct drm_dp_tunnel *tunnel,
> +			    struct drm_dp_tunnel_regs *regs,
> +			    unsigned int flags)
> +{
> +	int err;
> +
> +	err = read_tunnel_regs(tunnel->aux, regs);
> +	if (err < 0) {
> +		drm_dp_tunnel_set_io_error(tunnel);
> +
> +		return err;
> +	}
> +
> +	if (!tunnel_regs_are_valid(tunnel->group->mgr, regs, flags))
> +		return -EINVAL;
> +
> +	if (!tunnel_info_changes_are_valid(tunnel, regs, flags))
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +
> +static bool update_dprx_caps(struct drm_dp_tunnel *tunnel, const struct drm_dp_tunnel_regs *regs)
> +{
> +	bool changed = false;
> +
> +	if (tunnel_reg_max_dprx_rate(regs) != tunnel->max_dprx_rate) {
> +		tunnel->max_dprx_rate = tunnel_reg_max_dprx_rate(regs);
> +		changed = true;
> +	}
> +
> +	if (tunnel_reg_max_dprx_lane_count(regs) != tunnel->max_dprx_lane_count) {
> +		tunnel->max_dprx_lane_count = tunnel_reg_max_dprx_lane_count(regs);
> +		changed = true;
> +	}
> +
> +	return changed;
> +}
> +
> +static int dev_id_len(const u8 *dev_id, int max_len)
> +{
> +	while (max_len && dev_id[max_len - 1] == '\0')
> +		max_len--;
> +
> +	return max_len;
> +}
> +
> +static int get_max_dprx_bw(const struct drm_dp_tunnel *tunnel)
> +{
> +	int bw = drm_dp_max_dprx_data_rate(tunnel->max_dprx_rate,
> +					   tunnel->max_dprx_lane_count);
> +
> +	return min(roundup(bw, tunnel->bw_granularity),
> +		   MAX_DP_REQUEST_BW * tunnel->bw_granularity);
> +}
> +
> +static int get_max_tunnel_bw(const struct drm_dp_tunnel *tunnel)
> +{
> +	return min(get_max_dprx_bw(tunnel), tunnel->group->available_bw);
> +}
> +
> +/**
> + * drm_dp_tunnel_detect - Detect DP tunnel on the link
> + * @mgr: Tunnel manager
> + * @aux: DP AUX on which the tunnel will be detected
> + *
> + * Detect if there is any DP tunnel on the link and add it to the tunnel
> + * group's tunnel list.
> + *
> + * Returns a pointer to the detected tunnel on success, or an ERR_PTR() if
> + * no tunnel was detected or in case of a failure.
> + */
> +struct drm_dp_tunnel *
> +drm_dp_tunnel_detect(struct drm_dp_tunnel_mgr *mgr,
> +		       struct drm_dp_aux *aux)
> +{
> +	struct drm_dp_tunnel_regs regs;
> +	struct drm_dp_tunnel *tunnel;
> +	int err;
> +
> +	err = read_tunnel_regs(aux, &regs);
> +	if (err)
> +		return ERR_PTR(err);
> +
> +	if (!(tunnel_reg(&regs, DP_TUNNELING_CAPABILITIES) &
> +	      DP_TUNNELING_SUPPORT))
> +		return ERR_PTR(-ENODEV);
> +
> +	/* The DPRX caps are valid only after enabling BW alloc mode. */
> +	if (!tunnel_regs_are_valid(mgr, &regs, SKIP_DPRX_CAPS_CHECK))
> +		return ERR_PTR(-EINVAL);
> +
> +	tunnel = create_tunnel(mgr, aux, &regs);
> +	if (!tunnel)
> +		return ERR_PTR(-ENOMEM);
> +
> +	tun_dbg(tunnel,
> +		"OUI:%*phD DevID:%*pE Rev-HW:%d.%d SW:%d.%d PR-Sup:%c BWA-Sup:%c BWA-En:%c\n",
> +		DP_TUNNELING_OUI_BYTES,
> +			tunnel_reg_ptr(&regs, DP_TUNNELING_OUI),
> +		dev_id_len(tunnel_reg_ptr(&regs, DP_TUNNELING_DEV_ID), DP_TUNNELING_DEV_ID_BYTES),
> +			tunnel_reg_ptr(&regs, DP_TUNNELING_DEV_ID),
> +		(tunnel_reg(&regs, DP_TUNNELING_HW_REV) & DP_TUNNELING_HW_REV_MAJOR_MASK) >>
> +			DP_TUNNELING_HW_REV_MAJOR_SHIFT,
> +		(tunnel_reg(&regs, DP_TUNNELING_HW_REV) & DP_TUNNELING_HW_REV_MINOR_MASK) >>
> +			DP_TUNNELING_HW_REV_MINOR_SHIFT,
> +		tunnel_reg(&regs, DP_TUNNELING_SW_REV_MAJOR),
> +		tunnel_reg(&regs, DP_TUNNELING_SW_REV_MINOR),
> +		yes_no_chr(tunnel_reg(&regs, DP_TUNNELING_CAPABILITIES) &
> +			   DP_PANEL_REPLAY_OPTIMIZATION_SUPPORT),
> +		yes_no_chr(tunnel->bw_alloc_supported),
> +		yes_no_chr(tunnel->bw_alloc_enabled));
> +
> +	return tunnel;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_detect);
> +
> +/**
> + * drm_dp_tunnel_destroy - Destroy tunnel object
> + * @tunnel: Tunnel object
> + *
> + * Remove the tunnel from the tunnel topology and destroy it.
> + */
> +int drm_dp_tunnel_destroy(struct drm_dp_tunnel *tunnel)
> +{
> +	if (drm_WARN_ON(tunnel->group->mgr->dev, tunnel->destroyed))
> +		return -ENODEV;
> +
> +	tun_dbg(tunnel, "destroying\n");
> +
> +	tunnel->destroyed = true;
> +	destroy_tunnel(tunnel);
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_destroy);
> +
> +static int check_tunnel(const struct drm_dp_tunnel *tunnel)
> +{
> +	if (tunnel->destroyed)
> +		return -ENODEV;
> +
> +	if (tunnel->has_io_error)
> +		return -EIO;
> +
> +	return 0;
> +}
> +
> +static int group_allocated_bw(struct drm_dp_tunnel_group *group)
> +{
> +	struct drm_dp_tunnel *tunnel;
> +	int group_allocated_bw = 0;
> +
> +	for_each_tunnel_in_group(group, tunnel) {
> +		if (check_tunnel(tunnel) == 0 &&
> +		    tunnel->bw_alloc_enabled)
> +			group_allocated_bw += tunnel->allocated_bw;
> +	}
> +
> +	return group_allocated_bw;
> +}
> +
> +static int calc_group_available_bw(const struct drm_dp_tunnel *tunnel)
> +{
> +	return group_allocated_bw(tunnel->group) -
> +	       tunnel->allocated_bw +
> +	       tunnel->estimated_bw;
> +}
> +
> +static int update_group_available_bw(struct drm_dp_tunnel *tunnel,
> +				     const struct drm_dp_tunnel_regs *regs)
> +{
> +	struct drm_dp_tunnel *tunnel_iter;
> +	int group_available_bw;
> +	bool changed;
> +
> +	tunnel->estimated_bw = tunnel_reg(regs, DP_ESTIMATED_BW) * tunnel->bw_granularity;
> +
> +	if (calc_group_available_bw(tunnel) == tunnel->group->available_bw)
> +		return 0;
> +
> +	for_each_tunnel_in_group(tunnel->group, tunnel_iter) {
> +		int err;
> +
> +		if (tunnel_iter == tunnel)
> +			continue;
> +
> +		if (check_tunnel(tunnel_iter) != 0 ||
> +		    !tunnel_iter->bw_alloc_enabled)
> +			continue;
> +
> +		err = drm_dp_dpcd_probe(tunnel_iter->aux, DP_DPCD_REV);
> +		if (err) {
> +			tun_dbg(tunnel_iter,
> +				"Probe failed, assume disconnected (err %pe)\n",
> +				ERR_PTR(err));
> +			drm_dp_tunnel_set_io_error(tunnel_iter);
> +		}
> +	}
> +
> +	group_available_bw = calc_group_available_bw(tunnel);
> +
> +	tun_dbg(tunnel, "Updated group available BW: %d->%d\n",
> +		DPTUN_BW_ARG(tunnel->group->available_bw),
> +		DPTUN_BW_ARG(group_available_bw));
> +
> +	changed = tunnel->group->available_bw != group_available_bw;
> +
> +	tunnel->group->available_bw = group_available_bw;
> +
> +	return changed ? 1 : 0;
> +}
> +
> +static int set_bw_alloc_mode(struct drm_dp_tunnel *tunnel, bool enable)
> +{
> +	u8 mask = DP_DISPLAY_DRIVER_BW_ALLOCATION_MODE_ENABLE | DP_UNMASK_BW_ALLOCATION_IRQ;
> +	u8 val;
> +
> +	if (drm_dp_dpcd_readb(tunnel->aux, DP_DPTX_BW_ALLOCATION_MODE_CONTROL, &val) < 0)
> +		goto out_err;
> +
> +	if (enable)
> +		val |= mask;
> +	else
> +		val &= ~mask;
> +
> +	if (drm_dp_dpcd_writeb(tunnel->aux, DP_DPTX_BW_ALLOCATION_MODE_CONTROL, val) < 0)
> +		goto out_err;
> +
> +	tunnel->bw_alloc_enabled = enable;
> +
> +	return 0;
> +
> +out_err:
> +	drm_dp_tunnel_set_io_error(tunnel);
> +
> +	return -EIO;
> +}
> +
> +/**
> + * drm_dp_tunnel_enable_bw_alloc - Enable DP tunnel BW allocation mode
> + * @tunnel: Tunnel object
> + *
> + * Enable the DP tunnel BW allocation mode on @tunnel if it supports it.
> + *
> + * Returns 0 in case of success, negative error code otherwise.
> + */
> +int drm_dp_tunnel_enable_bw_alloc(struct drm_dp_tunnel *tunnel)
> +{
> +	struct drm_dp_tunnel_regs regs;
> +	int err = check_tunnel(tunnel);
> +
> +	if (err)
> +		return err;
> +
> +	if (!tunnel->bw_alloc_supported)
> +		return -EOPNOTSUPP;
> +
> +	if (!tunnel_group_id(tunnel->group->drv_group_id))
> +		return -EINVAL;
> +
> +	err = set_bw_alloc_mode(tunnel, true);
> +	if (err)
> +		goto out;
> +
> +	err = read_and_verify_tunnel_regs(tunnel, &regs, 0);
> +	if (err) {
> +		set_bw_alloc_mode(tunnel, false);
> +
> +		goto out;
> +	}
> +
> +	if (!tunnel->max_dprx_rate)
> +		update_dprx_caps(tunnel, &regs);
> +
> +	if (tunnel->group->available_bw == -1) {
> +		err = update_group_available_bw(tunnel, &regs);
> +		if (err > 0)
> +			err = 0;
> +	}
> +out:
> +	tun_dbg_stat(tunnel, err,
> +		     "Enabling BW alloc mode: DPRX:%dx%d Group alloc:%d/%d Mb/s",
> +		     tunnel->max_dprx_rate / 100, tunnel->max_dprx_lane_count,
> +		     DPTUN_BW_ARG(group_allocated_bw(tunnel->group)),
> +		     DPTUN_BW_ARG(tunnel->group->available_bw));
> +
> +	return err;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_enable_bw_alloc);
> +
> +/**
> + * drm_dp_tunnel_disable_bw_alloc - Disable DP tunnel BW allocation mode
> + * @tunnel: Tunnel object
> + *
> + * Disable the DP tunnel BW allocation mode on @tunnel.
> + *
> + * Returns 0 in case of success, negative error code otherwise.
> + */
> +int drm_dp_tunnel_disable_bw_alloc(struct drm_dp_tunnel *tunnel)
> +{
> +	int err = check_tunnel(tunnel);
> +
> +	if (err)
> +		return err;
> +
> +	err = set_bw_alloc_mode(tunnel, false);
> +
> +	tun_dbg_stat(tunnel, err, "Disabling BW alloc mode");
> +
> +	return err;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_disable_bw_alloc);
> +
> +bool drm_dp_tunnel_bw_alloc_is_enabled(const struct drm_dp_tunnel *tunnel)
> +{
> +	return tunnel->bw_alloc_enabled;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_bw_alloc_is_enabled);
> +
> +static int bw_req_complete(struct drm_dp_aux *aux, bool *status_changed)
> +{
> +	u8 bw_req_mask = DP_BW_REQUEST_SUCCEEDED | DP_BW_REQUEST_FAILED;
> +	u8 status_change_mask = DP_BW_ALLOCATION_CAPABILITY_CHANGED | DP_ESTIMATED_BW_CHANGED;
> +	u8 val;
> +
> +	if (drm_dp_dpcd_readb(aux, DP_TUNNELING_STATUS, &val) < 0)
> +		return -EIO;
> +
> +	*status_changed = val & status_change_mask;
> +
> +	val &= bw_req_mask;
> +
> +	if (!val)
> +		return -EAGAIN;
> +
> +	if (drm_dp_dpcd_writeb(aux, DP_TUNNELING_STATUS, val) < 0)
> +		return -EIO;
> +
> +	return val == DP_BW_REQUEST_SUCCEEDED ? 0 : -ENOSPC;
> +}
> +
> +static int allocate_tunnel_bw(struct drm_dp_tunnel *tunnel, int bw)
> +{
> +	struct drm_dp_tunnel_mgr *mgr = tunnel->group->mgr;
> +	int request_bw = DIV_ROUND_UP(bw, tunnel->bw_granularity);
> +	unsigned long wait_expires;
> +	DEFINE_WAIT(wait);
> +	int err;
> +
> +	/* Atomic check should prevent the following. */
> +	if (drm_WARN_ON(mgr->dev, request_bw > MAX_DP_REQUEST_BW)) {
> +		err = -EINVAL;
> +		goto out;
> +	}
> +
> +	if (drm_dp_dpcd_writeb(tunnel->aux, DP_REQUEST_BW, request_bw) < 0) {
> +		err = -EIO;
> +		goto out;
> +	}
> +
> +	wait_expires = jiffies + msecs_to_jiffies(3000);
> +
> +	for (;;) {
> +		bool status_changed;
> +
> +		err = bw_req_complete(tunnel->aux, &status_changed);
> +		if (err != -EAGAIN)
> +			break;
> +
> +		if (status_changed) {
> +			struct drm_dp_tunnel_regs regs;
> +
> +			err = read_and_verify_tunnel_regs(tunnel, &regs,
> +							  ALLOW_ALLOCATED_BW_CHANGE);
> +			if (err)
> +				break;
> +		}
> +
> +		if (time_after(jiffies, wait_expires)) {
> +			err = -ETIMEDOUT;
> +			break;
> +		}
> +
> +		prepare_to_wait(&mgr->bw_req_queue, &wait, TASK_UNINTERRUPTIBLE);

Shouldn't the prepare_to_wait() be done before checking the
condition?

> +		schedule_timeout(msecs_to_jiffies(200));

I guess the timeout here saves us, even if we race with the wakeup
due to the above.
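For reference, the conventional ordering would be roughly the following
(a sketch only, reusing the existing locals; the status-change re-read is
elided):

	for (;;) {
		/*
		 * Arm the wait before testing the condition so a wakeup
		 * arriving in between is not lost.
		 */
		prepare_to_wait(&mgr->bw_req_queue, &wait, TASK_UNINTERRUPTIBLE);

		err = bw_req_complete(tunnel->aux, &status_changed);
		if (err != -EAGAIN)
			break;

		if (time_after(jiffies, wait_expires)) {
			err = -ETIMEDOUT;
			break;
		}

		schedule_timeout(msecs_to_jiffies(200));
	}

	finish_wait(&mgr->bw_req_queue, &wait);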

> +	};
> +
> +	finish_wait(&mgr->bw_req_queue, &wait);
> +
> +	if (err)
> +		goto out;
> +
> +	tunnel->allocated_bw = request_bw * tunnel->bw_granularity;
> +
> +out:
> +	tun_dbg_stat(tunnel, err, "Allocating %d/%d Mb/s for tunnel: Group alloc:%d/%d Mb/s",
> +		     DPTUN_BW_ARG(request_bw * tunnel->bw_granularity),
> +		     DPTUN_BW_ARG(get_max_tunnel_bw(tunnel)),
> +		     DPTUN_BW_ARG(group_allocated_bw(tunnel->group)),
> +		     DPTUN_BW_ARG(tunnel->group->available_bw));
> +
> +	if (err == -EIO)
> +		drm_dp_tunnel_set_io_error(tunnel);
> +
> +	return err;
> +}
> +
> +int drm_dp_tunnel_alloc_bw(struct drm_dp_tunnel *tunnel, int bw)
> +{
> +	int err = check_tunnel(tunnel);
> +
> +	if (err)
> +		return err;
> +
> +	return allocate_tunnel_bw(tunnel, bw);
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_alloc_bw);
> +
> +static int check_and_clear_status_change(struct drm_dp_tunnel *tunnel)
> +{
> +	u8 mask = DP_BW_ALLOCATION_CAPABILITY_CHANGED | DP_ESTIMATED_BW_CHANGED;
> +	u8 val;
> +
> +	if (drm_dp_dpcd_readb(tunnel->aux, DP_TUNNELING_STATUS, &val) < 0)
> +		goto out_err;
> +
> +	val &= mask;
> +
> +	if (val) {
> +		if (drm_dp_dpcd_writeb(tunnel->aux, DP_TUNNELING_STATUS, val) < 0)
> +			goto out_err;
> +
> +		return 1;
> +	}
> +
> +	if (!drm_dp_tunnel_bw_alloc_is_enabled(tunnel))
> +		return 0;
> +
> +	/*
> +	 * Check for estimated BW changes explicitly to account for lost
> +	 * BW change notifications.
> +	 */
> +	if (drm_dp_dpcd_readb(tunnel->aux, DP_ESTIMATED_BW, &val) < 0)
> +		goto out_err;
> +
> +	if (val * tunnel->bw_granularity != tunnel->estimated_bw)
> +		return 1;
> +
> +	return 0;
> +
> +out_err:
> +	drm_dp_tunnel_set_io_error(tunnel);
> +
> +	return -EIO;
> +}
> +
> +/**
> + * drm_dp_tunnel_update_state - Update DP tunnel SW state with the HW state
> + * @tunnel: Tunnel object
> + *
> + * Update the SW state of @tunnel with the HW state.
> + *
> + * Returns 0 if the state has not changed, 1 if it has changed and got updated
> + * successfully and a negative error code otherwise.
> + */
> +int drm_dp_tunnel_update_state(struct drm_dp_tunnel *tunnel)
> +{
> +	struct drm_dp_tunnel_regs regs;
> +	bool changed = false;
> +	int ret = check_tunnel(tunnel);
> +
> +	if (ret < 0)
> +		return ret;
> +
> +	ret = check_and_clear_status_change(tunnel);
> +	if (ret < 0)
> +		goto out;
> +
> +	if (!ret)
> +		return 0;
> +
> +	ret = read_and_verify_tunnel_regs(tunnel, &regs, 0);
> +	if (ret)
> +		goto out;
> +
> +	if (update_dprx_caps(tunnel, &regs))
> +		changed = true;
> +
> +	ret = update_group_available_bw(tunnel, &regs);
> +	if (ret == 1)
> +		changed = true;
> +
> +out:
> +	tun_dbg_stat(tunnel, ret < 0 ? ret : 0,
> +		     "State update: Changed:%c DPRX:%dx%d Tunnel alloc:%d/%d Group alloc:%d/%d Mb/s",
> +		     yes_no_chr(changed),
> +		     tunnel->max_dprx_rate / 100, tunnel->max_dprx_lane_count,
> +		     DPTUN_BW_ARG(tunnel->allocated_bw),
> +		     DPTUN_BW_ARG(get_max_tunnel_bw(tunnel)),
> +		     DPTUN_BW_ARG(group_allocated_bw(tunnel->group)),
> +		     DPTUN_BW_ARG(tunnel->group->available_bw));
> +
> +	if (ret < 0)
> +		return ret;
> +
> +	if (changed)
> +		return 1;
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_update_state);
> +
> +/*
> + * Returns 0 if no re-probe is needed, 1 if a re-probe is needed,
> + * a negative error code otherwise.
> + */
> +int drm_dp_tunnel_handle_irq(struct drm_dp_tunnel_mgr *mgr, struct drm_dp_aux *aux)
> +{
> +	u8 val;
> +
> +	if (drm_dp_dpcd_readb(aux, DP_TUNNELING_STATUS, &val) < 0)
> +		return -EIO;
> +
> +	if (val & (DP_BW_REQUEST_SUCCEEDED | DP_BW_REQUEST_FAILED))
> +		wake_up_all(&mgr->bw_req_queue);
> +
> +	if (val & (DP_BW_ALLOCATION_CAPABILITY_CHANGED | DP_ESTIMATED_BW_CHANGED))
> +		return 1;
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_handle_irq);
> +
> +/**
> + * drm_dp_tunnel_max_dprx_rate - Query the maximum rate of the tunnel's DPRX
> + * @tunnel: Tunnel object
> + *
> + * The function is used to query the maximum link rate of the DPRX connected
> + * to @tunnel. Note that this rate will not be limited by the BW limit of the
> + * tunnel, as opposed to the standard and extended DP_MAX_LINK_RATE DPCD
> + * registers.
> + *
> + * Returns the maximum link rate in 10 kbit/s units.
> + */
> +int drm_dp_tunnel_max_dprx_rate(const struct drm_dp_tunnel *tunnel)
> +{
> +	return tunnel->max_dprx_rate;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_max_dprx_rate);
> +
> +/**
> + * drm_dp_tunnel_max_dprx_lane_count - Query the maximum lane count of the tunnel's DPRX
> + * @tunnel: Tunnel object
> + *
> + * The function is used to query the maximum lane count of the DPRX connected
> + * to @tunnel. Note that this lane count will not be limited by the BW limit of
> + * the tunnel, as opposed to the standard and extended DP_MAX_LANE_COUNT DPCD
> + * registers.
> + *
> + * Returns the maximum lane count.
> + */
> +int drm_dp_tunnel_max_dprx_lane_count(const struct drm_dp_tunnel *tunnel)
> +{
> +	return tunnel->max_dprx_lane_count;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_max_dprx_lane_count);
> +
> +/**
> + * drm_dp_tunnel_available_bw - Query the estimated total available BW of the tunnel
> + * @tunnel: Tunnel object
> + *
> + * This function is used to query the estimated total available BW of the
> + * tunnel. This includes the currently allocated and free BW for all the
> + * tunnels in @tunnel's group. The available BW is valid only after the BW
> + * allocation mode has been enabled for the tunnel and its state has been
> + * updated via drm_dp_tunnel_update_state().
> + *
> + * Returns the @tunnel group's estimated total available bandwidth in kB/s
> + * units, or -1 if the available BW isn't valid (the BW allocation mode is
> + * not enabled or the tunnel's state hasn't been updated).
> + */
> +int drm_dp_tunnel_available_bw(const struct drm_dp_tunnel *tunnel)
> +{
> +	return tunnel->group->available_bw;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_available_bw);
> +
> +static struct drm_dp_tunnel_group_state *
> +drm_dp_tunnel_atomic_get_group_state(struct drm_atomic_state *state,
> +				     const struct drm_dp_tunnel *tunnel)
> +{
> +	return (struct drm_dp_tunnel_group_state *)
> +		drm_atomic_get_private_obj_state(state,
> +						 &tunnel->group->base);
> +}
> +
> +static struct drm_dp_tunnel_state *
> +add_tunnel_state(struct drm_dp_tunnel_group_state *group_state,
> +		 struct drm_dp_tunnel *tunnel)
> +{
> +	struct drm_dp_tunnel_state *tunnel_state;
> +
> +	tun_dbg_atomic(tunnel,
> +		       "Adding state for tunnel %p to group state %p\n",
> +		       tunnel, group_state);
> +
> +	tunnel_state = kzalloc(sizeof(*tunnel_state), GFP_KERNEL);
> +	if (!tunnel_state)
> +		return NULL;
> +
> +	tunnel_state->group_state = group_state;
> +
> +	drm_dp_tunnel_ref_get(tunnel, &tunnel_state->tunnel_ref);
> +
> +	INIT_LIST_HEAD(&tunnel_state->node);
> +	list_add(&tunnel_state->node, &group_state->tunnel_states);
> +
> +	return tunnel_state;
> +}
> +
> +void drm_dp_tunnel_atomic_clear_state(struct drm_dp_tunnel_state *tunnel_state)
> +{
> +	tun_dbg_atomic(tunnel_state->tunnel_ref.tunnel,
> +		       "Clearing state for tunnel %p\n",
> +		       tunnel_state->tunnel_ref.tunnel);
> +
> +	list_del(&tunnel_state->node);
> +
> +	kfree(tunnel_state->stream_bw);
> +	drm_dp_tunnel_ref_put(&tunnel_state->tunnel_ref);
> +
> +	kfree(tunnel_state);
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_atomic_clear_state);

That looks like some kind of destructor so the function name doesn't
seem to fit.

Is there even any need to export that since it doesn't look like
any kind of high level thing, and it's called from a static function
below?

> +
> +static void clear_tunnel_group_state(struct drm_dp_tunnel_group_state *group_state)
> +{
> +	struct drm_dp_tunnel_state *tunnel_state;
> +	struct drm_dp_tunnel_state *tunnel_state_tmp;
> +
> +	for_each_tunnel_state_safe(group_state, tunnel_state, tunnel_state_tmp)
> +		drm_dp_tunnel_atomic_clear_state(tunnel_state);
> +}
> +
> +static struct drm_dp_tunnel_state *
> +get_tunnel_state(struct drm_dp_tunnel_group_state *group_state,
> +		 const struct drm_dp_tunnel *tunnel)
> +{
> +	struct drm_dp_tunnel_state *tunnel_state;
> +
> +	for_each_tunnel_state(group_state, tunnel_state)
> +		if (tunnel_state->tunnel_ref.tunnel == tunnel)
> +			return tunnel_state;
> +
> +	return NULL;
> +}
> +
> +static struct drm_dp_tunnel_state *
> +get_or_add_tunnel_state(struct drm_dp_tunnel_group_state *group_state,
> +			struct drm_dp_tunnel *tunnel)
> +{
> +	struct drm_dp_tunnel_state *tunnel_state;
> +
> +	tunnel_state = get_tunnel_state(group_state, tunnel);
> +	if (tunnel_state)
> +		return tunnel_state;
> +
> +	return add_tunnel_state(group_state, tunnel);
> +}
> +
> +static struct drm_private_state *
> +tunnel_group_duplicate_state(struct drm_private_obj *obj)
> +{
> +	struct drm_dp_tunnel_group_state *group_state = to_group_state(obj->state);
> +	struct drm_dp_tunnel_state *tunnel_state;
> +
> +	group_state = kzalloc(sizeof(*group_state), GFP_KERNEL);
> +	if (!group_state)
> +		return NULL;
> +
> +	INIT_LIST_HEAD(&group_state->tunnel_states);
> +
> +	__drm_atomic_helper_private_obj_duplicate_state(obj, &group_state->base);
> +
> +	for_each_tunnel_state(to_group_state(obj->state), tunnel_state) {
> +		struct drm_dp_tunnel_state *new_tunnel_state;
> +
> +		new_tunnel_state = get_or_add_tunnel_state(group_state,
> +							   tunnel_state->tunnel_ref.tunnel);
> +		if (!new_tunnel_state)
> +			goto out_free_state;
> +
> +		new_tunnel_state->stream_mask = tunnel_state->stream_mask;
> +		new_tunnel_state->stream_bw = kmemdup(tunnel_state->stream_bw,
> +						      sizeof(*tunnel_state->stream_bw) *
> +							hweight32(tunnel_state->stream_mask),
> +						      GFP_KERNEL);
> +
> +		if (!new_tunnel_state->stream_bw)
> +			goto out_free_state;
> +	}
> +
> +	return &group_state->base;
> +
> +out_free_state:
> +	clear_tunnel_group_state(group_state);
> +	kfree(group_state);
> +
> +	return NULL;
> +}
> +
> +static void tunnel_group_destroy_state(struct drm_private_obj *obj, struct drm_private_state *state)
> +{
> +	struct drm_dp_tunnel_group_state *group_state = to_group_state(state);
> +
> +	clear_tunnel_group_state(group_state);
> +	kfree(group_state);
> +}
> +
> +static const struct drm_private_state_funcs tunnel_group_funcs = {
> +	.atomic_duplicate_state = tunnel_group_duplicate_state,
> +	.atomic_destroy_state = tunnel_group_destroy_state,
> +};
> +
> +struct drm_dp_tunnel_state *
> +drm_dp_tunnel_atomic_get_state(struct drm_atomic_state *state,
> +			       struct drm_dp_tunnel *tunnel)
> +{
> +	struct drm_dp_tunnel_group_state *group_state =
> +		drm_dp_tunnel_atomic_get_group_state(state, tunnel);
> +	struct drm_dp_tunnel_state *tunnel_state;
> +
> +	if (IS_ERR(group_state))
> +		return ERR_CAST(group_state);
> +
> +	tunnel_state = get_or_add_tunnel_state(group_state, tunnel);
> +	if (!tunnel_state)
> +		return ERR_PTR(-ENOMEM);
> +
> +	return tunnel_state;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_state);
> +
> +struct drm_dp_tunnel_state *
> +drm_dp_tunnel_atomic_get_new_state(struct drm_atomic_state *state,
> +				   const struct drm_dp_tunnel *tunnel)
> +{
> +	struct drm_dp_tunnel_group_state *new_group_state;
> +	int i;
> +
> +	for_each_new_group_in_state(state, new_group_state, i)
> +		if (to_group(new_group_state->base.obj) == tunnel->group)
> +			return get_tunnel_state(new_group_state, tunnel);
> +
> +	return NULL;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_new_state);
> +
> +static bool init_group(struct drm_dp_tunnel_mgr *mgr, struct drm_dp_tunnel_group *group)
> +{
> +	struct drm_dp_tunnel_group_state *group_state = kzalloc(sizeof(*group_state), GFP_KERNEL);
> +
> +	if (!group_state)
> +		return false;
> +
> +	INIT_LIST_HEAD(&group_state->tunnel_states);
> +
> +	group->mgr = mgr;
> +	group->available_bw = -1;
> +	INIT_LIST_HEAD(&group->tunnels);
> +
> +	drm_atomic_private_obj_init(mgr->dev, &group->base, &group_state->base,
> +				    &tunnel_group_funcs);
> +
> +	return true;
> +}
> +
> +static void cleanup_group(struct drm_dp_tunnel_group *group)
> +{
> +	drm_atomic_private_obj_fini(&group->base);
> +}
> +
> +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> +static void check_unique_stream_ids(const struct drm_dp_tunnel_group_state *group_state)
> +{
> +	const struct drm_dp_tunnel_state *tunnel_state;
> +	u32 stream_mask = 0;
> +
> +	for_each_tunnel_state(group_state, tunnel_state) {
> +		drm_WARN(to_group(group_state->base.obj)->mgr->dev,
> +			 tunnel_state->stream_mask & stream_mask,
> +			 "[DPTUN %s]: conflicting stream IDs %x (IDs in other tunnels %x)\n",
> +			 tunnel_state->tunnel_ref.tunnel->name,
> +			 tunnel_state->stream_mask,
> +			 stream_mask);
> +
> +		stream_mask |= tunnel_state->stream_mask;
> +	}
> +}
> +#else
> +static void check_unique_stream_ids(const struct drm_dp_tunnel_group_state *group_state)
> +{
> +}
> +#endif
> +
> +static int stream_id_to_idx(u32 stream_mask, u8 stream_id)
> +{
> +	return hweight32(stream_mask & (BIT(stream_id) - 1));
> +}
> +
> +static int resize_bw_array(struct drm_dp_tunnel_state *tunnel_state,
> +			   unsigned long old_mask, unsigned long new_mask)
> +{
> +	unsigned long move_mask = old_mask & new_mask;
> +	int *new_bws = NULL;
> +	int id;
> +
> +	WARN_ON(!new_mask);
> +
> +	if (old_mask == new_mask)
> +		return 0;
> +
> +	new_bws = kcalloc(hweight32(new_mask), sizeof(*new_bws), GFP_KERNEL);
> +	if (!new_bws)
> +		return -ENOMEM;
> +
> +	for_each_set_bit(id, &move_mask, BITS_PER_TYPE(move_mask))
> +		new_bws[stream_id_to_idx(new_mask, id)] =
> +			tunnel_state->stream_bw[stream_id_to_idx(old_mask, id)];
> +
> +	kfree(tunnel_state->stream_bw);
> +	tunnel_state->stream_bw = new_bws;
> +	tunnel_state->stream_mask = new_mask;
> +
> +	return 0;
> +}
> +
> +static int set_stream_bw(struct drm_dp_tunnel_state *tunnel_state,
> +			 u8 stream_id, int bw)
> +{
> +	int err;
> +
> +	err = resize_bw_array(tunnel_state,
> +			      tunnel_state->stream_mask,
> +			      tunnel_state->stream_mask | BIT(stream_id));
> +	if (err)
> +		return err;
> +
> +	tunnel_state->stream_bw[stream_id_to_idx(tunnel_state->stream_mask, stream_id)] = bw;
> +
> +	return 0;
> +}
> +
> +static int clear_stream_bw(struct drm_dp_tunnel_state *tunnel_state,
> +			   u8 stream_id)
> +{
> +	if (!(tunnel_state->stream_mask & ~BIT(stream_id))) {
> +		drm_dp_tunnel_atomic_clear_state(tunnel_state);
> +		return 0;
> +	}
> +
> +	return resize_bw_array(tunnel_state,
> +			       tunnel_state->stream_mask,
> +			       tunnel_state->stream_mask & ~BIT(stream_id));
> +}
> +
> +int drm_dp_tunnel_atomic_set_stream_bw(struct drm_atomic_state *state,
> +					 struct drm_dp_tunnel *tunnel,
> +					 u8 stream_id, int bw)
> +{
> +	struct drm_dp_tunnel_group_state *new_group_state =
> +		drm_dp_tunnel_atomic_get_group_state(state, tunnel);
> +	struct drm_dp_tunnel_state *tunnel_state;
> +	int err;
> +
> +	if (drm_WARN_ON(tunnel->group->mgr->dev,
> +			stream_id > BITS_PER_TYPE(tunnel_state->stream_mask)))
> +		return -EINVAL;
> +
> +	tun_dbg(tunnel,
> +		"Setting %d Mb/s for stream %d\n",
> +		DPTUN_BW_ARG(bw), stream_id);
> +
> +	if (bw == 0) {
> +		tunnel_state = get_tunnel_state(new_group_state, tunnel);
> +		if (!tunnel_state)
> +			return 0;
> +
> +		return clear_stream_bw(tunnel_state, stream_id);
> +	}
> +
> +	tunnel_state = get_or_add_tunnel_state(new_group_state, tunnel);
> +	if (drm_WARN_ON(state->dev, !tunnel_state))
> +		return -EINVAL;
> +
> +	err = set_stream_bw(tunnel_state, stream_id, bw);
> +	if (err)
> +		return err;
> +
> +	check_unique_stream_ids(new_group_state);
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_atomic_set_stream_bw);
> +
> +int drm_dp_tunnel_atomic_get_tunnel_bw(const struct drm_dp_tunnel_state *tunnel_state)
> +{
> +	int tunnel_bw = 0;
> +	int i;
> +
> +	for (i = 0; i < hweight32(tunnel_state->stream_mask); i++)
> +		tunnel_bw += tunnel_state->stream_bw[i];
> +
> +	return tunnel_bw;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_tunnel_bw);
> +
> +int drm_dp_tunnel_atomic_get_group_streams_in_state(struct drm_atomic_state *state,
> +						    const struct drm_dp_tunnel *tunnel,
> +						    u32 *stream_mask)
> +{
> +	struct drm_dp_tunnel_group_state *group_state =
> +		drm_dp_tunnel_atomic_get_group_state(state, tunnel);
> +	struct drm_dp_tunnel_state *tunnel_state;
> +
> +	if (IS_ERR(group_state))
> +		return PTR_ERR(group_state);
> +
> +	*stream_mask = 0;
> +	for_each_tunnel_state(group_state, tunnel_state)
> +		*stream_mask |= tunnel_state->stream_mask;
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_group_streams_in_state);
> +
> +static int
> +drm_dp_tunnel_atomic_check_group_bw(struct drm_dp_tunnel_group_state *new_group_state,
> +				    u32 *failed_stream_mask)
> +{
> +	struct drm_dp_tunnel_group *group = to_group(new_group_state->base.obj);
> +	struct drm_dp_tunnel_state *new_tunnel_state;
> +	u32 group_stream_mask = 0;
> +	int group_bw = 0;
> +
> +	for_each_tunnel_state(new_group_state, new_tunnel_state) {
> +		struct drm_dp_tunnel *tunnel = new_tunnel_state->tunnel_ref.tunnel;
> +		int max_dprx_bw = get_max_dprx_bw(tunnel);
> +		int tunnel_bw = drm_dp_tunnel_atomic_get_tunnel_bw(new_tunnel_state);
> +
> +		tun_dbg(tunnel,
> +			"%sRequired %d/%d Mb/s total for tunnel.\n",
> +			tunnel_bw > max_dprx_bw ? "Not enough BW: " : "",
> +			DPTUN_BW_ARG(tunnel_bw),
> +			DPTUN_BW_ARG(max_dprx_bw));
> +
> +		if (tunnel_bw > max_dprx_bw) {
> +			*failed_stream_mask = new_tunnel_state->stream_mask;
> +			return -ENOSPC;
> +		}
> +
> +		group_bw += min(roundup(tunnel_bw, tunnel->bw_granularity),
> +				max_dprx_bw);
> +		group_stream_mask |= new_tunnel_state->stream_mask;
> +	}
> +
> +	tun_grp_dbg(group,
> +		    "%sRequired %d/%d Mb/s total for tunnel group.\n",
> +		    group_bw > group->available_bw ? "Not enough BW: " : "",
> +		    DPTUN_BW_ARG(group_bw),
> +		    DPTUN_BW_ARG(group->available_bw));
> +
> +	if (group_bw > group->available_bw) {
> +		*failed_stream_mask = group_stream_mask;
> +		return -ENOSPC;
> +	}
> +
> +	return 0;
> +}
> +
> +int drm_dp_tunnel_atomic_check_stream_bws(struct drm_atomic_state *state,
> +					  u32 *failed_stream_mask)
> +{
> +	struct drm_dp_tunnel_group_state *new_group_state;
> +	int i;
> +
> +	for_each_new_group_in_state(state, new_group_state, i) {
> +		int ret;
> +
> +		ret = drm_dp_tunnel_atomic_check_group_bw(new_group_state,
> +							  failed_stream_mask);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_atomic_check_stream_bws);
> +
> +static void destroy_mgr(struct drm_dp_tunnel_mgr *mgr)
> +{
> +	int i;
> +
> +	for (i = 0; i < mgr->group_count; i++) {
> +		cleanup_group(&mgr->groups[i]);
> +		drm_WARN_ON(mgr->dev, !list_empty(&mgr->groups[i].tunnels));
> +	}
> +
> +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> +	ref_tracker_dir_exit(&mgr->ref_tracker);
> +#endif
> +
> +	kfree(mgr->groups);
> +	kfree(mgr);
> +}
> +
> +/**
> + * drm_dp_tunnel_mgr_create - Create a DP tunnel manager
> + * @dev: DRM device object
> + * @max_group_count: Maximum number of tunnel groups
> + *
> + * Creates a DP tunnel manager.
> + *
> + * Returns a pointer to the tunnel manager if created successfully or NULL in
> + * case of an error.
> + */
> +struct drm_dp_tunnel_mgr *
> +drm_dp_tunnel_mgr_create(struct drm_device *dev, int max_group_count)
> +{
> +	struct drm_dp_tunnel_mgr *mgr = kzalloc(sizeof(*mgr), GFP_KERNEL);

I dislike it when functions that can fail or that have side effects
are called from the variable declaration block. There's quite a bit
of that in this patch. IMO it's far too easy to overlook such function
calls.

> +	int i;
> +

i.e. the kzalloc() should be here IMO.
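Something along these lines, just as a sketch of the suggested ordering:

	struct drm_dp_tunnel_mgr *mgr;
	int i;

	mgr = kzalloc(sizeof(*mgr), GFP_KERNEL);
	if (!mgr)
		return NULL;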

> +	if (!mgr)
> +		return NULL;
> +
> +	mgr->dev = dev;
> +	init_waitqueue_head(&mgr->bw_req_queue);
> +
> +	mgr->groups = kcalloc(max_group_count, sizeof(*mgr->groups), GFP_KERNEL);
> +	if (!mgr->groups) {
> +		kfree(mgr);
> +
> +		return NULL;
> +	}
> +
> +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> +	ref_tracker_dir_init(&mgr->ref_tracker, 16, "dptun");
> +#endif
> +
> +	for (i = 0; i < max_group_count; i++) {
> +		if (!init_group(mgr, &mgr->groups[i])) {
> +			destroy_mgr(mgr);
> +
> +			return NULL;
> +		}
> +
> +		mgr->group_count++;
> +	}
> +
> +	return mgr;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_mgr_create);
> +
> +/**
> + * drm_dp_tunnel_mgr_destroy - Destroy DP tunnel manager
> + * @mgr: Tunnel manager object
> + *
> + * Destroy the tunnel manager.
> + */
> +void drm_dp_tunnel_mgr_destroy(struct drm_dp_tunnel_mgr *mgr)
> +{
> +	destroy_mgr(mgr);
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_mgr_destroy);
> diff --git a/include/drm/display/drm_dp.h b/include/drm/display/drm_dp.h
> index 281afff6ee4e5..8bfd5d007be8d 100644
> --- a/include/drm/display/drm_dp.h
> +++ b/include/drm/display/drm_dp.h
> @@ -1382,6 +1382,66 @@
>  #define DP_HDCP_2_2_REG_STREAM_TYPE_OFFSET	0x69494
>  #define DP_HDCP_2_2_REG_DBG_OFFSET		0x69518
>  
> +/* DP-tunneling */
> +#define DP_TUNNELING_OUI				0xe0000
> +#define  DP_TUNNELING_OUI_BYTES				3
> +
> +#define DP_TUNNELING_DEV_ID				0xe0003
> +#define  DP_TUNNELING_DEV_ID_BYTES			6
> +
> +#define DP_TUNNELING_HW_REV				0xe0009
> +#define  DP_TUNNELING_HW_REV_MAJOR_SHIFT		4
> +#define  DP_TUNNELING_HW_REV_MAJOR_MASK			(0xf << DP_TUNNELING_HW_REV_MAJOR_SHIFT)
> +#define  DP_TUNNELING_HW_REV_MINOR_SHIFT		0
> +#define  DP_TUNNELING_HW_REV_MINOR_MASK			(0xf << DP_TUNNELING_HW_REV_MINOR_SHIFT)
> +
> +#define DP_TUNNELING_SW_REV_MAJOR			0xe000a
> +#define DP_TUNNELING_SW_REV_MINOR			0xe000b
> +
> +#define DP_TUNNELING_CAPABILITIES			0xe000d
> +#define  DP_IN_BW_ALLOCATION_MODE_SUPPORT		(1 << 7)
> +#define  DP_PANEL_REPLAY_OPTIMIZATION_SUPPORT		(1 << 6)
> +#define  DP_TUNNELING_SUPPORT				(1 << 0)
> +
> +#define DP_IN_ADAPTER_INFO				0xe000e
> +#define  DP_IN_ADAPTER_NUMBER_BITS			7
> +#define  DP_IN_ADAPTER_NUMBER_MASK			((1 << DP_IN_ADAPTER_NUMBER_BITS) - 1)
> +
> +#define DP_USB4_DRIVER_ID				0xe000f
> +#define  DP_USB4_DRIVER_ID_BITS				4
> +#define  DP_USB4_DRIVER_ID_MASK				((1 << DP_USB4_DRIVER_ID_BITS) - 1)
> +
> +#define DP_USB4_DRIVER_BW_CAPABILITY			0xe0020
> +#define  DP_USB4_DRIVER_BW_ALLOCATION_MODE_SUPPORT	(1 << 7)
> +
> +#define DP_IN_ADAPTER_TUNNEL_INFORMATION		0xe0021
> +#define  DP_GROUP_ID_BITS				3
> +#define  DP_GROUP_ID_MASK				((1 << DP_GROUP_ID_BITS) - 1)
> +
> +#define DP_BW_GRANULARITY				0xe0022
> +#define  DP_BW_GRANULARITY_MASK				0x3
> +
> +#define DP_ESTIMATED_BW					0xe0023
> +#define DP_ALLOCATED_BW					0xe0024
> +
> +#define DP_TUNNELING_STATUS				0xe0025
> +#define  DP_BW_ALLOCATION_CAPABILITY_CHANGED		(1 << 3)
> +#define  DP_ESTIMATED_BW_CHANGED			(1 << 2)
> +#define  DP_BW_REQUEST_SUCCEEDED			(1 << 1)
> +#define  DP_BW_REQUEST_FAILED				(1 << 0)
> +
> +#define DP_TUNNELING_MAX_LINK_RATE			0xe0028
> +
> +#define DP_TUNNELING_MAX_LANE_COUNT			0xe0029
> +#define  DP_TUNNELING_MAX_LANE_COUNT_MASK		0x1f
> +
> +#define DP_DPTX_BW_ALLOCATION_MODE_CONTROL		0xe0030
> +#define  DP_DISPLAY_DRIVER_BW_ALLOCATION_MODE_ENABLE	(1 << 7)
> +#define  DP_UNMASK_BW_ALLOCATION_IRQ			(1 << 6)
> +
> +#define DP_REQUEST_BW					0xe0031
> +#define  MAX_DP_REQUEST_BW				255
> +
>  /* LTTPR: Link Training (LT)-tunable PHY Repeaters */
>  #define DP_LT_TUNABLE_PHY_REPEATER_FIELD_DATA_STRUCTURE_REV 0xf0000 /* 1.3 */
>  #define DP_MAX_LINK_RATE_PHY_REPEATER			    0xf0001 /* 1.4a */
> diff --git a/include/drm/display/drm_dp_tunnel.h b/include/drm/display/drm_dp_tunnel.h
> new file mode 100644
> index 0000000000000..f6449b1b4e6e9
> --- /dev/null
> +++ b/include/drm/display/drm_dp_tunnel.h
> @@ -0,0 +1,270 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright © 2023 Intel Corporation
> + */
> +
> +#ifndef __DRM_DP_TUNNEL_H__
> +#define __DRM_DP_TUNNEL_H__
> +
> +#include <linux/err.h>
> +#include <linux/errno.h>
> +#include <linux/types.h>
> +
> +struct drm_dp_aux;
> +
> +struct drm_device;
> +
> +struct drm_atomic_state;
> +struct drm_dp_tunnel_mgr;
> +struct drm_dp_tunnel_state;
> +
> +struct ref_tracker;
> +
> +struct drm_dp_tunnel_ref {
> +	struct drm_dp_tunnel *tunnel;
> +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> +	struct ref_tracker *tracker;
> +#endif
> +};
> +
> +#ifdef CONFIG_DRM_DISPLAY_DP_TUNNEL
> +
> +struct drm_dp_tunnel *
> +drm_dp_tunnel_get_untracked(struct drm_dp_tunnel *tunnel);
> +void drm_dp_tunnel_put_untracked(struct drm_dp_tunnel *tunnel);
> +
> +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> +struct drm_dp_tunnel *
> +drm_dp_tunnel_get(struct drm_dp_tunnel *tunnel, struct ref_tracker **tracker);
> +
> +void
> +drm_dp_tunnel_put(struct drm_dp_tunnel *tunnel, struct ref_tracker **tracker);
> +#else
> +#define drm_dp_tunnel_get(tunnel, tracker) \
> +	drm_dp_tunnel_get_untracked(tunnel)
> +
> +#define drm_dp_tunnel_put(tunnel, tracker) \
> +	drm_dp_tunnel_put_untracked(tunnel)
> +
> +#endif
> +
> +static inline void drm_dp_tunnel_ref_get(struct drm_dp_tunnel *tunnel,
> +					   struct drm_dp_tunnel_ref *tunnel_ref)
> +{
> +	tunnel_ref->tunnel = drm_dp_tunnel_get(tunnel, &tunnel_ref->tracker);
> +}
> +
> +static inline void drm_dp_tunnel_ref_put(struct drm_dp_tunnel_ref *tunnel_ref)
> +{
> +	drm_dp_tunnel_put(tunnel_ref->tunnel, &tunnel_ref->tracker);
> +}
> +
> +struct drm_dp_tunnel *
> +drm_dp_tunnel_detect(struct drm_dp_tunnel_mgr *mgr,
> +		     struct drm_dp_aux *aux);
> +int drm_dp_tunnel_destroy(struct drm_dp_tunnel *tunnel);
> +
> +int drm_dp_tunnel_enable_bw_alloc(struct drm_dp_tunnel *tunnel);
> +int drm_dp_tunnel_disable_bw_alloc(struct drm_dp_tunnel *tunnel);
> +bool drm_dp_tunnel_bw_alloc_is_enabled(const struct drm_dp_tunnel *tunnel);
> +int drm_dp_tunnel_alloc_bw(struct drm_dp_tunnel *tunnel, int bw);
> +int drm_dp_tunnel_check_state(struct drm_dp_tunnel *tunnel);
> +int drm_dp_tunnel_update_state(struct drm_dp_tunnel *tunnel);
> +
> +void drm_dp_tunnel_set_io_error(struct drm_dp_tunnel *tunnel);
> +
> +int drm_dp_tunnel_handle_irq(struct drm_dp_tunnel_mgr *mgr,
> +			     struct drm_dp_aux *aux);
> +
> +int drm_dp_tunnel_max_dprx_rate(const struct drm_dp_tunnel *tunnel);
> +int drm_dp_tunnel_max_dprx_lane_count(const struct drm_dp_tunnel *tunnel);
> +int drm_dp_tunnel_available_bw(const struct drm_dp_tunnel *tunnel);
> +
> +const char *drm_dp_tunnel_name(const struct drm_dp_tunnel *tunnel);
> +
> +struct drm_dp_tunnel_state *
> +drm_dp_tunnel_atomic_get_state(struct drm_atomic_state *state,
> +			       struct drm_dp_tunnel *tunnel);
> +struct drm_dp_tunnel_state *
> +drm_dp_tunnel_atomic_get_new_state(struct drm_atomic_state *state,
> +				   const struct drm_dp_tunnel *tunnel);
> +
> +void drm_dp_tunnel_atomic_clear_state(struct drm_dp_tunnel_state *tunnel_state);
> +
> +int drm_dp_tunnel_atomic_set_stream_bw(struct drm_atomic_state *state,
> +				       struct drm_dp_tunnel *tunnel,
> +				       u8 stream_id, int bw);
> +int drm_dp_tunnel_atomic_get_group_streams_in_state(struct drm_atomic_state *state,
> +						    const struct drm_dp_tunnel *tunnel,
> +						    u32 *stream_mask);
> +
> +int drm_dp_tunnel_atomic_check_stream_bws(struct drm_atomic_state *state,
> +					  u32 *failed_stream_mask);
> +
> +int drm_dp_tunnel_atomic_get_tunnel_bw(const struct drm_dp_tunnel_state *tunnel_state);
> +
> +struct drm_dp_tunnel_mgr *
> +drm_dp_tunnel_mgr_create(struct drm_device *dev, int max_group_count);
> +void drm_dp_tunnel_mgr_destroy(struct drm_dp_tunnel_mgr *mgr);
> +
> +#else
> +
> +static inline struct drm_dp_tunnel *
> +drm_dp_tunnel_get_untracked(struct drm_dp_tunnel *tunnel)
> +{
> +	return NULL;
> +}
> +
> +static inline void
> +drm_dp_tunnel_put_untracked(struct drm_dp_tunnel *tunnel) {}
> +
> +static inline struct drm_dp_tunnel *
> +drm_dp_tunnel_get(struct drm_dp_tunnel *tunnel, struct ref_tracker **tracker)
> +{
> +	return NULL;
> +}
> +
> +static inline void
> +drm_dp_tunnel_put(struct drm_dp_tunnel *tunnel, struct ref_tracker **tracker) {}
> +
> +static inline void drm_dp_tunnel_ref_get(struct drm_dp_tunnel *tunnel,
> +					   struct drm_dp_tunnel_ref *tunnel_ref) {}
> +static inline void drm_dp_tunnel_ref_put(struct drm_dp_tunnel_ref *tunnel_ref) {}
> +
> +static inline struct drm_dp_tunnel *
> +drm_dp_tunnel_detect(struct drm_dp_tunnel_mgr *mgr,
> +		     struct drm_dp_aux *aux)
> +{
> +	return ERR_PTR(-EOPNOTSUPP);
> +}
> +
> +static inline int
> +drm_dp_tunnel_destroy(struct drm_dp_tunnel *tunnel)
> +{
> +	return 0;
> +}
> +
> +static inline int drm_dp_tunnel_enable_bw_alloc(struct drm_dp_tunnel *tunnel)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
> +static inline int drm_dp_tunnel_disable_bw_alloc(struct drm_dp_tunnel *tunnel)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
> +static inline bool drm_dp_tunnel_bw_alloc_is_enabled(const struct drm_dp_tunnel *tunnel)
> +{
> +	return false;
> +}
> +
> +static inline int
> +drm_dp_tunnel_alloc_bw(struct drm_dp_tunnel *tunnel, int bw)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
> +static inline int
> +drm_dp_tunnel_check_state(struct drm_dp_tunnel *tunnel)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
> +static inline int
> +drm_dp_tunnel_update_state(struct drm_dp_tunnel *tunnel)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
> +static inline void drm_dp_tunnel_set_io_error(struct drm_dp_tunnel *tunnel) {}
> +static inline int
> +drm_dp_tunnel_handle_irq(struct drm_dp_tunnel_mgr *mgr,
> +			 struct drm_dp_aux *aux)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
> +static inline int
> +drm_dp_tunnel_max_dprx_rate(const struct drm_dp_tunnel *tunnel)
> +{
> +	return 0;
> +}
> +
> +static inline int
> +drm_dp_tunnel_max_dprx_lane_count(const struct drm_dp_tunnel *tunnel)
> +{
> +	return 0;
> +}
> +
> +static inline int
> +drm_dp_tunnel_available_bw(const struct drm_dp_tunnel *tunnel)
> +{
> +	return -1;
> +}
> +
> +static inline const char *
> +drm_dp_tunnel_name(const struct drm_dp_tunnel *tunnel)
> +{
> +	return NULL;
> +}
> +
> +static inline struct drm_dp_tunnel_state *
> +drm_dp_tunnel_atomic_get_state(struct drm_atomic_state *state,
> +			       struct drm_dp_tunnel *tunnel)
> +{
> +	return ERR_PTR(-EOPNOTSUPP);
> +}
> +
> +static inline struct drm_dp_tunnel_state *
> +drm_dp_tunnel_atomic_get_new_state(struct drm_atomic_state *state,
> +				   const struct drm_dp_tunnel *tunnel)
> +{
> +	return ERR_PTR(-EOPNOTSUPP);
> +}
> +
> +static inline void
> +drm_dp_tunnel_atomic_clear_state(struct drm_dp_tunnel_state *tunnel_state) {}
> +
> +static inline int
> +drm_dp_tunnel_atomic_set_stream_bw(struct drm_atomic_state *state,
> +				   struct drm_dp_tunnel *tunnel,
> +				   u8 stream_id, int bw)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
> +static inline int
> +drm_dp_tunnel_atomic_get_group_streams_in_state(struct drm_atomic_state *state,
> +						const struct drm_dp_tunnel *tunnel,
> +						u32 *stream_mask)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
> +static inline int
> +drm_dp_tunnel_atomic_check_stream_bws(struct drm_atomic_state *state,
> +				      u32 *failed_stream_mask)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
> +static inline int
> +drm_dp_tunnel_atomic_get_tunnel_bw(const struct drm_dp_tunnel_state *tunnel_state)
> +{
> +	return 0;
> +}
> +
> +static inline struct drm_dp_tunnel_mgr *
> +drm_dp_tunnel_mgr_create(struct drm_device *dev, int max_group_count)
> +{
> +	return ERR_PTR(-EOPNOTSUPP);
> +}
> +
> +static inline
> +void drm_dp_tunnel_mgr_destroy(struct drm_dp_tunnel_mgr *mgr) {}
> +
> +
> +#endif /* CONFIG_DRM_DISPLAY_DP_TUNNEL */
> +
> +#endif /* __DRM_DP_TUNNEL_H__ */
> -- 
> 2.39.2

-- 
Ville Syrjälä
Intel

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 18/19] drm/i915/dp: Suspend/resume DP tunnels
  2024-01-23 10:28 ` [PATCH 18/19] drm/i915/dp: Suspend/resume DP tunnels Imre Deak
@ 2024-01-31 16:18   ` Ville Syrjälä
  2024-01-31 16:59     ` Imre Deak
  0 siblings, 1 reply; 61+ messages in thread
From: Ville Syrjälä @ 2024-01-31 16:18 UTC (permalink / raw)
  To: Imre Deak; +Cc: intel-gfx, dri-devel

On Tue, Jan 23, 2024 at 12:28:49PM +0200, Imre Deak wrote:
> Suspend and resume DP tunnels during system suspend/resume, disabling
> the BW allocation mode during suspend, re-enabling it after resume. This
> reflects the link's BW management component (Thunderbolt CM) disabling
> BWA during suspend. Before any BW requests the driver must read the
> sink's DPRX capabilities (the BW manager requires this information and
> snoops for it on AUX), so ensure this read takes place.

Isn't that going to screw up the age old problem of .compute_config()
potentially failing during the resume modeset if we no longer have
the same amount of bandwidth available as we had before suspend?
So far we've been getting away with this exactly by not updating 
the dpcd stuff before the modeset during resume.

> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 16 +++++++++++-----
>  1 file changed, 11 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 8ebfb039000f6..bc138a54f8d7b 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -36,6 +36,7 @@
>  #include <asm/byteorder.h>
>  
>  #include <drm/display/drm_dp_helper.h>
> +#include <drm/display/drm_dp_tunnel.h>
>  #include <drm/display/drm_dsc_helper.h>
>  #include <drm/display/drm_hdmi_helper.h>
>  #include <drm/drm_atomic_helper.h>
> @@ -3320,18 +3321,21 @@ void intel_dp_sync_state(struct intel_encoder *encoder,
>  			 const struct intel_crtc_state *crtc_state)
>  {
>  	struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
> -
> -	if (!crtc_state)
> -		return;
> +	bool dpcd_updated = false;
>  
>  	/*
>  	 * Don't clobber DPCD if it's been already read out during output
>  	 * setup (eDP) or detect.
>  	 */
> -	if (intel_dp->dpcd[DP_DPCD_REV] == 0)
> +	if (crtc_state && intel_dp->dpcd[DP_DPCD_REV] == 0) {
>  		intel_dp_get_dpcd(intel_dp);
> +		dpcd_updated = true;
> +	}
>  
> -	intel_dp_reset_max_link_params(intel_dp);
> +	intel_dp_tunnel_resume(intel_dp, dpcd_updated);
> +
> +	if (crtc_state)
> +		intel_dp_reset_max_link_params(intel_dp);
>  }
>  
>  bool intel_dp_initial_fastset_check(struct intel_encoder *encoder,
> @@ -5973,6 +5977,8 @@ void intel_dp_encoder_suspend(struct intel_encoder *intel_encoder)
>  	struct intel_dp *intel_dp = enc_to_intel_dp(intel_encoder);
>  
>  	intel_pps_vdd_off_sync(intel_dp);
> +
> +	intel_dp_tunnel_suspend(intel_dp);
>  }
>  
>  void intel_dp_encoder_shutdown(struct intel_encoder *intel_encoder)
> -- 
> 2.39.2

-- 
Ville Syrjälä
Intel

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 18/19] drm/i915/dp: Suspend/resume DP tunnels
  2024-01-31 16:18   ` Ville Syrjälä
@ 2024-01-31 16:59     ` Imre Deak
  0 siblings, 0 replies; 61+ messages in thread
From: Imre Deak @ 2024-01-31 16:59 UTC (permalink / raw)
  To: Ville Syrjälä; +Cc: intel-gfx, dri-devel

On Wed, Jan 31, 2024 at 06:18:22PM +0200, Ville Syrjälä wrote:
> On Tue, Jan 23, 2024 at 12:28:49PM +0200, Imre Deak wrote:
> > Suspend and resume DP tunnels during system suspend/resume, disabling
> > the BW allocation mode during suspend, re-enabling it after resume. This
> > reflects the link's BW management component (Thunderbolt CM) disabling
> > BWA during suspend. Before any BW requests the driver must read the
> > sink's DPRX capabilities (the BW manager requires this information and
> > snoops for it on AUX), so ensure this read takes place.
> 
> Isn't that going to screw up the age old problem of .compute_config()
> potentially failing during the resume modeset if we no longer have
> the same amount of bandwidth available as we had before suspend?
> So far we've been getting away with this exactly by not updating 
> the dpcd stuff before the modeset during resume.

Right, in the case where this would be a problem (so not counting the
case where the caps haven't been read out yet and we update
intel_dp->dpcd here), the caps in intel_dp->dpcd will be preserved
rather than actually updated with the read-out values; see
intel_dp_tunnel_resume() in patch 11.

The same goes for the tunnel (group) BW: it will not be updated during
resume (by way of the connector/tunnel detection being blocked during
the restore modeset), so the restore modeset should see the same amount
of BW as there was during suspend.
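
Just to illustrate the idea (the actual logic lives in
intel_dp_tunnel_resume() in patch 11; the helper name below is made up
for the example, the rest only uses what is visible in the hunk above),
a minimal sketch of a resume path that does the DPRX read-out only for
the CM's benefit and leaves the cached caps alone:

static void example_dp_tunnel_resume(struct intel_dp *intel_dp, bool dpcd_updated)
{
	u8 fresh_dpcd[DP_RECEIVER_CAP_SIZE];

	/*
	 * Illustrative only: if sync_state didn't just populate
	 * intel_dp->dpcd, read the caps out anyway so the BW manager can
	 * snoop them on AUX, but keep the result in a local buffer.
	 * intel_dp->dpcd - and with it the link parameters seen by the
	 * restore modeset - stays as it was before suspend.
	 */
	if (!dpcd_updated)
		drm_dp_read_dpcd_caps(&intel_dp->aux, fresh_dpcd);
}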

> 
> > 
> > Signed-off-by: Imre Deak <imre.deak@intel.com>
> > ---
> >  drivers/gpu/drm/i915/display/intel_dp.c | 16 +++++++++++-----
> >  1 file changed, 11 insertions(+), 5 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> > index 8ebfb039000f6..bc138a54f8d7b 100644
> > --- a/drivers/gpu/drm/i915/display/intel_dp.c
> > +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> > @@ -36,6 +36,7 @@
> >  #include <asm/byteorder.h>
> >  
> >  #include <drm/display/drm_dp_helper.h>
> > +#include <drm/display/drm_dp_tunnel.h>
> >  #include <drm/display/drm_dsc_helper.h>
> >  #include <drm/display/drm_hdmi_helper.h>
> >  #include <drm/drm_atomic_helper.h>
> > @@ -3320,18 +3321,21 @@ void intel_dp_sync_state(struct intel_encoder *encoder,
> >  			 const struct intel_crtc_state *crtc_state)
> >  {
> >  	struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
> > -
> > -	if (!crtc_state)
> > -		return;
> > +	bool dpcd_updated = false;
> >  
> >  	/*
> >  	 * Don't clobber DPCD if it's been already read out during output
> >  	 * setup (eDP) or detect.
> >  	 */
> > -	if (intel_dp->dpcd[DP_DPCD_REV] == 0)
> > +	if (crtc_state && intel_dp->dpcd[DP_DPCD_REV] == 0) {
> >  		intel_dp_get_dpcd(intel_dp);
> > +		dpcd_updated = true;
> > +	}
> >  
> > -	intel_dp_reset_max_link_params(intel_dp);
> > +	intel_dp_tunnel_resume(intel_dp, dpcd_updated);
> > +
> > +	if (crtc_state)
> > +		intel_dp_reset_max_link_params(intel_dp);
> >  }
> >  
> >  bool intel_dp_initial_fastset_check(struct intel_encoder *encoder,
> > @@ -5973,6 +5977,8 @@ void intel_dp_encoder_suspend(struct intel_encoder *intel_encoder)
> >  	struct intel_dp *intel_dp = enc_to_intel_dp(intel_encoder);
> >  
> >  	intel_pps_vdd_off_sync(intel_dp);
> > +
> > +	intel_dp_tunnel_suspend(intel_dp);
> >  }
> >  
> >  void intel_dp_encoder_shutdown(struct intel_encoder *intel_encoder)
> > -- 
> > 2.39.2
> 
> -- 
> Ville Syrjälä
> Intel

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 02/19] drm/dp: Add support for DP tunneling
  2024-01-31 16:09   ` Ville Syrjälä
@ 2024-01-31 18:49     ` Imre Deak
  2024-02-05 16:13       ` Ville Syrjälä
  0 siblings, 1 reply; 61+ messages in thread
From: Imre Deak @ 2024-01-31 18:49 UTC (permalink / raw)
  To: Ville Syrjälä; +Cc: intel-gfx, dri-devel

On Wed, Jan 31, 2024 at 06:09:04PM +0200, Ville Syrjälä wrote:
> On Tue, Jan 23, 2024 at 12:28:33PM +0200, Imre Deak wrote:
> > Add support for Display Port DP tunneling. For now this includes the
> > support for Bandwidth Allocation Mode, leaving adding Panel Replay
> > support for later.
> > 
> > BWA allows using displays that share the same (Thunderbolt) link with
> > their maximum resolution. Atm, this may not be possible due to the
> > coarse granularity of partitioning the link BW among the displays on the
> > link: the BW allocation policy is in a SW/FW/HW component on the link
> > (on Thunderbolt it's the SW or FW Connection Manager), independent of
> > the driver. This policy will set the DPRX maximum rate and lane count
> > DPCD registers the GFX driver will see (0x00000, 0x00001, 0x02200,
> > 0x02201) based on the available link BW.
> > 
> > The granularity of the current BW allocation policy is coarse, based on
> > the required link rate in the 1.62Gb/s..8.1Gb/s range, and it may prevent
> > using higher resolutions altogether: the display connected first will
> > get a share of the link BW which corresponds to its full DPRX capability
> > (regardless of the actual mode it uses). A subsequent display connected
> > will only get the remaining BW, which could be well below its full
> > capability.
> > 
> > BWA solves the above coarse granularity (reducing it to a 250Mb/s..1Gb/s
> > range) and first-come/first-served issues by letting the driver request
> > the BW for each display on a link which reflects the actual modes the
> > displays use.
> > 
> > This patch adds the DRM core helper functions, while a follow-up change
> > in the patchset takes them into use in the i915 driver.
> > 
> > Signed-off-by: Imre Deak <imre.deak@intel.com>
> > ---
> >  drivers/gpu/drm/display/Kconfig         |   17 +
> >  drivers/gpu/drm/display/Makefile        |    2 +
> >  drivers/gpu/drm/display/drm_dp_tunnel.c | 1715 +++++++++++++++++++++++
> >  include/drm/display/drm_dp.h            |   60 +
> >  include/drm/display/drm_dp_tunnel.h     |  270 ++++
> >  5 files changed, 2064 insertions(+)
> >  create mode 100644 drivers/gpu/drm/display/drm_dp_tunnel.c
> >  create mode 100644 include/drm/display/drm_dp_tunnel.h
> > 
> > diff --git a/drivers/gpu/drm/display/Kconfig b/drivers/gpu/drm/display/Kconfig
> > index 09712b88a5b83..b024a84b94c1c 100644
> > --- a/drivers/gpu/drm/display/Kconfig
> > +++ b/drivers/gpu/drm/display/Kconfig
> > @@ -17,6 +17,23 @@ config DRM_DISPLAY_DP_HELPER
> >  	help
> >  	  DRM display helpers for DisplayPort.
> >  
> > +config DRM_DISPLAY_DP_TUNNEL
> > +	bool
> > +	select DRM_DISPLAY_DP_HELPER
> > +	help
> > +	  Enable support for DisplayPort tunnels.
> > +
> > +config DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> > +	bool "Enable debugging the DP tunnel state"
> > +	depends on REF_TRACKER
> > +	depends on DRM_DISPLAY_DP_TUNNEL
> > +	depends on DEBUG_KERNEL
> > +	depends on EXPERT
> > +	help
> > +	  Enables debugging the DP tunnel manager's status.
> > +
> > +	  If in doubt, say "N".
> 
> It's not exactly clear what a "DP tunnel" is.
> Shouldn't thunderbolt be mentioned here somewhere?

The only way I'm aware of tunneling working is through a TBT link, yes;
however, I'm not sure it couldn't work on any DP link, since the
interface to request BW is simply the AUX bus after all, and AFAIR the
standard doesn't mention TBT either (but I have to reread that). The
above descriptions should be extended anyway, mentioning at least the
usual TBT scenario, so will do that.

> > +
> >  config DRM_DISPLAY_HDCP_HELPER
> >  	bool
> >  	depends on DRM_DISPLAY_HELPER
> > diff --git a/drivers/gpu/drm/display/Makefile b/drivers/gpu/drm/display/Makefile
> > index 17ac4a1006a80..7ca61333c6696 100644
> > --- a/drivers/gpu/drm/display/Makefile
> > +++ b/drivers/gpu/drm/display/Makefile
> > @@ -8,6 +8,8 @@ drm_display_helper-$(CONFIG_DRM_DISPLAY_DP_HELPER) += \
> >  	drm_dp_helper.o \
> >  	drm_dp_mst_topology.o \
> >  	drm_dsc_helper.o
> > +drm_display_helper-$(CONFIG_DRM_DISPLAY_DP_TUNNEL) += \
> > +	drm_dp_tunnel.o
> >  drm_display_helper-$(CONFIG_DRM_DISPLAY_HDCP_HELPER) += drm_hdcp_helper.o
> >  drm_display_helper-$(CONFIG_DRM_DISPLAY_HDMI_HELPER) += \
> >  	drm_hdmi_helper.o \
> > diff --git a/drivers/gpu/drm/display/drm_dp_tunnel.c b/drivers/gpu/drm/display/drm_dp_tunnel.c
> > new file mode 100644
> > index 0000000000000..58f6330db7d9d
> > --- /dev/null
> > +++ b/drivers/gpu/drm/display/drm_dp_tunnel.c
> > @@ -0,0 +1,1715 @@
> > +// SPDX-License-Identifier: MIT
> > +/*
> > + * Copyright © 2023 Intel Corporation
> > + */
> > +
> > +#include <linux/ref_tracker.h>
> > +#include <linux/types.h>
> > +
> > +#include <drm/drm_atomic_state_helper.h>
> > +
> > +#include <drm/drm_atomic.h>
> > +#include <drm/drm_print.h>
> > +#include <drm/display/drm_dp.h>
> > +#include <drm/display/drm_dp_helper.h>
> > +#include <drm/display/drm_dp_tunnel.h>
> > +
> > +#define to_group(__private_obj) \
> > +	container_of(__private_obj, struct drm_dp_tunnel_group, base)
> > +
> > +#define to_group_state(__private_state) \
> > +	container_of(__private_state, struct drm_dp_tunnel_group_state, base)
> > +
> > +#define is_dp_tunnel_private_obj(__obj) \
> > +	((__obj)->funcs == &tunnel_group_funcs)
> > +
> > +#define for_each_new_group_in_state(__state, __new_group_state, __i) \
> > +	for ((__i) = 0; \
> > +	     (__i) < (__state)->num_private_objs; \
> > +	     (__i)++) \
> > +		for_each_if ((__state)->private_objs[__i].ptr && \
> > +			     is_dp_tunnel_private_obj((__state)->private_objs[__i].ptr) && \
> > +			     ((__new_group_state) = \
> > +				to_group_state((__state)->private_objs[__i].new_state), 1))
> > +
> > +#define for_each_old_group_in_state(__state, __old_group_state, __i) \
> > +	for ((__i) = 0; \
> > +	     (__i) < (__state)->num_private_objs; \
> > +	     (__i)++) \
> > +		for_each_if ((__state)->private_objs[__i].ptr && \
> > +			     is_dp_tunnel_private_obj((__state)->private_objs[__i].ptr) && \
> > +			     ((__old_group_state) = \
> > +				to_group_state((__state)->private_objs[__i].old_state), 1))
> > +
> > +#define for_each_tunnel_in_group(__group, __tunnel) \
> > +	list_for_each_entry(__tunnel, &(__group)->tunnels, node)
> > +
> > +#define for_each_tunnel_state(__group_state, __tunnel_state) \
> > +	list_for_each_entry(__tunnel_state, &(__group_state)->tunnel_states, node)
> > +
> > +#define for_each_tunnel_state_safe(__group_state, __tunnel_state, __tunnel_state_tmp) \
> > +	list_for_each_entry_safe(__tunnel_state, __tunnel_state_tmp, \
> > +				 &(__group_state)->tunnel_states, node)
> > +
> > +#define kbytes_to_mbits(__kbytes) \
> > +	DIV_ROUND_UP((__kbytes) * 8, 1000)
> > +
> > +#define DPTUN_BW_ARG(__bw) ((__bw) < 0 ? (__bw) : kbytes_to_mbits(__bw))
> > +
> > +#define __tun_prn(__tunnel, __level, __type, __fmt, ...) \
> > +	drm_##__level##__type((__tunnel)->group->mgr->dev, \
> > +			      "[DPTUN %s][%s] " __fmt, \
> > +			      drm_dp_tunnel_name(__tunnel), \
> > +			      (__tunnel)->aux->name, ## \
> > +			      __VA_ARGS__)
> > +
> > +#define tun_dbg(__tunnel, __fmt, ...) \
> > +	__tun_prn(__tunnel, dbg, _kms, __fmt, ## __VA_ARGS__)
> > +
> > +#define tun_dbg_stat(__tunnel, __err, __fmt, ...) do { \
> > +	if (__err) \
> > +		__tun_prn(__tunnel, dbg, _kms, __fmt " (Failed, err: %pe)\n", \
> > +			  ## __VA_ARGS__, ERR_PTR(__err)); \
> > +	else \
> > +		__tun_prn(__tunnel, dbg, _kms, __fmt " (Ok)\n", \
> > +			  ## __VA_ARGS__); \
> > +} while (0)
> > +
> > +#define tun_dbg_atomic(__tunnel, __fmt, ...) \
> > +	__tun_prn(__tunnel, dbg, _atomic, __fmt, ## __VA_ARGS__)
> > +
> > +#define tun_grp_dbg(__group, __fmt, ...) \
> > +	drm_dbg_kms((__group)->mgr->dev, \
> > +		    "[DPTUN %s] " __fmt, \
> > +		    drm_dp_tunnel_group_name(__group), ## \
> > +		    __VA_ARGS__)
> > +
> > +#define DP_TUNNELING_BASE DP_TUNNELING_OUI
> > +
> > +#define __DPTUN_REG_RANGE(start, size) \
> > +	GENMASK_ULL(start + size - 1, start)
> > +
> > +#define DPTUN_REG_RANGE(addr, size) \
> > +	__DPTUN_REG_RANGE((addr) - DP_TUNNELING_BASE, size)
> > +
> > +#define DPTUN_REG(addr) DPTUN_REG_RANGE(addr, 1)
> > +
> > +#define DPTUN_INFO_REG_MASK ( \
> > +	DPTUN_REG_RANGE(DP_TUNNELING_OUI, DP_TUNNELING_OUI_BYTES) | \
> > +	DPTUN_REG_RANGE(DP_TUNNELING_DEV_ID, DP_TUNNELING_DEV_ID_BYTES) | \
> > +	DPTUN_REG(DP_TUNNELING_HW_REV) | \
> > +	DPTUN_REG(DP_TUNNELING_SW_REV_MAJOR) | \
> > +	DPTUN_REG(DP_TUNNELING_SW_REV_MINOR) | \
> > +	DPTUN_REG(DP_TUNNELING_CAPABILITIES) | \
> > +	DPTUN_REG(DP_IN_ADAPTER_INFO) | \
> > +	DPTUN_REG(DP_USB4_DRIVER_ID) | \
> > +	DPTUN_REG(DP_USB4_DRIVER_BW_CAPABILITY) | \
> > +	DPTUN_REG(DP_IN_ADAPTER_TUNNEL_INFORMATION) | \
> > +	DPTUN_REG(DP_BW_GRANULARITY) | \
> > +	DPTUN_REG(DP_ESTIMATED_BW) | \
> > +	DPTUN_REG(DP_ALLOCATED_BW) | \
> > +	DPTUN_REG(DP_TUNNELING_MAX_LINK_RATE) | \
> > +	DPTUN_REG(DP_TUNNELING_MAX_LANE_COUNT) | \
> > +	DPTUN_REG(DP_DPTX_BW_ALLOCATION_MODE_CONTROL))
> > +
> > +static const DECLARE_BITMAP(dptun_info_regs, 64) = {
> > +	DPTUN_INFO_REG_MASK & -1UL,
> > +#if BITS_PER_LONG == 32
> > +	DPTUN_INFO_REG_MASK >> 32,
> > +#endif
> > +};
> > +
> > +struct drm_dp_tunnel_regs {
> > +	u8 buf[HWEIGHT64(DPTUN_INFO_REG_MASK)];
> > +};
> 
> That seems to be some kind of thing to allow us to store
> the values for non-consecutive DPCD registers in a
> contiguous non-sparse array? How much memory are we
> actually saving here as opposed to just using the
> full sized array?

Actually it's not about saving space, rather a way to define the
contiguous register ranges that can be read out in one AUX transfer,
without accessing other registers (which may have side effects). A
bitmap seemed the cleaner way to do this, without having to specify the
ranges in an ad-hoc way.
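
For illustration, a minimal self-contained sketch of the same idea
(names made up for the example): walking the runs of set bits in a
register bitmap yields the contiguous (offset, length) areas, each of
which can then be read with a single AUX transfer without touching the
registers in between:

#include <linux/bitmap.h>

/* Visit each contiguous run of set bits in @reg_map as an (offset, len) area. */
static void example_for_each_reg_area(const unsigned long *reg_map, unsigned long nbits)
{
	unsigned long offset = 0;

	for (;;) {
		unsigned long len;

		/* Start of the next contiguous area, if any. */
		offset = find_next_bit(reg_map, nbits, offset);
		if (offset >= nbits)
			break;

		/* Length of the area up to the next gap in the bitmap. */
		len = find_next_zero_bit(reg_map, nbits, offset + 1) - offset;

		/* One drm_dp_dpcd_read() of @len bytes starting at
		 * base + @offset would cover this whole area. */

		offset += len;
	}
}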

> 
> Wasn't really expecting this kind of thing in here...
> 
> > +
> > +struct drm_dp_tunnel_group;
> > +
> > +struct drm_dp_tunnel {
> > +	struct drm_dp_tunnel_group *group;
> > +
> > +	struct list_head node;
> > +
> > +	struct kref kref;
> > +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> > +	struct ref_tracker *tracker;
> > +#endif
> > +	struct drm_dp_aux *aux;
> > +	char name[8];
> > +
> > +	int bw_granularity;
> > +	int estimated_bw;
> > +	int allocated_bw;
> > +
> > +	int max_dprx_rate;
> > +	u8 max_dprx_lane_count;
> > +
> > +	u8 adapter_id;
> > +
> > +	bool bw_alloc_supported:1;
> > +	bool bw_alloc_enabled:1;
> > +	bool has_io_error:1;
> > +	bool destroyed:1;
> > +};
> > +
> > +struct drm_dp_tunnel_group_state;
> > +
> > +struct drm_dp_tunnel_state {
> > +	struct drm_dp_tunnel_group_state *group_state;
> > +
> > +	struct drm_dp_tunnel_ref tunnel_ref;
> > +
> > +	struct list_head node;
> > +
> > +	u32 stream_mask;
> > +	int *stream_bw;
> > +};
> > +
> > +struct drm_dp_tunnel_group_state {
> > +	struct drm_private_state base;
> > +
> > +	struct list_head tunnel_states;
> > +};
> > +
> > +struct drm_dp_tunnel_group {
> > +	struct drm_private_obj base;
> > +	struct drm_dp_tunnel_mgr *mgr;
> > +
> > +	struct list_head tunnels;
> > +
> > +	int available_bw;	/* available BW including the allocated_bw of all tunnels */
> > +	int drv_group_id;
> > +
> > +	char name[8];
> > +
> > +	bool active:1;
> > +};
> > +
> > +struct drm_dp_tunnel_mgr {
> > +	struct drm_device *dev;
> > +
> > +	int group_count;
> > +	struct drm_dp_tunnel_group *groups;
> > +	wait_queue_head_t bw_req_queue;
> > +
> > +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> > +	struct ref_tracker_dir ref_tracker;
> > +#endif
> > +};
> > +
> > +static int next_reg_area(int *offset)
> > +{
> > +	*offset = find_next_bit(dptun_info_regs, 64, *offset);
> > +
> > +	return find_next_zero_bit(dptun_info_regs, 64, *offset + 1) - *offset;
> > +}
> > +
> > +#define tunnel_reg_ptr(__regs, __address) ({ \
> > +	WARN_ON(!test_bit((__address) - DP_TUNNELING_BASE, dptun_info_regs)); \
> > +	&(__regs)->buf[bitmap_weight(dptun_info_regs, (__address) - DP_TUNNELING_BASE)]; \
> > +})
> > +
> > +static int read_tunnel_regs(struct drm_dp_aux *aux, struct drm_dp_tunnel_regs *regs)
> > +{
> > +	int offset = 0;
> > +	int len;
> > +
> > +	while ((len = next_reg_area(&offset))) {
> > +		int address = DP_TUNNELING_BASE + offset;
> > +
> > +		if (drm_dp_dpcd_read(aux, address, tunnel_reg_ptr(regs, address), len) < 0)
> > +			return -EIO;
> > +
> > +		offset += len;
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +static u8 tunnel_reg(const struct drm_dp_tunnel_regs *regs, int address)
> > +{
> > +	return *tunnel_reg_ptr(regs, address);
> > +}
> > +
> > +static int tunnel_reg_drv_group_id(const struct drm_dp_tunnel_regs *regs)
> > +{
> > +	int drv_id = tunnel_reg(regs, DP_USB4_DRIVER_ID) & DP_USB4_DRIVER_ID_MASK;
> > +	int group_id = tunnel_reg(regs, DP_IN_ADAPTER_TUNNEL_INFORMATION) & DP_GROUP_ID_MASK;
> 
> Maybe these things should be u8/etc. everywhere? Would at least
> indicate that I don't need to look for where negative values
> are handled...

Ok, will change these.

> > +
> > +	if (!group_id)
> > +		return 0;
> > +
> > +	return (drv_id << DP_GROUP_ID_BITS) | group_id;
> > +}
> > +
> > +/* Return granularity in kB/s units */
> > +static int tunnel_reg_bw_granularity(const struct drm_dp_tunnel_regs *regs)
> > +{
> > +	int gr = tunnel_reg(regs, DP_BW_GRANULARITY) & DP_BW_GRANULARITY_MASK;
> > +
> > +	WARN_ON(gr > 2);
> > +
> > +	return (250000 << gr) / 8;
> > +}
> > +
> > +static int tunnel_reg_max_dprx_rate(const struct drm_dp_tunnel_regs *regs)
> > +{
> > +	u8 bw_code = tunnel_reg(regs, DP_TUNNELING_MAX_LINK_RATE);
> > +
> > +	return drm_dp_bw_code_to_link_rate(bw_code);
> > +}
> > +
> > +static int tunnel_reg_max_dprx_lane_count(const struct drm_dp_tunnel_regs *regs)
> > +{
> > +	u8 lane_count = tunnel_reg(regs, DP_TUNNELING_MAX_LANE_COUNT) &
> > +			DP_TUNNELING_MAX_LANE_COUNT_MASK;
> > +
> > +	return lane_count;
> > +}
> > +
> > +static bool tunnel_reg_bw_alloc_supported(const struct drm_dp_tunnel_regs *regs)
> > +{
> > +	u8 cap_mask = DP_TUNNELING_SUPPORT | DP_IN_BW_ALLOCATION_MODE_SUPPORT;
> > +
> > +	if ((tunnel_reg(regs, DP_TUNNELING_CAPABILITIES) & cap_mask) != cap_mask)
> > +		return false;
> > +
> > +	return tunnel_reg(regs, DP_USB4_DRIVER_BW_CAPABILITY) &
> > +	       DP_USB4_DRIVER_BW_ALLOCATION_MODE_SUPPORT;
> > +}
> > +
> > +static bool tunnel_reg_bw_alloc_enabled(const struct drm_dp_tunnel_regs *regs)
> > +{
> > +	return tunnel_reg(regs, DP_DPTX_BW_ALLOCATION_MODE_CONTROL) &
> > +		DP_DISPLAY_DRIVER_BW_ALLOCATION_MODE_ENABLE;
> > +}
> > +
> > +static int tunnel_group_drv_id(int drv_group_id)
> > +{
> > +	return drv_group_id >> DP_GROUP_ID_BITS;
> > +}
> > +
> > +static int tunnel_group_id(int drv_group_id)
> > +{
> > +	return drv_group_id & DP_GROUP_ID_MASK;
> > +}
> > +
> > +const char *drm_dp_tunnel_name(const struct drm_dp_tunnel *tunnel)
> > +{
> > +	return tunnel->name;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_name);
> > +
> > +static const char *drm_dp_tunnel_group_name(const struct drm_dp_tunnel_group *group)
> > +{
> > +	return group->name;
> > +}
> > +
> > +static struct drm_dp_tunnel_group *
> > +lookup_or_alloc_group(struct drm_dp_tunnel_mgr *mgr, int drv_group_id)
> > +{
> > +	struct drm_dp_tunnel_group *group = NULL;
> > +	int i;
> > +
> > +	for (i = 0; i < mgr->group_count; i++) {
> > +		/*
> > +		 * A tunnel group with 0 group ID shouldn't have more than one
> > +		 * tunnel.
> > +		 */
> > +		if (tunnel_group_id(drv_group_id) &&
> > +		    mgr->groups[i].drv_group_id == drv_group_id)
> > +			return &mgr->groups[i];
> > +
> > +		if (!group && !mgr->groups[i].active)
> > +			group = &mgr->groups[i];
> > +	}
> > +
> > +	if (!group) {
> > +		drm_dbg_kms(mgr->dev,
> > +			    "DPTUN: Can't allocate more tunnel groups\n");
> > +		return NULL;
> > +	}
> > +
> > +	group->drv_group_id = drv_group_id;
> > +	group->active = true;
> > +
> > +	snprintf(group->name, sizeof(group->name), "%d:%d:*",
> 
> What does the '*' indicate?

The prefix in all DP tunnel/group debug messages is
Driver-ID:Group-ID:DP-Adapter-ID; for group debug messages the '*'
stands for all tunnels (aka DP adapters) in the group.
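
So e.g. the messages for a tunnel on driver 0, group 1, DP-IN adapter 2
would be prefixed with "0:1:2", while the messages about its whole
group use "0:1:*".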

> > +		 tunnel_group_drv_id(drv_group_id) & ((1 << DP_GROUP_ID_BITS) - 1),
> > +		 tunnel_group_id(drv_group_id) & ((1 << DP_USB4_DRIVER_ID_BITS) - 1));
> > +
> > +	return group;
> > +}
> > +
> > +static void free_group(struct drm_dp_tunnel_group *group)
> > +{
> > +	struct drm_dp_tunnel_mgr *mgr = group->mgr;
> > +
> > +	if (drm_WARN_ON(mgr->dev, !list_empty(&group->tunnels)))
> > +		return;
> > +
> > +	group->drv_group_id = 0;
> > +	group->available_bw = -1;
> > +	group->active = false;
> > +}
> > +
> > +static struct drm_dp_tunnel *
> > +tunnel_get(struct drm_dp_tunnel *tunnel)
> > +{
> > +	kref_get(&tunnel->kref);
> > +
> > +	return tunnel;
> > +}
> > +
> > +static void free_tunnel(struct kref *kref)
> > +{
> > +	struct drm_dp_tunnel *tunnel = container_of(kref, typeof(*tunnel), kref);
> > +	struct drm_dp_tunnel_group *group = tunnel->group;
> > +
> > +	list_del(&tunnel->node);
> > +	if (list_empty(&group->tunnels))
> > +		free_group(group);
> > +
> > +	kfree(tunnel);
> > +}
> > +
> > +static void tunnel_put(struct drm_dp_tunnel *tunnel)
> > +{
> > +	kref_put(&tunnel->kref, free_tunnel);
> > +}
> > +
> > +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> > +static void track_tunnel_ref(struct drm_dp_tunnel *tunnel,
> > +			     struct ref_tracker **tracker)
> > +{
> > +	ref_tracker_alloc(&tunnel->group->mgr->ref_tracker,
> > +			  tracker, GFP_KERNEL);
> > +}
> > +
> > +static void untrack_tunnel_ref(struct drm_dp_tunnel *tunnel,
> > +			       struct ref_tracker **tracker)
> > +{
> > +	ref_tracker_free(&tunnel->group->mgr->ref_tracker,
> > +			 tracker);
> > +}
> > +
> > +struct drm_dp_tunnel *
> > +drm_dp_tunnel_get_untracked(struct drm_dp_tunnel *tunnel)
> > +{
> > +	track_tunnel_ref(tunnel, NULL);
> > +
> > +	return tunnel_get(tunnel);
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_get_untracked);
> 
> Why do these exist?

They implement drm_dp_tunnel_get()/put() if
CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE=n.

> > +
> > +void drm_dp_tunnel_put_untracked(struct drm_dp_tunnel *tunnel)
> > +{
> > +	tunnel_put(tunnel);
> > +	untrack_tunnel_ref(tunnel, NULL);
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_put_untracked);
> > +
> > +struct drm_dp_tunnel *
> > +drm_dp_tunnel_get(struct drm_dp_tunnel *tunnel,
> > +		    struct ref_tracker **tracker)
> > +{
> > +	track_tunnel_ref(tunnel, tracker);
> > +
> > +	return tunnel_get(tunnel);
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_get);
> > +
> > +void drm_dp_tunnel_put(struct drm_dp_tunnel *tunnel,
> > +			 struct ref_tracker **tracker)
> > +{
> > +	untrack_tunnel_ref(tunnel, tracker);
> > +	tunnel_put(tunnel);
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_put);
> > +#else
> > +#define track_tunnel_ref(tunnel, tracker) do {} while (0)
> > +#define untrack_tunnel_ref(tunnel, tracker) do {} while (0)
> > +
> > +struct drm_dp_tunnel *
> > +drm_dp_tunnel_get_untracked(struct drm_dp_tunnel *tunnel)
> > +{
> > +	return tunnel_get(tunnel);
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_get_untracked);
> > +
> > +void drm_dp_tunnel_put_untracked(struct drm_dp_tunnel *tunnel)
> > +{
> > +	tunnel_put(tunnel);
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_put_untracked);
> > +#endif
> > +
> > +static bool add_tunnel_to_group(struct drm_dp_tunnel_mgr *mgr,
> > +				int drv_group_id,
> > +				struct drm_dp_tunnel *tunnel)
> > +{
> > +	struct drm_dp_tunnel_group *group =
> > +		lookup_or_alloc_group(mgr, drv_group_id);
> > +
> > +	if (!group)
> > +		return false;
> > +
> > +	tunnel->group = group;
> > +	list_add(&tunnel->node, &group->tunnels);
> > +
> > +	return true;
> > +}
> > +
> > +static struct drm_dp_tunnel *
> > +create_tunnel(struct drm_dp_tunnel_mgr *mgr,
> > +	      struct drm_dp_aux *aux,
> > +	      const struct drm_dp_tunnel_regs *regs)
> > +{
> > +	int drv_group_id = tunnel_reg_drv_group_id(regs);
> > +	struct drm_dp_tunnel *tunnel;
> > +
> > +	tunnel = kzalloc(sizeof(*tunnel), GFP_KERNEL);
> > +	if (!tunnel)
> > +		return NULL;
> > +
> > +	INIT_LIST_HEAD(&tunnel->node);
> > +
> > +	kref_init(&tunnel->kref);
> > +
> > +	tunnel->aux = aux;
> > +
> > +	tunnel->adapter_id = tunnel_reg(regs, DP_IN_ADAPTER_INFO) & DP_IN_ADAPTER_NUMBER_MASK;
> > +
> > +	snprintf(tunnel->name, sizeof(tunnel->name), "%d:%d:%d",
> > +		 tunnel_group_drv_id(drv_group_id) & ((1 << DP_GROUP_ID_BITS) - 1),
> > +		 tunnel_group_id(drv_group_id) & ((1 << DP_USB4_DRIVER_ID_BITS) - 1),
> > +		 tunnel->adapter_id & ((1 << DP_IN_ADAPTER_NUMBER_BITS) - 1));
> > +
> > +	tunnel->bw_granularity = tunnel_reg_bw_granularity(regs);
> > +	tunnel->allocated_bw = tunnel_reg(regs, DP_ALLOCATED_BW) *
> > +			       tunnel->bw_granularity;
> > +
> > +	tunnel->bw_alloc_supported = tunnel_reg_bw_alloc_supported(regs);
> > +	tunnel->bw_alloc_enabled = tunnel_reg_bw_alloc_enabled(regs);
> > +
> > +	if (!add_tunnel_to_group(mgr, drv_group_id, tunnel)) {
> > +		kfree(tunnel);
> > +
> > +		return NULL;
> > +	}
> > +
> > +	track_tunnel_ref(tunnel, &tunnel->tracker);
> > +
> > +	return tunnel;
> > +}
> > +
> > +static void destroy_tunnel(struct drm_dp_tunnel *tunnel)
> > +{
> > +	untrack_tunnel_ref(tunnel, &tunnel->tracker);
> > +	tunnel_put(tunnel);
> > +}
> > +
> > +void drm_dp_tunnel_set_io_error(struct drm_dp_tunnel *tunnel)
> > +{
> > +	tunnel->has_io_error = true;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_set_io_error);
> > +
> > +static char yes_no_chr(int val)
> > +{
> > +	return val ? 'Y' : 'N';
> > +}
> > +
> > +#define SKIP_DPRX_CAPS_CHECK		BIT(0)
> > +#define ALLOW_ALLOCATED_BW_CHANGE	BIT(1)
> > +
> > +static bool tunnel_regs_are_valid(struct drm_dp_tunnel_mgr *mgr,
> > +				  const struct drm_dp_tunnel_regs *regs,
> > +				  unsigned int flags)
> > +{
> > +	int drv_group_id = tunnel_reg_drv_group_id(regs);
> > +	bool check_dprx = !(flags & SKIP_DPRX_CAPS_CHECK);
> > +	bool ret = true;
> > +
> > +	if (!tunnel_reg_bw_alloc_supported(regs)) {
> > +		if (tunnel_group_id(drv_group_id)) {
> > +			drm_dbg_kms(mgr->dev,
> > +				    "DPTUN: A non-zero group ID is only allowed with BWA support\n");
> > +			ret = false;
> > +		}
> > +
> > +		if (tunnel_reg(regs, DP_ALLOCATED_BW)) {
> > +			drm_dbg_kms(mgr->dev,
> > +				    "DPTUN: BW is allocated without BWA support\n");
> > +			ret = false;
> > +		}
> > +
> > +		return ret;
> > +	}
> > +
> > +	if (!tunnel_group_id(drv_group_id)) {
> > +		drm_dbg_kms(mgr->dev,
> > +			    "DPTUN: BWA support requires a non-zero group ID\n");
> > +		ret = false;
> > +	}
> > +
> > +	if (check_dprx && hweight8(tunnel_reg_max_dprx_lane_count(regs)) != 1) {
> > +		drm_dbg_kms(mgr->dev,
> > +			    "DPTUN: Invalid DPRX lane count: %d\n",
> > +			    tunnel_reg_max_dprx_lane_count(regs));
> > +
> > +		ret = false;
> > +	}
> > +
> > +	if (check_dprx && !tunnel_reg_max_dprx_rate(regs)) {
> > +		drm_dbg_kms(mgr->dev,
> > +			    "DPTUN: DPRX rate is 0\n");
> > +
> > +		ret = false;
> > +	}
> > +
> > +	if (tunnel_reg(regs, DP_ALLOCATED_BW) > tunnel_reg(regs, DP_ESTIMATED_BW)) {
> > +		drm_dbg_kms(mgr->dev,
> > +			    "DPTUN: Allocated BW %d > estimated BW %d Mb/s\n",
> > +			    DPTUN_BW_ARG(tunnel_reg(regs, DP_ALLOCATED_BW) *
> > +					 tunnel_reg_bw_granularity(regs)),
> > +			    DPTUN_BW_ARG(tunnel_reg(regs, DP_ESTIMATED_BW) *
> > +					 tunnel_reg_bw_granularity(regs)));
> > +
> > +		ret = false;
> > +	}
> > +
> > +	return ret;
> > +}
> > +
> > +static bool tunnel_info_changes_are_valid(struct drm_dp_tunnel *tunnel,
> > +					  const struct drm_dp_tunnel_regs *regs,
> > +					  unsigned int flags)
> > +{
> > +	int new_drv_group_id = tunnel_reg_drv_group_id(regs);
> > +	bool ret = true;
> > +
> > +	if (tunnel->bw_alloc_supported != tunnel_reg_bw_alloc_supported(regs)) {
> > +		tun_dbg(tunnel,
> > +			"BW alloc support has changed %c -> %c\n",
> > +			yes_no_chr(tunnel->bw_alloc_supported),
> > +			yes_no_chr(tunnel_reg_bw_alloc_supported(regs)));
> > +
> > +		ret = false;
> > +	}
> > +
> > +	if (tunnel->group->drv_group_id != new_drv_group_id) {
> > +		tun_dbg(tunnel,
> > +			"Driver/group ID has changed %d:%d:* -> %d:%d:*\n",
> > +			tunnel_group_drv_id(tunnel->group->drv_group_id),
> > +			tunnel_group_id(tunnel->group->drv_group_id),
> > +			tunnel_group_drv_id(new_drv_group_id),
> > +			tunnel_group_id(new_drv_group_id));
> > +
> > +		ret = false;
> > +	}
> > +
> > +	if (!tunnel->bw_alloc_supported)
> > +		return ret;
> > +
> > +	if (tunnel->bw_granularity != tunnel_reg_bw_granularity(regs)) {
> > +		tun_dbg(tunnel,
> > +			"BW granularity has changed: %d -> %d Mb/s\n",
> > +			DPTUN_BW_ARG(tunnel->bw_granularity),
> > +			DPTUN_BW_ARG(tunnel_reg_bw_granularity(regs)));
> > +
> > +		ret = false;
> > +	}
> > +
> > +	/*
> > +	 * On some devices at least the BW alloc mode enabled status is always
> > +	 * reported as 0, so skip checking that here.
> > +	 */
> > +
> > +	if (!(flags & ALLOW_ALLOCATED_BW_CHANGE) &&
> > +	    tunnel->allocated_bw !=
> > +	    tunnel_reg(regs, DP_ALLOCATED_BW) * tunnel->bw_granularity) {
> > +		tun_dbg(tunnel,
> > +			"Allocated BW has changed: %d -> %d Mb/s\n",
> > +			DPTUN_BW_ARG(tunnel->allocated_bw),
> > +			DPTUN_BW_ARG(tunnel_reg(regs, DP_ALLOCATED_BW) * tunnel->bw_granularity));
> > +
> > +		ret = false;
> > +	}
> > +
> > +	return ret;
> > +}
> > +
> > +static int
> > +read_and_verify_tunnel_regs(struct drm_dp_tunnel *tunnel,
> > +			    struct drm_dp_tunnel_regs *regs,
> > +			    unsigned int flags)
> > +{
> > +	int err;
> > +
> > +	err = read_tunnel_regs(tunnel->aux, regs);
> > +	if (err < 0) {
> > +		drm_dp_tunnel_set_io_error(tunnel);
> > +
> > +		return err;
> > +	}
> > +
> > +	if (!tunnel_regs_are_valid(tunnel->group->mgr, regs, flags))
> > +		return -EINVAL;
> > +
> > +	if (!tunnel_info_changes_are_valid(tunnel, regs, flags))
> > +		return -EINVAL;
> > +
> > +	return 0;
> > +}
> > +
> > +static bool update_dprx_caps(struct drm_dp_tunnel *tunnel, const struct drm_dp_tunnel_regs *regs)
> > +{
> > +	bool changed = false;
> > +
> > +	if (tunnel_reg_max_dprx_rate(regs) != tunnel->max_dprx_rate) {
> > +		tunnel->max_dprx_rate = tunnel_reg_max_dprx_rate(regs);
> > +		changed = true;
> > +	}
> > +
> > +	if (tunnel_reg_max_dprx_lane_count(regs) != tunnel->max_dprx_lane_count) {
> > +		tunnel->max_dprx_lane_count = tunnel_reg_max_dprx_lane_count(regs);
> > +		changed = true;
> > +	}
> > +
> > +	return changed;
> > +}
> > +
> > +static int dev_id_len(const u8 *dev_id, int max_len)
> > +{
> > +	while (max_len && dev_id[max_len - 1] == '\0')
> > +		max_len--;
> > +
> > +	return max_len;
> > +}
> > +
> > +static int get_max_dprx_bw(const struct drm_dp_tunnel *tunnel)
> > +{
> > +	int bw = drm_dp_max_dprx_data_rate(tunnel->max_dprx_rate,
> > +					   tunnel->max_dprx_lane_count);
> > +
> > +	return min(roundup(bw, tunnel->bw_granularity),
> > +		   MAX_DP_REQUEST_BW * tunnel->bw_granularity);
> > +}
> > +
> > +static int get_max_tunnel_bw(const struct drm_dp_tunnel *tunnel)
> > +{
> > +	return min(get_max_dprx_bw(tunnel), tunnel->group->available_bw);
> > +}
> > +
> > +/**
> > + * drm_dp_tunnel_detect - Detect DP tunnel on the link
> > + * @mgr: Tunnel manager
> > + * @aux: DP AUX on which the tunnel will be detected
> > + *
> > + * Detect if there is any DP tunnel on the link and add it to the tunnel
> > + * group's tunnel list.
> > + *
> > + * Returns 0 on success, negative error code on failure.

The return value description above is buggy (the function returns a
tunnel pointer or an ERR_PTR() value, not an error code), will fix it.

> > + */
> > +struct drm_dp_tunnel *
> > +drm_dp_tunnel_detect(struct drm_dp_tunnel_mgr *mgr,
> > +		       struct drm_dp_aux *aux)
> > +{
> > +	struct drm_dp_tunnel_regs regs;
> > +	struct drm_dp_tunnel *tunnel;
> > +	int err;
> > +
> > +	err = read_tunnel_regs(aux, &regs);
> > +	if (err)
> > +		return ERR_PTR(err);
> > +
> > +	if (!(tunnel_reg(&regs, DP_TUNNELING_CAPABILITIES) &
> > +	      DP_TUNNELING_SUPPORT))
> > +		return ERR_PTR(-ENODEV);
> > +
> > +	/* The DPRX caps are valid only after enabling BW alloc mode. */
> > +	if (!tunnel_regs_are_valid(mgr, &regs, SKIP_DPRX_CAPS_CHECK))
> > +		return ERR_PTR(-EINVAL);
> > +
> > +	tunnel = create_tunnel(mgr, aux, &regs);
> > +	if (!tunnel)
> > +		return ERR_PTR(-ENOMEM);
> > +
> > +	tun_dbg(tunnel,
> > +		"OUI:%*phD DevID:%*pE Rev-HW:%d.%d SW:%d.%d PR-Sup:%c BWA-Sup:%c BWA-En:%c\n",
> > +		DP_TUNNELING_OUI_BYTES,
> > +			tunnel_reg_ptr(&regs, DP_TUNNELING_OUI),
> > +		dev_id_len(tunnel_reg_ptr(&regs, DP_TUNNELING_DEV_ID), DP_TUNNELING_DEV_ID_BYTES),
> > +			tunnel_reg_ptr(&regs, DP_TUNNELING_DEV_ID),
> > +		(tunnel_reg(&regs, DP_TUNNELING_HW_REV) & DP_TUNNELING_HW_REV_MAJOR_MASK) >>
> > +			DP_TUNNELING_HW_REV_MAJOR_SHIFT,
> > +		(tunnel_reg(&regs, DP_TUNNELING_HW_REV) & DP_TUNNELING_HW_REV_MINOR_MASK) >>
> > +			DP_TUNNELING_HW_REV_MINOR_SHIFT,
> > +		tunnel_reg(&regs, DP_TUNNELING_SW_REV_MAJOR),
> > +		tunnel_reg(&regs, DP_TUNNELING_SW_REV_MINOR),
> > +		yes_no_chr(tunnel_reg(&regs, DP_TUNNELING_CAPABILITIES) &
> > +			   DP_PANEL_REPLAY_OPTIMIZATION_SUPPORT),
> > +		yes_no_chr(tunnel->bw_alloc_supported),
> > +		yes_no_chr(tunnel->bw_alloc_enabled));
> > +
> > +	return tunnel;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_detect);
> > +
> > +/**
> > + * drm_dp_tunnel_destroy - Destroy tunnel object
> > + * @tunnel: Tunnel object
> > + *
> > + * Remove the tunnel from the tunnel topology and destroy it.
> > + */
> > +int drm_dp_tunnel_destroy(struct drm_dp_tunnel *tunnel)
> > +{
> > +	if (drm_WARN_ON(tunnel->group->mgr->dev, tunnel->destroyed))
> > +		return -ENODEV;
> > +
> > +	tun_dbg(tunnel, "destroying\n");
> > +
> > +	tunnel->destroyed = true;
> > +	destroy_tunnel(tunnel);
> > +
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_destroy);
> > +
> > +static int check_tunnel(const struct drm_dp_tunnel *tunnel)
> > +{
> > +	if (tunnel->destroyed)
> > +		return -ENODEV;
> > +
> > +	if (tunnel->has_io_error)
> > +		return -EIO;
> > +
> > +	return 0;
> > +}
> > +
> > +static int group_allocated_bw(struct drm_dp_tunnel_group *group)
> > +{
> > +	struct drm_dp_tunnel *tunnel;
> > +	int group_allocated_bw = 0;
> > +
> > +	for_each_tunnel_in_group(group, tunnel) {
> > +		if (check_tunnel(tunnel) == 0 &&
> > +		    tunnel->bw_alloc_enabled)
> > +			group_allocated_bw += tunnel->allocated_bw;
> > +	}
> > +
> > +	return group_allocated_bw;
> > +}
> > +
> > +static int calc_group_available_bw(const struct drm_dp_tunnel *tunnel)
> > +{
> > +	return group_allocated_bw(tunnel->group) -
> > +	       tunnel->allocated_bw +
> > +	       tunnel->estimated_bw;
> > +}
> > +
> > +static int update_group_available_bw(struct drm_dp_tunnel *tunnel,
> > +				     const struct drm_dp_tunnel_regs *regs)
> > +{
> > +	struct drm_dp_tunnel *tunnel_iter;
> > +	int group_available_bw;
> > +	bool changed;
> > +
> > +	tunnel->estimated_bw = tunnel_reg(regs, DP_ESTIMATED_BW) * tunnel->bw_granularity;
> > +
> > +	if (calc_group_available_bw(tunnel) == tunnel->group->available_bw)
> > +		return 0;
> > +
> > +	for_each_tunnel_in_group(tunnel->group, tunnel_iter) {
> > +		int err;
> > +
> > +		if (tunnel_iter == tunnel)
> > +			continue;
> > +
> > +		if (check_tunnel(tunnel_iter) != 0 ||
> > +		    !tunnel_iter->bw_alloc_enabled)
> > +			continue;
> > +
> > +		err = drm_dp_dpcd_probe(tunnel_iter->aux, DP_DPCD_REV);
> > +		if (err) {
> > +			tun_dbg(tunnel_iter,
> > +				"Probe failed, assume disconnected (err %pe)\n",
> > +				ERR_PTR(err));
> > +			drm_dp_tunnel_set_io_error(tunnel_iter);
> > +		}
> > +	}
> > +
> > +	group_available_bw = calc_group_available_bw(tunnel);
> > +
> > +	tun_dbg(tunnel, "Updated group available BW: %d->%d\n",
> > +		DPTUN_BW_ARG(tunnel->group->available_bw),
> > +		DPTUN_BW_ARG(group_available_bw));
> > +
> > +	changed = tunnel->group->available_bw != group_available_bw;
> > +
> > +	tunnel->group->available_bw = group_available_bw;
> > +
> > +	return changed ? 1 : 0;
> > +}
> > +
> > +static int set_bw_alloc_mode(struct drm_dp_tunnel *tunnel, bool enable)
> > +{
> > +	u8 mask = DP_DISPLAY_DRIVER_BW_ALLOCATION_MODE_ENABLE | DP_UNMASK_BW_ALLOCATION_IRQ;
> > +	u8 val;
> > +
> > +	if (drm_dp_dpcd_readb(tunnel->aux, DP_DPTX_BW_ALLOCATION_MODE_CONTROL, &val) < 0)
> > +		goto out_err;
> > +
> > +	if (enable)
> > +		val |= mask;
> > +	else
> > +		val &= ~mask;
> > +
> > +	if (drm_dp_dpcd_writeb(tunnel->aux, DP_DPTX_BW_ALLOCATION_MODE_CONTROL, val) < 0)
> > +		goto out_err;
> > +
> > +	tunnel->bw_alloc_enabled = enable;
> > +
> > +	return 0;
> > +
> > +out_err:
> > +	drm_dp_tunnel_set_io_error(tunnel);
> > +
> > +	return -EIO;
> > +}
> > +
> > +/**
> > + * drm_dp_tunnel_enable_bw_alloc: Enable DP tunnel BW allocation mode
> > + * @tunnel: Tunnel object
> > + *
> > + * Enable the DP tunnel BW allocation mode on @tunnel if it supports it.
> > + *
> > + * Returns 0 in case of success, negative error code otherwise.
> > + */
> > +int drm_dp_tunnel_enable_bw_alloc(struct drm_dp_tunnel *tunnel)
> > +{
> > +	struct drm_dp_tunnel_regs regs;
> > +	int err = check_tunnel(tunnel);
> > +
> > +	if (err)
> > +		return err;
> > +
> > +	if (!tunnel->bw_alloc_supported)
> > +		return -EOPNOTSUPP;
> > +
> > +	if (!tunnel_group_id(tunnel->group->drv_group_id))
> > +		return -EINVAL;
> > +
> > +	err = set_bw_alloc_mode(tunnel, true);
> > +	if (err)
> > +		goto out;
> > +
> > +	err = read_and_verify_tunnel_regs(tunnel, &regs, 0);
> > +	if (err) {
> > +		set_bw_alloc_mode(tunnel, false);
> > +
> > +		goto out;
> > +	}
> > +
> > +	if (!tunnel->max_dprx_rate)
> > +		update_dprx_caps(tunnel, &regs);
> > +
> > +	if (tunnel->group->available_bw == -1) {
> > +		err = update_group_available_bw(tunnel, &regs);
> > +		if (err > 0)
> > +			err = 0;
> > +	}
> > +out:
> > +	tun_dbg_stat(tunnel, err,
> > +		     "Enabling BW alloc mode: DPRX:%dx%d Group alloc:%d/%d Mb/s",
> > +		     tunnel->max_dprx_rate / 100, tunnel->max_dprx_lane_count,
> > +		     DPTUN_BW_ARG(group_allocated_bw(tunnel->group)),
> > +		     DPTUN_BW_ARG(tunnel->group->available_bw));
> > +
> > +	return err;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_enable_bw_alloc);
> > +
> > +/**
> > + * drm_dp_tunnel_disable_bw_alloc: Disable DP tunnel BW allocation mode
> > + * @tunnel: Tunnel object
> > + *
> > + * Disable the DP tunnel BW allocation mode on @tunnel.
> > + *
> > + * Returns 0 in case of success, negative error code otherwise.
> > + */
> > +int drm_dp_tunnel_disable_bw_alloc(struct drm_dp_tunnel *tunnel)
> > +{
> > +	int err = check_tunnel(tunnel);
> > +
> > +	if (err)
> > +		return err;
> > +
> > +	err = set_bw_alloc_mode(tunnel, false);
> > +
> > +	tun_dbg_stat(tunnel, err, "Disabling BW alloc mode");
> > +
> > +	return err;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_disable_bw_alloc);
> > +
> > +bool drm_dp_tunnel_bw_alloc_is_enabled(const struct drm_dp_tunnel *tunnel)
> > +{
> > +	return tunnel->bw_alloc_enabled;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_bw_alloc_is_enabled);
> > +
> > +static int bw_req_complete(struct drm_dp_aux *aux, bool *status_changed)
> > +{
> > +	u8 bw_req_mask = DP_BW_REQUEST_SUCCEEDED | DP_BW_REQUEST_FAILED;
> > +	u8 status_change_mask = DP_BW_ALLOCATION_CAPABILITY_CHANGED | DP_ESTIMATED_BW_CHANGED;
> > +	u8 val;
> > +
> > +	if (drm_dp_dpcd_readb(aux, DP_TUNNELING_STATUS, &val) < 0)
> > +		return -EIO;
> > +
> > +	*status_changed = val & status_change_mask;
> > +
> > +	val &= bw_req_mask;
> > +
> > +	if (!val)
> > +		return -EAGAIN;
> > +
> > +	if (drm_dp_dpcd_writeb(aux, DP_TUNNELING_STATUS, val) < 0)
> > +		return -EIO;
> > +
> > +	return val == DP_BW_REQUEST_SUCCEEDED ? 0 : -ENOSPC;
> > +}
> > +
> > +static int allocate_tunnel_bw(struct drm_dp_tunnel *tunnel, int bw)
> > +{
> > +	struct drm_dp_tunnel_mgr *mgr = tunnel->group->mgr;
> > +	int request_bw = DIV_ROUND_UP(bw, tunnel->bw_granularity);
> > +	unsigned long wait_expires;
> > +	DEFINE_WAIT(wait);
> > +	int err;
> > +
> > +	/* Atomic check should prevent the following. */
> > +	if (drm_WARN_ON(mgr->dev, request_bw > MAX_DP_REQUEST_BW)) {
> > +		err = -EINVAL;
> > +		goto out;
> > +	}
> > +
> > +	if (drm_dp_dpcd_writeb(tunnel->aux, DP_REQUEST_BW, request_bw) < 0) {
> > +		err = -EIO;
> > +		goto out;
> > +	}
> > +
> > +	wait_expires = jiffies + msecs_to_jiffies(3000);
> > +
> > +	for (;;) {
> > +		bool status_changed;
> > +
> > +		err = bw_req_complete(tunnel->aux, &status_changed);
> > +		if (err != -EAGAIN)
> > +			break;
> > +
> > +		if (status_changed) {
> > +			struct drm_dp_tunnel_regs regs;
> > +
> > +			err = read_and_verify_tunnel_regs(tunnel, &regs,
> > +							  ALLOW_ALLOCATED_BW_CHANGE);
> > +			if (err)
> > +				break;
> > +		}
> > +
> > +		if (time_after(jiffies, wait_expires)) {
> > +			err = -ETIMEDOUT;
> > +			break;
> > +		}
> > +
> > +		prepare_to_wait(&mgr->bw_req_queue, &wait, TASK_UNINTERRUPTIBLE);
> 
> Shouldn't the prepare_to_wait() be done before checking the
> condition?

Yes, this order could miss a wake-up between bw_req_complete() and
prepare_to_wait(); will move the latter before the former, thanks for
spotting it.
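
Roughly, the reordered loop would then look like the sketch below (the
status-change re-read of the regs omitted for brevity): getting on the
wait queue before the completion check means a wake-up racing with the
check is still caught by schedule_timeout():

	for (;;) {
		bool status_changed;

		/* Register on the wait queue first, then check the condition. */
		prepare_to_wait(&mgr->bw_req_queue, &wait, TASK_UNINTERRUPTIBLE);

		err = bw_req_complete(tunnel->aux, &status_changed);
		if (err != -EAGAIN)
			break;

		if (time_after(jiffies, wait_expires)) {
			err = -ETIMEDOUT;
			break;
		}

		schedule_timeout(msecs_to_jiffies(200));
	}

	finish_wait(&mgr->bw_req_queue, &wait);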

> 
> > +		schedule_timeout(msecs_to_jiffies(200));
> 
> I guess the timeout here saves us, even if we race with the wakeup
> due to the above.

Yes, it's a poll+IRQ wait, but for another reason: the TBT stack on
some platforms (ADLP, granted, only a development platform) does not
raise an interrupt at all. Maybe it needs a comment.

> > +	};
> > +
> > +	finish_wait(&mgr->bw_req_queue, &wait);
> > +
> > +	if (err)
> > +		goto out;
> > +
> > +	tunnel->allocated_bw = request_bw * tunnel->bw_granularity;
> > +
> > +out:
> > +	tun_dbg_stat(tunnel, err, "Allocating %d/%d Mb/s for tunnel: Group alloc:%d/%d Mb/s",
> > +		     DPTUN_BW_ARG(request_bw * tunnel->bw_granularity),
> > +		     DPTUN_BW_ARG(get_max_tunnel_bw(tunnel)),
> > +		     DPTUN_BW_ARG(group_allocated_bw(tunnel->group)),
> > +		     DPTUN_BW_ARG(tunnel->group->available_bw));
> > +
> > +	if (err == -EIO)
> > +		drm_dp_tunnel_set_io_error(tunnel);
> > +
> > +	return err;
> > +}
> > +
> > +int drm_dp_tunnel_alloc_bw(struct drm_dp_tunnel *tunnel, int bw)
> > +{
> > +	int err = check_tunnel(tunnel);
> > +
> > +	if (err)
> > +		return err;
> > +
> > +	return allocate_tunnel_bw(tunnel, bw);
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_alloc_bw);
> > +
> > +static int check_and_clear_status_change(struct drm_dp_tunnel *tunnel)
> > +{
> > +	u8 mask = DP_BW_ALLOCATION_CAPABILITY_CHANGED | DP_ESTIMATED_BW_CHANGED;
> > +	u8 val;
> > +
> > +	if (drm_dp_dpcd_readb(tunnel->aux, DP_TUNNELING_STATUS, &val) < 0)
> > +		goto out_err;
> > +
> > +	val &= mask;
> > +
> > +	if (val) {
> > +		if (drm_dp_dpcd_writeb(tunnel->aux, DP_TUNNELING_STATUS, val) < 0)
> > +			goto out_err;
> > +
> > +		return 1;
> > +	}
> > +
> > +	if (!drm_dp_tunnel_bw_alloc_is_enabled(tunnel))
> > +		return 0;
> > +
> > +	/*
> > +	 * Check for estimated BW changes explicitly to account for lost
> > +	 * BW change notifications.
> > +	 */
> > +	if (drm_dp_dpcd_readb(tunnel->aux, DP_ESTIMATED_BW, &val) < 0)
> > +		goto out_err;
> > +
> > +	if (val * tunnel->bw_granularity != tunnel->estimated_bw)
> > +		return 1;
> > +
> > +	return 0;
> > +
> > +out_err:
> > +	drm_dp_tunnel_set_io_error(tunnel);
> > +
> > +	return -EIO;
> > +}
> > +
> > +/**
> > + * drm_dp_tunnel_update_state: Update DP tunnel SW state with the HW state
> > + * @tunnel: Tunnel object
> > + *
> > + * Update the SW state of @tunnel with the HW state.
> > + *
> > + * Returns 0 if the state has not changed, 1 if it has changed and got updated
> > + * successfully and a negative error code otherwise.
> > + */
> > +int drm_dp_tunnel_update_state(struct drm_dp_tunnel *tunnel)
> > +{
> > +	struct drm_dp_tunnel_regs regs;
> > +	bool changed = false;
> > +	int ret = check_tunnel(tunnel);
> > +
> > +	if (ret < 0)
> > +		return ret;
> > +
> > +	ret = check_and_clear_status_change(tunnel);
> > +	if (ret < 0)
> > +		goto out;
> > +
> > +	if (!ret)
> > +		return 0;
> > +
> > +	ret = read_and_verify_tunnel_regs(tunnel, &regs, 0);
> > +	if (ret)
> > +		goto out;
> > +
> > +	if (update_dprx_caps(tunnel, &regs))
> > +		changed = true;
> > +
> > +	ret = update_group_available_bw(tunnel, &regs);
> > +	if (ret == 1)
> > +		changed = true;
> > +
> > +out:
> > +	tun_dbg_stat(tunnel, ret < 0 ? ret : 0,
> > +		     "State update: Changed:%c DPRX:%dx%d Tunnel alloc:%d/%d Group alloc:%d/%d Mb/s",
> > +		     yes_no_chr(changed),
> > +		     tunnel->max_dprx_rate / 100, tunnel->max_dprx_lane_count,
> > +		     DPTUN_BW_ARG(tunnel->allocated_bw),
> > +		     DPTUN_BW_ARG(get_max_tunnel_bw(tunnel)),
> > +		     DPTUN_BW_ARG(group_allocated_bw(tunnel->group)),
> > +		     DPTUN_BW_ARG(tunnel->group->available_bw));
> > +
> > +	if (ret < 0)
> > +		return ret;
> > +
> > +	if (changed)
> > +		return 1;
> > +
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_update_state);
> > +
> > +/*
> > + * Returns 0 if no re-probe is needed, 1 if a re-probe is needed,
> > + * a negative error code otherwise.
> > + */
> > +int drm_dp_tunnel_handle_irq(struct drm_dp_tunnel_mgr *mgr, struct drm_dp_aux *aux)
> > +{
> > +	u8 val;
> > +
> > +	if (drm_dp_dpcd_readb(aux, DP_TUNNELING_STATUS, &val) < 0)
> > +		return -EIO;
> > +
> > +	if (val & (DP_BW_REQUEST_SUCCEEDED | DP_BW_REQUEST_FAILED))
> > +		wake_up_all(&mgr->bw_req_queue);
> > +
> > +	if (val & (DP_BW_ALLOCATION_CAPABILITY_CHANGED | DP_ESTIMATED_BW_CHANGED))
> > +		return 1;
> > +
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_handle_irq);
> > +
> > +/**
> > + * drm_dp_tunnel_max_dprx_rate - Query the maximum rate of the tunnel's DPRX
> > + * @tunnel: Tunnel object
> > + *
> > + * The function is used to query the maximum link rate of the DPRX connected
> > + * to @tunnel. Note that this rate will not be limited by the BW limit of the
> > + * tunnel, as opposed to the standard and extended DP_MAX_LINK_RATE DPCD
> > + * registers.
> > + *
> > + * Returns the maximum link rate in 10 kbit/s units.
> > + */
> > +int drm_dp_tunnel_max_dprx_rate(const struct drm_dp_tunnel *tunnel)
> > +{
> > +	return tunnel->max_dprx_rate;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_max_dprx_rate);
> > +
> > +/**
> > + * drm_dp_tunnel_max_dprx_lane_count - Query the maximum lane count of the tunnel's DPRX
> > + * @tunnel: Tunnel object
> > + *
> > + * The function is used to query the maximum lane count of the DPRX connected
> > + * to @tunnel. Note that this lane count will not be limited by the BW limit of
> > + * the tunnel, as opposed to the standard and extended DP_MAX_LANE_COUNT DPCD
> > + * registers.
> > + *
> > + * Returns the maximum lane count.
> > + */
> > +int drm_dp_tunnel_max_dprx_lane_count(const struct drm_dp_tunnel *tunnel)
> > +{
> > +	return tunnel->max_dprx_lane_count;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_max_dprx_lane_count);
> > +
> > +/**
> > + * drm_dp_tunnel_available_bw - Query the estimated total available BW of the tunnel
> > + * @tunnel: Tunnel object
> > + *
> > + * This function is used to query the estimated total available BW of the
> > + * tunnel. This includes the currently allocated and free BW for all the
> > + * tunnels in @tunnel's group. The available BW is valid only after the BW
> > + * allocation mode has been enabled for the tunnel and its state got updated
> > + * calling drm_dp_tunnel_update_state().
> > + *
> > + * Returns the @tunnel group's estimated total available bandwidth in kB/s
> > + * units, or -1 if the available BW isn't valid (the BW allocation mode is
> > + * not enabled or the tunnel's state hasn't been updated).
> > + */
> > +int drm_dp_tunnel_available_bw(const struct drm_dp_tunnel *tunnel)
> > +{
> > +	return tunnel->group->available_bw;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_available_bw);
> > +
> > +static struct drm_dp_tunnel_group_state *
> > +drm_dp_tunnel_atomic_get_group_state(struct drm_atomic_state *state,
> > +				     const struct drm_dp_tunnel *tunnel)
> > +{
> > +	return (struct drm_dp_tunnel_group_state *)
> > +		drm_atomic_get_private_obj_state(state,
> > +						 &tunnel->group->base);
> > +}
> > +
> > +static struct drm_dp_tunnel_state *
> > +add_tunnel_state(struct drm_dp_tunnel_group_state *group_state,
> > +		 struct drm_dp_tunnel *tunnel)
> > +{
> > +	struct drm_dp_tunnel_state *tunnel_state;
> > +
> > +	tun_dbg_atomic(tunnel,
> > +		       "Adding state for tunnel %p to group state %p\n",
> > +		       tunnel, group_state);
> > +
> > +	tunnel_state = kzalloc(sizeof(*tunnel_state), GFP_KERNEL);
> > +	if (!tunnel_state)
> > +		return NULL;
> > +
> > +	tunnel_state->group_state = group_state;
> > +
> > +	drm_dp_tunnel_ref_get(tunnel, &tunnel_state->tunnel_ref);
> > +
> > +	INIT_LIST_HEAD(&tunnel_state->node);
> > +	list_add(&tunnel_state->node, &group_state->tunnel_states);
> > +
> > +	return tunnel_state;
> > +}
> > +
> > +void drm_dp_tunnel_atomic_clear_state(struct drm_dp_tunnel_state *tunnel_state)
> > +{
> > +	tun_dbg_atomic(tunnel_state->tunnel_ref.tunnel,
> > +		       "Clearing state for tunnel %p\n",
> > +		       tunnel_state->tunnel_ref.tunnel);
> > +
> > +	list_del(&tunnel_state->node);
> > +
> > +	kfree(tunnel_state->stream_bw);
> > +	drm_dp_tunnel_ref_put(&tunnel_state->tunnel_ref);
> > +
> > +	kfree(tunnel_state);
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_atomic_clear_state);
> 
> That looks like some kind of destructor so the function name doesn't
> seem to fit.

Right, will rename it.

> Is there even any need to export that since it doesn't look like
> any kind of high level thing, and it's called from a static function
> below?

Yes, the export is probably just a left-over; will make it static. Btw,
the latest version is also available at
https://github.com/ideak/linux/commits/dp_tun_bw_alloc

> > +
> > +static void clear_tunnel_group_state(struct drm_dp_tunnel_group_state *group_state)
> > +{
> > +	struct drm_dp_tunnel_state *tunnel_state;
> > +	struct drm_dp_tunnel_state *tunnel_state_tmp;
> > +
> > +	for_each_tunnel_state_safe(group_state, tunnel_state, tunnel_state_tmp)
> > +		drm_dp_tunnel_atomic_clear_state(tunnel_state);
> > +}
> > +
> > +static struct drm_dp_tunnel_state *
> > +get_tunnel_state(struct drm_dp_tunnel_group_state *group_state,
> > +		 const struct drm_dp_tunnel *tunnel)
> > +{
> > +	struct drm_dp_tunnel_state *tunnel_state;
> > +
> > +	for_each_tunnel_state(group_state, tunnel_state)
> > +		if (tunnel_state->tunnel_ref.tunnel == tunnel)
> > +			return tunnel_state;
> > +
> > +	return NULL;
> > +}
> > +
> > +static struct drm_dp_tunnel_state *
> > +get_or_add_tunnel_state(struct drm_dp_tunnel_group_state *group_state,
> > +			struct drm_dp_tunnel *tunnel)
> > +{
> > +	struct drm_dp_tunnel_state *tunnel_state;
> > +
> > +	tunnel_state = get_tunnel_state(group_state, tunnel);
> > +	if (tunnel_state)
> > +		return tunnel_state;
> > +
> > +	return add_tunnel_state(group_state, tunnel);
> > +}
> > +
> > +static struct drm_private_state *
> > +tunnel_group_duplicate_state(struct drm_private_obj *obj)
> > +{
> > +	struct drm_dp_tunnel_group_state *group_state = to_group_state(obj->state);
> > +	struct drm_dp_tunnel_state *tunnel_state;
> > +
> > +	group_state = kzalloc(sizeof(*group_state), GFP_KERNEL);
> > +	if (!group_state)
> > +		return NULL;
> > +
> > +	INIT_LIST_HEAD(&group_state->tunnel_states);
> > +
> > +	__drm_atomic_helper_private_obj_duplicate_state(obj, &group_state->base);
> > +
> > +	for_each_tunnel_state(to_group_state(obj->state), tunnel_state) {
> > +		struct drm_dp_tunnel_state *new_tunnel_state;
> > +
> > +		new_tunnel_state = get_or_add_tunnel_state(group_state,
> > +							   tunnel_state->tunnel_ref.tunnel);
> > +		if (!new_tunnel_state)
> > +			goto out_free_state;
> > +
> > +		new_tunnel_state->stream_mask = tunnel_state->stream_mask;
> > +		new_tunnel_state->stream_bw = kmemdup(tunnel_state->stream_bw,
> > +						      sizeof(*tunnel_state->stream_bw) *
> > +							hweight32(tunnel_state->stream_mask),
> > +						      GFP_KERNEL);
> > +
> > +		if (!new_tunnel_state->stream_bw)
> > +			goto out_free_state;
> > +	}
> > +
> > +	return &group_state->base;
> > +
> > +out_free_state:
> > +	clear_tunnel_group_state(group_state);
> > +	kfree(group_state);
> > +
> > +	return NULL;
> > +}
> > +
> > +static void tunnel_group_destroy_state(struct drm_private_obj *obj, struct drm_private_state *state)
> > +{
> > +	struct drm_dp_tunnel_group_state *group_state = to_group_state(state);
> > +
> > +	clear_tunnel_group_state(group_state);
> > +	kfree(group_state);
> > +}
> > +
> > +static const struct drm_private_state_funcs tunnel_group_funcs = {
> > +	.atomic_duplicate_state = tunnel_group_duplicate_state,
> > +	.atomic_destroy_state = tunnel_group_destroy_state,
> > +};
> > +
> > +struct drm_dp_tunnel_state *
> > +drm_dp_tunnel_atomic_get_state(struct drm_atomic_state *state,
> > +			       struct drm_dp_tunnel *tunnel)
> > +{
> > +	struct drm_dp_tunnel_group_state *group_state =
> > +		drm_dp_tunnel_atomic_get_group_state(state, tunnel);
> > +	struct drm_dp_tunnel_state *tunnel_state;
> > +
> > +	if (IS_ERR(group_state))
> > +		return ERR_CAST(group_state);
> > +
> > +	tunnel_state = get_or_add_tunnel_state(group_state, tunnel);
> > +	if (!tunnel_state)
> > +		return ERR_PTR(-ENOMEM);
> > +
> > +	return tunnel_state;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_state);
> > +
> > +struct drm_dp_tunnel_state *
> > +drm_dp_tunnel_atomic_get_new_state(struct drm_atomic_state *state,
> > +				   const struct drm_dp_tunnel *tunnel)
> > +{
> > +	struct drm_dp_tunnel_group_state *new_group_state;
> > +	int i;
> > +
> > +	for_each_new_group_in_state(state, new_group_state, i)
> > +		if (to_group(new_group_state->base.obj) == tunnel->group)
> > +			return get_tunnel_state(new_group_state, tunnel);
> > +
> > +	return NULL;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_new_state);
> > +
> > +static bool init_group(struct drm_dp_tunnel_mgr *mgr, struct drm_dp_tunnel_group *group)
> > +{
> > +	struct drm_dp_tunnel_group_state *group_state = kzalloc(sizeof(*group_state), GFP_KERNEL);
> > +
> > +	if (!group_state)
> > +		return false;
> > +
> > +	INIT_LIST_HEAD(&group_state->tunnel_states);
> > +
> > +	group->mgr = mgr;
> > +	group->available_bw = -1;
> > +	INIT_LIST_HEAD(&group->tunnels);
> > +
> > +	drm_atomic_private_obj_init(mgr->dev, &group->base, &group_state->base,
> > +				    &tunnel_group_funcs);
> > +
> > +	return true;
> > +}
> > +
> > +static void cleanup_group(struct drm_dp_tunnel_group *group)
> > +{
> > +	drm_atomic_private_obj_fini(&group->base);
> > +}
> > +
> > +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> > +static void check_unique_stream_ids(const struct drm_dp_tunnel_group_state *group_state)
> > +{
> > +	const struct drm_dp_tunnel_state *tunnel_state;
> > +	u32 stream_mask = 0;
> > +
> > +	for_each_tunnel_state(group_state, tunnel_state) {
> > +		drm_WARN(to_group(group_state->base.obj)->mgr->dev,
> > +			 tunnel_state->stream_mask & stream_mask,
> > +			 "[DPTUN %s]: conflicting stream IDs %x (IDs in other tunnels %x)\n",
> > +			 tunnel_state->tunnel_ref.tunnel->name,
> > +			 tunnel_state->stream_mask,
> > +			 stream_mask);
> > +
> > +		stream_mask |= tunnel_state->stream_mask;
> > +	}
> > +}
> > +#else
> > +static void check_unique_stream_ids(const struct drm_dp_tunnel_group_state *group_state)
> > +{
> > +}
> > +#endif
> > +
> > +static int stream_id_to_idx(u32 stream_mask, u8 stream_id)
> > +{
> > +	return hweight32(stream_mask & (BIT(stream_id) - 1));
> > +}
> > +
> > +static int resize_bw_array(struct drm_dp_tunnel_state *tunnel_state,
> > +			   unsigned long old_mask, unsigned long new_mask)
> > +{
> > +	unsigned long move_mask = old_mask & new_mask;
> > +	int *new_bws = NULL;
> > +	int id;
> > +
> > +	WARN_ON(!new_mask);
> > +
> > +	if (old_mask == new_mask)
> > +		return 0;
> > +
> > +	new_bws = kcalloc(hweight32(new_mask), sizeof(*new_bws), GFP_KERNEL);
> > +	if (!new_bws)
> > +		return -ENOMEM;
> > +
> > +	for_each_set_bit(id, &move_mask, BITS_PER_TYPE(move_mask))
> > +		new_bws[stream_id_to_idx(new_mask, id)] =
> > +			tunnel_state->stream_bw[stream_id_to_idx(old_mask, id)];
> > +
> > +	kfree(tunnel_state->stream_bw);
> > +	tunnel_state->stream_bw = new_bws;
> > +	tunnel_state->stream_mask = new_mask;
> > +
> > +	return 0;
> > +}
> > +
> > +static int set_stream_bw(struct drm_dp_tunnel_state *tunnel_state,
> > +			 u8 stream_id, int bw)
> > +{
> > +	int err;
> > +
> > +	err = resize_bw_array(tunnel_state,
> > +			      tunnel_state->stream_mask,
> > +			      tunnel_state->stream_mask | BIT(stream_id));
> > +	if (err)
> > +		return err;
> > +
> > +	tunnel_state->stream_bw[stream_id_to_idx(tunnel_state->stream_mask, stream_id)] = bw;
> > +
> > +	return 0;
> > +}
> > +
> > +static int clear_stream_bw(struct drm_dp_tunnel_state *tunnel_state,
> > +			   u8 stream_id)
> > +{
> > +	if (!(tunnel_state->stream_mask & ~BIT(stream_id))) {
> > +		drm_dp_tunnel_atomic_clear_state(tunnel_state);
> > +		return 0;
> > +	}
> > +
> > +	return resize_bw_array(tunnel_state,
> > +			       tunnel_state->stream_mask,
> > +			       tunnel_state->stream_mask & ~BIT(stream_id));
> > +}
> > +
> > +int drm_dp_tunnel_atomic_set_stream_bw(struct drm_atomic_state *state,
> > +					 struct drm_dp_tunnel *tunnel,
> > +					 u8 stream_id, int bw)
> > +{
> > +	struct drm_dp_tunnel_group_state *new_group_state =
> > +		drm_dp_tunnel_atomic_get_group_state(state, tunnel);
> > +	struct drm_dp_tunnel_state *tunnel_state;
> > +	int err;
> > +
> > +	if (drm_WARN_ON(tunnel->group->mgr->dev,
> > +			stream_id > BITS_PER_TYPE(tunnel_state->stream_mask)))
> > +		return -EINVAL;
> > +
> > +	tun_dbg(tunnel,
> > +		"Setting %d Mb/s for stream %d\n",
> > +		DPTUN_BW_ARG(bw), stream_id);
> > +
> > +	if (bw == 0) {
> > +		tunnel_state = get_tunnel_state(new_group_state, tunnel);
> > +		if (!tunnel_state)
> > +			return 0;
> > +
> > +		return clear_stream_bw(tunnel_state, stream_id);
> > +	}
> > +
> > +	tunnel_state = get_or_add_tunnel_state(new_group_state, tunnel);
> > +	if (drm_WARN_ON(state->dev, !tunnel_state))
> > +		return -EINVAL;
> > +
> > +	err = set_stream_bw(tunnel_state, stream_id, bw);
> > +	if (err)
> > +		return err;
> > +
> > +	check_unique_stream_ids(new_group_state);
> > +
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_atomic_set_stream_bw);
> > +
> > +int drm_dp_tunnel_atomic_get_tunnel_bw(const struct drm_dp_tunnel_state *tunnel_state)
> > +{
> > +	int tunnel_bw = 0;
> > +	int i;
> > +
> > +	for (i = 0; i < hweight32(tunnel_state->stream_mask); i++)
> > +		tunnel_bw += tunnel_state->stream_bw[i];
> > +
> > +	return tunnel_bw;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_tunnel_bw);
> > +
> > +int drm_dp_tunnel_atomic_get_group_streams_in_state(struct drm_atomic_state *state,
> > +						    const struct drm_dp_tunnel *tunnel,
> > +						    u32 *stream_mask)
> > +{
> > +	struct drm_dp_tunnel_group_state *group_state =
> > +		drm_dp_tunnel_atomic_get_group_state(state, tunnel);
> > +	struct drm_dp_tunnel_state *tunnel_state;
> > +
> > +	if (IS_ERR(group_state))
> > +		return PTR_ERR(group_state);
> > +
> > +	*stream_mask = 0;
> > +	for_each_tunnel_state(group_state, tunnel_state)
> > +		*stream_mask |= tunnel_state->stream_mask;
> > +
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_group_streams_in_state);
> > +
> > +static int
> > +drm_dp_tunnel_atomic_check_group_bw(struct drm_dp_tunnel_group_state *new_group_state,
> > +				    u32 *failed_stream_mask)
> > +{
> > +	struct drm_dp_tunnel_group *group = to_group(new_group_state->base.obj);
> > +	struct drm_dp_tunnel_state *new_tunnel_state;
> > +	u32 group_stream_mask = 0;
> > +	int group_bw = 0;
> > +
> > +	for_each_tunnel_state(new_group_state, new_tunnel_state) {
> > +		struct drm_dp_tunnel *tunnel = new_tunnel_state->tunnel_ref.tunnel;
> > +		int max_dprx_bw = get_max_dprx_bw(tunnel);
> > +		int tunnel_bw = drm_dp_tunnel_atomic_get_tunnel_bw(new_tunnel_state);
> > +
> > +		tun_dbg(tunnel,
> > +			"%sRequired %d/%d Mb/s total for tunnel.\n",
> > +			tunnel_bw > max_dprx_bw ? "Not enough BW: " : "",
> > +			DPTUN_BW_ARG(tunnel_bw),
> > +			DPTUN_BW_ARG(max_dprx_bw));
> > +
> > +		if (tunnel_bw > max_dprx_bw) {
> > +			*failed_stream_mask = new_tunnel_state->stream_mask;
> > +			return -ENOSPC;
> > +		}
> > +
> > +		group_bw += min(roundup(tunnel_bw, tunnel->bw_granularity),
> > +				max_dprx_bw);
> > +		group_stream_mask |= new_tunnel_state->stream_mask;
> > +	}
> > +
> > +	tun_grp_dbg(group,
> > +		    "%sRequired %d/%d Mb/s total for tunnel group.\n",
> > +		    group_bw > group->available_bw ? "Not enough BW: " : "",
> > +		    DPTUN_BW_ARG(group_bw),
> > +		    DPTUN_BW_ARG(group->available_bw));
> > +
> > +	if (group_bw > group->available_bw) {
> > +		*failed_stream_mask = group_stream_mask;
> > +		return -ENOSPC;
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +int drm_dp_tunnel_atomic_check_stream_bws(struct drm_atomic_state *state,
> > +					  u32 *failed_stream_mask)
> > +{
> > +	struct drm_dp_tunnel_group_state *new_group_state;
> > +	int i;
> > +
> > +	for_each_new_group_in_state(state, new_group_state, i) {
> > +		int ret;
> > +
> > +		ret = drm_dp_tunnel_atomic_check_group_bw(new_group_state,
> > +							  failed_stream_mask);
> > +		if (ret)
> > +			return ret;
> > +	}
> > +
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_atomic_check_stream_bws);
> > +
> > +static void destroy_mgr(struct drm_dp_tunnel_mgr *mgr)
> > +{
> > +	int i;
> > +
> > +	for (i = 0; i < mgr->group_count; i++) {
> > +		cleanup_group(&mgr->groups[i]);
> > +		drm_WARN_ON(mgr->dev, !list_empty(&mgr->groups[i].tunnels));
> > +	}
> > +
> > +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> > +	ref_tracker_dir_exit(&mgr->ref_tracker);
> > +#endif
> > +
> > +	kfree(mgr->groups);
> > +	kfree(mgr);
> > +}
> > +
> > +/**
> > + * drm_dp_tunnel_mgr_create - Create a DP tunnel manager
> > + * @dev: DRM device object
> > + * @max_group_count: Maximum number of tunnel groups
> > + *
> > + * Creates a DP tunnel manager.
> > + *
> > + * Returns a pointer to the tunnel manager if created successfully or NULL in
> > + * case of an error.
> > + */
> > +struct drm_dp_tunnel_mgr *
> > +drm_dp_tunnel_mgr_create(struct drm_device *dev, int max_group_count)
> > +{
> > +	struct drm_dp_tunnel_mgr *mgr = kzalloc(sizeof(*mgr), GFP_KERNEL);
> 
> I dislike it when functions that can fail or that have side effects
> are called from the variable declaration block. There's quite a bit
> of that in this patch. IMO it's far too easy to overlook such function
> calls.
> 
> > +	int i;
> > +
> 
> ie. the kzalloc() should be here IMO.

Okay, will change this and other places.
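
I.e. roughly this shape (just a sketch of the change, same pattern for the
other spots):

	struct drm_dp_tunnel_mgr *mgr;
	int i;

	/* Allocation moved out of the declaration block. */
	mgr = kzalloc(sizeof(*mgr), GFP_KERNEL);
	if (!mgr)
		return NULL;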

> 
> > +	if (!mgr)
> > +		return NULL;
> > +
> > +	mgr->dev = dev;
> > +	init_waitqueue_head(&mgr->bw_req_queue);
> > +
> > +	mgr->groups = kcalloc(max_group_count, sizeof(*mgr->groups), GFP_KERNEL);
> > +	if (!mgr->groups) {
> > +		kfree(mgr);
> > +
> > +		return NULL;
> > +	}
> > +
> > +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> > +	ref_tracker_dir_init(&mgr->ref_tracker, 16, "dptun");
> > +#endif
> > +
> > +	for (i = 0; i < max_group_count; i++) {
> > +		if (!init_group(mgr, &mgr->groups[i])) {
> > +			destroy_mgr(mgr);
> > +
> > +			return NULL;
> > +		}
> > +
> > +		mgr->group_count++;
> > +	}
> > +
> > +	return mgr;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_mgr_create);
> > +
> > +/**
> > + * drm_dp_tunnel_mgr_destroy - Destroy DP tunnel manager
> > + * @mgr: Tunnel manager object
> > + *
> > + * Destroy the tunnel manager.
> > + */
> > +void drm_dp_tunnel_mgr_destroy(struct drm_dp_tunnel_mgr *mgr)
> > +{
> > +	destroy_mgr(mgr);
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_mgr_destroy);
> > diff --git a/include/drm/display/drm_dp.h b/include/drm/display/drm_dp.h
> > index 281afff6ee4e5..8bfd5d007be8d 100644
> > --- a/include/drm/display/drm_dp.h
> > +++ b/include/drm/display/drm_dp.h
> > @@ -1382,6 +1382,66 @@
> >  #define DP_HDCP_2_2_REG_STREAM_TYPE_OFFSET	0x69494
> >  #define DP_HDCP_2_2_REG_DBG_OFFSET		0x69518
> >  
> > +/* DP-tunneling */
> > +#define DP_TUNNELING_OUI				0xe0000
> > +#define  DP_TUNNELING_OUI_BYTES				3
> > +
> > +#define DP_TUNNELING_DEV_ID				0xe0003
> > +#define  DP_TUNNELING_DEV_ID_BYTES			6
> > +
> > +#define DP_TUNNELING_HW_REV				0xe0009
> > +#define  DP_TUNNELING_HW_REV_MAJOR_SHIFT		4
> > +#define  DP_TUNNELING_HW_REV_MAJOR_MASK			(0xf << DP_TUNNELING_HW_REV_MAJOR_SHIFT)
> > +#define  DP_TUNNELING_HW_REV_MINOR_SHIFT		0
> > +#define  DP_TUNNELING_HW_REV_MINOR_MASK			(0xf << DP_TUNNELING_HW_REV_MINOR_SHIFT)
> > +
> > +#define DP_TUNNELING_SW_REV_MAJOR			0xe000a
> > +#define DP_TUNNELING_SW_REV_MINOR			0xe000b
> > +
> > +#define DP_TUNNELING_CAPABILITIES			0xe000d
> > +#define  DP_IN_BW_ALLOCATION_MODE_SUPPORT		(1 << 7)
> > +#define  DP_PANEL_REPLAY_OPTIMIZATION_SUPPORT		(1 << 6)
> > +#define  DP_TUNNELING_SUPPORT				(1 << 0)
> > +
> > +#define DP_IN_ADAPTER_INFO				0xe000e
> > +#define  DP_IN_ADAPTER_NUMBER_BITS			7
> > +#define  DP_IN_ADAPTER_NUMBER_MASK			((1 << DP_IN_ADAPTER_NUMBER_BITS) - 1)
> > +
> > +#define DP_USB4_DRIVER_ID				0xe000f
> > +#define  DP_USB4_DRIVER_ID_BITS				4
> > +#define  DP_USB4_DRIVER_ID_MASK				((1 << DP_USB4_DRIVER_ID_BITS) - 1)
> > +
> > +#define DP_USB4_DRIVER_BW_CAPABILITY			0xe0020
> > +#define  DP_USB4_DRIVER_BW_ALLOCATION_MODE_SUPPORT	(1 << 7)
> > +
> > +#define DP_IN_ADAPTER_TUNNEL_INFORMATION		0xe0021
> > +#define  DP_GROUP_ID_BITS				3
> > +#define  DP_GROUP_ID_MASK				((1 << DP_GROUP_ID_BITS) - 1)
> > +
> > +#define DP_BW_GRANULARITY				0xe0022
> > +#define  DP_BW_GRANULARITY_MASK				0x3
> > +
> > +#define DP_ESTIMATED_BW					0xe0023
> > +#define DP_ALLOCATED_BW					0xe0024
> > +
> > +#define DP_TUNNELING_STATUS				0xe0025
> > +#define  DP_BW_ALLOCATION_CAPABILITY_CHANGED		(1 << 3)
> > +#define  DP_ESTIMATED_BW_CHANGED			(1 << 2)
> > +#define  DP_BW_REQUEST_SUCCEEDED			(1 << 1)
> > +#define  DP_BW_REQUEST_FAILED				(1 << 0)
> > +
> > +#define DP_TUNNELING_MAX_LINK_RATE			0xe0028
> > +
> > +#define DP_TUNNELING_MAX_LANE_COUNT			0xe0029
> > +#define  DP_TUNNELING_MAX_LANE_COUNT_MASK		0x1f
> > +
> > +#define DP_DPTX_BW_ALLOCATION_MODE_CONTROL		0xe0030
> > +#define  DP_DISPLAY_DRIVER_BW_ALLOCATION_MODE_ENABLE	(1 << 7)
> > +#define  DP_UNMASK_BW_ALLOCATION_IRQ			(1 << 6)
> > +
> > +#define DP_REQUEST_BW					0xe0031
> > +#define  MAX_DP_REQUEST_BW				255
> > +
> >  /* LTTPR: Link Training (LT)-tunable PHY Repeaters */
> >  #define DP_LT_TUNABLE_PHY_REPEATER_FIELD_DATA_STRUCTURE_REV 0xf0000 /* 1.3 */
> >  #define DP_MAX_LINK_RATE_PHY_REPEATER			    0xf0001 /* 1.4a */
> > diff --git a/include/drm/display/drm_dp_tunnel.h b/include/drm/display/drm_dp_tunnel.h
> > new file mode 100644
> > index 0000000000000..f6449b1b4e6e9
> > --- /dev/null
> > +++ b/include/drm/display/drm_dp_tunnel.h
> > @@ -0,0 +1,270 @@
> > +/* SPDX-License-Identifier: MIT */
> > +/*
> > + * Copyright © 2023 Intel Corporation
> > + */
> > +
> > +#ifndef __DRM_DP_TUNNEL_H__
> > +#define __DRM_DP_TUNNEL_H__
> > +
> > +#include <linux/err.h>
> > +#include <linux/errno.h>
> > +#include <linux/types.h>
> > +
> > +struct drm_dp_aux;
> > +
> > +struct drm_device;
> > +
> > +struct drm_atomic_state;
> > +struct drm_dp_tunnel_mgr;
> > +struct drm_dp_tunnel_state;
> > +
> > +struct ref_tracker;
> > +
> > +struct drm_dp_tunnel_ref {
> > +	struct drm_dp_tunnel *tunnel;
> > +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> > +	struct ref_tracker *tracker;
> > +#endif
> > +};
> > +
> > +#ifdef CONFIG_DRM_DISPLAY_DP_TUNNEL
> > +
> > +struct drm_dp_tunnel *
> > +drm_dp_tunnel_get_untracked(struct drm_dp_tunnel *tunnel);
> > +void drm_dp_tunnel_put_untracked(struct drm_dp_tunnel *tunnel);
> > +
> > +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> > +struct drm_dp_tunnel *
> > +drm_dp_tunnel_get(struct drm_dp_tunnel *tunnel, struct ref_tracker **tracker);
> > +
> > +void
> > +drm_dp_tunnel_put(struct drm_dp_tunnel *tunnel, struct ref_tracker **tracker);
> > +#else
> > +#define drm_dp_tunnel_get(tunnel, tracker) \
> > +	drm_dp_tunnel_get_untracked(tunnel)
> > +
> > +#define drm_dp_tunnel_put(tunnel, tracker) \
> > +	drm_dp_tunnel_put_untracked(tunnel)
> > +
> > +#endif
> > +
> > +static inline void drm_dp_tunnel_ref_get(struct drm_dp_tunnel *tunnel,
> > +					   struct drm_dp_tunnel_ref *tunnel_ref)
> > +{
> > +	tunnel_ref->tunnel = drm_dp_tunnel_get(tunnel, &tunnel_ref->tracker);
> > +}
> > +
> > +static inline void drm_dp_tunnel_ref_put(struct drm_dp_tunnel_ref *tunnel_ref)
> > +{
> > +	drm_dp_tunnel_put(tunnel_ref->tunnel, &tunnel_ref->tracker);
> > +}
> > +
> > +struct drm_dp_tunnel *
> > +drm_dp_tunnel_detect(struct drm_dp_tunnel_mgr *mgr,
> > +		     struct drm_dp_aux *aux);
> > +int drm_dp_tunnel_destroy(struct drm_dp_tunnel *tunnel);
> > +
> > +int drm_dp_tunnel_enable_bw_alloc(struct drm_dp_tunnel *tunnel);
> > +int drm_dp_tunnel_disable_bw_alloc(struct drm_dp_tunnel *tunnel);
> > +bool drm_dp_tunnel_bw_alloc_is_enabled(const struct drm_dp_tunnel *tunnel);
> > +int drm_dp_tunnel_alloc_bw(struct drm_dp_tunnel *tunnel, int bw);
> > +int drm_dp_tunnel_check_state(struct drm_dp_tunnel *tunnel);
> > +int drm_dp_tunnel_update_state(struct drm_dp_tunnel *tunnel);
> > +
> > +void drm_dp_tunnel_set_io_error(struct drm_dp_tunnel *tunnel);
> > +
> > +int drm_dp_tunnel_handle_irq(struct drm_dp_tunnel_mgr *mgr,
> > +			     struct drm_dp_aux *aux);
> > +
> > +int drm_dp_tunnel_max_dprx_rate(const struct drm_dp_tunnel *tunnel);
> > +int drm_dp_tunnel_max_dprx_lane_count(const struct drm_dp_tunnel *tunnel);
> > +int drm_dp_tunnel_available_bw(const struct drm_dp_tunnel *tunnel);
> > +
> > +const char *drm_dp_tunnel_name(const struct drm_dp_tunnel *tunnel);
> > +
> > +struct drm_dp_tunnel_state *
> > +drm_dp_tunnel_atomic_get_state(struct drm_atomic_state *state,
> > +			       struct drm_dp_tunnel *tunnel);
> > +struct drm_dp_tunnel_state *
> > +drm_dp_tunnel_atomic_get_new_state(struct drm_atomic_state *state,
> > +				   const struct drm_dp_tunnel *tunnel);
> > +
> > +void drm_dp_tunnel_atomic_clear_state(struct drm_dp_tunnel_state *tunnel_state);
> > +
> > +int drm_dp_tunnel_atomic_set_stream_bw(struct drm_atomic_state *state,
> > +				       struct drm_dp_tunnel *tunnel,
> > +				       u8 stream_id, int bw);
> > +int drm_dp_tunnel_atomic_get_group_streams_in_state(struct drm_atomic_state *state,
> > +						    const struct drm_dp_tunnel *tunnel,
> > +						    u32 *stream_mask);
> > +
> > +int drm_dp_tunnel_atomic_check_stream_bws(struct drm_atomic_state *state,
> > +					  u32 *failed_stream_mask);
> > +
> > +int drm_dp_tunnel_atomic_get_tunnel_bw(const struct drm_dp_tunnel_state *tunnel_state);
> > +
> > +struct drm_dp_tunnel_mgr *
> > +drm_dp_tunnel_mgr_create(struct drm_device *dev, int max_group_count);
> > +void drm_dp_tunnel_mgr_destroy(struct drm_dp_tunnel_mgr *mgr);
> > +
> > +#else
> > +
> > +static inline struct drm_dp_tunnel *
> > +drm_dp_tunnel_get_untracked(struct drm_dp_tunnel *tunnel)
> > +{
> > +	return NULL;
> > +}
> > +
> > +static inline void
> > +drm_dp_tunnel_put_untracked(struct drm_dp_tunnel *tunnel) {}
> > +
> > +static inline struct drm_dp_tunnel *
> > +drm_dp_tunnel_get(struct drm_dp_tunnel *tunnel, struct ref_tracker **tracker)
> > +{
> > +	return NULL;
> > +}
> > +
> > +static inline void
> > +drm_dp_tunnel_put(struct drm_dp_tunnel *tunnel, struct ref_tracker **tracker) {}
> > +
> > +static inline void drm_dp_tunnel_ref_get(struct drm_dp_tunnel *tunnel,
> > +					   struct drm_dp_tunnel_ref *tunnel_ref) {}
> > +static inline void drm_dp_tunnel_ref_put(struct drm_dp_tunnel_ref *tunnel_ref) {}
> > +
> > +static inline struct drm_dp_tunnel *
> > +drm_dp_tunnel_detect(struct drm_dp_tunnel_mgr *mgr,
> > +		     struct drm_dp_aux *aux)
> > +{
> > +	return ERR_PTR(-EOPNOTSUPP);
> > +}
> > +
> > +static inline int
> > +drm_dp_tunnel_destroy(struct drm_dp_tunnel *tunnel)
> > +{
> > +	return 0;
> > +}
> > +
> > +static inline int drm_dp_tunnel_enable_bw_alloc(struct drm_dp_tunnel *tunnel)
> > +{
> > +	return -EOPNOTSUPP;
> > +}
> > +
> > +static inline int drm_dp_tunnel_disable_bw_alloc(struct drm_dp_tunnel *tunnel)
> > +{
> > +	return -EOPNOTSUPP;
> > +}
> > +
> > +static inline bool drm_dp_tunnel_bw_alloc_is_enabled(const struct drm_dp_tunnel *tunnel)
> > +{
> > +	return false;
> > +}
> > +
> > +static inline int
> > +drm_dp_tunnel_alloc_bw(struct drm_dp_tunnel *tunnel, int bw)
> > +{
> > +	return -EOPNOTSUPP;
> > +}
> > +
> > +static inline int
> > +drm_dp_tunnel_check_state(struct drm_dp_tunnel *tunnel)
> > +{
> > +	return -EOPNOTSUPP;
> > +}
> > +
> > +static inline int
> > +drm_dp_tunnel_update_state(struct drm_dp_tunnel *tunnel)
> > +{
> > +	return -EOPNOTSUPP;
> > +}
> > +
> > +static inline void drm_dp_tunnel_set_io_error(struct drm_dp_tunnel *tunnel) {}
> > +static inline int
> > +drm_dp_tunnel_handle_irq(struct drm_dp_tunnel_mgr *mgr,
> > +			 struct drm_dp_aux *aux)
> > +{
> > +	return -EOPNOTSUPP;
> > +}
> > +
> > +static inline int
> > +drm_dp_tunnel_max_dprx_rate(const struct drm_dp_tunnel *tunnel)
> > +{
> > +	return 0;
> > +}
> > +
> > +static inline int
> > +drm_dp_tunnel_max_dprx_lane_count(const struct drm_dp_tunnel *tunnel)
> > +{
> > +	return 0;
> > +}
> > +
> > +static inline int
> > +drm_dp_tunnel_available_bw(const struct drm_dp_tunnel *tunnel)
> > +{
> > +	return -1;
> > +}
> > +
> > +static inline const char *
> > +drm_dp_tunnel_name(const struct drm_dp_tunnel *tunnel)
> > +{
> > +	return NULL;
> > +}
> > +
> > +static inline struct drm_dp_tunnel_state *
> > +drm_dp_tunnel_atomic_get_state(struct drm_atomic_state *state,
> > +			       struct drm_dp_tunnel *tunnel)
> > +{
> > +	return ERR_PTR(-EOPNOTSUPP);
> > +}
> > +
> > +static inline struct drm_dp_tunnel_state *
> > +drm_dp_tunnel_atomic_get_new_state(struct drm_atomic_state *state,
> > +				   const struct drm_dp_tunnel *tunnel)
> > +{
> > +	return ERR_PTR(-EOPNOTSUPP);
> > +}
> > +
> > +static inline void
> > +drm_dp_tunnel_atomic_clear_state(struct drm_dp_tunnel_state *tunnel_state) {}
> > +
> > +static inline int
> > +drm_dp_tunnel_atomic_set_stream_bw(struct drm_atomic_state *state,
> > +				   struct drm_dp_tunnel *tunnel,
> > +				   u8 stream_id, int bw)
> > +{
> > +	return -EOPNOTSUPP;
> > +}
> > +
> > +static inline int
> > +drm_dp_tunnel_atomic_get_group_streams_in_state(struct drm_atomic_state *state,
> > +						const struct drm_dp_tunnel *tunnel,
> > +						u32 *stream_mask)
> > +{
> > +	return -EOPNOTSUPP;
> > +}
> > +
> > +static inline int
> > +drm_dp_tunnel_atomic_check_stream_bws(struct drm_atomic_state *state,
> > +				      u32 *failed_stream_mask)
> > +{
> > +	return -EOPNOTSUPP;
> > +}
> > +
> > +static inline int
> > +drm_dp_tunnel_atomic_get_tunnel_bw(const struct drm_dp_tunnel_state *tunnel_state)
> > +{
> > +	return 0;
> > +}
> > +
> > +static inline struct drm_dp_tunnel_mgr *
> > +drm_dp_tunnel_mgr_create(struct drm_device *dev, int max_group_count)
> > +{
> > +	return ERR_PTR(-EOPNOTSUPP);
> > +}
> > +
> > +static inline
> > +void drm_dp_tunnel_mgr_destroy(struct drm_dp_tunnel_mgr *mgr) {}
> > +
> > +
> > +#endif /* CONFIG_DRM_DISPLAY_DP_TUNNEL */
> > +
> > +#endif /* __DRM_DP_TUNNEL_H__ */
> > -- 
> > 2.39.2
> 
> -- 
> Ville Syrjälä
> Intel


* Re: [PATCH 12/19] drm/i915/dp: Add DP tunnel atomic state and check BW limit
  2024-01-23 10:28 ` [PATCH 12/19] drm/i915/dp: Add DP tunnel atomic state and check BW limit Imre Deak
@ 2024-02-05 16:11   ` Ville Syrjälä
  2024-02-05 17:52     ` Imre Deak
  0 siblings, 1 reply; 61+ messages in thread
From: Ville Syrjälä @ 2024-02-05 16:11 UTC (permalink / raw)
  To: Imre Deak; +Cc: intel-gfx, dri-devel

On Tue, Jan 23, 2024 at 12:28:43PM +0200, Imre Deak wrote:
> Add the atomic state during a modeset required to enable the DP tunnel
> BW allocation mode on links where such a tunnel was detected.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_atomic.c  |  8 ++++++++
>  drivers/gpu/drm/i915/display/intel_display.c | 19 +++++++++++++++++++
>  drivers/gpu/drm/i915/display/intel_link_bw.c |  5 +++++
>  3 files changed, 32 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_atomic.c b/drivers/gpu/drm/i915/display/intel_atomic.c
> index 96ab37e158995..4236740ede9ed 100644
> --- a/drivers/gpu/drm/i915/display/intel_atomic.c
> +++ b/drivers/gpu/drm/i915/display/intel_atomic.c
> @@ -260,6 +260,10 @@ intel_crtc_duplicate_state(struct drm_crtc *crtc)
>  	if (crtc_state->post_csc_lut)
>  		drm_property_blob_get(crtc_state->post_csc_lut);
>  
> +	if (crtc_state->dp_tunnel_ref.tunnel)
> +		drm_dp_tunnel_ref_get(old_crtc_state->dp_tunnel_ref.tunnel,

I'd probably s/old_crtc_state/crtc_state/ here. Same pointer, but
looks out of place given everyone else just operates on 'crtc_state' 
here.

> +					&crtc_state->dp_tunnel_ref);

Shame we have to have this ref wrapper. But I guess no clean
way to have a magic tracked pointer type that works like a
normal pointer in C...

> +
>  	crtc_state->update_pipe = false;
>  	crtc_state->update_m_n = false;
>  	crtc_state->update_lrr = false;
> @@ -311,6 +315,8 @@ intel_crtc_destroy_state(struct drm_crtc *crtc,
>  
>  	__drm_atomic_helper_crtc_destroy_state(&crtc_state->uapi);
>  	intel_crtc_free_hw_state(crtc_state);
> +	if (crtc_state->dp_tunnel_ref.tunnel)
> +		drm_dp_tunnel_ref_put(&crtc_state->dp_tunnel_ref);
>  	kfree(crtc_state);
>  }
>  
> @@ -346,6 +352,8 @@ void intel_atomic_state_clear(struct drm_atomic_state *s)
>  	/* state->internal not reset on purpose */
>  
>  	state->dpll_set = state->modeset = false;
> +
> +	intel_dp_tunnel_atomic_cleanup_inherited_state(state);

This seems to be in the wrong patch?

>  }
>  
>  struct intel_crtc_state *
> diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
> index b9f985a5e705b..46b27a32c8640 100644
> --- a/drivers/gpu/drm/i915/display/intel_display.c
> +++ b/drivers/gpu/drm/i915/display/intel_display.c
> @@ -33,6 +33,7 @@
>  #include <linux/string_helpers.h>
>  
>  #include <drm/display/drm_dp_helper.h>
> +#include <drm/display/drm_dp_tunnel.h>
>  #include <drm/drm_atomic.h>
>  #include <drm/drm_atomic_helper.h>
>  #include <drm/drm_atomic_uapi.h>
> @@ -73,6 +74,7 @@
>  #include "intel_dp.h"
>  #include "intel_dp_link_training.h"
>  #include "intel_dp_mst.h"
> +#include "intel_dp_tunnel.h"
>  #include "intel_dpll.h"
>  #include "intel_dpll_mgr.h"
>  #include "intel_dpt.h"
> @@ -4490,6 +4492,8 @@ copy_bigjoiner_crtc_state_modeset(struct intel_atomic_state *state,
>  	saved_state->crc_enabled = slave_crtc_state->crc_enabled;
>  
>  	intel_crtc_free_hw_state(slave_crtc_state);
> +	if (slave_crtc_state->dp_tunnel_ref.tunnel)
> +		drm_dp_tunnel_ref_put(&slave_crtc_state->dp_tunnel_ref);
>  	memcpy(slave_crtc_state, saved_state, sizeof(*slave_crtc_state));
>  	kfree(saved_state);
>  
> @@ -4505,6 +4509,10 @@ copy_bigjoiner_crtc_state_modeset(struct intel_atomic_state *state,
>  		      &master_crtc_state->hw.adjusted_mode);
>  	slave_crtc_state->hw.scaling_filter = master_crtc_state->hw.scaling_filter;
>  
> +	if (master_crtc_state->dp_tunnel_ref.tunnel)
> +		drm_dp_tunnel_ref_get(master_crtc_state->dp_tunnel_ref.tunnel,
> +					&slave_crtc_state->dp_tunnel_ref);
> +
>  	copy_bigjoiner_crtc_state_nomodeset(state, slave_crtc);
>  
>  	slave_crtc_state->uapi.mode_changed = master_crtc_state->uapi.mode_changed;
> @@ -4533,6 +4541,13 @@ intel_crtc_prepare_cleared_state(struct intel_atomic_state *state,
>  	/* free the old crtc_state->hw members */
>  	intel_crtc_free_hw_state(crtc_state);
>  
> +	if (crtc_state->dp_tunnel_ref.tunnel) {
> +		drm_dp_tunnel_atomic_set_stream_bw(&state->base,
> +						   crtc_state->dp_tunnel_ref.tunnel,
> +						   crtc->pipe, 0);
> +		drm_dp_tunnel_ref_put(&crtc_state->dp_tunnel_ref);
> +	}
> +
>  	/* FIXME: before the switch to atomic started, a new pipe_config was
>  	 * kzalloc'd. Code that depends on any field being zero should be
>  	 * fixed, so that the crtc_state can be safely duplicated. For now,
> @@ -5374,6 +5389,10 @@ static int intel_modeset_pipe(struct intel_atomic_state *state,
>  	if (ret)
>  		return ret;
>  
> +	ret = intel_dp_tunnel_atomic_add_state_for_crtc(state, crtc);
> +	if (ret)
> +		return ret;
> +
>  	ret = intel_dp_mst_add_topology_state_for_crtc(state, crtc);
>  	if (ret)
>  		return ret;
> diff --git a/drivers/gpu/drm/i915/display/intel_link_bw.c b/drivers/gpu/drm/i915/display/intel_link_bw.c
> index 9c6d35a405a18..5b539ba996ddf 100644
> --- a/drivers/gpu/drm/i915/display/intel_link_bw.c
> +++ b/drivers/gpu/drm/i915/display/intel_link_bw.c
> @@ -8,6 +8,7 @@
>  #include "intel_atomic.h"
>  #include "intel_display_types.h"
>  #include "intel_dp_mst.h"
> +#include "intel_dp_tunnel.h"
>  #include "intel_fdi.h"
>  #include "intel_link_bw.h"
>  
> @@ -149,6 +150,10 @@ static int check_all_link_config(struct intel_atomic_state *state,
>  	if (ret)
>  		return ret;
>  
> +	ret = intel_dp_tunnel_atomic_check_link(state, limits);
> +	if (ret)
> +		return ret;
> +
>  	ret = intel_fdi_atomic_check_link(state, limits);
>  	if (ret)
>  		return ret;
> -- 
> 2.39.2

-- 
Ville Syrjälä
Intel


* Re: [PATCH 02/19] drm/dp: Add support for DP tunneling
  2024-01-31 18:49     ` Imre Deak
@ 2024-02-05 16:13       ` Ville Syrjälä
  2024-02-05 17:15         ` Imre Deak
  0 siblings, 1 reply; 61+ messages in thread
From: Ville Syrjälä @ 2024-02-05 16:13 UTC (permalink / raw)
  To: Imre Deak; +Cc: intel-gfx, dri-devel

On Wed, Jan 31, 2024 at 08:49:16PM +0200, Imre Deak wrote:
> On Wed, Jan 31, 2024 at 06:09:04PM +0200, Ville Syrjälä wrote:
> > On Tue, Jan 23, 2024 at 12:28:33PM +0200, Imre Deak wrote:
> > > +static void untrack_tunnel_ref(struct drm_dp_tunnel *tunnel,
> > > +			       struct ref_tracker **tracker)
> > > +{
> > > +	ref_tracker_free(&tunnel->group->mgr->ref_tracker,
> > > +			 tracker);
> > > +}
> > > +
> > > +struct drm_dp_tunnel *
> > > +drm_dp_tunnel_get_untracked(struct drm_dp_tunnel *tunnel)
> > > +{
> > > +	track_tunnel_ref(tunnel, NULL);
> > > +
> > > +	return tunnel_get(tunnel);
> > > +}
> > > +EXPORT_SYMBOL(drm_dp_tunnel_get_untracked);
> > 
> > Why do these exist?
> 
> They implement drm_dp_tunnel_get()/put() if
> CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE=n.

Why does that kind of irrelevant detail need to be visible
in the exported api?

-- 
Ville Syrjälä
Intel


* Re: [PATCH 02/19] drm/dp: Add support for DP tunneling
  2024-02-05 16:13       ` Ville Syrjälä
@ 2024-02-05 17:15         ` Imre Deak
  2024-02-05 22:17           ` Ville Syrjälä
  0 siblings, 1 reply; 61+ messages in thread
From: Imre Deak @ 2024-02-05 17:15 UTC (permalink / raw)
  To: Ville Syrjälä; +Cc: intel-gfx, dri-devel

On Mon, Feb 05, 2024 at 06:13:30PM +0200, Ville Syrjälä wrote:
> On Wed, Jan 31, 2024 at 08:49:16PM +0200, Imre Deak wrote:
> > On Wed, Jan 31, 2024 at 06:09:04PM +0200, Ville Syrjälä wrote:
> > > On Tue, Jan 23, 2024 at 12:28:33PM +0200, Imre Deak wrote:
> > > > +static void untrack_tunnel_ref(struct drm_dp_tunnel *tunnel,
> > > > +			       struct ref_tracker **tracker)
> > > > +{
> > > > +	ref_tracker_free(&tunnel->group->mgr->ref_tracker,
> > > > +			 tracker);
> > > > +}
> > > > +
> > > > +struct drm_dp_tunnel *
> > > > +drm_dp_tunnel_get_untracked(struct drm_dp_tunnel *tunnel)
> > > > +{
> > > > +	track_tunnel_ref(tunnel, NULL);
> > > > +
> > > > +	return tunnel_get(tunnel);
> > > > +}
> > > > +EXPORT_SYMBOL(drm_dp_tunnel_get_untracked);
> > > 
> > > Why do these exist?
> > 
> > They implement drm_dp_tunnel_get()/put() if
> > CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE=n.
> 
> Why does that kind of irrelevant detail need to be visible
> in the exported api?

In non-debug builds the ref_tracker object isn't needed and so
drm_dp_tunnel_ref won't contain a pointer to it either.
drm_dp_tunnel_get/put_untracked() provide a way to get/put a tunnel
reference without having to pass a ref_tracker pointer.
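
For example, hypothetical call sites only to illustrate the difference (not
code from the series):

	/* A drm_dp_tunnel_ref is at hand, its tracker slot is passed along: */
	drm_dp_tunnel_ref_get(tunnel, &crtc_state->dp_tunnel_ref);

	/* Only a bare tunnel pointer is stored, no tracker slot to pass: */
	intel_dp->tunnel = drm_dp_tunnel_get_untracked(tunnel);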

> 
> -- 
> Ville Syrjälä
> Intel


* Re: [PATCH 12/19] drm/i915/dp: Add DP tunnel atomic state and check BW limit
  2024-02-05 16:11   ` Ville Syrjälä
@ 2024-02-05 17:52     ` Imre Deak
  0 siblings, 0 replies; 61+ messages in thread
From: Imre Deak @ 2024-02-05 17:52 UTC (permalink / raw)
  To: Ville Syrjälä; +Cc: intel-gfx, dri-devel

On Mon, Feb 05, 2024 at 06:11:00PM +0200, Ville Syrjälä wrote:
> On Tue, Jan 23, 2024 at 12:28:43PM +0200, Imre Deak wrote:
> > Add the atomic state during a modeset required to enable the DP tunnel
> > BW allocation mode on links where such a tunnel was detected.
> > 
> > Signed-off-by: Imre Deak <imre.deak@intel.com>
> > ---
> >  drivers/gpu/drm/i915/display/intel_atomic.c  |  8 ++++++++
> >  drivers/gpu/drm/i915/display/intel_display.c | 19 +++++++++++++++++++
> >  drivers/gpu/drm/i915/display/intel_link_bw.c |  5 +++++
> >  3 files changed, 32 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/i915/display/intel_atomic.c b/drivers/gpu/drm/i915/display/intel_atomic.c
> > index 96ab37e158995..4236740ede9ed 100644
> > --- a/drivers/gpu/drm/i915/display/intel_atomic.c
> > +++ b/drivers/gpu/drm/i915/display/intel_atomic.c
> > @@ -260,6 +260,10 @@ intel_crtc_duplicate_state(struct drm_crtc *crtc)
> >  	if (crtc_state->post_csc_lut)
> >  		drm_property_blob_get(crtc_state->post_csc_lut);
> >  
> > +	if (crtc_state->dp_tunnel_ref.tunnel)
> > +		drm_dp_tunnel_ref_get(old_crtc_state->dp_tunnel_ref.tunnel,
> 
> I'd probably s/old_crtc_state/crtc_state/ here. Same pointer, but
> looks out of place given everyone else just operates on 'crtc_state' 
> here.

Ok, will change that.

> > +					&crtc_state->dp_tunnel_ref);
> 
> Shame we have to have this ref wrapper. But I guess no clean
> way to have a magic tracked pointer type that works like a
> normal pointer in C...

I suppose returning a pointer to a kmalloced drm_dp_tunnel_ref from
drm_tunnel_get() and freeing this in drm_tunnel_put() would be one way,
but that imo defeats the purpose of the tracker information being valid
even after put() (so that ref_tracker can print information about where
a particular reference was already dropped).
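
A sketch of that variant, just to spell out the problem (the internal helper
names follow the patch, tunnel_put() is assumed):

	struct drm_dp_tunnel_ref *drm_dp_tunnel_get(struct drm_dp_tunnel *tunnel)
	{
		struct drm_dp_tunnel_ref *ref;

		ref = kmalloc(sizeof(*ref), GFP_KERNEL);
		if (!ref)
			return NULL;	/* get() could now also fail */

		ref->tunnel = tunnel_get(tunnel);
		track_tunnel_ref(tunnel, &ref->tracker);

		return ref;
	}

	void drm_dp_tunnel_put(struct drm_dp_tunnel_ref *ref)
	{
		untrack_tunnel_ref(ref->tunnel, &ref->tracker);
		tunnel_put(ref->tunnel);

		/*
		 * The drm_dp_tunnel_ref and its tracker pointer are freed
		 * here, losing the information ref_tracker could otherwise
		 * use to report where this reference was dropped.
		 */
		kfree(ref);
	}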

> 
> > +
> >  	crtc_state->update_pipe = false;
> >  	crtc_state->update_m_n = false;
> >  	crtc_state->update_lrr = false;
> > @@ -311,6 +315,8 @@ intel_crtc_destroy_state(struct drm_crtc *crtc,
> >  
> >  	__drm_atomic_helper_crtc_destroy_state(&crtc_state->uapi);
> >  	intel_crtc_free_hw_state(crtc_state);
> > +	if (crtc_state->dp_tunnel_ref.tunnel)
> > +		drm_dp_tunnel_ref_put(&crtc_state->dp_tunnel_ref);
> >  	kfree(crtc_state);
> >  }
> >  
> > @@ -346,6 +352,8 @@ void intel_atomic_state_clear(struct drm_atomic_state *s)
> >  	/* state->internal not reset on purpose */
> >  
> >  	state->dpll_set = state->modeset = false;
> > +
> > +	intel_dp_tunnel_atomic_cleanup_inherited_state(state);
> 
> This seems to be in the wrong patch?

Yes, I guess a more logical place is in

[PATCH 14/19] drm/i915/dp: Compute DP tunel BW during encoder state computation

where the state is added, will move it there.

> 
> >  }
> >  
> >  struct intel_crtc_state *
> > diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
> > index b9f985a5e705b..46b27a32c8640 100644
> > --- a/drivers/gpu/drm/i915/display/intel_display.c
> > +++ b/drivers/gpu/drm/i915/display/intel_display.c
> > @@ -33,6 +33,7 @@
> >  #include <linux/string_helpers.h>
> >  
> >  #include <drm/display/drm_dp_helper.h>
> > +#include <drm/display/drm_dp_tunnel.h>
> >  #include <drm/drm_atomic.h>
> >  #include <drm/drm_atomic_helper.h>
> >  #include <drm/drm_atomic_uapi.h>
> > @@ -73,6 +74,7 @@
> >  #include "intel_dp.h"
> >  #include "intel_dp_link_training.h"
> >  #include "intel_dp_mst.h"
> > +#include "intel_dp_tunnel.h"
> >  #include "intel_dpll.h"
> >  #include "intel_dpll_mgr.h"
> >  #include "intel_dpt.h"
> > @@ -4490,6 +4492,8 @@ copy_bigjoiner_crtc_state_modeset(struct intel_atomic_state *state,
> >  	saved_state->crc_enabled = slave_crtc_state->crc_enabled;
> >  
> >  	intel_crtc_free_hw_state(slave_crtc_state);
> > +	if (slave_crtc_state->dp_tunnel_ref.tunnel)
> > +		drm_dp_tunnel_ref_put(&slave_crtc_state->dp_tunnel_ref);
> >  	memcpy(slave_crtc_state, saved_state, sizeof(*slave_crtc_state));
> >  	kfree(saved_state);
> >  
> > @@ -4505,6 +4509,10 @@ copy_bigjoiner_crtc_state_modeset(struct intel_atomic_state *state,
> >  		      &master_crtc_state->hw.adjusted_mode);
> >  	slave_crtc_state->hw.scaling_filter = master_crtc_state->hw.scaling_filter;
> >  
> > +	if (master_crtc_state->dp_tunnel_ref.tunnel)
> > +		drm_dp_tunnel_ref_get(master_crtc_state->dp_tunnel_ref.tunnel,
> > +					&slave_crtc_state->dp_tunnel_ref);
> > +
> >  	copy_bigjoiner_crtc_state_nomodeset(state, slave_crtc);
> >  
> >  	slave_crtc_state->uapi.mode_changed = master_crtc_state->uapi.mode_changed;
> > @@ -4533,6 +4541,13 @@ intel_crtc_prepare_cleared_state(struct intel_atomic_state *state,
> >  	/* free the old crtc_state->hw members */
> >  	intel_crtc_free_hw_state(crtc_state);
> >  
> > +	if (crtc_state->dp_tunnel_ref.tunnel) {
> > +		drm_dp_tunnel_atomic_set_stream_bw(&state->base,
> > +						   crtc_state->dp_tunnel_ref.tunnel,
> > +						   crtc->pipe, 0);
> > +		drm_dp_tunnel_ref_put(&crtc_state->dp_tunnel_ref);
> > +	}
> > +
> >  	/* FIXME: before the switch to atomic started, a new pipe_config was
> >  	 * kzalloc'd. Code that depends on any field being zero should be
> >  	 * fixed, so that the crtc_state can be safely duplicated. For now,
> > @@ -5374,6 +5389,10 @@ static int intel_modeset_pipe(struct intel_atomic_state *state,
> >  	if (ret)
> >  		return ret;
> >  
> > +	ret = intel_dp_tunnel_atomic_add_state_for_crtc(state, crtc);
> > +	if (ret)
> > +		return ret;
> > +
> >  	ret = intel_dp_mst_add_topology_state_for_crtc(state, crtc);
> >  	if (ret)
> >  		return ret;
> > diff --git a/drivers/gpu/drm/i915/display/intel_link_bw.c b/drivers/gpu/drm/i915/display/intel_link_bw.c
> > index 9c6d35a405a18..5b539ba996ddf 100644
> > --- a/drivers/gpu/drm/i915/display/intel_link_bw.c
> > +++ b/drivers/gpu/drm/i915/display/intel_link_bw.c
> > @@ -8,6 +8,7 @@
> >  #include "intel_atomic.h"
> >  #include "intel_display_types.h"
> >  #include "intel_dp_mst.h"
> > +#include "intel_dp_tunnel.h"
> >  #include "intel_fdi.h"
> >  #include "intel_link_bw.h"
> >  
> > @@ -149,6 +150,10 @@ static int check_all_link_config(struct intel_atomic_state *state,
> >  	if (ret)
> >  		return ret;
> >  
> > +	ret = intel_dp_tunnel_atomic_check_link(state, limits);
> > +	if (ret)
> > +		return ret;
> > +
> >  	ret = intel_fdi_atomic_check_link(state, limits);
> >  	if (ret)
> >  		return ret;
> > -- 
> > 2.39.2
> 
> -- 
> Ville Syrjälä
> Intel


* Re: [PATCH 02/19] drm/dp: Add support for DP tunneling
  2024-02-05 17:15         ` Imre Deak
@ 2024-02-05 22:17           ` Ville Syrjälä
  0 siblings, 0 replies; 61+ messages in thread
From: Ville Syrjälä @ 2024-02-05 22:17 UTC (permalink / raw)
  To: Imre Deak; +Cc: intel-gfx, dri-devel

On Mon, Feb 05, 2024 at 07:15:17PM +0200, Imre Deak wrote:
> On Mon, Feb 05, 2024 at 06:13:30PM +0200, Ville Syrjälä wrote:
> > On Wed, Jan 31, 2024 at 08:49:16PM +0200, Imre Deak wrote:
> > > On Wed, Jan 31, 2024 at 06:09:04PM +0200, Ville Syrjälä wrote:
> > > > On Tue, Jan 23, 2024 at 12:28:33PM +0200, Imre Deak wrote:
> > > > > +static void untrack_tunnel_ref(struct drm_dp_tunnel *tunnel,
> > > > > +			       struct ref_tracker **tracker)
> > > > > +{
> > > > > +	ref_tracker_free(&tunnel->group->mgr->ref_tracker,
> > > > > +			 tracker);
> > > > > +}
> > > > > +
> > > > > +struct drm_dp_tunnel *
> > > > > +drm_dp_tunnel_get_untracked(struct drm_dp_tunnel *tunnel)
> > > > > +{
> > > > > +	track_tunnel_ref(tunnel, NULL);
> > > > > +
> > > > > +	return tunnel_get(tunnel);
> > > > > +}
> > > > > +EXPORT_SYMBOL(drm_dp_tunnel_get_untracked);
> > > > 
> > > > Why do these exist?
> > > 
> > > They implement drm_dp_tunnel_get()/put() if
> > > CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE=n.
> > 
> > Why does that kind of irrelevant detail need to be visible
> > in the exported api?
> 
> In non-debug builds the ref_tracker object isn't needed and so
> drm_dp_tunnel_ref won't contain a pointer to it either.

Since it's just a pointer I don't see much point in making
things more complicated by leaving it out.
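
I.e. just keep it there unconditionally, something like (sketch):

	struct drm_dp_tunnel_ref {
		struct drm_dp_tunnel *tunnel;
		/* Unused (never allocated) in non-debug builds, but always present. */
		struct ref_tracker *tracker;
	};

	/* And then a single exported get/put flavor is enough: */
	struct drm_dp_tunnel *
	drm_dp_tunnel_get(struct drm_dp_tunnel *tunnel, struct ref_tracker **tracker);
	void drm_dp_tunnel_put(struct drm_dp_tunnel *tunnel, struct ref_tracker **tracker);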

> drm_dp_tunnel_get/put_untracked() provide a way to get/put a tunnel
> reference without having to pass a ref_tracker pointer.
> 
> > 
> > -- 
> > Ville Syrjälä
> > Intel

-- 
Ville Syrjälä
Intel


* Re: [PATCH 11/19] drm/i915/dp: Add support for DP tunnel BW allocation
  2024-01-23 10:28 ` [PATCH 11/19] drm/i915/dp: Add support for DP tunnel BW allocation Imre Deak
@ 2024-02-05 22:47   ` Ville Syrjälä
  2024-02-06 11:58     ` Imre Deak
  2024-02-06 23:08   ` Ville Syrjälä
  1 sibling, 1 reply; 61+ messages in thread
From: Ville Syrjälä @ 2024-02-05 22:47 UTC (permalink / raw)
  To: Imre Deak; +Cc: intel-gfx, dri-devel

On Tue, Jan 23, 2024 at 12:28:42PM +0200, Imre Deak wrote:
> +static int check_inherited_tunnel_state(struct intel_atomic_state *state,
> +					struct intel_dp *intel_dp,
> +					const struct intel_digital_connector_state *old_conn_state)
> +{
> +	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
> +	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
> +	const struct intel_connector *connector =
> +		to_intel_connector(old_conn_state->base.connector);
> +	struct intel_crtc *old_crtc;
> +	const struct intel_crtc_state *old_crtc_state;
> +
> +	/*
> +	 * If a BWA tunnel gets detected only after the corresponding
> +	 * connector got enabled already without a BWA tunnel, or a different
> +	 * BWA tunnel (which was removed meanwhile) the old CRTC state won't
> +	 * contain the state of the current tunnel. This tunnel still has a
> +	 * reserved BW, which needs to be released, add the state for such
> +	 * inherited tunnels separately only to this atomic state.
> +	 */
> +	if (!intel_dp_tunnel_bw_alloc_is_enabled(intel_dp))
> +		return 0;
> +
> +	if (!old_conn_state->base.crtc)
> +		return 0;
> +
> +	old_crtc = to_intel_crtc(old_conn_state->base.crtc);
> +	old_crtc_state = intel_atomic_get_old_crtc_state(state, old_crtc);
> +
> +	if (!old_crtc_state->hw.active ||
> +	    old_crtc_state->dp_tunnel_ref.tunnel == intel_dp->tunnel)
> +		return 0;
> +
> +	drm_dbg_kms(&i915->drm,
> +		    "[DPTUN %s][CONNECTOR:%d:%s][ENCODER:%d:%s][CRTC:%d:%s] Adding state for inherited tunnel %p\n",
> +		    drm_dp_tunnel_name(intel_dp->tunnel),
> +		    connector->base.base.id,
> +		    connector->base.name,
> +		    encoder->base.base.id,
> +		    encoder->base.name,
> +		    old_crtc->base.base.id,
> +		    old_crtc->base.name,
> +		    intel_dp->tunnel);
> +
> +	return add_inherited_tunnel_state(state, intel_dp->tunnel, old_crtc);

I still strongly dislike this "tunnels are magically created by detect
behind our back" approach. IMO in an ideal world we'd only ever create the
tunnels during modeset/sanitize. What was the reason that didn't work again?
I think you explained it to me in person at least once already, but can't
remember anymore...

-- 
Ville Syrjälä
Intel


* Re: [PATCH 11/19] drm/i915/dp: Add support for DP tunnel BW allocation
  2024-02-05 22:47   ` Ville Syrjälä
@ 2024-02-06 11:58     ` Imre Deak
  0 siblings, 0 replies; 61+ messages in thread
From: Imre Deak @ 2024-02-06 11:58 UTC (permalink / raw)
  To: Ville Syrjälä; +Cc: intel-gfx, dri-devel

On Tue, Feb 06, 2024 at 12:47:22AM +0200, Ville Syrjälä wrote:
> On Tue, Jan 23, 2024 at 12:28:42PM +0200, Imre Deak wrote:
> > +static int check_inherited_tunnel_state(struct intel_atomic_state *state,
> > +					struct intel_dp *intel_dp,
> > +					const struct intel_digital_connector_state *old_conn_state)
> > +{
> > +	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
> > +	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
> > +	const struct intel_connector *connector =
> > +		to_intel_connector(old_conn_state->base.connector);
> > +	struct intel_crtc *old_crtc;
> > +	const struct intel_crtc_state *old_crtc_state;
> > +
> > +	/*
> > +	 * If a BWA tunnel gets detected only after the corresponding
> > +	 * connector got enabled already without a BWA tunnel, or a different
> > +	 * BWA tunnel (which was removed meanwhile) the old CRTC state won't
> > +	 * contain the state of the current tunnel. This tunnel still has a
> > +	 * reserved BW, which needs to be released, add the state for such
> > +	 * inherited tunnels separately only to this atomic state.
> > +	 */
> > +	if (!intel_dp_tunnel_bw_alloc_is_enabled(intel_dp))
> > +		return 0;
> > +
> > +	if (!old_conn_state->base.crtc)
> > +		return 0;
> > +
> > +	old_crtc = to_intel_crtc(old_conn_state->base.crtc);
> > +	old_crtc_state = intel_atomic_get_old_crtc_state(state, old_crtc);
> > +
> > +	if (!old_crtc_state->hw.active ||
> > +	    old_crtc_state->dp_tunnel_ref.tunnel == intel_dp->tunnel)
> > +		return 0;
> > +
> > +	drm_dbg_kms(&i915->drm,
> > +		    "[DPTUN %s][CONNECTOR:%d:%s][ENCODER:%d:%s][CRTC:%d:%s] Adding state for inherited tunnel %p\n",
> > +		    drm_dp_tunnel_name(intel_dp->tunnel),
> > +		    connector->base.base.id,
> > +		    connector->base.name,
> > +		    encoder->base.base.id,
> > +		    encoder->base.name,
> > +		    old_crtc->base.base.id,
> > +		    old_crtc->base.name,
> > +		    intel_dp->tunnel);
> > +
> > +	return add_inherited_tunnel_state(state, intel_dp->tunnel, old_crtc);
> 
> I still strongly dislike this "tunnels are magically created by detect
> behind our back" approach. IMO in an ideal world we'd only ever create the
> tunnels during modeset/sanitize. What was the reason that didn't work again?
> I think you explained it to me in person at least once already, but can't
> remember anymore...

The tunnel information (which group the tunnel belongs to and hence how
much BW it can use) is needed already at detect time: to filter the
connectors' mode list during connector probing, and to pass/fail an
atomic check of connectors that go through a tunnel/group, based on the
modes the connectors use and the BW these require vs. the available BW
of the tunnel group.

The atomic state for the tunnel - with the required BW through it - is
only created/added during a modeset.
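
As a rough illustration of the detect time part (sketch only, the helper
below is hypothetical and the checks in the series are more involved):

	static enum drm_mode_status
	intel_dp_tunnel_mode_valid(const struct intel_dp *intel_dp, int required_rate)
	{
		int tunnel_bw;

		if (!intel_dp->tunnel)
			return MODE_OK;

		/* Valid only because the tunnel was already detected/probed. */
		tunnel_bw = drm_dp_tunnel_available_bw(intel_dp->tunnel);
		if (tunnel_bw >= 0 && required_rate > tunnel_bw)
			return MODE_CLOCK_HIGH;

		return MODE_OK;
	}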

> -- 
> Ville Syrjälä
> Intel


* RE: [PATCH 01/19] drm/dp: Add drm_dp_max_dprx_data_rate()
  2024-01-26 13:28     ` Imre Deak
@ 2024-02-06 20:23       ` Shankar, Uma
  0 siblings, 0 replies; 61+ messages in thread
From: Shankar, Uma @ 2024-02-06 20:23 UTC (permalink / raw)
  To: Deak, Imre, Ville Syrjälä; +Cc: intel-gfx, dri-devel



> -----Original Message-----
> From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf Of Imre
> Deak
> Sent: Friday, January 26, 2024 6:58 PM
> To: Ville Syrjälä <ville.syrjala@linux.intel.com>
> Cc: intel-gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
> Subject: Re: [PATCH 01/19] drm/dp: Add drm_dp_max_dprx_data_rate()
> 
> On Fri, Jan 26, 2024 at 01:36:02PM +0200, Ville Syrjälä wrote:
> > On Tue, Jan 23, 2024 at 12:28:32PM +0200, Imre Deak wrote:
> > > Copy intel_dp_max_data_rate() to DRM core. It will be needed by a
> > > follow-up DP tunnel patch, checking the maximum rate the DPRX (sink)
> > > supports. Accordingly use the drm_dp_max_dprx_data_rate() name for
> > > clarity. This patchset will also switch calling the new DRM function
> > > in i915 instead of intel_dp_max_data_rate().
> > >
> > > Signed-off-by: Imre Deak <imre.deak@intel.com>
> > > ---
> > >  drivers/gpu/drm/display/drm_dp_helper.c | 58
> +++++++++++++++++++++++++
> > >  include/drm/display/drm_dp_helper.h     |  2 +
> > >  2 files changed, 60 insertions(+)
> > >
> > > diff --git a/drivers/gpu/drm/display/drm_dp_helper.c
> > > b/drivers/gpu/drm/display/drm_dp_helper.c
> > > index b1ca3a1100dab..24911243d4d3a 100644
> > > --- a/drivers/gpu/drm/display/drm_dp_helper.c
> > > +++ b/drivers/gpu/drm/display/drm_dp_helper.c
> > > @@ -4058,3 +4058,61 @@ int drm_dp_bw_channel_coding_efficiency(bool
> is_uhbr)
> > >  		return 800000;
> > >  }
> > >  EXPORT_SYMBOL(drm_dp_bw_channel_coding_efficiency);
> > > +
> > > +/*
> > > + * Given a link rate and lanes, get the data bandwidth.
> > > + *
> > > + * Data bandwidth is the actual payload rate, which depends on the
> > > +data
> > > + * bandwidth efficiency and the link rate.
> > > + *
> > > + * For 8b/10b channel encoding, SST and non-FEC, the data bandwidth
> > > +efficiency
> > > + * is 80%. For example, for a 1.62 Gbps link, 1.62*10^9 bps * 0.80
> > > +* (1/8) =
> > > + * 162000 kBps. With 8-bit symbols, we have 162000 kHz symbol
> > > +clock. Just by
> > > + * coincidence, the port clock in kHz matches the data bandwidth in
> > > +kBps, and
> > > + * they equal the link bit rate in Gbps multiplied by 100000. (Note
> > > +that this no
> > > + * longer holds for data bandwidth as soon as FEC or MST is taken
> > > +into account!)
> > > + *
> > > + * For 128b/132b channel encoding, the data bandwidth efficiency is
> > > +96.71%. For
> > > + * example, for a 10 Gbps link, 10*10^9 bps * 0.9671 * (1/8) =
> > > +1208875
> > > + * kBps. With 32-bit symbols, we have 312500 kHz symbol clock. The
> > > +value 1000000
> > > + * does not match the symbol clock, the port clock (not even if you
> > > +think in
> > > + * terms of a byte clock), nor the data bandwidth. It only matches
> > > +the link bit
> > > + * rate in units of 10000 bps.
> > > + *
> > > + * Note that protocol layers above the DPRX link level considered
> > > +here can
> > > + * further limit the maximum data rate. Such layers are the MST
> > > +topology (with
> > > + * limits on the link between the source and first branch device as
> > > +well as on
> > > + * the whole MST path until the DPRX link) and (Thunderbolt) DP
> > > +tunnels -
> > > + * which in turn can encapsulate an MST link with its own limit -
> > > +with each
> > > + * SST or MST encapsulated tunnel sharing the BW of a tunnel group.
> > > + *
> > > + * TODO: Add support for querying the max data rate with the above
> > > +limits as
> > > + * well.
> > > + *
> > > + * Returns the maximum data rate in kBps units.
> > > + */
> > > +int drm_dp_max_dprx_data_rate(int max_link_rate, int max_lanes) {
> > > +	int ch_coding_efficiency =
> > > +
> 	drm_dp_bw_channel_coding_efficiency(drm_dp_is_uhbr_rate(max_link_
> rate));
> > > +	int max_link_rate_kbps = max_link_rate * 10;
> >
> > That x10 value seems rather pointless.
> 
> I suppose the point was to make the units clearer, but it could be clarified instead
> in max_link_rate's documentation, which is missing atm.
> 
> > > +
> > > +	/*
> > > +	 * UHBR rates always use 128b/132b channel encoding, and have
> > > +	 * 97.71% data bandwidth efficiency. Consider max_link_rate the
> > > +	 * link bit rate in units of 10000 bps.
> > > +	 */
> > > +	/*
> > > +	 * Lower than UHBR rates always use 8b/10b channel encoding, and have
> > > +	 * 80% data bandwidth efficiency for SST non-FEC. However, this turns
> > > +	 * out to be a nop by coincidence:
> > > +	 *
> > > +	 *	int max_link_rate_kbps = max_link_rate * 10;
> > > +	 *	max_link_rate_kbps =
> DIV_ROUND_DOWN_ULL(max_link_rate_kbps * 8, 10);
> > > +	 *	max_link_rate = max_link_rate_kbps / 8;
> > > +	 */
> >
> > Not sure why we are repeating the nuts and bolts detils in the
> > comments so much? Doesn't drm_dp_bw_channel_coding_efficiency()
> > explain all this already?
> 
> I simply copied the function, but yes in this context there is duplication, thanks for
> reading through all that. Will consolidate both the above and the bigger comment
> before the function with the existing docs here.

Changes look good to me. With above addressed:
Reviewed-by: Uma Shankar <uma.shankar@intel.com>

> >
> > > +	return DIV_ROUND_DOWN_ULL(mul_u32_u32(max_link_rate_kbps *
> max_lanes,
> > > +					      ch_coding_efficiency),
> > > +				  1000000 * 8);
> > > +}
> > > +EXPORT_SYMBOL(drm_dp_max_dprx_data_rate);
> > > diff --git a/include/drm/display/drm_dp_helper.h
> > > b/include/drm/display/drm_dp_helper.h
> > > index 863b2e7add29e..454ae7517419a 100644
> > > --- a/include/drm/display/drm_dp_helper.h
> > > +++ b/include/drm/display/drm_dp_helper.h
> > > @@ -813,4 +813,6 @@ int drm_dp_bw_overhead(int lane_count, int hactive,
> > >  		       int bpp_x16, unsigned long flags);  int
> > > drm_dp_bw_channel_coding_efficiency(bool is_uhbr);
> > >
> > > +int drm_dp_max_dprx_data_rate(int max_link_rate, int max_lanes);
> > > +
> > >  #endif /* _DRM_DP_HELPER_H_ */
> > > --
> > > 2.39.2
> >
> > --
> > Ville Syrjälä
> > Intel


* RE: [PATCH 04/19] drm/i915/dp: Use drm_dp_max_dprx_data_rate()
  2024-01-23 10:28 ` [PATCH 04/19] drm/i915/dp: Use drm_dp_max_dprx_data_rate() Imre Deak
@ 2024-02-06 20:27   ` Shankar, Uma
  0 siblings, 0 replies; 61+ messages in thread
From: Shankar, Uma @ 2024-02-06 20:27 UTC (permalink / raw)
  To: Deak, Imre, intel-gfx; +Cc: dri-devel



> -----Original Message-----
> From: Intel-gfx <intel-gfx-bounces@lists.freedesktop.org> On Behalf Of Imre
> Deak
> Sent: Tuesday, January 23, 2024 3:59 PM
> To: intel-gfx@lists.freedesktop.org
> Cc: dri-devel@lists.freedesktop.org
> Subject: [PATCH 04/19] drm/i915/dp: Use drm_dp_max_dprx_data_rate()
> 
> Instead of intel_dp_max_data_rate() use the equivalent
> drm_dp_max_dprx_data_rate() which was copied from the former one in a
> previous patch.

Looks good to me.
Reviewed-by: Uma Shankar <uma.shankar@intel.com>

> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_display.c |  2 +-
>  drivers/gpu/drm/i915/display/intel_dp.c      | 62 +++-----------------
>  drivers/gpu/drm/i915/display/intel_dp.h      |  1 -
>  drivers/gpu/drm/i915/display/intel_dp_mst.c  |  2 +-
>  4 files changed, 10 insertions(+), 57 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_display.c
> b/drivers/gpu/drm/i915/display/intel_display.c
> index 0caebbb3e2dbb..b9f985a5e705b 100644
> --- a/drivers/gpu/drm/i915/display/intel_display.c
> +++ b/drivers/gpu/drm/i915/display/intel_display.c
> @@ -2478,7 +2478,7 @@ intel_link_compute_m_n(u16 bits_per_pixel_x16, int
> nlanes,
>  	u32 link_symbol_clock = intel_dp_link_symbol_clock(link_clock);
>  	u32 data_m = intel_dp_effective_data_rate(pixel_clock,
> bits_per_pixel_x16,
>  						  bw_overhead);
> -	u32 data_n = intel_dp_max_data_rate(link_clock, nlanes);
> +	u32 data_n = drm_dp_max_dprx_data_rate(link_clock, nlanes);
> 
>  	/*
>  	 * Windows/BIOS uses fixed M/N values always. Follow suit.
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c
> b/drivers/gpu/drm/i915/display/intel_dp.c
> index 4e36c2c39888e..c7b06a9b197cc 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -383,52 +383,6 @@ int intel_dp_effective_data_rate(int pixel_clock, int
> bpp_x16,
>  				1000000 * 16 * 8);
>  }
> 
> -/*
> - * Given a link rate and lanes, get the data bandwidth.
> - *
> - * Data bandwidth is the actual payload rate, which depends on the data
> - * bandwidth efficiency and the link rate.
> - *
> - * For 8b/10b channel encoding, SST and non-FEC, the data bandwidth efficiency
> - * is 80%. For example, for a 1.62 Gbps link, 1.62*10^9 bps * 0.80 * (1/8) =
> - * 162000 kBps. With 8-bit symbols, we have 162000 kHz symbol clock. Just by
> - * coincidence, the port clock in kHz matches the data bandwidth in kBps, and
> - * they equal the link bit rate in Gbps multiplied by 100000. (Note that this no
> - * longer holds for data bandwidth as soon as FEC or MST is taken into account!)
> - *
> - * For 128b/132b channel encoding, the data bandwidth efficiency is 96.71%. For
> - * example, for a 10 Gbps link, 10*10^9 bps * 0.9671 * (1/8) = 1208875
> - * kBps. With 32-bit symbols, we have 312500 kHz symbol clock. The value
> 1000000
> - * does not match the symbol clock, the port clock (not even if you think in
> - * terms of a byte clock), nor the data bandwidth. It only matches the link bit
> - * rate in units of 10000 bps.
> - */
> -int
> -intel_dp_max_data_rate(int max_link_rate, int max_lanes)
> -{
> -	int ch_coding_efficiency =
> -		drm_dp_bw_channel_coding_efficiency(drm_dp_is_uhbr_rate(max_link_rate));
> -	int max_link_rate_kbps = max_link_rate * 10;
> -
> -	/*
> -	 * UHBR rates always use 128b/132b channel encoding, and have
> -	 * 97.71% data bandwidth efficiency. Consider max_link_rate the
> -	 * link bit rate in units of 10000 bps.
> -	 */
> -	/*
> -	 * Lower than UHBR rates always use 8b/10b channel encoding, and have
> -	 * 80% data bandwidth efficiency for SST non-FEC. However, this turns
> -	 * out to be a nop by coincidence:
> -	 *
> -	 *	int max_link_rate_kbps = max_link_rate * 10;
> -	 *	max_link_rate_kbps = DIV_ROUND_DOWN_ULL(max_link_rate_kbps * 8, 10);
> -	 *	max_link_rate = max_link_rate_kbps / 8;
> -	 */
> -	return DIV_ROUND_DOWN_ULL(mul_u32_u32(max_link_rate_kbps * max_lanes,
> -					      ch_coding_efficiency),
> -				  1000000 * 8);
> -}
> -
>  bool intel_dp_can_bigjoiner(struct intel_dp *intel_dp)
>  {
>  	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
> @@ -658,7 +612,7 @@ static bool intel_dp_can_link_train_fallback_for_edp(struct intel_dp *intel_dp,
>  	int mode_rate, max_rate;
> 
>  	mode_rate = intel_dp_link_required(fixed_mode->clock, 18);
> -	max_rate = intel_dp_max_data_rate(link_rate, lane_count);
> +	max_rate = drm_dp_max_dprx_data_rate(link_rate, lane_count);
>  	if (mode_rate > max_rate)
>  		return false;
> 
> @@ -1260,7 +1214,7 @@ intel_dp_mode_valid(struct drm_connector
> *_connector,
>  	max_link_clock = intel_dp_max_link_rate(intel_dp);
>  	max_lanes = intel_dp_max_lane_count(intel_dp);
> 
> -	max_rate = intel_dp_max_data_rate(max_link_clock, max_lanes);
> +	max_rate = drm_dp_max_dprx_data_rate(max_link_clock, max_lanes);
>  	mode_rate = intel_dp_link_required(target_clock,
>  					   intel_dp_mode_min_output_bpp(connector, mode));
> 
> @@ -1610,8 +1564,8 @@ intel_dp_compute_link_config_wide(struct intel_dp
> *intel_dp,
>  			for (lane_count = limits->min_lane_count;
>  			     lane_count <= limits->max_lane_count;
>  			     lane_count <<= 1) {
> -				link_avail = intel_dp_max_data_rate(link_rate,
> -								    lane_count);
> +				link_avail = drm_dp_max_dprx_data_rate(link_rate,
> +								       lane_count);
> 
>  				if (mode_rate <= link_avail) {
>  					pipe_config->lane_count = lane_count;
> @@ -2462,8 +2416,8 @@ intel_dp_compute_link_config(struct intel_encoder
> *encoder,
>  			    "DP link rate required %i available %i\n",
>  			    intel_dp_link_required(adjusted_mode->crtc_clock,
>  						   to_bpp_int_roundup(pipe_config->dsc.compressed_bpp_x16)),
> -			    intel_dp_max_data_rate(pipe_config->port_clock,
> -						   pipe_config->lane_count));
> +			    drm_dp_max_dprx_data_rate(pipe_config->port_clock,
> +						      pipe_config->lane_count));
>  	} else {
>  		drm_dbg_kms(&i915->drm, "DP lane count %d clock %d bpp
> %d\n",
>  			    pipe_config->lane_count, pipe_config->port_clock,
> @@ -2473,8 +2427,8 @@ intel_dp_compute_link_config(struct intel_encoder
> *encoder,
>  			    "DP link rate required %i available %i\n",
>  			    intel_dp_link_required(adjusted_mode->crtc_clock,
>  						   pipe_config->pipe_bpp),
> -			    intel_dp_max_data_rate(pipe_config->port_clock,
> -						   pipe_config->lane_count));
> +			    drm_dp_max_dprx_data_rate(pipe_config->port_clock,
> +						      pipe_config->lane_count));
>  	}
>  	return 0;
>  }
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.h
> b/drivers/gpu/drm/i915/display/intel_dp.h
> index 105c2086310db..46f79747f807d 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.h
> +++ b/drivers/gpu/drm/i915/display/intel_dp.h
> @@ -113,7 +113,6 @@ bool intel_dp_get_colorimetry_status(struct intel_dp *intel_dp);
>  int intel_dp_link_required(int pixel_clock, int bpp);
>  int intel_dp_effective_data_rate(int pixel_clock, int bpp_x16,
>  				 int bw_overhead);
> -int intel_dp_max_data_rate(int max_link_rate, int max_lanes);
>  bool intel_dp_can_bigjoiner(struct intel_dp *intel_dp);
>  bool intel_dp_needs_vsc_sdp(const struct intel_crtc_state *crtc_state,
>  			    const struct drm_connector_state *conn_state);
> diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
> index b15e43ebf138b..cfcc157b7d41d 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
> @@ -1295,7 +1295,7 @@ intel_dp_mst_mode_valid_ctx(struct drm_connector
> *connector,
>  	max_link_clock = intel_dp_max_link_rate(intel_dp);
>  	max_lanes = intel_dp_max_lane_count(intel_dp);
> 
> -	max_rate = intel_dp_max_data_rate(max_link_clock, max_lanes);
> +	max_rate = drm_dp_max_dprx_data_rate(max_link_clock, max_lanes);
>  	mode_rate = intel_dp_link_required(mode->clock, min_bpp);
> 
>  	ret = drm_modeset_lock(&mgr->base.lock, ctx);
> --
> 2.39.2


^ permalink raw reply	[flat|nested] 61+ messages in thread

* RE: [PATCH 05/19] drm/i915/dp: Factor out intel_dp_config_required_rate()
  2024-01-23 10:28 ` [PATCH 05/19] drm/i915/dp: Factor out intel_dp_config_required_rate() Imre Deak
@ 2024-02-06 20:32   ` Shankar, Uma
  0 siblings, 0 replies; 61+ messages in thread
From: Shankar, Uma @ 2024-02-06 20:32 UTC (permalink / raw)
  To: Deak, Imre, intel-gfx; +Cc: dri-devel



> -----Original Message-----
> From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf Of Imre
> Deak
> Sent: Tuesday, January 23, 2024 3:59 PM
> To: intel-gfx@lists.freedesktop.org
> Cc: dri-devel@lists.freedesktop.org
> Subject: [PATCH 05/19] drm/i915/dp: Factor out intel_dp_config_required_rate()
> 
> Factor out intel_dp_config_required_rate() used by a follow-up patch enabling the
> DP tunnel BW allocation mode.

Looks good to me.
Reviewed-by: Uma Shankar <uma.shankar@intel.com>

> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 43 +++++++++++--------------
> drivers/gpu/drm/i915/display/intel_dp.h |  1 +
>  2 files changed, 20 insertions(+), 24 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c
> b/drivers/gpu/drm/i915/display/intel_dp.c
> index c7b06a9b197cc..0a5c60428ffb7 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2338,6 +2338,17 @@ intel_dp_compute_config_limits(struct intel_dp
> *intel_dp,
>  						       limits);
>  }
> 
> +int intel_dp_config_required_rate(const struct intel_crtc_state *crtc_state)
> +{
> +	const struct drm_display_mode *adjusted_mode =
> +		&crtc_state->hw.adjusted_mode;
> +	int bpp = crtc_state->dsc.compression_enable ?
> +		to_bpp_int_roundup(crtc_state->dsc.compressed_bpp_x16) :
> +		crtc_state->pipe_bpp;
> +
> +	return intel_dp_link_required(adjusted_mode->crtc_clock, bpp); }
> +
>  static int
>  intel_dp_compute_link_config(struct intel_encoder *encoder,
>  			     struct intel_crtc_state *pipe_config,
> @@ -2405,31 +2416,15 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
>  			return ret;
>  	}
> 
> -	if (pipe_config->dsc.compression_enable) {
> -		drm_dbg_kms(&i915->drm,
> -			    "DP lane count %d clock %d Input bpp %d Compressed
> bpp " BPP_X16_FMT "\n",
> -			    pipe_config->lane_count, pipe_config->port_clock,
> -			    pipe_config->pipe_bpp,
> -			    BPP_X16_ARGS(pipe_config-
> >dsc.compressed_bpp_x16));
> +	drm_dbg_kms(&i915->drm,
> +		    "DP lane count %d clock %d bpp input %d compressed "
> BPP_X16_FMT " link rate required %d available %d\n",
> +		    pipe_config->lane_count, pipe_config->port_clock,
> +		    pipe_config->pipe_bpp,
> +		    BPP_X16_ARGS(pipe_config->dsc.compressed_bpp_x16),
> +		    intel_dp_config_required_rate(pipe_config),
> +		    drm_dp_max_dprx_data_rate(pipe_config->port_clock,
> +					      pipe_config->lane_count));
> 
> -		drm_dbg_kms(&i915->drm,
> -			    "DP link rate required %i available %i\n",
> -			    intel_dp_link_required(adjusted_mode->crtc_clock,
> -						   to_bpp_int_roundup(pipe_config->dsc.compressed_bpp_x16)),
> -			    drm_dp_max_dprx_data_rate(pipe_config->port_clock,
> -						      pipe_config->lane_count));
> -	} else {
> -		drm_dbg_kms(&i915->drm, "DP lane count %d clock %d bpp
> %d\n",
> -			    pipe_config->lane_count, pipe_config->port_clock,
> -			    pipe_config->pipe_bpp);
> -
> -		drm_dbg_kms(&i915->drm,
> -			    "DP link rate required %i available %i\n",
> -			    intel_dp_link_required(adjusted_mode->crtc_clock,
> -						   pipe_config->pipe_bpp),
> -			    drm_dp_max_dprx_data_rate(pipe_config->port_clock,
> -						      pipe_config->lane_count));
> -	}
>  	return 0;
>  }
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.h
> b/drivers/gpu/drm/i915/display/intel_dp.h
> index 46f79747f807d..37274e3c2902f 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.h
> +++ b/drivers/gpu/drm/i915/display/intel_dp.h
> @@ -102,6 +102,7 @@ void intel_dp_mst_suspend(struct drm_i915_private *dev_priv);
>  void intel_dp_mst_resume(struct drm_i915_private *dev_priv);
>  int intel_dp_max_link_rate(struct intel_dp *intel_dp);
>  int intel_dp_max_lane_count(struct intel_dp *intel_dp);
> +int intel_dp_config_required_rate(const struct intel_crtc_state *crtc_state);
>  int intel_dp_rate_select(struct intel_dp *intel_dp, int rate);
> 
>  void intel_dp_compute_rate(struct intel_dp *intel_dp, int port_clock,
> --
> 2.39.2


^ permalink raw reply	[flat|nested] 61+ messages in thread

* RE: [PATCH 06/19] drm/i915/dp: Export intel_dp_max_common_rate/lane_count()
  2024-01-23 10:28 ` [PATCH 06/19] drm/i915/dp: Export intel_dp_max_common_rate/lane_count() Imre Deak
@ 2024-02-06 20:34   ` Shankar, Uma
  0 siblings, 0 replies; 61+ messages in thread
From: Shankar, Uma @ 2024-02-06 20:34 UTC (permalink / raw)
  To: Deak, Imre, intel-gfx; +Cc: dri-devel



> -----Original Message-----
> From: Intel-gfx <intel-gfx-bounces@lists.freedesktop.org> On Behalf Of Imre
> Deak
> Sent: Tuesday, January 23, 2024 3:59 PM
> To: intel-gfx@lists.freedesktop.org
> Cc: dri-devel@lists.freedesktop.org
> Subject: [PATCH 06/19] drm/i915/dp: Export
> intel_dp_max_common_rate/lane_count()
> 
> Export intel_dp_max_common_rate() and intel_dp_max_lane_count() used by a
> follow-up patch enabling the DP tunnel BW allocation mode.

Looks good to me.
Reviewed-by: Uma Shankar <uma.shankar@intel.com>

> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 4 ++--
> drivers/gpu/drm/i915/display/intel_dp.h | 2 ++
>  2 files changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c
> b/drivers/gpu/drm/i915/display/intel_dp.c
> index 0a5c60428ffb7..f40706c5d1aad 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -309,7 +309,7 @@ static int intel_dp_common_rate(struct intel_dp *intel_dp, int index)
>  }
> 
>  /* Theoretical max between source and sink */
> -static int intel_dp_max_common_rate(struct intel_dp *intel_dp)
> +int intel_dp_max_common_rate(struct intel_dp *intel_dp)
>  {
>  	return intel_dp_common_rate(intel_dp, intel_dp->num_common_rates - 1);
>  }
> @@ -326,7 +326,7 @@ static int intel_dp_max_source_lane_count(struct intel_digital_port *dig_port)
>  }
> 
>  /* Theoretical max between source and sink */
> -static int intel_dp_max_common_lane_count(struct intel_dp *intel_dp)
> +int intel_dp_max_common_lane_count(struct intel_dp *intel_dp)
>  {
>  	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
>  	int source_max = intel_dp_max_source_lane_count(dig_port);
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.h
> b/drivers/gpu/drm/i915/display/intel_dp.h
> index 37274e3c2902f..a7906d8738c4a 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.h
> +++ b/drivers/gpu/drm/i915/display/intel_dp.h
> @@ -104,6 +104,8 @@ int intel_dp_max_link_rate(struct intel_dp *intel_dp);
>  int intel_dp_max_lane_count(struct intel_dp *intel_dp);
>  int intel_dp_config_required_rate(const struct intel_crtc_state *crtc_state);
>  int intel_dp_rate_select(struct intel_dp *intel_dp, int rate);
> +int intel_dp_max_common_rate(struct intel_dp *intel_dp);
> +int intel_dp_max_common_lane_count(struct intel_dp *intel_dp);
> 
>  void intel_dp_compute_rate(struct intel_dp *intel_dp, int port_clock,
>  			   u8 *link_bw, u8 *rate_select);
> --
> 2.39.2


^ permalink raw reply	[flat|nested] 61+ messages in thread

* RE: [PATCH 07/19] drm/i915/dp: Factor out intel_dp_update_sink_caps()
  2024-01-23 10:28 ` [PATCH 07/19] drm/i915/dp: Factor out intel_dp_update_sink_caps() Imre Deak
@ 2024-02-06 20:35   ` Shankar, Uma
  0 siblings, 0 replies; 61+ messages in thread
From: Shankar, Uma @ 2024-02-06 20:35 UTC (permalink / raw)
  To: Deak, Imre, intel-gfx; +Cc: dri-devel



> -----Original Message-----
> From: Intel-gfx <intel-gfx-bounces@lists.freedesktop.org> On Behalf Of Imre
> Deak
> Sent: Tuesday, January 23, 2024 3:59 PM
> To: intel-gfx@lists.freedesktop.org
> Cc: dri-devel@lists.freedesktop.org
> Subject: [PATCH 07/19] drm/i915/dp: Factor out intel_dp_update_sink_caps()
> 
> Factor out a function updating the sink's link rate and lane count capabilities, used
> by a follow-up patch enabling the DP tunnel BW allocation mode.

Looks good to me.
Reviewed-by: Uma Shankar <uma.shankar@intel.com>

> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 11 ++++++++---
> drivers/gpu/drm/i915/display/intel_dp.h |  1 +
>  2 files changed, 9 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c
> b/drivers/gpu/drm/i915/display/intel_dp.c
> index f40706c5d1aad..23434d0aba188 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -3949,6 +3949,13 @@ intel_dp_has_sink_count(struct intel_dp *intel_dp)
>  					  &intel_dp->desc);
>  }
> 
> +void intel_dp_update_sink_caps(struct intel_dp *intel_dp) {
> +	intel_dp_set_sink_rates(intel_dp);
> +	intel_dp_set_max_sink_lane_count(intel_dp);
> +	intel_dp_set_common_rates(intel_dp);
> +}
> +
>  static bool
>  intel_dp_get_dpcd(struct intel_dp *intel_dp)
>  {
> @@ -3965,9 +3972,7 @@ intel_dp_get_dpcd(struct intel_dp *intel_dp)
>  		drm_dp_read_desc(&intel_dp->aux, &intel_dp->desc,
>  				 drm_dp_is_branch(intel_dp->dpcd));
> 
> -		intel_dp_set_sink_rates(intel_dp);
> -		intel_dp_set_max_sink_lane_count(intel_dp);
> -		intel_dp_set_common_rates(intel_dp);
> +		intel_dp_update_sink_caps(intel_dp);
>  	}
> 
>  	if (intel_dp_has_sink_count(intel_dp)) {
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
> index a7906d8738c4a..49553e43add22 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.h
> +++ b/drivers/gpu/drm/i915/display/intel_dp.h
> @@ -106,6 +106,7 @@ int intel_dp_config_required_rate(const struct intel_crtc_state *crtc_state);
>  int intel_dp_rate_select(struct intel_dp *intel_dp, int rate);
>  int intel_dp_max_common_rate(struct intel_dp *intel_dp);
>  int intel_dp_max_common_lane_count(struct intel_dp *intel_dp);
> +void intel_dp_update_sink_caps(struct intel_dp *intel_dp);
> 
>  void intel_dp_compute_rate(struct intel_dp *intel_dp, int port_clock,
>  			   u8 *link_bw, u8 *rate_select);
> --
> 2.39.2


^ permalink raw reply	[flat|nested] 61+ messages in thread

* RE: [PATCH 08/19] drm/i915/dp: Factor out intel_dp_read_dprx_caps()
  2024-01-23 10:28 ` [PATCH 08/19] drm/i915/dp: Factor out intel_dp_read_dprx_caps() Imre Deak
@ 2024-02-06 20:36   ` Shankar, Uma
  0 siblings, 0 replies; 61+ messages in thread
From: Shankar, Uma @ 2024-02-06 20:36 UTC (permalink / raw)
  To: Deak, Imre, intel-gfx; +Cc: dri-devel



> -----Original Message-----
> From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf Of Imre
> Deak
> Sent: Tuesday, January 23, 2024 3:59 PM
> To: intel-gfx@lists.freedesktop.org
> Cc: dri-devel@lists.freedesktop.org
> Subject: [PATCH 08/19] drm/i915/dp: Factor out intel_dp_read_dprx_caps()
> 
> Factor out a function to read the sink's DPRX capabilities used by a follow-up
> patch enabling the DP tunnel BW allocation mode.

Looks good to me.
Reviewed-by: Uma Shankar <uma.shankar@intel.com>

> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  .../drm/i915/display/intel_dp_link_training.c | 30 +++++++++++++++----
> .../drm/i915/display/intel_dp_link_training.h |  1 +
>  2 files changed, 26 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp_link_training.c
> b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
> index 7b140cbf8dd31..fb84ca98bb7ab 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp_link_training.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
> @@ -162,6 +162,28 @@ static int intel_dp_init_lttpr(struct intel_dp *intel_dp,
> const u8 dpcd[DP_RECEI
>  	return lttpr_count;
>  }
> 
> +int intel_dp_read_dprx_caps(struct intel_dp *intel_dp, u8 dpcd[DP_RECEIVER_CAP_SIZE])
> +{
> +	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
> +
> +	if (intel_dp_is_edp(intel_dp))
> +		return 0;
> +
> +	/*
> +	 * Detecting LTTPRs must be avoided on platforms with an AUX timeout
> +	 * period < 3.2ms. (see DP Standard v2.0, 2.11.2, 3.6.6.1).
> +	 */
> +	if (DISPLAY_VER(i915) >= 10 && !IS_GEMINILAKE(i915))
> +		if (drm_dp_dpcd_probe(&intel_dp->aux,
> +				      DP_LT_TUNABLE_PHY_REPEATER_FIELD_DATA_STRUCTURE_REV))
> +			return -EIO;
> +
> +	if (drm_dp_read_dpcd_caps(&intel_dp->aux, dpcd))
> +		return -EIO;
> +
> +	return 0;
> +}
> +
>  /**
>   * intel_dp_init_lttpr_and_dprx_caps - detect LTTPR and DPRX caps, init the
> LTTPR link training mode
>   * @intel_dp: Intel DP struct
> @@ -192,12 +214,10 @@ int intel_dp_init_lttpr_and_dprx_caps(struct intel_dp
> *intel_dp)
>  	if (!intel_dp_is_edp(intel_dp) &&
>  	    (DISPLAY_VER(i915) >= 10 && !IS_GEMINILAKE(i915))) {
>  		u8 dpcd[DP_RECEIVER_CAP_SIZE];
> +		int err = intel_dp_read_dprx_caps(intel_dp, dpcd);
> 
> -		if (drm_dp_dpcd_probe(&intel_dp->aux, DP_LT_TUNABLE_PHY_REPEATER_FIELD_DATA_STRUCTURE_REV))
> -			return -EIO;
> -
> -		if (drm_dp_read_dpcd_caps(&intel_dp->aux, dpcd))
> -			return -EIO;
> +		if (err != 0)
> +			return err;
> 
>  		lttpr_count = intel_dp_init_lttpr(intel_dp, dpcd);
>  	}
> diff --git a/drivers/gpu/drm/i915/display/intel_dp_link_training.h
> b/drivers/gpu/drm/i915/display/intel_dp_link_training.h
> index 2c8f2775891b0..19836a8a4f904 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp_link_training.h
> +++ b/drivers/gpu/drm/i915/display/intel_dp_link_training.h
> @@ -11,6 +11,7 @@
>  struct intel_crtc_state;
>  struct intel_dp;
> 
> +int intel_dp_read_dprx_caps(struct intel_dp *intel_dp, u8 dpcd[DP_RECEIVER_CAP_SIZE]);
>  int intel_dp_init_lttpr_and_dprx_caps(struct intel_dp *intel_dp);
> 
>  void intel_dp_get_adjust_train(struct intel_dp *intel_dp,
> --
> 2.39.2


^ permalink raw reply	[flat|nested] 61+ messages in thread

* RE: [PATCH 09/19] drm/i915/dp: Add intel_dp_max_link_data_rate()
  2024-01-23 10:28 ` [PATCH 09/19] drm/i915/dp: Add intel_dp_max_link_data_rate() Imre Deak
@ 2024-02-06 20:37   ` Shankar, Uma
  0 siblings, 0 replies; 61+ messages in thread
From: Shankar, Uma @ 2024-02-06 20:37 UTC (permalink / raw)
  To: Deak, Imre, intel-gfx; +Cc: dri-devel



> -----Original Message-----
> From: Intel-gfx <intel-gfx-bounces@lists.freedesktop.org> On Behalf Of Imre
> Deak
> Sent: Tuesday, January 23, 2024 3:59 PM
> To: intel-gfx@lists.freedesktop.org
> Cc: dri-devel@lists.freedesktop.org
> Subject: [PATCH 09/19] drm/i915/dp: Add intel_dp_max_link_data_rate()
> 
> Add intel_dp_max_link_data_rate() to get the link BW, as opposed to the sink
> DPRX BW, for use by a follow-up patch enabling the DP tunnel BW allocation
> mode. The link BW can be below the DPRX BW due to a BW limitation on a link
> shared by multiple sinks.

Looks good to me.
Reviewed-by: Uma Shankar <uma.shankar@intel.com>

> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c     | 32 +++++++++++++++++----
>  drivers/gpu/drm/i915/display/intel_dp.h     |  2 ++
>  drivers/gpu/drm/i915/display/intel_dp_mst.c |  3 +-
>  3 files changed, 30 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c
> b/drivers/gpu/drm/i915/display/intel_dp.c
> index 23434d0aba188..9cd675c6d0ee8 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -383,6 +383,22 @@ int intel_dp_effective_data_rate(int pixel_clock, int
> bpp_x16,
>  				1000000 * 16 * 8);
>  }
> 
> +/**
> + * intel_dp_max_link_data_rate: Calculate the maximum rate for the given link params
> + * @intel_dp: Intel DP object
> + * @max_dprx_rate: Maximum data rate of the DPRX
> + * @max_dprx_lanes: Maximum lane count of the DPRX
> + *
> + * Calculate the maximum data rate for the provided link parameters.
> + *
> + * Returns the maximum data rate in kBps units.
> + */
> +int intel_dp_max_link_data_rate(struct intel_dp *intel_dp,
> +				int max_dprx_rate, int max_dprx_lanes)
> +{
> +	return drm_dp_max_dprx_data_rate(max_dprx_rate, max_dprx_lanes);
> +}
> +
>  bool intel_dp_can_bigjoiner(struct intel_dp *intel_dp)
>  {
>  	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
> @@ -612,7 +628,7 @@ static bool intel_dp_can_link_train_fallback_for_edp(struct intel_dp *intel_dp,
>  	int mode_rate, max_rate;
> 
>  	mode_rate = intel_dp_link_required(fixed_mode->clock, 18);
> -	max_rate = drm_dp_max_dprx_data_rate(link_rate, lane_count);
> +	max_rate = intel_dp_max_link_data_rate(intel_dp, link_rate, lane_count);
>  	if (mode_rate > max_rate)
>  		return false;
> 
> @@ -1214,7 +1230,8 @@ intel_dp_mode_valid(struct drm_connector
> *_connector,
>  	max_link_clock = intel_dp_max_link_rate(intel_dp);
>  	max_lanes = intel_dp_max_lane_count(intel_dp);
> 
> -	max_rate = drm_dp_max_dprx_data_rate(max_link_clock, max_lanes);
> +	max_rate = intel_dp_max_link_data_rate(intel_dp, max_link_clock, max_lanes);
> +
>  	mode_rate = intel_dp_link_required(target_clock,
>  					   intel_dp_mode_min_output_bpp(connector, mode));
> 
> @@ -1564,8 +1581,10 @@ intel_dp_compute_link_config_wide(struct intel_dp
> *intel_dp,
>  			for (lane_count = limits->min_lane_count;
>  			     lane_count <= limits->max_lane_count;
>  			     lane_count <<= 1) {
> -				link_avail = drm_dp_max_dprx_data_rate(link_rate,
> -								       lane_count);
> +				link_avail = intel_dp_max_link_data_rate(intel_dp,
> +									 link_rate,
> +									 lane_count);
> +
> 
>  				if (mode_rate <= link_avail) {
>  					pipe_config->lane_count = lane_count;
> @@ -2422,8 +2441,9 @@ intel_dp_compute_link_config(struct intel_encoder
> *encoder,
>  		    pipe_config->pipe_bpp,
>  		    BPP_X16_ARGS(pipe_config->dsc.compressed_bpp_x16),
>  		    intel_dp_config_required_rate(pipe_config),
> -		    drm_dp_max_dprx_data_rate(pipe_config->port_clock,
> -					      pipe_config->lane_count));
> +		    intel_dp_max_link_data_rate(intel_dp,
> +						pipe_config->port_clock,
> +						pipe_config->lane_count));
> 
>  	return 0;
>  }
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.h
> b/drivers/gpu/drm/i915/display/intel_dp.h
> index 49553e43add22..8b0dfbf06afff 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.h
> +++ b/drivers/gpu/drm/i915/display/intel_dp.h
> @@ -117,6 +117,8 @@ bool intel_dp_get_colorimetry_status(struct intel_dp *intel_dp);
>  int intel_dp_link_required(int pixel_clock, int bpp);
>  int intel_dp_effective_data_rate(int pixel_clock, int bpp_x16,
>  				 int bw_overhead);
> +int intel_dp_max_link_data_rate(struct intel_dp *intel_dp,
> +				int max_dprx_rate, int max_dprx_lanes);
>  bool intel_dp_can_bigjoiner(struct intel_dp *intel_dp);
>  bool intel_dp_needs_vsc_sdp(const struct intel_crtc_state *crtc_state,
>  			    const struct drm_connector_state *conn_state);
> diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
> index cfcc157b7d41d..520393dc8b453 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
> @@ -1295,7 +1295,8 @@ intel_dp_mst_mode_valid_ctx(struct drm_connector
> *connector,
>  	max_link_clock = intel_dp_max_link_rate(intel_dp);
>  	max_lanes = intel_dp_max_lane_count(intel_dp);
> 
> -	max_rate = drm_dp_max_dprx_data_rate(max_link_clock, max_lanes);
> +	max_rate = intel_dp_max_link_data_rate(intel_dp,
> +					       max_link_clock, max_lanes);
>  	mode_rate = intel_dp_link_required(mode->clock, min_bpp);
> 
>  	ret = drm_modeset_lock(&mgr->base.lock, ctx);
> --
> 2.39.2


^ permalink raw reply	[flat|nested] 61+ messages in thread

* RE: [PATCH 13/19] drm/i915/dp: Account for tunnel BW limit in intel_dp_max_link_data_rate()
  2024-01-23 10:28 ` [PATCH 13/19] drm/i915/dp: Account for tunnel BW limit in intel_dp_max_link_data_rate() Imre Deak
@ 2024-02-06 20:42   ` Shankar, Uma
  0 siblings, 0 replies; 61+ messages in thread
From: Shankar, Uma @ 2024-02-06 20:42 UTC (permalink / raw)
  To: Deak, Imre, intel-gfx; +Cc: dri-devel



> -----Original Message-----
> From: Intel-gfx <intel-gfx-bounces@lists.freedesktop.org> On Behalf Of Imre
> Deak
> Sent: Tuesday, January 23, 2024 3:59 PM
> To: intel-gfx@lists.freedesktop.org
> Cc: dri-devel@lists.freedesktop.org
> Subject: [PATCH 13/19] drm/i915/dp: Account for tunnel BW limit in
> intel_dp_max_link_data_rate()
> 
> Take any link BW limitation into account in intel_dp_max_link_data_rate(). Such a
> limitation can be due to multiple displays on (Thunderbolt) links with DP tunnels
> sharing the link BW.
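
As a sanity check, a worked example of the clamping this adds (sketch, with
assumed numbers): an HBR3 x4 DPRX alone allows
drm_dp_max_dprx_data_rate(810000, 4) = 3,240,000 kBps, but if
drm_dp_tunnel_available_bw() reports e.g. 2,500,000 kBps for the shared
(Thunderbolt) link, intel_dp_max_link_data_rate() now returns that smaller
value instead, whenever the tunnel BW allocation mode is enabled.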

Looks good to me.
Reviewed-by: Uma Shankar <uma.shankar@intel.com>

> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 32 +++++++++++++++++++++----
>  1 file changed, 28 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c
> b/drivers/gpu/drm/i915/display/intel_dp.c
> index 323475569ee7f..78dfe8be6031d 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -63,6 +63,7 @@
>  #include "intel_dp_hdcp.h"
>  #include "intel_dp_link_training.h"
>  #include "intel_dp_mst.h"
> +#include "intel_dp_tunnel.h"
>  #include "intel_dpio_phy.h"
>  #include "intel_dpll.h"
>  #include "intel_fifo_underrun.h"
> @@ -152,6 +153,22 @@ int intel_dp_link_symbol_clock(int rate)
>  	return DIV_ROUND_CLOSEST(rate * 10, intel_dp_link_symbol_size(rate));
> }
> 
> +static int max_dprx_rate(struct intel_dp *intel_dp)
> +{
> +	if (intel_dp_tunnel_bw_alloc_is_enabled(intel_dp))
> +		return drm_dp_tunnel_max_dprx_rate(intel_dp->tunnel);
> +
> +	return drm_dp_bw_code_to_link_rate(intel_dp->dpcd[DP_MAX_LINK_RATE]);
> +}
> +
> +static int max_dprx_lane_count(struct intel_dp *intel_dp) {
> +	if (intel_dp_tunnel_bw_alloc_is_enabled(intel_dp))
> +		return drm_dp_tunnel_max_dprx_lane_count(intel_dp->tunnel);
> +
> +	return drm_dp_max_lane_count(intel_dp->dpcd);
> +}
> +
>  static void intel_dp_set_default_sink_rates(struct intel_dp *intel_dp)  {
>  	intel_dp->sink_rates[0] = 162000;
> @@ -180,7 +197,7 @@ static void intel_dp_set_dpcd_sink_rates(struct intel_dp
> *intel_dp)
>  	/*
>  	 * Sink rates for 8b/10b.
>  	 */
> -	max_rate = drm_dp_bw_code_to_link_rate(intel_dp->dpcd[DP_MAX_LINK_RATE]);
> +	max_rate = max_dprx_rate(intel_dp);
>  	max_lttpr_rate = drm_dp_lttpr_max_link_rate(intel_dp->lttpr_common_caps);
>  	if (max_lttpr_rate)
>  		max_rate = min(max_rate, max_lttpr_rate);
> @@ -259,7 +276,7 @@ static void intel_dp_set_max_sink_lane_count(struct intel_dp *intel_dp)
>  	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
>  	struct intel_encoder *encoder = &intel_dig_port->base;
> 
> -	intel_dp->max_sink_lane_count = drm_dp_max_lane_count(intel_dp->dpcd);
> +	intel_dp->max_sink_lane_count = max_dprx_lane_count(intel_dp);
> 
>  	switch (intel_dp->max_sink_lane_count) {
>  	case 1:
> @@ -389,14 +406,21 @@ int intel_dp_effective_data_rate(int pixel_clock, int
> bpp_x16,
>   * @max_dprx_rate: Maximum data rate of the DPRX
>   * @max_dprx_lanes: Maximum lane count of the DPRX
>   *
> - * Calculate the maximum data rate for the provided link parameters.
> + * Calculate the maximum data rate for the provided link parameters
> + taking into
> + * account any BW limitations by a DP tunnel attached to @intel_dp.
>   *
>   * Returns the maximum data rate in kBps units.
>   */
>  int intel_dp_max_link_data_rate(struct intel_dp *intel_dp,
>  				int max_dprx_rate, int max_dprx_lanes)
>  {
> -	return drm_dp_max_dprx_data_rate(max_dprx_rate, max_dprx_lanes);
> +	int max_rate = drm_dp_max_dprx_data_rate(max_dprx_rate, max_dprx_lanes);
> +
> +	if (intel_dp_tunnel_bw_alloc_is_enabled(intel_dp))
> +		max_rate = min(max_rate,
> +			       drm_dp_tunnel_available_bw(intel_dp->tunnel));
> +
> +	return max_rate;
>  }
> 
>  bool intel_dp_can_bigjoiner(struct intel_dp *intel_dp)
> --
> 2.39.2


^ permalink raw reply	[flat|nested] 61+ messages in thread

* RE: [PATCH 14/19] drm/i915/dp: Compute DP tunel BW during encoder state computation
  2024-01-23 10:28 ` [PATCH 14/19] drm/i915/dp: Compute DP tunel BW during encoder state computation Imre Deak
@ 2024-02-06 20:44   ` Shankar, Uma
  2024-02-06 23:25   ` Ville Syrjälä
  1 sibling, 0 replies; 61+ messages in thread
From: Shankar, Uma @ 2024-02-06 20:44 UTC (permalink / raw)
  To: Deak, Imre, intel-gfx; +Cc: dri-devel



> -----Original Message-----
> From: Intel-gfx <intel-gfx-bounces@lists.freedesktop.org> On Behalf Of Imre
> Deak
> Sent: Tuesday, January 23, 2024 3:59 PM
> To: intel-gfx@lists.freedesktop.org
> Cc: dri-devel@lists.freedesktop.org
> Subject: [PATCH 14/19] drm/i915/dp: Compute DP tunel BW during encoder state
> computation
> 
> Compute the BW required through a DP tunnel on links with such tunnels
> detected and add the corresponding atomic state during a modeset.

Looks good to me.
Reviewed-by: Uma Shankar <uma.shankar@intel.com>

> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c     | 16 +++++++++++++---
>  drivers/gpu/drm/i915/display/intel_dp_mst.c | 13 +++++++++++++
>  2 files changed, 26 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c
> b/drivers/gpu/drm/i915/display/intel_dp.c
> index 78dfe8be6031d..6968fdb7ffcdf 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2880,6 +2880,7 @@ intel_dp_compute_config(struct intel_encoder *encoder,
>  			struct drm_connector_state *conn_state)
>  {
>  	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
> +	struct intel_atomic_state *state = to_intel_atomic_state(conn_state->state);
>  	struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
>  	struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
>  	const struct drm_display_mode *fixed_mode;
> @@ -2980,6 +2981,9 @@ intel_dp_compute_config(struct intel_encoder *encoder,
>  	intel_dp_compute_vsc_sdp(intel_dp, pipe_config, conn_state);
>  	intel_dp_compute_hdr_metadata_infoframe_sdp(intel_dp, pipe_config, conn_state);
> 
> +	intel_dp_tunnel_atomic_compute_stream_bw(state, intel_dp, connector,
> +						 pipe_config);
> +
>  	return 0;
>  }
> 
> @@ -6087,6 +6091,15 @@ static int intel_dp_connector_atomic_check(struct
> drm_connector *conn,
>  			return ret;
>  	}
> 
> +	if (!intel_connector_needs_modeset(state, conn))
> +		return 0;
> +
> +	ret = intel_dp_tunnel_atomic_check_state(state,
> +						 intel_dp,
> +						 intel_conn);
> +	if (ret)
> +		return ret;
> +
>  	/*
>  	 * We don't enable port sync on BDW due to missing w/as and
>  	 * due to not having adjusted the modeset sequence appropriately.
> @@ -6094,9 +6107,6 @@ static int intel_dp_connector_atomic_check(struct
> drm_connector *conn,
>  	if (DISPLAY_VER(dev_priv) < 9)
>  		return 0;
> 
> -	if (!intel_connector_needs_modeset(state, conn))
> -		return 0;
> -
>  	if (conn->has_tile) {
>  		ret = intel_modeset_tile_group(state, conn->tile_group->id);
>  		if (ret)
> diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c
> b/drivers/gpu/drm/i915/display/intel_dp_mst.c
> index 520393dc8b453..cbfab3173b9ef 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
> @@ -42,6 +42,7 @@
>  #include "intel_dp.h"
>  #include "intel_dp_hdcp.h"
>  #include "intel_dp_mst.h"
> +#include "intel_dp_tunnel.h"
>  #include "intel_dpio_phy.h"
>  #include "intel_hdcp.h"
>  #include "intel_hotplug.h"
> @@ -523,6 +524,7 @@ static int intel_dp_mst_compute_config(struct intel_encoder *encoder,
>  				       struct drm_connector_state *conn_state)
>  {
>  	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
> +	struct intel_atomic_state *state = to_intel_atomic_state(conn_state->state);
>  	struct intel_dp_mst_encoder *intel_mst = enc_to_mst(encoder);
>  	struct intel_dp *intel_dp = &intel_mst->primary->dp;
>  	const struct intel_connector *connector =
> @@ -619,6 +621,9 @@ static int intel_dp_mst_compute_config(struct intel_encoder *encoder,
> 
>  	intel_psr_compute_config(intel_dp, pipe_config, conn_state);
> 
> +	intel_dp_tunnel_atomic_compute_stream_bw(state, intel_dp, connector,
> +						 pipe_config);
> +
>  	return 0;
>  }
> 
> @@ -876,6 +881,14 @@ intel_dp_mst_atomic_check(struct drm_connector
> *connector,
>  	if (ret)
>  		return ret;
> 
> +	if (intel_connector_needs_modeset(state, connector)) {
> +		ret = intel_dp_tunnel_atomic_check_state(state,
> +							 intel_connector->mst_port,
> +							 intel_connector);
> +		if (ret)
> +			return ret;
> +	}
> +
>  	return drm_dp_atomic_release_time_slots(&state->base,
>  						&intel_connector->mst_port->mst_mgr,
>  						intel_connector->port);
> --
> 2.39.2


^ permalink raw reply	[flat|nested] 61+ messages in thread

* RE: [PATCH 15/19] drm/i915/dp: Allocate/free DP tunnel BW in the encoder enable/disable hooks
  2024-01-23 10:28 ` [PATCH 15/19] drm/i915/dp: Allocate/free DP tunnel BW in the encoder enable/disable hooks Imre Deak
@ 2024-02-06 20:45   ` Shankar, Uma
  0 siblings, 0 replies; 61+ messages in thread
From: Shankar, Uma @ 2024-02-06 20:45 UTC (permalink / raw)
  To: Deak, Imre, intel-gfx; +Cc: dri-devel



> -----Original Message-----
> From: Intel-gfx <intel-gfx-bounces@lists.freedesktop.org> On Behalf Of Imre
> Deak
> Sent: Tuesday, January 23, 2024 3:59 PM
> To: intel-gfx@lists.freedesktop.org
> Cc: dri-devel@lists.freedesktop.org
> Subject: [PATCH 15/19] drm/i915/dp: Allocate/free DP tunnel BW in the encoder
> enable/disable hooks
> 
> Allocate and free the DP tunnel BW required by a stream while enabling/disabling
> the stream during a modeset.

Looks good to me.
Reviewed-by: Uma Shankar <uma.shankar@intel.com>

> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/g4x_dp.c    | 28 ++++++++++++++++++++++++
>  drivers/gpu/drm/i915/display/intel_ddi.c |  7 ++++++
>  2 files changed, 35 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/display/g4x_dp.c
> b/drivers/gpu/drm/i915/display/g4x_dp.c
> index dfe0b07a122d1..1e498e1510adf 100644
> --- a/drivers/gpu/drm/i915/display/g4x_dp.c
> +++ b/drivers/gpu/drm/i915/display/g4x_dp.c
> @@ -19,6 +19,7 @@
>  #include "intel_dp.h"
>  #include "intel_dp_aux.h"
>  #include "intel_dp_link_training.h"
> +#include "intel_dp_tunnel.h"
>  #include "intel_dpio_phy.h"
>  #include "intel_fifo_underrun.h"
>  #include "intel_hdmi.h"
> @@ -729,6 +730,24 @@ static void vlv_enable_dp(struct intel_atomic_state *state,
>  	encoder->audio_enable(encoder, pipe_config, conn_state);
>  }
> 
> +static void g4x_dp_pre_pll_enable(struct intel_atomic_state *state,
> +				  struct intel_encoder *encoder,
> +				  const struct intel_crtc_state *new_crtc_state,
> +				  const struct drm_connector_state *new_conn_state)
> +{
> +	intel_dp_tunnel_atomic_alloc_bw(state, encoder,
> +					new_crtc_state, new_conn_state);
> +}
> +
> +static void g4x_dp_post_pll_disable(struct intel_atomic_state *state,
> +				    struct intel_encoder *encoder,
> +				    const struct intel_crtc_state *old_crtc_state,
> +				    const struct drm_connector_state *old_conn_state)
> +{
> +	intel_dp_tunnel_atomic_free_bw(state, encoder,
> +				       old_crtc_state, old_conn_state);
> +}
> +
>  static void g4x_pre_enable_dp(struct intel_atomic_state *state,
>  			      struct intel_encoder *encoder,
>  			      const struct intel_crtc_state *pipe_config,
> @@ -762,6 +781,8 @@ static void vlv_dp_pre_pll_enable(struct intel_atomic_state *state,
>  	intel_dp_prepare(encoder, pipe_config);
> 
>  	vlv_phy_pre_pll_enable(encoder, pipe_config);
> +
> +	g4x_dp_pre_pll_enable(state, encoder, pipe_config, conn_state);
>  }
> 
>  static void chv_pre_enable_dp(struct intel_atomic_state *state, @@ -785,6
> +806,8 @@ static void chv_dp_pre_pll_enable(struct intel_atomic_state *state,
>  	intel_dp_prepare(encoder, pipe_config);
> 
>  	chv_phy_pre_pll_enable(encoder, pipe_config);
> +
> +	g4x_dp_pre_pll_enable(state, encoder, pipe_config, conn_state);
>  }
> 
>  static void chv_dp_post_pll_disable(struct intel_atomic_state *state, @@ -792,6
> +815,8 @@ static void chv_dp_post_pll_disable(struct intel_atomic_state *state,
>  				    const struct intel_crtc_state *old_crtc_state,
>  				    const struct drm_connector_state *old_conn_state)
>  {
> +	g4x_dp_post_pll_disable(state, encoder, old_crtc_state, old_conn_state);
> +
>  	chv_phy_post_pll_disable(encoder, old_crtc_state);  }
> 
> @@ -1349,11 +1374,14 @@ bool g4x_dp_init(struct drm_i915_private *dev_priv,
>  		intel_encoder->enable = vlv_enable_dp;
>  		intel_encoder->disable = vlv_disable_dp;
>  		intel_encoder->post_disable = vlv_post_disable_dp;
> +		intel_encoder->post_pll_disable = g4x_dp_post_pll_disable;
>  	} else {
> +		intel_encoder->pre_pll_enable = g4x_dp_pre_pll_enable;
>  		intel_encoder->pre_enable = g4x_pre_enable_dp;
>  		intel_encoder->enable = g4x_enable_dp;
>  		intel_encoder->disable = g4x_disable_dp;
>  		intel_encoder->post_disable = g4x_post_disable_dp;
> +		intel_encoder->post_pll_disable = g4x_dp_post_pll_disable;
>  	}
>  	intel_encoder->audio_enable = g4x_dp_audio_enable;
>  	intel_encoder->audio_disable = g4x_dp_audio_disable;
> diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
> index 922194b957be2..aa6e7da08fbce 100644
> --- a/drivers/gpu/drm/i915/display/intel_ddi.c
> +++ b/drivers/gpu/drm/i915/display/intel_ddi.c
> @@ -54,6 +54,7 @@
>  #include "intel_dp_aux.h"
>  #include "intel_dp_link_training.h"
>  #include "intel_dp_mst.h"
> +#include "intel_dp_tunnel.h"
>  #include "intel_dpio_phy.h"
>  #include "intel_dsi.h"
>  #include "intel_fdi.h"
> @@ -3141,6 +3142,9 @@ static void intel_ddi_post_pll_disable(struct
> intel_atomic_state *state,
> 
>  	main_link_aux_power_domain_put(dig_port, old_crtc_state);
> 
> +	intel_dp_tunnel_atomic_free_bw(state, encoder,
> +				       old_crtc_state, old_conn_state);
> +
>  	if (is_tc_port)
>  		intel_tc_port_put_link(dig_port);
>  }
> @@ -3480,6 +3484,9 @@ intel_ddi_pre_pll_enable(struct intel_atomic_state
> *state,
>  		intel_ddi_update_active_dpll(state, encoder, master_crtc);
>  	}
> 
> +	intel_dp_tunnel_atomic_alloc_bw(state, encoder,
> +					crtc_state, conn_state);
> +
>  	main_link_aux_power_domain_get(dig_port, crtc_state);
> 
>  	if (is_tc_port && !intel_tc_port_in_tbt_alt_mode(dig_port))
> --
> 2.39.2


^ permalink raw reply	[flat|nested] 61+ messages in thread

* RE: [PATCH 17/19] drm/i915/dp: Call intel_dp_sync_state() always for DDI DP encoders
  2024-01-23 10:28 ` [PATCH 17/19] drm/i915/dp: Call intel_dp_sync_state() always for DDI DP encoders Imre Deak
@ 2024-02-06 20:46   ` Shankar, Uma
  0 siblings, 0 replies; 61+ messages in thread
From: Shankar, Uma @ 2024-02-06 20:46 UTC (permalink / raw)
  To: Deak, Imre, intel-gfx; +Cc: dri-devel



> -----Original Message-----
> From: Intel-gfx <intel-gfx-bounces@lists.freedesktop.org> On Behalf Of Imre
> Deak
> Sent: Tuesday, January 23, 2024 3:59 PM
> To: intel-gfx@lists.freedesktop.org
> Cc: dri-devel@lists.freedesktop.org
> Subject: [PATCH 17/19] drm/i915/dp: Call intel_dp_sync_state() always for DDI
> DP encoders
> 
> A follow-up change will need to resume DP tunnels during system resume, so
> call intel_dp_sync_state() always for DDI DP encoders, allowing this function
> to resume the tunnels for all DP connectors.

Looks good to me.
Reviewed-by: Uma Shankar <uma.shankar@intel.com>

> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_ddi.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c
> b/drivers/gpu/drm/i915/display/intel_ddi.c
> index aa6e7da08fbce..1e26e62b82d48 100644
> --- a/drivers/gpu/drm/i915/display/intel_ddi.c
> +++ b/drivers/gpu/drm/i915/display/intel_ddi.c
> @@ -4131,7 +4131,7 @@ static void intel_ddi_sync_state(struct intel_encoder
> *encoder,
>  		intel_tc_port_sanitize_mode(enc_to_dig_port(encoder),
>  					    crtc_state);
> 
> -	if (crtc_state && intel_crtc_has_dp_encoder(crtc_state))
> +	if (intel_encoder_is_dp(encoder))
>  		intel_dp_sync_state(encoder, crtc_state);  }
> 
> --
> 2.39.2


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 11/19] drm/i915/dp: Add support for DP tunnel BW allocation
  2024-01-23 10:28 ` [PATCH 11/19] drm/i915/dp: Add support for DP tunnel BW allocation Imre Deak
  2024-02-05 22:47   ` Ville Syrjälä
@ 2024-02-06 23:08   ` Ville Syrjälä
  2024-02-07 12:09     ` Imre Deak
  1 sibling, 1 reply; 61+ messages in thread
From: Ville Syrjälä @ 2024-02-06 23:08 UTC (permalink / raw)
  To: Imre Deak; +Cc: intel-gfx, dri-devel

On Tue, Jan 23, 2024 at 12:28:42PM +0200, Imre Deak wrote:
> Add support to enable the DP tunnel BW allocation mode. Follow-up
> patches will call the required helpers added here to prepare for a
> modeset on a link with DP tunnels, the last change in the patchset
> actually enabling BWA.
> 
> With BWA enabled, the driver will expose the full mode list a display
> supports, regardless of any BW limitation on a shared (Thunderbolt)
> link. Such BW limits will be checked against only during a modeset, when
> the driver has the full knowledge of each display's BW requirement.
> 
> If the link BW changes in a way that a connector's mode list may also
> change, userspace will get a hotplug notification for all the connectors
> sharing the same link (so it can adjust the mode used for a display).
> 
> The BW limitation can change at any point, asynchronously to modesets
> on a given connector, so a modeset can fail even though the atomic check
> for it passed. In such scenarios userspace will get a bad link
> notification and in response is supposed to retry the modeset.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/Kconfig                  |  13 +
>  drivers/gpu/drm/i915/Kconfig.debug            |   1 +
>  drivers/gpu/drm/i915/Makefile                 |   3 +
>  drivers/gpu/drm/i915/display/intel_atomic.c   |   2 +
>  .../gpu/drm/i915/display/intel_display_core.h |   1 +
>  .../drm/i915/display/intel_display_types.h    |   9 +
>  .../gpu/drm/i915/display/intel_dp_tunnel.c    | 642 ++++++++++++++++++
>  .../gpu/drm/i915/display/intel_dp_tunnel.h    | 131 ++++
>  8 files changed, 802 insertions(+)
>  create mode 100644 drivers/gpu/drm/i915/display/intel_dp_tunnel.c
>  create mode 100644 drivers/gpu/drm/i915/display/intel_dp_tunnel.h
> 
> diff --git a/drivers/gpu/drm/i915/Kconfig b/drivers/gpu/drm/i915/Kconfig
> index b5d6e3352071f..4636913c17868 100644
> --- a/drivers/gpu/drm/i915/Kconfig
> +++ b/drivers/gpu/drm/i915/Kconfig
> @@ -155,6 +155,19 @@ config DRM_I915_PXP
>  	  protected session and manage the status of the alive software session,
>  	  as well as its life cycle.
>  
> +config DRM_I915_DP_TUNNEL
> +	bool "Enable DP tunnel support"
> +	depends on DRM_I915
> +	select DRM_DISPLAY_DP_TUNNEL
> +	default y
> +	help
> +	  Choose this option to detect DP tunnels and enable the Bandwidth
> +	  Allocation mode for such tunnels. This allows using the maximum
> +	  resolution allowed by the link BW on all displays sharing the
> +	  link BW, for instance on a Thunderbolt link.
> +
> +	  If in doubt, say "Y".
> +
>  menu "drm/i915 Debugging"
>  depends on DRM_I915
>  depends on EXPERT
> diff --git a/drivers/gpu/drm/i915/Kconfig.debug b/drivers/gpu/drm/i915/Kconfig.debug
> index 5b7162076850c..bc18e2d9ea05d 100644
> --- a/drivers/gpu/drm/i915/Kconfig.debug
> +++ b/drivers/gpu/drm/i915/Kconfig.debug
> @@ -28,6 +28,7 @@ config DRM_I915_DEBUG
>  	select STACKDEPOT
>  	select STACKTRACE
>  	select DRM_DP_AUX_CHARDEV
> +	select DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE if DRM_I915_DP_TUNNEL
>  	select X86_MSR # used by igt/pm_rpm
>  	select DRM_VGEM # used by igt/prime_vgem (dmabuf interop checks)
>  	select DRM_DEBUG_MM if DRM=y
> diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
> index c13f14edb5088..3ef6ed41e62b4 100644
> --- a/drivers/gpu/drm/i915/Makefile
> +++ b/drivers/gpu/drm/i915/Makefile
> @@ -369,6 +369,9 @@ i915-y += \
>  	display/vlv_dsi.o \
>  	display/vlv_dsi_pll.o
>  
> +i915-$(CONFIG_DRM_I915_DP_TUNNEL) += \
> +	display/intel_dp_tunnel.o
> +
>  i915-y += \
>  	i915_perf.o
>  
> diff --git a/drivers/gpu/drm/i915/display/intel_atomic.c b/drivers/gpu/drm/i915/display/intel_atomic.c
> index ec0d5168b5035..96ab37e158995 100644
> --- a/drivers/gpu/drm/i915/display/intel_atomic.c
> +++ b/drivers/gpu/drm/i915/display/intel_atomic.c
> @@ -29,6 +29,7 @@
>   * See intel_atomic_plane.c for the plane-specific atomic functionality.
>   */
>  
> +#include <drm/display/drm_dp_tunnel.h>
>  #include <drm/drm_atomic.h>
>  #include <drm/drm_atomic_helper.h>
>  #include <drm/drm_fourcc.h>
> @@ -38,6 +39,7 @@
>  #include "intel_atomic.h"
>  #include "intel_cdclk.h"
>  #include "intel_display_types.h"
> +#include "intel_dp_tunnel.h"
>  #include "intel_global_state.h"
>  #include "intel_hdcp.h"
>  #include "intel_psr.h"
> diff --git a/drivers/gpu/drm/i915/display/intel_display_core.h b/drivers/gpu/drm/i915/display/intel_display_core.h
> index a90f1aa201be8..0993d25a0a686 100644
> --- a/drivers/gpu/drm/i915/display/intel_display_core.h
> +++ b/drivers/gpu/drm/i915/display/intel_display_core.h
> @@ -522,6 +522,7 @@ struct intel_display {
>  	} wq;
>  
>  	/* Grouping using named structs. Keep sorted. */
> +	struct drm_dp_tunnel_mgr *dp_tunnel_mgr;
>  	struct intel_audio audio;
>  	struct intel_dpll dpll;
>  	struct intel_fbc *fbc[I915_MAX_FBCS];
> diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h
> index ae2e8cff9d691..b79db78b27728 100644
> --- a/drivers/gpu/drm/i915/display/intel_display_types.h
> +++ b/drivers/gpu/drm/i915/display/intel_display_types.h
> @@ -33,6 +33,7 @@
>  
>  #include <drm/display/drm_dp_dual_mode_helper.h>
>  #include <drm/display/drm_dp_mst_helper.h>
> +#include <drm/display/drm_dp_tunnel.h>
>  #include <drm/display/drm_dsc.h>
>  #include <drm/drm_atomic.h>
>  #include <drm/drm_crtc.h>
> @@ -677,6 +678,8 @@ struct intel_atomic_state {
>  
>  	struct intel_shared_dpll_state shared_dpll[I915_NUM_PLLS];
>  
> +	struct intel_dp_tunnel_inherited_state *dp_tunnel_state;

'dp_tunnel_state' is a bit too generic sounding to me.
'inherited_tunnels' or something like that maybe?

> +
>  	/*
>  	 * Current watermarks can't be trusted during hardware readout, so
>  	 * don't bother calculating intermediate watermarks.
> @@ -1372,6 +1375,9 @@ struct intel_crtc_state {
>  		struct drm_dsc_config config;
>  	} dsc;
>  
> +	/* DP tunnel used for BW allocation. */
> +	struct drm_dp_tunnel_ref dp_tunnel_ref;
> +
>  	/* HSW+ linetime watermarks */
>  	u16 linetime;
>  	u16 ips_linetime;
> @@ -1775,6 +1781,9 @@ struct intel_dp {
>  	/* connector directly attached - won't be use for modeset in mst world */
>  	struct intel_connector *attached_connector;
>  
> +	struct drm_dp_tunnel *tunnel;
> +	bool tunnel_suspended:1;
> +
>  	/* mst connector list */
>  	struct intel_dp_mst_encoder *mst_encoders[I915_MAX_PIPES];
>  	struct drm_dp_mst_topology_mgr mst_mgr;
> diff --git a/drivers/gpu/drm/i915/display/intel_dp_tunnel.c b/drivers/gpu/drm/i915/display/intel_dp_tunnel.c
> new file mode 100644
> index 0000000000000..52dd0108a6c13
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/display/intel_dp_tunnel.c
> @@ -0,0 +1,642 @@
> +// SPDX-License-Identifier: MIT
> +/*
> + * Copyright © 2023 Intel Corporation
> + */
> +
> +#include "i915_drv.h"
> +
> +#include <drm/display/drm_dp_tunnel.h>
> +
> +#include "intel_atomic.h"
> +#include "intel_display_limits.h"
> +#include "intel_display_types.h"
> +#include "intel_dp.h"
> +#include "intel_dp_link_training.h"
> +#include "intel_dp_mst.h"
> +#include "intel_dp_tunnel.h"
> +#include "intel_link_bw.h"
> +
> +struct intel_dp_tunnel_inherited_state {
> +	struct {
> +		struct drm_dp_tunnel_ref tunnel_ref;
> +	} tunnels[I915_MAX_PIPES];

Hmm. Does the extra middle-man struct buy us anything?
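
Just to illustrate (untested sketch): the anonymous wrapper struct could be
dropped and the ref array used directly, assuming nothing else needs to be
stored per pipe:

struct intel_dp_tunnel_inherited_state {
	struct drm_dp_tunnel_ref ref[I915_MAX_PIPES];
};

with the accesses then becoming state->dp_tunnel_state->ref[crtc->pipe].tunnel.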

> +};
> +
> +static void destroy_tunnel(struct intel_dp *intel_dp)
> +{
> +	drm_dp_tunnel_destroy(intel_dp->tunnel);
> +	intel_dp->tunnel = NULL;
> +}
> +
> +void intel_dp_tunnel_disconnect(struct intel_dp *intel_dp)
> +{
> +	if (!intel_dp->tunnel)
> +		return;
> +
> +	destroy_tunnel(intel_dp);
> +}
> +
> +void intel_dp_tunnel_destroy(struct intel_dp *intel_dp)
> +{
> +	if (!intel_dp->tunnel)
> +		return;
> +
> +	if (intel_dp_tunnel_bw_alloc_is_enabled(intel_dp))
> +		drm_dp_tunnel_disable_bw_alloc(intel_dp->tunnel);
> +
> +	destroy_tunnel(intel_dp);
> +}
> +
> +static int kbytes_to_mbits(int kbytes)
> +{
> +	return DIV_ROUND_UP(kbytes * 8, 1000);
> +}
> +
> +static int get_current_link_bw(struct intel_dp *intel_dp,
> +			       bool *below_dprx_bw)
> +{
> +	int rate = intel_dp_max_common_rate(intel_dp);
> +	int lane_count = intel_dp_max_common_lane_count(intel_dp);
> +	int bw;
> +
> +	bw = intel_dp_max_link_data_rate(intel_dp, rate, lane_count);
> +	*below_dprx_bw = bw < drm_dp_max_dprx_data_rate(rate, lane_count);
> +
> +	return bw;
> +}
> +
> +static int update_tunnel_state(struct intel_dp *intel_dp)
> +{
> +	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
> +	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
> +	bool old_bw_below_dprx;
> +	bool new_bw_below_dprx;
> +	int old_bw;
> +	int new_bw;
> +	int ret;
> +
> +	old_bw = get_current_link_bw(intel_dp, &old_bw_below_dprx);
> +
> +	ret = drm_dp_tunnel_update_state(intel_dp->tunnel);
> +	if (ret < 0) {
> +		drm_dbg_kms(&i915->drm,
> +			    "[DPTUN %s][ENCODER:%d:%s] State update failed (err %pe)\n",
> +			    drm_dp_tunnel_name(intel_dp->tunnel),
> +			    encoder->base.base.id,
> +			    encoder->base.name,
> +			    ERR_PTR(ret));
> +
> +		return ret;
> +	}
> +
> +	if (ret == 0 ||
> +	    !drm_dp_tunnel_bw_alloc_is_enabled(intel_dp->tunnel))
> +		return 0;
> +
> +	intel_dp_update_sink_caps(intel_dp);
> +
> +	new_bw = get_current_link_bw(intel_dp, &new_bw_below_dprx);
> +
> +	/* Suppress the notification if the mode list can't change due to bw. */
> +	if (old_bw_below_dprx == new_bw_below_dprx &&
> +	    !new_bw_below_dprx)
> +		return 0;
> +
> +	drm_dbg_kms(&i915->drm,
> +		    "[DPTUN %s][ENCODER:%d:%s] Notify users about BW change: %d -> %d\n",
> +		    drm_dp_tunnel_name(intel_dp->tunnel),
> +		    encoder->base.base.id,
> +		    encoder->base.name,
> +		    kbytes_to_mbits(old_bw),
> +		    kbytes_to_mbits(new_bw));
> +
> +	return 1;
> +}
> +
> +static int allocate_initial_tunnel_bw(struct intel_dp *intel_dp,
> +				      struct drm_modeset_acquire_ctx *ctx)
> +{
> +	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
> +	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
> +	const struct intel_crtc *crtc;
> +	int tunnel_bw = 0;
> +	u8 pipe_mask;
> +	int err;
> +
> +	err = intel_dp_get_active_pipes(intel_dp, ctx,
> +					INTEL_DP_GET_PIPES_SYNC,
> +					&pipe_mask);
> +	if (err)
> +		return err;
> +
> +	for_each_intel_crtc_in_pipe_mask(&i915->drm, crtc, pipe_mask) {
> +		const struct intel_crtc_state *crtc_state =
> +			to_intel_crtc_state(crtc->base.state);
> +		int stream_bw = intel_dp_config_required_rate(crtc_state);
> +
> +		drm_dbg_kms(&i915->drm,
> +			    "[DPTUN %s][ENCODER:%d:%s][CRTC:%d:%s] Initial BW for stream %d: %d/%d Mb/s\n",
> +			    drm_dp_tunnel_name(intel_dp->tunnel),
> +			    encoder->base.base.id,
> +			    encoder->base.name,

I would try to have the id+name for each object on the same line.
Avoids the whole thing eating so many lines, and the two are related
so just makes sense.
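
E.g. something like this (same information, just packed tighter):

		drm_dbg_kms(&i915->drm,
			    "[DPTUN %s][ENCODER:%d:%s][CRTC:%d:%s] Initial BW for stream %d: %d/%d Mb/s\n",
			    drm_dp_tunnel_name(intel_dp->tunnel),
			    encoder->base.base.id, encoder->base.name,
			    crtc->base.base.id, crtc->base.name,
			    crtc->pipe,
			    kbytes_to_mbits(stream_bw),
			    kbytes_to_mbits(tunnel_bw));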

> +			    crtc->base.base.id,
> +			    crtc->base.name,
> +			    crtc->pipe,
> +			    kbytes_to_mbits(stream_bw),
> +			    kbytes_to_mbits(tunnel_bw));
> +
> +		tunnel_bw += stream_bw;
> +	}
> +
> +	err = drm_dp_tunnel_alloc_bw(intel_dp->tunnel, tunnel_bw);
> +	if (err) {
> +		drm_dbg_kms(&i915->drm,
> +			    "[DPTUN %s][ENCODER:%d:%s] Initial BW allocation failed (err %pe)\n",
> +			    drm_dp_tunnel_name(intel_dp->tunnel),
> +			    encoder->base.base.id,
> +			    encoder->base.name,
> +			    ERR_PTR(err));
> +
> +		return err;
> +	}
> +
> +	return update_tunnel_state(intel_dp);
> +}
> +
> +static int detect_new_tunnel(struct intel_dp *intel_dp, struct drm_modeset_acquire_ctx *ctx)
> +{
> +	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
> +	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
> +	struct drm_dp_tunnel *tunnel;
> +	int ret;
> +
> +	tunnel = drm_dp_tunnel_detect(i915->display.dp_tunnel_mgr,
> +					&intel_dp->aux);
> +	if (IS_ERR(tunnel))
> +		return PTR_ERR(tunnel);
> +
> +	intel_dp->tunnel = tunnel;
> +
> +	ret = drm_dp_tunnel_enable_bw_alloc(intel_dp->tunnel);
> +	if (ret) {
> +		if (ret == -EOPNOTSUPP)
> +			return 0;
> +
> +		drm_dbg_kms(&i915->drm,
> +			    "[DPTUN %s][ENCODER:%d:%s] Failed to enable BW allocation mode (ret %pe)\n",
> +			    drm_dp_tunnel_name(intel_dp->tunnel),
> +			    encoder->base.base.id,
> +			    encoder->base.name,
> +			    ERR_PTR(ret));
> +
> +		/* Keep the tunnel with BWA disabled */
> +		return 0;
> +	}
> +
> +	ret = allocate_initial_tunnel_bw(intel_dp, ctx);
> +	if (ret < 0)
> +		intel_dp_tunnel_destroy(intel_dp);
> +
> +	return ret;
> +}
> +
> +int intel_dp_tunnel_detect(struct intel_dp *intel_dp, struct drm_modeset_acquire_ctx *ctx)
> +{
> +	int ret;
> +
> +	if (intel_dp_is_edp(intel_dp))
> +		return 0;
> +
> +	if (intel_dp->tunnel) {
> +		ret = update_tunnel_state(intel_dp);
> +		if (ret >= 0)
> +			return ret;
> +
> +		/* Try to recreate the tunnel after an update error. */
> +		intel_dp_tunnel_destroy(intel_dp);
> +	}
> +
> +	ret = detect_new_tunnel(intel_dp, ctx);
> +	if (ret >= 0 || ret == -EDEADLK)

This if-statement seems to achieve nothing. Was there supposed
to be some error handling for the actual error cases here?
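
I.e. unless some recovery for the other errors is planned here, this could
presumably be collapsed to just:

	return detect_new_tunnel(intel_dp, ctx);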

> +		return ret;
> +
> +	return ret;
> +}
> +
> +bool intel_dp_tunnel_bw_alloc_is_enabled(struct intel_dp *intel_dp)
> +{
> +	return intel_dp->tunnel &&
> +		drm_dp_tunnel_bw_alloc_is_enabled(intel_dp->tunnel);

We seem to have quite a few wrappers just for the NULL check.
I wonder if we should just make drm_dp_tunnel*() accept NULL
directly? Maybe something to think about for the future...
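
The i915 side could then be a plain pass-through (sketch, assuming the drm
helper would simply return false for a NULL tunnel):

	return drm_dp_tunnel_bw_alloc_is_enabled(intel_dp->tunnel);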

> +}
> +
> +void intel_dp_tunnel_suspend(struct intel_dp *intel_dp)
> +{
> +	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
> +	struct intel_connector *connector = intel_dp->attached_connector;
> +	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
> +
> +	if (!intel_dp_tunnel_bw_alloc_is_enabled(intel_dp))
> +		return;
> +
> +	drm_dbg_kms(&i915->drm, "[DPTUN %s][CONNECTOR:%d:%s][ENCODER:%d:%s] Suspend\n",
> +		    drm_dp_tunnel_name(intel_dp->tunnel),
> +		    connector->base.base.id, connector->base.name,
> +		    encoder->base.base.id, encoder->base.name);
> +
> +	intel_dp->tunnel_suspended = true;
> +}
> +
> +void intel_dp_tunnel_resume(struct intel_dp *intel_dp, bool dpcd_updated)
> +{
> +	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
> +	struct intel_connector *connector = intel_dp->attached_connector;
> +	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
> +	u8 dpcd[DP_RECEIVER_CAP_SIZE];
> +	int err = 0;
> +
> +	if (!intel_dp->tunnel_suspended)
> +		return;
> +
> +	intel_dp->tunnel_suspended = false;
> +
> +	drm_dbg_kms(&i915->drm, "[DPTUN %s][CONNECTOR:%d:%s][ENCODER:%d:%s] Resume\n",
> +		    drm_dp_tunnel_name(intel_dp->tunnel),
> +		    connector->base.base.id, connector->base.name,
> +		    encoder->base.base.id, encoder->base.name);
> +
> +	/* DPRX caps read required by tunnel detection */
> +	if (!dpcd_updated)
> +		err = intel_dp_read_dprx_caps(intel_dp, dpcd);
> +
> +	if (err)
> +		drm_dp_tunnel_set_io_error(intel_dp->tunnel);
> +	else
> +		err = drm_dp_tunnel_enable_bw_alloc(intel_dp->tunnel);
> +		/* TODO: allocate initial BW */
> +
> +	if (!err)
> +		return;
> +
> +	drm_dbg_kms(&i915->drm,
> +		    "[DPTUN %s][CONNECTOR:%d:%s][ENCODER:%d:%s] Tunnel can't be resumed, will drop and redetect it (err %pe)\n",
> +		    drm_dp_tunnel_name(intel_dp->tunnel),
> +		    connector->base.base.id, connector->base.name,
> +		    encoder->base.base.id, encoder->base.name,
> +		    ERR_PTR(err));
> +}
> +
> +static struct drm_dp_tunnel *
> +get_inherited_tunnel_state(struct intel_atomic_state *state,
> +			   const struct intel_crtc *crtc)
> +{
> +	if (!state->dp_tunnel_state)
> +		return NULL;
> +
> +	return state->dp_tunnel_state->tunnels[crtc->pipe].tunnel_ref.tunnel;
> +}
> +
> +static int
> +add_inherited_tunnel_state(struct intel_atomic_state *state,
> +			   struct drm_dp_tunnel *tunnel,
> +			   const struct intel_crtc *crtc)
> +{
> +	struct drm_i915_private *i915 = to_i915(state->base.dev);
> +	struct drm_dp_tunnel *old_tunnel;
> +
> +	old_tunnel = get_inherited_tunnel_state(state, crtc);
> +	if (old_tunnel) {
> +		drm_WARN_ON(&i915->drm, old_tunnel != tunnel);
> +		return 0;
> +	}
> +
> +	if (!state->dp_tunnel_state) {
> +		state->dp_tunnel_state = kzalloc(sizeof(*state->dp_tunnel_state), GFP_KERNEL);

I was pondering if the dynamic allocation for this is super useful.
But I guess we anyway have to deal with various errors in the
places where this gets used so maybe not much benefit from
avoiding it.

> +		if (!state->dp_tunnel_state)
> +			return -ENOMEM;
> +	}
> +
> +	drm_dp_tunnel_ref_get(tunnel,
> +			      &state->dp_tunnel_state->tunnels[crtc->pipe].tunnel_ref);
> +
> +	return 0;
> +}
> +
> +static int check_inherited_tunnel_state(struct intel_atomic_state *state,
> +					struct intel_dp *intel_dp,
> +					const struct intel_digital_connector_state *old_conn_state)
> +{
> +	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
> +	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
> +	const struct intel_connector *connector =
> +		to_intel_connector(old_conn_state->base.connector);
> +	struct intel_crtc *old_crtc;
> +	const struct intel_crtc_state *old_crtc_state;
> +
> +	/*
> +	 * If a BWA tunnel gets detected only after the corresponding
> +	 * connector got enabled already without a BWA tunnel, or a different
> +	 * BWA tunnel (which was removed meanwhile) the old CRTC state won't
> +	 * contain the state of the current tunnel. This tunnel still has a
> +	 * reserved BW, which needs to be released, add the state for such
> +	 * inherited tunnels separately only to this atomic state.
> +	 */
> +	if (!intel_dp_tunnel_bw_alloc_is_enabled(intel_dp))
> +		return 0;
> +
> +	if (!old_conn_state->base.crtc)
> +		return 0;
> +
> +	old_crtc = to_intel_crtc(old_conn_state->base.crtc);
> +	old_crtc_state = intel_atomic_get_old_crtc_state(state, old_crtc);
> +
> +	if (!old_crtc_state->hw.active ||
> +	    old_crtc_state->dp_tunnel_ref.tunnel == intel_dp->tunnel)
> +		return 0;
> +
> +	drm_dbg_kms(&i915->drm,
> +		    "[DPTUN %s][CONNECTOR:%d:%s][ENCODER:%d:%s][CRTC:%d:%s] Adding state for inherited tunnel %p\n",
> +		    drm_dp_tunnel_name(intel_dp->tunnel),
> +		    connector->base.base.id,
> +		    connector->base.name,
> +		    encoder->base.base.id,
> +		    encoder->base.name,
> +		    old_crtc->base.base.id,
> +		    old_crtc->base.name,
> +		    intel_dp->tunnel);
> +
> +	return add_inherited_tunnel_state(state, intel_dp->tunnel, old_crtc);
> +}
> +
> +void intel_dp_tunnel_atomic_cleanup_inherited_state(struct intel_atomic_state *state)
> +{
> +	enum pipe pipe;
> +
> +	if (!state->dp_tunnel_state)
> +		return;
> +
> +	for_each_pipe(to_i915(state->base.dev), pipe)
> +		if (state->dp_tunnel_state->tunnels[pipe].tunnel_ref.tunnel)
> +			drm_dp_tunnel_ref_put(&state->dp_tunnel_state->tunnels[pipe].tunnel_ref);
> +
> +	kfree(state->dp_tunnel_state);
> +	state->dp_tunnel_state = NULL;
> +}
> +
> +static int intel_dp_tunnel_atomic_add_group_state(struct intel_atomic_state *state,
> +						  struct drm_dp_tunnel *tunnel)
> +{
> +	struct drm_i915_private *i915 = to_i915(state->base.dev);
> +	u32 pipe_mask;
> +	int err;
> +
> +	if (!tunnel)
> +		return 0;
> +
> +	err = drm_dp_tunnel_atomic_get_group_streams_in_state(&state->base,
> +							      tunnel, &pipe_mask);
> +	if (err)
> +		return err;
> +
> +	drm_WARN_ON(&i915->drm, pipe_mask & ~((1 << I915_MAX_PIPES) - 1));
> +
> +	return intel_modeset_pipes_in_mask_early(state, "DPTUN", pipe_mask);
> +}
> +
> +int intel_dp_tunnel_atomic_add_state_for_crtc(struct intel_atomic_state *state,
> +					      struct intel_crtc *crtc)
> +{
> +	const struct intel_crtc_state *new_crtc_state =
> +		intel_atomic_get_new_crtc_state(state, crtc);
> +	const struct drm_dp_tunnel_state *tunnel_state;
> +	struct drm_dp_tunnel *tunnel = new_crtc_state->dp_tunnel_ref.tunnel;
> +
> +	if (!tunnel)
> +		return 0;
> +
> +	tunnel_state = drm_dp_tunnel_atomic_get_state(&state->base, tunnel);
> +	if (IS_ERR(tunnel_state))
> +		return PTR_ERR(tunnel_state);
> +
> +	return 0;
> +}
> +
> +static int check_group_state(struct intel_atomic_state *state,
> +			     struct intel_dp *intel_dp,
> +			     const struct intel_connector *connector,
> +			     struct intel_crtc *crtc)
> +{
> +	struct drm_i915_private *i915 = to_i915(state->base.dev);
> +	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
> +	const struct intel_crtc_state *crtc_state;
> +
> +	crtc_state = intel_atomic_get_new_crtc_state(state, crtc);

Doesn't really need to be on its own line here.
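
I.e.:

	const struct intel_crtc_state *crtc_state =
		intel_atomic_get_new_crtc_state(state, crtc);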

> +
> +	if (!crtc_state->dp_tunnel_ref.tunnel)
> +		return 0;
> +
> +	drm_dbg_kms(&i915->drm,
> +		    "[DPTUN %s][CONNECTOR:%d:%s][ENCODER:%d:%s][CRTC:%d:%s] Adding group state for tunnel %p\n",
> +		    drm_dp_tunnel_name(intel_dp->tunnel),
> +		    connector->base.base.id,
> +		    connector->base.name,
> +		    encoder->base.base.id,
> +		    encoder->base.name,
> +		    crtc->base.base.id,
> +		    crtc->base.name,
> +		    intel_dp->tunnel);
> +
> +	return intel_dp_tunnel_atomic_add_group_state(state, crtc_state->dp_tunnel_ref.tunnel);
> +}
> +
> +int intel_dp_tunnel_atomic_check_state(struct intel_atomic_state *state,
> +				       struct intel_dp *intel_dp,
> +				       struct intel_connector *connector)
> +{
> +	const struct intel_digital_connector_state *old_conn_state =
> +		intel_atomic_get_new_connector_state(state, connector);

s/get_new/get_old/
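
I.e. presumably:

	const struct intel_digital_connector_state *old_conn_state =
		intel_atomic_get_old_connector_state(state, connector);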

> +	const struct intel_digital_connector_state *new_conn_state =
> +		intel_atomic_get_new_connector_state(state, connector);
> +	int err;
> +
> +	if (old_conn_state->base.crtc) {
> +		err = check_group_state(state, intel_dp, connector,
> +					to_intel_crtc(old_conn_state->base.crtc));
> +		if (err)
> +			return err;
> +	}
> +
> +	if (new_conn_state->base.crtc &&
> +	    new_conn_state->base.crtc != old_conn_state->base.crtc) {
> +		err = check_group_state(state, intel_dp, connector,
> +					to_intel_crtc(new_conn_state->base.crtc));
> +		if (err)
> +			return err;
> +	}
> +
> +	return check_inherited_tunnel_state(state, intel_dp, old_conn_state);
> +}
> +
> +void intel_dp_tunnel_atomic_compute_stream_bw(struct intel_atomic_state *state,
> +					      struct intel_dp *intel_dp,
> +					      const struct intel_connector *connector,
> +					      struct intel_crtc_state *crtc_state)
> +{
> +	struct drm_i915_private *i915 = to_i915(state->base.dev);
> +	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
> +	const struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);

We don't usually const anything but the states.
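
I.e. just:

	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);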

> +	int required_rate = intel_dp_config_required_rate(crtc_state);
> +
> +	if (!intel_dp_tunnel_bw_alloc_is_enabled(intel_dp))
> +		return;
> +
> +	drm_dbg_kms(&i915->drm,
> +		    "[DPTUN %s][CONNECTOR:%d:%s][ENCODER:%d:%s][CRTC:%d:%s] Stream %d required BW %d Mb/s\n",
> +		    drm_dp_tunnel_name(intel_dp->tunnel),
> +		    connector->base.base.id,
> +		    connector->base.name,
> +		    encoder->base.base.id,
> +		    encoder->base.name,
> +		    crtc->base.base.id,
> +		    crtc->base.name,
> +		    crtc->pipe,
> +		    kbytes_to_mbits(required_rate));
> +
> +	drm_dp_tunnel_atomic_set_stream_bw(&state->base, intel_dp->tunnel,
> +					   crtc->pipe, required_rate);
> +
> +	drm_dp_tunnel_ref_get(intel_dp->tunnel,
> +			      &crtc_state->dp_tunnel_ref);
> +}
> +
> +/**
> + * intel_dp_tunnel_atomic_check_link - Check the DP tunnel atomic state
> + * @state: intel atomic state
> + * @limits: link BW limits
> + *
> + * Check the link configuration for all DP tunnels in @state. If the
> + * configuration is invalid @limits will be updated if possible to
> + * reduce the total BW, after which the configuration for all CRTCs in
> + * @state must be recomputed with the updated @limits.
> + *
> + * Returns:
> + *   - 0 if the configuration is valid
> + *   - %-EAGAIN, if the configuration is invalid and @limits got updated
> + *     with fallback values with which the configuration of all CRTCs in
> + *     @state must be recomputed
> + *   - Other negative error, if the configuration is invalid without a
> + *     fallback possibility, or the check failed for another reason
> + */
> +int intel_dp_tunnel_atomic_check_link(struct intel_atomic_state *state,
> +				      struct intel_link_bw_limits *limits)
> +{
> +	u32 failed_stream_mask;
> +	int err;
> +
> +	err = drm_dp_tunnel_atomic_check_stream_bws(&state->base,
> +						    &failed_stream_mask);
> +	if (err != -ENOSPC)
> +		return err;
> +
> +	err = intel_link_bw_reduce_bpp(state, limits,
> +				       failed_stream_mask, "DP tunnel link BW");
> +
> +	return err ? : -EAGAIN;
> +}
> +
> +void intel_dp_tunnel_atomic_alloc_bw(struct intel_atomic_state *state,
> +				     struct intel_encoder *encoder,
> +				     const struct intel_crtc_state *new_crtc_state,
> +				     const struct drm_connector_state *new_conn_state)
> +{
> +	struct drm_i915_private *i915 = to_i915(state->base.dev);
> +	struct drm_dp_tunnel *tunnel = new_crtc_state->dp_tunnel_ref.tunnel;
> +	const struct drm_dp_tunnel_state *new_tunnel_state;
> +	int err;
> +
> +	if (!tunnel)
> +		return;
> +
> +	new_tunnel_state = drm_dp_tunnel_atomic_get_new_state(&state->base, tunnel);
> +
> +	err = drm_dp_tunnel_alloc_bw(tunnel,
> +				     drm_dp_tunnel_atomic_get_tunnel_bw(new_tunnel_state));
> +	if (!err)
> +		return;
> +
> +	if (!intel_digital_port_connected(encoder))
> +		return;
> +
> +	drm_dbg_kms(&i915->drm,
> +		    "[DPTUN %s][ENCODER:%d:%s] BW allocation failed on a connected sink (err %pe)\n",
> +		    drm_dp_tunnel_name(tunnel),
> +		    encoder->base.base.id,
> +		    encoder->base.name,
> +		    ERR_PTR(err));
> +
> +	intel_dp_queue_modeset_retry_for_link(state, encoder, new_crtc_state, new_conn_state);
> +}
> +
> +void intel_dp_tunnel_atomic_free_bw(struct intel_atomic_state *state,
> +				    struct intel_encoder *encoder,
> +				    const struct intel_crtc_state *old_crtc_state,
> +				    const struct drm_connector_state *old_conn_state)
> +{
> +	struct drm_i915_private *i915 = to_i915(state->base.dev);
> +	struct intel_crtc *old_crtc = to_intel_crtc(old_crtc_state->uapi.crtc);
> +	struct drm_dp_tunnel *tunnel;
> +	int err;
> +
> +	tunnel = get_inherited_tunnel_state(state, old_crtc);
> +	if (!tunnel)
> +		tunnel = old_crtc_state->dp_tunnel_ref.tunnel;

So what happens if we have tunnels in both places?

The one in old_crtc_state is stale enough that we don't
have to care about it?

> +
> +	if (!tunnel)
> +		return;
> +
> +	err = drm_dp_tunnel_alloc_bw(tunnel, 0);
> +	if (!err)
> +		return;
> +
> +	if (!intel_digital_port_connected(encoder))
> +		return;
> +
> +	drm_dbg_kms(&i915->drm,
> +		    "[DPTUN %s][ENCODER:%d:%s] BW freeing failed on a connected sink (err %pe)\n",
> +		    drm_dp_tunnel_name(tunnel),
> +		    encoder->base.base.id,
> +		    encoder->base.name,
> +		    ERR_PTR(err));
> +
> +	intel_dp_queue_modeset_retry_for_link(state, encoder, old_crtc_state, old_conn_state);
> +}
> +
> +int intel_dp_tunnel_mgr_init(struct drm_i915_private *i915)
> +{
> +	struct drm_dp_tunnel_mgr *tunnel_mgr;
> +	struct drm_connector_list_iter connector_list_iter;
> +	struct intel_connector *connector;
> +	int dp_connectors = 0;
> +
> +	drm_connector_list_iter_begin(&i915->drm, &connector_list_iter);
> +	for_each_intel_connector_iter(connector, &connector_list_iter) {
> +		if (connector->base.connector_type != DRM_MODE_CONNECTOR_DisplayPort)
> +			continue;
> +
> +		dp_connectors++;
> +	}
> +	drm_connector_list_iter_end(&connector_list_iter);
> +
> +	tunnel_mgr = drm_dp_tunnel_mgr_create(&i915->drm, dp_connectors);
> +	if (IS_ERR(tunnel_mgr))
> +		return PTR_ERR(tunnel_mgr);
> +
> +	i915->display.dp_tunnel_mgr = tunnel_mgr;
> +
> +	return 0;
> +}
> +
> +void intel_dp_tunnel_mgr_cleanup(struct drm_i915_private *i915)
> +{
> +	drm_dp_tunnel_mgr_destroy(i915->display.dp_tunnel_mgr);
> +	i915->display.dp_tunnel_mgr = NULL;
> +}
> diff --git a/drivers/gpu/drm/i915/display/intel_dp_tunnel.h b/drivers/gpu/drm/i915/display/intel_dp_tunnel.h
> new file mode 100644
> index 0000000000000..bedba3ba9ad8d
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/display/intel_dp_tunnel.h
> @@ -0,0 +1,131 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright © 2023 Intel Corporation
> + */
> +
> +#ifndef __INTEL_DP_TUNNEL_H__
> +#define __INTEL_DP_TUNNEL_H__
> +
> +#include <linux/errno.h>
> +#include <linux/types.h>
> +
> +struct drm_i915_private;
> +struct drm_connector_state;
> +struct drm_modeset_acquire_ctx;
> +
> +struct intel_atomic_state;
> +struct intel_connector;
> +struct intel_crtc;
> +struct intel_crtc_state;
> +struct intel_dp;
> +struct intel_encoder;
> +struct intel_link_bw_limits;
> +
> +#if defined(CONFIG_DRM_I915_DP_TUNNEL) && defined(I915)
> +
> +int intel_dp_tunnel_detect(struct intel_dp *intel_dp, struct drm_modeset_acquire_ctx *ctx);
> +void intel_dp_tunnel_disconnect(struct intel_dp *intel_dp);
> +void intel_dp_tunnel_destroy(struct intel_dp *intel_dp);
> +void intel_dp_tunnel_resume(struct intel_dp *intel_dp, bool dpcd_updated);
> +void intel_dp_tunnel_suspend(struct intel_dp *intel_dp);
> +
> +bool intel_dp_tunnel_bw_alloc_is_enabled(struct intel_dp *intel_dp);
> +
> +void
> +intel_dp_tunnel_atomic_cleanup_inherited_state(struct intel_atomic_state *state);
> +
> +void intel_dp_tunnel_atomic_compute_stream_bw(struct intel_atomic_state *state,
> +					      struct intel_dp *intel_dp,
> +					      const struct intel_connector *connector,
> +					      struct intel_crtc_state *crtc_state);
> +
> +int intel_dp_tunnel_atomic_add_state_for_crtc(struct intel_atomic_state *state,
> +					      struct intel_crtc *crtc);
> +int intel_dp_tunnel_atomic_check_link(struct intel_atomic_state *state,
> +				      struct intel_link_bw_limits *limits);
> +int intel_dp_tunnel_atomic_check_state(struct intel_atomic_state *state,
> +				       struct intel_dp *intel_dp,
> +				       struct intel_connector *connector);
> +void intel_dp_tunnel_atomic_alloc_bw(struct intel_atomic_state *state,
> +				     struct intel_encoder *encoder,
> +				     const struct intel_crtc_state *new_crtc_state,
> +				     const struct drm_connector_state *new_conn_state);
> +void intel_dp_tunnel_atomic_free_bw(struct intel_atomic_state *state,
> +				    struct intel_encoder *encoder,
> +				    const struct intel_crtc_state *old_crtc_state,
> +				    const struct drm_connector_state *old_conn_state);
> +
> +int intel_dp_tunnel_mgr_init(struct drm_i915_private *i915);
> +void intel_dp_tunnel_mgr_cleanup(struct drm_i915_private *i915);
> +
> +#else
> +
> +static inline int
> +intel_dp_tunnel_detect(struct intel_dp *intel_dp, struct drm_modeset_acquire_ctx *ctx)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
> +static inline void intel_dp_tunnel_disconnect(struct intel_dp *intel_dp) {}
> +static inline void intel_dp_tunnel_destroy(struct intel_dp *intel_dp) {}
> +static inline void intel_dp_tunnel_resume(struct intel_dp *intel_dp, bool dpcd_updated) {}
> +static inline void intel_dp_tunnel_suspend(struct intel_dp *intel_dp) {}
> +
> +static inline bool intel_dp_tunnel_bw_alloc_is_enabled(struct intel_dp *intel_dp)
> +{
> +	return false;
> +}
> +
> +static inline void
> +intel_dp_tunnel_atomic_cleanup_inherited_state(struct intel_atomic_state *state) {}
> +
> +static inline void
> +intel_dp_tunnel_atomic_compute_stream_bw(struct intel_atomic_state *state,
> +					 struct intel_dp *intel_dp,
> +					 const struct intel_connector *connector,
> +					 struct intel_crtc_state *crtc_state) {}
> +
> +static inline int
> +intel_dp_tunnel_atomic_add_state_for_crtc(struct intel_atomic_state *state,
> +					  struct intel_crtc *crtc)
> +{
> +	return 0;
> +}
> +
> +static inline int
> +intel_dp_tunnel_atomic_check_link(struct intel_atomic_state *state,
> +				  struct intel_link_bw_limits *limits)
> +{
> +	return 0;
> +}
> +
> +static inline int
> +intel_dp_tunnel_atomic_check_state(struct intel_atomic_state *state,
> +				   struct intel_dp *intel_dp,
> +				   struct intel_connector *connector)
> +{
> +	return 0;
> +}
> +
> +static inline void
> +intel_dp_tunnel_atomic_alloc_bw(struct intel_atomic_state *state,
> +				struct intel_encoder *encoder,
> +				const struct intel_crtc_state *new_crtc_state,
> +				const struct drm_connector_state *new_conn_state) {}
> +static inline void
> +intel_dp_tunnel_atomic_free_bw(struct intel_atomic_state *state,
> +			       struct intel_encoder *encoder,
> +			       const struct intel_crtc_state *old_crtc_state,
> +			       const struct drm_connector_state *old_conn_state) {}
> +
> +static inline int
> +intel_dp_tunnel_mgr_init(struct drm_i915_private *i915)
> +{
> +	return 0;
> +}
> +
> +static inline void intel_dp_tunnel_mgr_cleanup(struct drm_i915_private *i915) {}
> +
> +#endif /* CONFIG_DRM_I915_DP_TUNNEL */
> +
> +#endif /* __INTEL_DP_TUNNEL_H__ */
> -- 
> 2.39.2

-- 
Ville Syrjälä
Intel

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 14/19] drm/i915/dp: Compute DP tunel BW during encoder state computation
  2024-01-23 10:28 ` [PATCH 14/19] drm/i915/dp: Compute DP tunel BW during encoder state computation Imre Deak
  2024-02-06 20:44   ` Shankar, Uma
@ 2024-02-06 23:25   ` Ville Syrjälä
  2024-02-07 14:25     ` Imre Deak
  1 sibling, 1 reply; 61+ messages in thread
From: Ville Syrjälä @ 2024-02-06 23:25 UTC (permalink / raw)
  To: Imre Deak; +Cc: intel-gfx, dri-devel

On Tue, Jan 23, 2024 at 12:28:45PM +0200, Imre Deak wrote:
> Compute the BW required through a DP tunnel on links with such tunnels
> detected and add the corresponding atomic state during a modeset.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c     | 16 +++++++++++++---
>  drivers/gpu/drm/i915/display/intel_dp_mst.c | 13 +++++++++++++
>  2 files changed, 26 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 78dfe8be6031d..6968fdb7ffcdf 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2880,6 +2880,7 @@ intel_dp_compute_config(struct intel_encoder *encoder,
>  			struct drm_connector_state *conn_state)
>  {
>  	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
> +	struct intel_atomic_state *state = to_intel_atomic_state(conn_state->state);
>  	struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
>  	struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
>  	const struct drm_display_mode *fixed_mode;
> @@ -2980,6 +2981,9 @@ intel_dp_compute_config(struct intel_encoder *encoder,
>  	intel_dp_compute_vsc_sdp(intel_dp, pipe_config, conn_state);
>  	intel_dp_compute_hdr_metadata_infoframe_sdp(intel_dp, pipe_config, conn_state);
>  
> +	intel_dp_tunnel_atomic_compute_stream_bw(state, intel_dp, connector,
> +						 pipe_config);

Error handling seems awol?
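
Presumably the helper would have to start returning an int (eg. if
drm_dp_tunnel_atomic_set_stream_bw() can fail) and this would become
something like:

	ret = intel_dp_tunnel_atomic_compute_stream_bw(state, intel_dp, connector,
						       pipe_config);
	if (ret)
		return ret;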

> +
>  	return 0;
>  }
>  
> @@ -6087,6 +6091,15 @@ static int intel_dp_connector_atomic_check(struct drm_connector *conn,
>  			return ret;
>  	}
>  
> +	if (!intel_connector_needs_modeset(state, conn))
> +		return 0;
> +
> +	ret = intel_dp_tunnel_atomic_check_state(state,
> +						 intel_dp,
> +						 intel_conn);
> +	if (ret)
> +		return ret;
> +
>  	/*
>  	 * We don't enable port sync on BDW due to missing w/as and
>  	 * due to not having adjusted the modeset sequence appropriately.
> @@ -6094,9 +6107,6 @@ static int intel_dp_connector_atomic_check(struct drm_connector *conn,
>  	if (DISPLAY_VER(dev_priv) < 9)
>  		return 0;
>  
> -	if (!intel_connector_needs_modeset(state, conn))
> -		return 0;
> -
>  	if (conn->has_tile) {
>  		ret = intel_modeset_tile_group(state, conn->tile_group->id);
>  		if (ret)
> diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
> index 520393dc8b453..cbfab3173b9ef 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
> @@ -42,6 +42,7 @@
>  #include "intel_dp.h"
>  #include "intel_dp_hdcp.h"
>  #include "intel_dp_mst.h"
> +#include "intel_dp_tunnel.h"
>  #include "intel_dpio_phy.h"
>  #include "intel_hdcp.h"
>  #include "intel_hotplug.h"
> @@ -523,6 +524,7 @@ static int intel_dp_mst_compute_config(struct intel_encoder *encoder,
>  				       struct drm_connector_state *conn_state)
>  {
>  	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
> +	struct intel_atomic_state *state = to_intel_atomic_state(conn_state->state);
>  	struct intel_dp_mst_encoder *intel_mst = enc_to_mst(encoder);
>  	struct intel_dp *intel_dp = &intel_mst->primary->dp;
>  	const struct intel_connector *connector =
> @@ -619,6 +621,9 @@ static int intel_dp_mst_compute_config(struct intel_encoder *encoder,
>  
>  	intel_psr_compute_config(intel_dp, pipe_config, conn_state);
>  
> +	intel_dp_tunnel_atomic_compute_stream_bw(state, intel_dp, connector,
> +						 pipe_config);
> +
>  	return 0;
>  }
>  
> @@ -876,6 +881,14 @@ intel_dp_mst_atomic_check(struct drm_connector *connector,
>  	if (ret)
>  		return ret;
>  
> +	if (intel_connector_needs_modeset(state, connector)) {
> +		ret = intel_dp_tunnel_atomic_check_state(state,
> +							 intel_connector->mst_port,
> +							 intel_connector);
> +		if (ret)
> +			return ret;
> +	}
> +
>  	return drm_dp_atomic_release_time_slots(&state->base,
>  						&intel_connector->mst_port->mst_mgr,
>  						intel_connector->port);
> -- 
> 2.39.2

-- 
Ville Syrjälä
Intel

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 11/19] drm/i915/dp: Add support for DP tunnel BW allocation
  2024-02-06 23:08   ` Ville Syrjälä
@ 2024-02-07 12:09     ` Imre Deak
  0 siblings, 0 replies; 61+ messages in thread
From: Imre Deak @ 2024-02-07 12:09 UTC (permalink / raw)
  To: Ville Syrjälä; +Cc: intel-gfx, dri-devel

On Wed, Feb 07, 2024 at 01:08:43AM +0200, Ville Syrjälä wrote:
> On Tue, Jan 23, 2024 at 12:28:42PM +0200, Imre Deak wrote:
> > Add support to enable the DP tunnel BW allocation mode. Follow-up
> > patches will call the required helpers added here to prepare for a
> > modeset on a link with DP tunnels, the last change in the patchset
> > actually enabling BWA.
> > 
> > With BWA enabled, the driver will expose the full mode list a display
> > supports, regardless of any BW limitation on a shared (Thunderbolt)
> > link. Such BW limits will be checked against only during a modeset, when
> > the driver has the full knowledge of each display's BW requirement.
> > 
> > If the link BW changes in a way that a connector's modelist may also
> > change, userspace will get a hotplug notification for all the connectors
> > sharing the same link (so it can adjust the mode used for a display).
> > 
> > The BW limitation can change at any point, asynchronously to modesets
> > on a given connector, so a modeset can fail even though the atomic check
> > for it passed. In such scenarios userspace will get a bad link
> > notification and in response is supposed to retry the modeset.
> > 
> > Signed-off-by: Imre Deak <imre.deak@intel.com>
> > ---
> >  drivers/gpu/drm/i915/Kconfig                  |  13 +
> >  drivers/gpu/drm/i915/Kconfig.debug            |   1 +
> >  drivers/gpu/drm/i915/Makefile                 |   3 +
> >  drivers/gpu/drm/i915/display/intel_atomic.c   |   2 +
> >  .../gpu/drm/i915/display/intel_display_core.h |   1 +
> >  .../drm/i915/display/intel_display_types.h    |   9 +
> >  .../gpu/drm/i915/display/intel_dp_tunnel.c    | 642 ++++++++++++++++++
> >  .../gpu/drm/i915/display/intel_dp_tunnel.h    | 131 ++++
> >  8 files changed, 802 insertions(+)
> >  create mode 100644 drivers/gpu/drm/i915/display/intel_dp_tunnel.c
> >  create mode 100644 drivers/gpu/drm/i915/display/intel_dp_tunnel.h
> > 
> > diff --git a/drivers/gpu/drm/i915/Kconfig b/drivers/gpu/drm/i915/Kconfig
> > index b5d6e3352071f..4636913c17868 100644
> > --- a/drivers/gpu/drm/i915/Kconfig
> > +++ b/drivers/gpu/drm/i915/Kconfig
> > @@ -155,6 +155,19 @@ config DRM_I915_PXP
> >  	  protected session and manage the status of the alive software session,
> >  	  as well as its life cycle.
> >  
> > +config DRM_I915_DP_TUNNEL
> > +	bool "Enable DP tunnel support"
> > +	depends on DRM_I915
> > +	select DRM_DISPLAY_DP_TUNNEL
> > +	default y
> > +	help
> > +	  Choose this option to detect DP tunnels and enable the Bandwidth
> > +	  Allocation mode for such tunnels. This allows using the maximum
> > +	  resolution allowed by the link BW on all displays sharing the
> > +	  link BW, for instance on a Thunderbolt link.
> > +
> > +	  If in doubt, say "Y".
> > +
> >  menu "drm/i915 Debugging"
> >  depends on DRM_I915
> >  depends on EXPERT
> > diff --git a/drivers/gpu/drm/i915/Kconfig.debug b/drivers/gpu/drm/i915/Kconfig.debug
> > index 5b7162076850c..bc18e2d9ea05d 100644
> > --- a/drivers/gpu/drm/i915/Kconfig.debug
> > +++ b/drivers/gpu/drm/i915/Kconfig.debug
> > @@ -28,6 +28,7 @@ config DRM_I915_DEBUG
> >  	select STACKDEPOT
> >  	select STACKTRACE
> >  	select DRM_DP_AUX_CHARDEV
> > +	select DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE if DRM_I915_DP_TUNNEL
> >  	select X86_MSR # used by igt/pm_rpm
> >  	select DRM_VGEM # used by igt/prime_vgem (dmabuf interop checks)
> >  	select DRM_DEBUG_MM if DRM=y
> > diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
> > index c13f14edb5088..3ef6ed41e62b4 100644
> > --- a/drivers/gpu/drm/i915/Makefile
> > +++ b/drivers/gpu/drm/i915/Makefile
> > @@ -369,6 +369,9 @@ i915-y += \
> >  	display/vlv_dsi.o \
> >  	display/vlv_dsi_pll.o
> >  
> > +i915-$(CONFIG_DRM_I915_DP_TUNNEL) += \
> > +	display/intel_dp_tunnel.o
> > +
> >  i915-y += \
> >  	i915_perf.o
> >  
> > diff --git a/drivers/gpu/drm/i915/display/intel_atomic.c b/drivers/gpu/drm/i915/display/intel_atomic.c
> > index ec0d5168b5035..96ab37e158995 100644
> > --- a/drivers/gpu/drm/i915/display/intel_atomic.c
> > +++ b/drivers/gpu/drm/i915/display/intel_atomic.c
> > @@ -29,6 +29,7 @@
> >   * See intel_atomic_plane.c for the plane-specific atomic functionality.
> >   */
> >  
> > +#include <drm/display/drm_dp_tunnel.h>
> >  #include <drm/drm_atomic.h>
> >  #include <drm/drm_atomic_helper.h>
> >  #include <drm/drm_fourcc.h>
> > @@ -38,6 +39,7 @@
> >  #include "intel_atomic.h"
> >  #include "intel_cdclk.h"
> >  #include "intel_display_types.h"
> > +#include "intel_dp_tunnel.h"
> >  #include "intel_global_state.h"
> >  #include "intel_hdcp.h"
> >  #include "intel_psr.h"
> > diff --git a/drivers/gpu/drm/i915/display/intel_display_core.h b/drivers/gpu/drm/i915/display/intel_display_core.h
> > index a90f1aa201be8..0993d25a0a686 100644
> > --- a/drivers/gpu/drm/i915/display/intel_display_core.h
> > +++ b/drivers/gpu/drm/i915/display/intel_display_core.h
> > @@ -522,6 +522,7 @@ struct intel_display {
> >  	} wq;
> >  
> >  	/* Grouping using named structs. Keep sorted. */
> > +	struct drm_dp_tunnel_mgr *dp_tunnel_mgr;
> >  	struct intel_audio audio;
> >  	struct intel_dpll dpll;
> >  	struct intel_fbc *fbc[I915_MAX_FBCS];
> > diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h
> > index ae2e8cff9d691..b79db78b27728 100644
> > --- a/drivers/gpu/drm/i915/display/intel_display_types.h
> > +++ b/drivers/gpu/drm/i915/display/intel_display_types.h
> > @@ -33,6 +33,7 @@
> >  
> >  #include <drm/display/drm_dp_dual_mode_helper.h>
> >  #include <drm/display/drm_dp_mst_helper.h>
> > +#include <drm/display/drm_dp_tunnel.h>
> >  #include <drm/display/drm_dsc.h>
> >  #include <drm/drm_atomic.h>
> >  #include <drm/drm_crtc.h>
> > @@ -677,6 +678,8 @@ struct intel_atomic_state {
> >  
> >  	struct intel_shared_dpll_state shared_dpll[I915_NUM_PLLS];
> >  
> > +	struct intel_dp_tunnel_inherited_state *dp_tunnel_state;
> 
> 'dp_tunnel_state' is a bit too generic sounding to me.
> 'inherited_tunnels' or something like that maybe?

Yes, will rename it to inherited_dp_tunnels.

> > +
> >  	/*
> >  	 * Current watermarks can't be trusted during hardware readout, so
> >  	 * don't bother calculating intermediate watermarks.
> > @@ -1372,6 +1375,9 @@ struct intel_crtc_state {
> >  		struct drm_dsc_config config;
> >  	} dsc;
> >  
> > +	/* DP tunnel used for BW allocation. */
> > +	struct drm_dp_tunnel_ref dp_tunnel_ref;
> > +
> >  	/* HSW+ linetime watermarks */
> >  	u16 linetime;
> >  	u16 ips_linetime;
> > @@ -1775,6 +1781,9 @@ struct intel_dp {
> >  	/* connector directly attached - won't be use for modeset in mst world */
> >  	struct intel_connector *attached_connector;
> >  
> > +	struct drm_dp_tunnel *tunnel;
> > +	bool tunnel_suspended:1;
> > +
> >  	/* mst connector list */
> >  	struct intel_dp_mst_encoder *mst_encoders[I915_MAX_PIPES];
> >  	struct drm_dp_mst_topology_mgr mst_mgr;
> > diff --git a/drivers/gpu/drm/i915/display/intel_dp_tunnel.c b/drivers/gpu/drm/i915/display/intel_dp_tunnel.c
> > new file mode 100644
> > index 0000000000000..52dd0108a6c13
> > --- /dev/null
> > +++ b/drivers/gpu/drm/i915/display/intel_dp_tunnel.c
> > @@ -0,0 +1,642 @@
> > +// SPDX-License-Identifier: MIT
> > +/*
> > + * Copyright © 2023 Intel Corporation
> > + */
> > +
> > +#include "i915_drv.h"
> > +
> > +#include <drm/display/drm_dp_tunnel.h>
> > +
> > +#include "intel_atomic.h"
> > +#include "intel_display_limits.h"
> > +#include "intel_display_types.h"
> > +#include "intel_dp.h"
> > +#include "intel_dp_link_training.h"
> > +#include "intel_dp_mst.h"
> > +#include "intel_dp_tunnel.h"
> > +#include "intel_link_bw.h"
> > +
> > +struct intel_dp_tunnel_inherited_state {
> > +	struct {
> > +		struct drm_dp_tunnel_ref tunnel_ref;
> > +	} tunnels[I915_MAX_PIPES];
> 
> Hmm. Does the extra middle-man struct buy us anything?

The struct used to have a pipe mask as well, but yes, this version can
be simplified.
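
Will flatten it to something like:

	struct intel_dp_tunnel_inherited_state {
		struct drm_dp_tunnel_ref ref[I915_MAX_PIPES];
	};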

> > +};
> > +
> > +static void destroy_tunnel(struct intel_dp *intel_dp)
> > +{
> > +	drm_dp_tunnel_destroy(intel_dp->tunnel);
> > +	intel_dp->tunnel = NULL;
> > +}
> > +
> > +void intel_dp_tunnel_disconnect(struct intel_dp *intel_dp)
> > +{
> > +	if (!intel_dp->tunnel)
> > +		return;
> > +
> > +	destroy_tunnel(intel_dp);
> > +}
> > +
> > +void intel_dp_tunnel_destroy(struct intel_dp *intel_dp)
> > +{
> > +	if (!intel_dp->tunnel)
> > +		return;
> > +
> > +	if (intel_dp_tunnel_bw_alloc_is_enabled(intel_dp))
> > +		drm_dp_tunnel_disable_bw_alloc(intel_dp->tunnel);
> > +
> > +	destroy_tunnel(intel_dp);
> > +}
> > +
> > +static int kbytes_to_mbits(int kbytes)
> > +{
> > +	return DIV_ROUND_UP(kbytes * 8, 1000);
> > +}
> > +
> > +static int get_current_link_bw(struct intel_dp *intel_dp,
> > +			       bool *below_dprx_bw)
> > +{
> > +	int rate = intel_dp_max_common_rate(intel_dp);
> > +	int lane_count = intel_dp_max_common_lane_count(intel_dp);
> > +	int bw;
> > +
> > +	bw = intel_dp_max_link_data_rate(intel_dp, rate, lane_count);
> > +	*below_dprx_bw = bw < drm_dp_max_dprx_data_rate(rate, lane_count);
> > +
> > +	return bw;
> > +}
> > +
> > +static int update_tunnel_state(struct intel_dp *intel_dp)
> > +{
> > +	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
> > +	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
> > +	bool old_bw_below_dprx;
> > +	bool new_bw_below_dprx;
> > +	int old_bw;
> > +	int new_bw;
> > +	int ret;
> > +
> > +	old_bw = get_current_link_bw(intel_dp, &old_bw_below_dprx);
> > +
> > +	ret = drm_dp_tunnel_update_state(intel_dp->tunnel);
> > +	if (ret < 0) {
> > +		drm_dbg_kms(&i915->drm,
> > +			    "[DPTUN %s][ENCODER:%d:%s] State update failed (err %pe)\n",
> > +			    drm_dp_tunnel_name(intel_dp->tunnel),
> > +			    encoder->base.base.id,
> > +			    encoder->base.name,
> > +			    ERR_PTR(ret));
> > +
> > +		return ret;
> > +	}
> > +
> > +	if (ret == 0 ||
> > +	    !drm_dp_tunnel_bw_alloc_is_enabled(intel_dp->tunnel))
> > +		return 0;
> > +
> > +	intel_dp_update_sink_caps(intel_dp);
> > +
> > +	new_bw = get_current_link_bw(intel_dp, &new_bw_below_dprx);
> > +
> > +	/* Suppress the notification if the mode list can't change due to bw. */
> > +	if (old_bw_below_dprx == new_bw_below_dprx &&
> > +	    !new_bw_below_dprx)
> > +		return 0;
> > +
> > +	drm_dbg_kms(&i915->drm,
> > +		    "[DPTUN %s][ENCODER:%d:%s] Notify users about BW change: %d -> %d\n",
> > +		    drm_dp_tunnel_name(intel_dp->tunnel),
> > +		    encoder->base.base.id,
> > +		    encoder->base.name,
> > +		    kbytes_to_mbits(old_bw),
> > +		    kbytes_to_mbits(new_bw));
> > +
> > +	return 1;
> > +}
> > +
> > +static int allocate_initial_tunnel_bw(struct intel_dp *intel_dp,
> > +				      struct drm_modeset_acquire_ctx *ctx)
> > +{
> > +	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
> > +	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
> > +	const struct intel_crtc *crtc;
> > +	int tunnel_bw = 0;
> > +	u8 pipe_mask;
> > +	int err;
> > +
> > +	err = intel_dp_get_active_pipes(intel_dp, ctx,
> > +					INTEL_DP_GET_PIPES_SYNC,
> > +					&pipe_mask);
> > +	if (err)
> > +		return err;
> > +
> > +	for_each_intel_crtc_in_pipe_mask(&i915->drm, crtc, pipe_mask) {
> > +		const struct intel_crtc_state *crtc_state =
> > +			to_intel_crtc_state(crtc->base.state);
> > +		int stream_bw = intel_dp_config_required_rate(crtc_state);
> > +
> > +		drm_dbg_kms(&i915->drm,
> > +			    "[DPTUN %s][ENCODER:%d:%s][CRTC:%d:%s] Initial BW for stream %d: %d/%d Mb/s\n",
> > +			    drm_dp_tunnel_name(intel_dp->tunnel),
> > +			    encoder->base.base.id,
> > +			    encoder->base.name,
> 
> I would try to have the id+name for each object on the same line.
> Avoids the whole thing eating so many lines, and the two are related
> so just makes sense.

Ok, will change it.

> > +			    crtc->base.base.id,
> > +			    crtc->base.name,
> > +			    crtc->pipe,
> > +			    kbytes_to_mbits(stream_bw),
> > +			    kbytes_to_mbits(tunnel_bw));
> > +
> > +		tunnel_bw += stream_bw;
> > +	}
> > +
> > +	err = drm_dp_tunnel_alloc_bw(intel_dp->tunnel, tunnel_bw);
> > +	if (err) {
> > +		drm_dbg_kms(&i915->drm,
> > +			    "[DPTUN %s][ENCODER:%d:%s] Initial BW allocation failed (err %pe)\n",
> > +			    drm_dp_tunnel_name(intel_dp->tunnel),
> > +			    encoder->base.base.id,
> > +			    encoder->base.name,
> > +			    ERR_PTR(err));
> > +
> > +		return err;
> > +	}
> > +
> > +	return update_tunnel_state(intel_dp);
> > +}
> > +
> > +static int detect_new_tunnel(struct intel_dp *intel_dp, struct drm_modeset_acquire_ctx *ctx)
> > +{
> > +	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
> > +	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
> > +	struct drm_dp_tunnel *tunnel;
> > +	int ret;
> > +
> > +	tunnel = drm_dp_tunnel_detect(i915->display.dp_tunnel_mgr,
> > +					&intel_dp->aux);
> > +	if (IS_ERR(tunnel))
> > +		return PTR_ERR(tunnel);
> > +
> > +	intel_dp->tunnel = tunnel;
> > +
> > +	ret = drm_dp_tunnel_enable_bw_alloc(intel_dp->tunnel);
> > +	if (ret) {
> > +		if (ret == -EOPNOTSUPP)
> > +			return 0;
> > +
> > +		drm_dbg_kms(&i915->drm,
> > +			    "[DPTUN %s][ENCODER:%d:%s] Failed to enable BW allocation mode (ret %pe)\n",
> > +			    drm_dp_tunnel_name(intel_dp->tunnel),
> > +			    encoder->base.base.id,
> > +			    encoder->base.name,
> > +			    ERR_PTR(ret));
> > +
> > +		/* Keep the tunnel with BWA disabled */
> > +		return 0;
> > +	}
> > +
> > +	ret = allocate_initial_tunnel_bw(intel_dp, ctx);
> > +	if (ret < 0)
> > +		intel_dp_tunnel_destroy(intel_dp);
> > +
> > +	return ret;
> > +}
> > +
> > +int intel_dp_tunnel_detect(struct intel_dp *intel_dp, struct drm_modeset_acquire_ctx *ctx)
> > +{
> > +	int ret;
> > +
> > +	if (intel_dp_is_edp(intel_dp))
> > +		return 0;
> > +
> > +	if (intel_dp->tunnel) {
> > +		ret = update_tunnel_state(intel_dp);
> > +		if (ret >= 0)
> > +			return ret;
> > +
> > +		/* Try to recreate the tunnel after an update error. */
> > +		intel_dp_tunnel_destroy(intel_dp);
> > +	}
> > +
> > +	ret = detect_new_tunnel(intel_dp, ctx);
> > +	if (ret >= 0 || ret == -EDEADLK)
> 
> This if-statement seems to achieve nothing. Was there supposed
> to be some error handling for the actual error cases here?

Yes, just a left-over from a previous version of the patch limiting the
number of times a tunnel detection was retried after a failure. It can
be simplified, thanks for spotting it.

> > +		return ret;
> > +
> > +	return ret;
> > +}
> > +
> > +bool intel_dp_tunnel_bw_alloc_is_enabled(struct intel_dp *intel_dp)
> > +{
> > +	return intel_dp->tunnel &&
> > +		drm_dp_tunnel_bw_alloc_is_enabled(intel_dp->tunnel);
> 
> We seem to have quite a few wrappers just for the NULL check.
> I wonder if we should just make drm_dp_tunnel*() accept NULL
> directly? Maybe something to think about for the future...

Yes, makes sense to move the check to the helper funcs, will do that.

> > +}
> > +
> > +void intel_dp_tunnel_suspend(struct intel_dp *intel_dp)
> > +{
> > +	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
> > +	struct intel_connector *connector = intel_dp->attached_connector;
> > +	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
> > +
> > +	if (!intel_dp_tunnel_bw_alloc_is_enabled(intel_dp))
> > +		return;
> > +
> > +	drm_dbg_kms(&i915->drm, "[DPTUN %s][CONNECTOR:%d:%s][ENCODER:%d:%s] Suspend\n",
> > +		    drm_dp_tunnel_name(intel_dp->tunnel),
> > +		    connector->base.base.id, connector->base.name,
> > +		    encoder->base.base.id, encoder->base.name);
> > +
> > +	intel_dp->tunnel_suspended = true;
> > +}
> > +
> > +void intel_dp_tunnel_resume(struct intel_dp *intel_dp, bool dpcd_updated)
> > +{
> > +	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
> > +	struct intel_connector *connector = intel_dp->attached_connector;
> > +	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
> > +	u8 dpcd[DP_RECEIVER_CAP_SIZE];
> > +	int err = 0;
> > +
> > +	if (!intel_dp->tunnel_suspended)
> > +		return;
> > +
> > +	intel_dp->tunnel_suspended = false;
> > +
> > +	drm_dbg_kms(&i915->drm, "[DPTUN %s][CONNECTOR:%d:%s][ENCODER:%d:%s] Resume\n",
> > +		    drm_dp_tunnel_name(intel_dp->tunnel),
> > +		    connector->base.base.id, connector->base.name,
> > +		    encoder->base.base.id, encoder->base.name);
> > +
> > +	/* DPRX caps read required by tunnel detection */
> > +	if (!dpcd_updated)
> > +		err = intel_dp_read_dprx_caps(intel_dp, dpcd);
> > +
> > +	if (err)
> > +		drm_dp_tunnel_set_io_error(intel_dp->tunnel);
> > +	else
> > +		err = drm_dp_tunnel_enable_bw_alloc(intel_dp->tunnel);
> > +		/* TODO: allocate initial BW */
> > +
> > +	if (!err)
> > +		return;
> > +
> > +	drm_dbg_kms(&i915->drm,
> > +		    "[DPTUN %s][CONNECTOR:%d:%s][ENCODER:%d:%s] Tunnel can't be resumed, will drop and redetect it (err %pe)\n",
> > +		    drm_dp_tunnel_name(intel_dp->tunnel),
> > +		    connector->base.base.id, connector->base.name,
> > +		    encoder->base.base.id, encoder->base.name,
> > +		    ERR_PTR(err));
> > +}
> > +
> > +static struct drm_dp_tunnel *
> > +get_inherited_tunnel_state(struct intel_atomic_state *state,
> > +			   const struct intel_crtc *crtc)
> > +{
> > +	if (!state->dp_tunnel_state)
> > +		return NULL;
> > +
> > +	return state->dp_tunnel_state->tunnels[crtc->pipe].tunnel_ref.tunnel;
> > +}
> > +
> > +static int
> > +add_inherited_tunnel_state(struct intel_atomic_state *state,
> > +			   struct drm_dp_tunnel *tunnel,
> > +			   const struct intel_crtc *crtc)
> > +{
> > +	struct drm_i915_private *i915 = to_i915(state->base.dev);
> > +	struct drm_dp_tunnel *old_tunnel;
> > +
> > +	old_tunnel = get_inherited_tunnel_state(state, crtc);
> > +	if (old_tunnel) {
> > +		drm_WARN_ON(&i915->drm, old_tunnel != tunnel);
> > +		return 0;
> > +	}
> > +
> > +	if (!state->dp_tunnel_state) {
> > +		state->dp_tunnel_state = kzalloc(sizeof(*state->dp_tunnel_state), GFP_KERNEL);
> 
> I was pondering if the dynamic allocation for this is super useful.
> But I guess we anyway have to deal with various errors in the
> places where this gets used so maybe not much benefit from
> avoiding it.

The only related error check is the -ENOMEM one below; it made sense to
me to allocate it only on-demand. 

> > +		if (!state->dp_tunnel_state)
> > +			return -ENOMEM;
> > +	}
> > +
> > +	drm_dp_tunnel_ref_get(tunnel,
> > +			      &state->dp_tunnel_state->tunnels[crtc->pipe].tunnel_ref);
> > +
> > +	return 0;
> > +}
> > +
> > +static int check_inherited_tunnel_state(struct intel_atomic_state *state,
> > +					struct intel_dp *intel_dp,
> > +					const struct intel_digital_connector_state *old_conn_state)
> > +{
> > +	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
> > +	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
> > +	const struct intel_connector *connector =
> > +		to_intel_connector(old_conn_state->base.connector);
> > +	struct intel_crtc *old_crtc;
> > +	const struct intel_crtc_state *old_crtc_state;
> > +
> > +	/*
> > +	 * If a BWA tunnel gets detected only after the corresponding
> > +	 * connector got enabled already without a BWA tunnel, or a different
> > +	 * BWA tunnel (which was removed meanwhile) the old CRTC state won't
> > +	 * contain the state of the current tunnel. This tunnel still has a
> > +	 * reserved BW, which needs to be released, add the state for such
> > +	 * inherited tunnels separately only to this atomic state.
> > +	 */
> > +	if (!intel_dp_tunnel_bw_alloc_is_enabled(intel_dp))
> > +		return 0;
> > +
> > +	if (!old_conn_state->base.crtc)
> > +		return 0;
> > +
> > +	old_crtc = to_intel_crtc(old_conn_state->base.crtc);
> > +	old_crtc_state = intel_atomic_get_old_crtc_state(state, old_crtc);
> > +
> > +	if (!old_crtc_state->hw.active ||
> > +	    old_crtc_state->dp_tunnel_ref.tunnel == intel_dp->tunnel)
> > +		return 0;
> > +
> > +	drm_dbg_kms(&i915->drm,
> > +		    "[DPTUN %s][CONNECTOR:%d:%s][ENCODER:%d:%s][CRTC:%d:%s] Adding state for inherited tunnel %p\n",
> > +		    drm_dp_tunnel_name(intel_dp->tunnel),
> > +		    connector->base.base.id,
> > +		    connector->base.name,
> > +		    encoder->base.base.id,
> > +		    encoder->base.name,
> > +		    old_crtc->base.base.id,
> > +		    old_crtc->base.name,
> > +		    intel_dp->tunnel);
> > +
> > +	return add_inherited_tunnel_state(state, intel_dp->tunnel, old_crtc);
> > +}
> > +
> > +void intel_dp_tunnel_atomic_cleanup_inherited_state(struct intel_atomic_state *state)
> > +{
> > +	enum pipe pipe;
> > +
> > +	if (!state->dp_tunnel_state)
> > +		return;
> > +
> > +	for_each_pipe(to_i915(state->base.dev), pipe)
> > +		if (state->dp_tunnel_state->tunnels[pipe].tunnel_ref.tunnel)
> > +			drm_dp_tunnel_ref_put(&state->dp_tunnel_state->tunnels[pipe].tunnel_ref);
> > +
> > +	kfree(state->dp_tunnel_state);
> > +	state->dp_tunnel_state = NULL;
> > +}
> > +
> > +static int intel_dp_tunnel_atomic_add_group_state(struct intel_atomic_state *state,
> > +						  struct drm_dp_tunnel *tunnel)
> > +{
> > +	struct drm_i915_private *i915 = to_i915(state->base.dev);
> > +	u32 pipe_mask;
> > +	int err;
> > +
> > +	if (!tunnel)
> > +		return 0;
> > +
> > +	err = drm_dp_tunnel_atomic_get_group_streams_in_state(&state->base,
> > +							      tunnel, &pipe_mask);
> > +	if (err)
> > +		return err;
> > +
> > +	drm_WARN_ON(&i915->drm, pipe_mask & ~((1 << I915_MAX_PIPES) - 1));
> > +
> > +	return intel_modeset_pipes_in_mask_early(state, "DPTUN", pipe_mask);
> > +}
> > +
> > +int intel_dp_tunnel_atomic_add_state_for_crtc(struct intel_atomic_state *state,
> > +					      struct intel_crtc *crtc)
> > +{
> > +	const struct intel_crtc_state *new_crtc_state =
> > +		intel_atomic_get_new_crtc_state(state, crtc);
> > +	const struct drm_dp_tunnel_state *tunnel_state;
> > +	struct drm_dp_tunnel *tunnel = new_crtc_state->dp_tunnel_ref.tunnel;
> > +
> > +	if (!tunnel)
> > +		return 0;
> > +
> > +	tunnel_state = drm_dp_tunnel_atomic_get_state(&state->base, tunnel);
> > +	if (IS_ERR(tunnel_state))
> > +		return PTR_ERR(tunnel_state);
> > +
> > +	return 0;
> > +}
> > +
> > +static int check_group_state(struct intel_atomic_state *state,
> > +			     struct intel_dp *intel_dp,
> > +			     const struct intel_connector *connector,
> > +			     struct intel_crtc *crtc)
> > +{
> > +	struct drm_i915_private *i915 = to_i915(state->base.dev);
> > +	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
> > +	const struct intel_crtc_state *crtc_state;
> > +
> > +	crtc_state = intel_atomic_get_new_crtc_state(state, crtc);
> 
> Doesn't really need to be on its own line here.

Ok.

> > +
> > +	if (!crtc_state->dp_tunnel_ref.tunnel)
> > +		return 0;
> > +
> > +	drm_dbg_kms(&i915->drm,
> > +		    "[DPTUN %s][CONNECTOR:%d:%s][ENCODER:%d:%s][CRTC:%d:%s] Adding group state for tunnel %p\n",
> > +		    drm_dp_tunnel_name(intel_dp->tunnel),
> > +		    connector->base.base.id,
> > +		    connector->base.name,
> > +		    encoder->base.base.id,
> > +		    encoder->base.name,
> > +		    crtc->base.base.id,
> > +		    crtc->base.name,
> > +		    intel_dp->tunnel);
> > +
> > +	return intel_dp_tunnel_atomic_add_group_state(state, crtc_state->dp_tunnel_ref.tunnel);
> > +}
> > +
> > +int intel_dp_tunnel_atomic_check_state(struct intel_atomic_state *state,
> > +				       struct intel_dp *intel_dp,
> > +				       struct intel_connector *connector)
> > +{
> > +	const struct intel_digital_connector_state *old_conn_state =
> > +		intel_atomic_get_new_connector_state(state, connector);
> 
> s/get_new/get_old/

Arg, thanks for catching this, will fix it.

> 
> > +	const struct intel_digital_connector_state *new_conn_state =
> > +		intel_atomic_get_new_connector_state(state, connector);
> > +	int err;
> > +
> > +	if (old_conn_state->base.crtc) {
> > +		err = check_group_state(state, intel_dp, connector,
> > +					to_intel_crtc(old_conn_state->base.crtc));
> > +		if (err)
> > +			return err;
> > +	}
> > +
> > +	if (new_conn_state->base.crtc &&
> > +	    new_conn_state->base.crtc != old_conn_state->base.crtc) {
> > +		err = check_group_state(state, intel_dp, connector,
> > +					to_intel_crtc(new_conn_state->base.crtc));
> > +		if (err)
> > +			return err;
> > +	}
> > +
> > +	return check_inherited_tunnel_state(state, intel_dp, old_conn_state);
> > +}
> > +
> > +void intel_dp_tunnel_atomic_compute_stream_bw(struct intel_atomic_state *state,
> > +					      struct intel_dp *intel_dp,
> > +					      const struct intel_connector *connector,
> > +					      struct intel_crtc_state *crtc_state)
> > +{
> > +	struct drm_i915_private *i915 = to_i915(state->base.dev);
> > +	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
> > +	const struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
> 
> We don't usually const anything but the states.

Ok.

> > +	int required_rate = intel_dp_config_required_rate(crtc_state);
> > +
> > +	if (!intel_dp_tunnel_bw_alloc_is_enabled(intel_dp))
> > +		return;
> > +
> > +	drm_dbg_kms(&i915->drm,
> > +		    "[DPTUN %s][CONNECTOR:%d:%s][ENCODER:%d:%s][CRTC:%d:%s] Stream %d required BW %d Mb/s\n",
> > +		    drm_dp_tunnel_name(intel_dp->tunnel),
> > +		    connector->base.base.id,
> > +		    connector->base.name,
> > +		    encoder->base.base.id,
> > +		    encoder->base.name,
> > +		    crtc->base.base.id,
> > +		    crtc->base.name,
> > +		    crtc->pipe,
> > +		    kbytes_to_mbits(required_rate));
> > +
> > +	drm_dp_tunnel_atomic_set_stream_bw(&state->base, intel_dp->tunnel,
> > +					   crtc->pipe, required_rate);
> > +
> > +	drm_dp_tunnel_ref_get(intel_dp->tunnel,
> > +			      &crtc_state->dp_tunnel_ref);
> > +}
> > +
> > +/**
> > + * intel_dp_tunnel_atomic_check_link - Check the DP tunnel atomic state
> > + * @state: intel atomic state
> > + * @limits: link BW limits
> > + *
> > + * Check the link configuration for all DP tunnels in @state. If the
> > + * configuration is invalid @limits will be updated if possible to
> > + * reduce the total BW, after which the configuration for all CRTCs in
> > + * @state must be recomputed with the updated @limits.
> > + *
> > + * Returns:
> > + *   - 0 if the configuration is valid
> > + *   - %-EAGAIN, if the configuration is invalid and @limits got updated
> > + *     with fallback values with which the configuration of all CRTCs in
> > + *     @state must be recomputed
> > + *   - Other negative error, if the configuration is invalid without a
> > + *     fallback possibility, or the check failed for another reason
> > + */
> > +int intel_dp_tunnel_atomic_check_link(struct intel_atomic_state *state,
> > +				      struct intel_link_bw_limits *limits)
> > +{
> > +	u32 failed_stream_mask;
> > +	int err;
> > +
> > +	err = drm_dp_tunnel_atomic_check_stream_bws(&state->base,
> > +						    &failed_stream_mask);
> > +	if (err != -ENOSPC)
> > +		return err;
> > +
> > +	err = intel_link_bw_reduce_bpp(state, limits,
> > +				       failed_stream_mask, "DP tunnel link BW");
> > +
> > +	return err ? : -EAGAIN;
> > +}
> > +
> > +void intel_dp_tunnel_atomic_alloc_bw(struct intel_atomic_state *state,
> > +				     struct intel_encoder *encoder,
> > +				     const struct intel_crtc_state *new_crtc_state,
> > +				     const struct drm_connector_state *new_conn_state)
> > +{
> > +	struct drm_i915_private *i915 = to_i915(state->base.dev);
> > +	struct drm_dp_tunnel *tunnel = new_crtc_state->dp_tunnel_ref.tunnel;
> > +	const struct drm_dp_tunnel_state *new_tunnel_state;
> > +	int err;
> > +
> > +	if (!tunnel)
> > +		return;
> > +
> > +	new_tunnel_state = drm_dp_tunnel_atomic_get_new_state(&state->base, tunnel);
> > +
> > +	err = drm_dp_tunnel_alloc_bw(tunnel,
> > +				     drm_dp_tunnel_atomic_get_tunnel_bw(new_tunnel_state));
> > +	if (!err)
> > +		return;
> > +
> > +	if (!intel_digital_port_connected(encoder))
> > +		return;
> > +
> > +	drm_dbg_kms(&i915->drm,
> > +		    "[DPTUN %s][ENCODER:%d:%s] BW allocation failed on a connected sink (err %pe)\n",
> > +		    drm_dp_tunnel_name(tunnel),
> > +		    encoder->base.base.id,
> > +		    encoder->base.name,
> > +		    ERR_PTR(err));
> > +
> > +	intel_dp_queue_modeset_retry_for_link(state, encoder, new_crtc_state, new_conn_state);
> > +}
> > +
> > +void intel_dp_tunnel_atomic_free_bw(struct intel_atomic_state *state,
> > +				    struct intel_encoder *encoder,
> > +				    const struct intel_crtc_state *old_crtc_state,
> > +				    const struct drm_connector_state *old_conn_state)
> > +{
> > +	struct drm_i915_private *i915 = to_i915(state->base.dev);
> > +	struct intel_crtc *old_crtc = to_intel_crtc(old_crtc_state->uapi.crtc);
> > +	struct drm_dp_tunnel *tunnel;
> > +	int err;
> > +
> > +	tunnel = get_inherited_tunnel_state(state, old_crtc);
> > +	if (!tunnel)
> > +		tunnel = old_crtc_state->dp_tunnel_ref.tunnel;
> 
> So what happens if we have tunnels in both places?
> 
> The one in old_crtc_state is stale enough that we don't
> have to care about it?

Yes, an old tunnel used in a previous modeset could be dropped after a
sink is re-plugged, in which case the newly detected - inherited -
tunnel state is used here. The old state with its allocated BW is not
valid (freed by TBT Connection Manager) in this case.
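
Can add a comment about this here to make it clearer, something like:

	/*
	 * An inherited tunnel state (if any) takes precedence: the tunnel in
	 * the old crtc state may have been dropped and re-detected meanwhile,
	 * with its allocated BW already freed by the TBT Connection Manager.
	 */
	tunnel = get_inherited_tunnel_state(state, old_crtc);
	if (!tunnel)
		tunnel = old_crtc_state->dp_tunnel_ref.tunnel;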

> > +
> > +	if (!tunnel)
> > +		return;
> > +
> > +	err = drm_dp_tunnel_alloc_bw(tunnel, 0);
> > +	if (!err)
> > +		return;
> > +
> > +	if (!intel_digital_port_connected(encoder))
> > +		return;
> > +
> > +	drm_dbg_kms(&i915->drm,
> > +		    "[DPTUN %s][ENCODER:%d:%s] BW freeing failed on a connected sink (err %pe)\n",
> > +		    drm_dp_tunnel_name(tunnel),
> > +		    encoder->base.base.id,
> > +		    encoder->base.name,
> > +		    ERR_PTR(err));
> > +
> > +	intel_dp_queue_modeset_retry_for_link(state, encoder, old_crtc_state, old_conn_state);
> > +}
> > +
> > +int intel_dp_tunnel_mgr_init(struct drm_i915_private *i915)
> > +{
> > +	struct drm_dp_tunnel_mgr *tunnel_mgr;
> > +	struct drm_connector_list_iter connector_list_iter;
> > +	struct intel_connector *connector;
> > +	int dp_connectors = 0;
> > +
> > +	drm_connector_list_iter_begin(&i915->drm, &connector_list_iter);
> > +	for_each_intel_connector_iter(connector, &connector_list_iter) {
> > +		if (connector->base.connector_type != DRM_MODE_CONNECTOR_DisplayPort)
> > +			continue;
> > +
> > +		dp_connectors++;
> > +	}
> > +	drm_connector_list_iter_end(&connector_list_iter);
> > +
> > +	tunnel_mgr = drm_dp_tunnel_mgr_create(&i915->drm, dp_connectors);
> > +	if (IS_ERR(tunnel_mgr))
> > +		return PTR_ERR(tunnel_mgr);
> > +
> > +	i915->display.dp_tunnel_mgr = tunnel_mgr;
> > +
> > +	return 0;
> > +}
> > +
> > +void intel_dp_tunnel_mgr_cleanup(struct drm_i915_private *i915)
> > +{
> > +	drm_dp_tunnel_mgr_destroy(i915->display.dp_tunnel_mgr);
> > +	i915->display.dp_tunnel_mgr = NULL;
> > +}
> > diff --git a/drivers/gpu/drm/i915/display/intel_dp_tunnel.h b/drivers/gpu/drm/i915/display/intel_dp_tunnel.h
> > new file mode 100644
> > index 0000000000000..bedba3ba9ad8d
> > --- /dev/null
> > +++ b/drivers/gpu/drm/i915/display/intel_dp_tunnel.h
> > @@ -0,0 +1,131 @@
> > +/* SPDX-License-Identifier: MIT */
> > +/*
> > + * Copyright © 2023 Intel Corporation
> > + */
> > +
> > +#ifndef __INTEL_DP_TUNNEL_H__
> > +#define __INTEL_DP_TUNNEL_H__
> > +
> > +#include <linux/errno.h>
> > +#include <linux/types.h>
> > +
> > +struct drm_i915_private;
> > +struct drm_connector_state;
> > +struct drm_modeset_acquire_ctx;
> > +
> > +struct intel_atomic_state;
> > +struct intel_connector;
> > +struct intel_crtc;
> > +struct intel_crtc_state;
> > +struct intel_dp;
> > +struct intel_encoder;
> > +struct intel_link_bw_limits;
> > +
> > +#if defined(CONFIG_DRM_I915_DP_TUNNEL) && defined(I915)
> > +
> > +int intel_dp_tunnel_detect(struct intel_dp *intel_dp, struct drm_modeset_acquire_ctx *ctx);
> > +void intel_dp_tunnel_disconnect(struct intel_dp *intel_dp);
> > +void intel_dp_tunnel_destroy(struct intel_dp *intel_dp);
> > +void intel_dp_tunnel_resume(struct intel_dp *intel_dp, bool dpcd_updated);
> > +void intel_dp_tunnel_suspend(struct intel_dp *intel_dp);
> > +
> > +bool intel_dp_tunnel_bw_alloc_is_enabled(struct intel_dp *intel_dp);
> > +
> > +void
> > +intel_dp_tunnel_atomic_cleanup_inherited_state(struct intel_atomic_state *state);
> > +
> > +void intel_dp_tunnel_atomic_compute_stream_bw(struct intel_atomic_state *state,
> > +					      struct intel_dp *intel_dp,
> > +					      const struct intel_connector *connector,
> > +					      struct intel_crtc_state *crtc_state);
> > +
> > +int intel_dp_tunnel_atomic_add_state_for_crtc(struct intel_atomic_state *state,
> > +					      struct intel_crtc *crtc);
> > +int intel_dp_tunnel_atomic_check_link(struct intel_atomic_state *state,
> > +				      struct intel_link_bw_limits *limits);
> > +int intel_dp_tunnel_atomic_check_state(struct intel_atomic_state *state,
> > +				       struct intel_dp *intel_dp,
> > +				       struct intel_connector *connector);
> > +void intel_dp_tunnel_atomic_alloc_bw(struct intel_atomic_state *state,
> > +				     struct intel_encoder *encoder,
> > +				     const struct intel_crtc_state *new_crtc_state,
> > +				     const struct drm_connector_state *new_conn_state);
> > +void intel_dp_tunnel_atomic_free_bw(struct intel_atomic_state *state,
> > +				    struct intel_encoder *encoder,
> > +				    const struct intel_crtc_state *old_crtc_state,
> > +				    const struct drm_connector_state *old_conn_state);
> > +
> > +int intel_dp_tunnel_mgr_init(struct drm_i915_private *i915);
> > +void intel_dp_tunnel_mgr_cleanup(struct drm_i915_private *i915);
> > +
> > +#else
> > +
> > +static inline int
> > +intel_dp_tunnel_detect(struct intel_dp *intel_dp, struct drm_modeset_acquire_ctx *ctx)
> > +{
> > +	return -EOPNOTSUPP;
> > +}
> > +
> > +static inline void intel_dp_tunnel_disconnect(struct intel_dp *intel_dp) {}
> > +static inline void intel_dp_tunnel_destroy(struct intel_dp *intel_dp) {}
> > +static inline void intel_dp_tunnel_resume(struct intel_dp *intel_dp, bool dpcd_updated) {}
> > +static inline void intel_dp_tunnel_suspend(struct intel_dp *intel_dp) {}
> > +
> > +static inline bool intel_dp_tunnel_bw_alloc_is_enabled(struct intel_dp *intel_dp)
> > +{
> > +	return false;
> > +}
> > +
> > +static inline void
> > +intel_dp_tunnel_atomic_cleanup_inherited_state(struct intel_atomic_state *state) {}
> > +
> > +static inline void
> > +intel_dp_tunnel_atomic_compute_stream_bw(struct intel_atomic_state *state,
> > +					 struct intel_dp *intel_dp,
> > +					 const struct intel_connector *connector,
> > +					 struct intel_crtc_state *crtc_state) {}
> > +
> > +static inline int
> > +intel_dp_tunnel_atomic_add_state_for_crtc(struct intel_atomic_state *state,
> > +					  struct intel_crtc *crtc)
> > +{
> > +	return 0;
> > +}
> > +
> > +static inline int
> > +intel_dp_tunnel_atomic_check_link(struct intel_atomic_state *state,
> > +				  struct intel_link_bw_limits *limits)
> > +{
> > +	return 0;
> > +}
> > +
> > +static inline int
> > +intel_dp_tunnel_atomic_check_state(struct intel_atomic_state *state,
> > +				   struct intel_dp *intel_dp,
> > +				   struct intel_connector *connector)
> > +{
> > +	return 0;
> > +}
> > +
> > +static inline void
> > +intel_dp_tunnel_atomic_alloc_bw(struct intel_atomic_state *state,
> > +				struct intel_encoder *encoder,
> > +				const struct intel_crtc_state *new_crtc_state,
> > +				const struct drm_connector_state *new_conn_state) {}
> > +static inline void
> > +intel_dp_tunnel_atomic_free_bw(struct intel_atomic_state *state,
> > +			       struct intel_encoder *encoder,
> > +			       const struct intel_crtc_state *old_crtc_state,
> > +			       const struct drm_connector_state *old_conn_state) {}
> > +
> > +static inline int
> > +intel_dp_tunnel_mgr_init(struct drm_i915_private *i915)
> > +{
> > +	return 0;
> > +}
> > +
> > +static inline void intel_dp_tunnel_mgr_cleanup(struct drm_i915_private *i915) {}
> > +
> > +#endif /* CONFIG_DRM_I915_DP_TUNNEL */
> > +
> > +#endif /* __INTEL_DP_TUNNEL_H__ */
> > -- 
> > 2.39.2
> 
> -- 
> Ville Syrjälä
> Intel

* Re: [PATCH 14/19] drm/i915/dp: Compute DP tunel BW during encoder state computation
  2024-02-06 23:25   ` Ville Syrjälä
@ 2024-02-07 14:25     ` Imre Deak
  0 siblings, 0 replies; 61+ messages in thread
From: Imre Deak @ 2024-02-07 14:25 UTC (permalink / raw)
  To: Ville Syrjälä; +Cc: intel-gfx, dri-devel

On Wed, Feb 07, 2024 at 01:25:19AM +0200, Ville Syrjälä wrote:
> On Tue, Jan 23, 2024 at 12:28:45PM +0200, Imre Deak wrote:
> > Compute the BW required through a DP tunnel on links with such tunnels
> > detected and add the corresponding atomic state during a modeset.
> > 
> > Signed-off-by: Imre Deak <imre.deak@intel.com>
> > ---
> >  drivers/gpu/drm/i915/display/intel_dp.c     | 16 +++++++++++++---
> >  drivers/gpu/drm/i915/display/intel_dp_mst.c | 13 +++++++++++++
> >  2 files changed, 26 insertions(+), 3 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> > index 78dfe8be6031d..6968fdb7ffcdf 100644
> > --- a/drivers/gpu/drm/i915/display/intel_dp.c
> > +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> > @@ -2880,6 +2880,7 @@ intel_dp_compute_config(struct intel_encoder *encoder,
> >  			struct drm_connector_state *conn_state)
> >  {
> >  	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
> > +	struct intel_atomic_state *state = to_intel_atomic_state(conn_state->state);
> >  	struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
> >  	struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
> >  	const struct drm_display_mode *fixed_mode;
> > @@ -2980,6 +2981,9 @@ intel_dp_compute_config(struct intel_encoder *encoder,
> >  	intel_dp_compute_vsc_sdp(intel_dp, pipe_config, conn_state);
> >  	intel_dp_compute_hdr_metadata_infoframe_sdp(intel_dp, pipe_config, conn_state);
> >  
> > +	intel_dp_tunnel_atomic_compute_stream_bw(state, intel_dp, connector,
> > +						 pipe_config);
> 
> Error handling seems awol?

Yes, and the return value of drm_dp_tunnel_atomic_set_stream_bw() needs
to be checked as well. Thanks for spotting this.
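
A sketch of what I have in mind (untested, and assuming the helper is
changed to return an int and a local ret is at hand; the other names
are the ones already used in intel_dp_compute_config()):

	/* assumption: intel_dp_tunnel_atomic_compute_stream_bw() now returns int */
	ret = intel_dp_tunnel_atomic_compute_stream_bw(state, intel_dp,
						       connector, pipe_config);
	if (ret)
		return ret;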

> 
> > +
> >  	return 0;
> >  }
> >  
> > @@ -6087,6 +6091,15 @@ static int intel_dp_connector_atomic_check(struct drm_connector *conn,
> >  			return ret;
> >  	}
> >  
> > +	if (!intel_connector_needs_modeset(state, conn))
> > +		return 0;
> > +
> > +	ret = intel_dp_tunnel_atomic_check_state(state,
> > +						 intel_dp,
> > +						 intel_conn);
> > +	if (ret)
> > +		return ret;
> > +
> >  	/*
> >  	 * We don't enable port sync on BDW due to missing w/as and
> >  	 * due to not having adjusted the modeset sequence appropriately.
> > @@ -6094,9 +6107,6 @@ static int intel_dp_connector_atomic_check(struct drm_connector *conn,
> >  	if (DISPLAY_VER(dev_priv) < 9)
> >  		return 0;
> >  
> > -	if (!intel_connector_needs_modeset(state, conn))
> > -		return 0;
> > -
> >  	if (conn->has_tile) {
> >  		ret = intel_modeset_tile_group(state, conn->tile_group->id);
> >  		if (ret)
> > diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
> > index 520393dc8b453..cbfab3173b9ef 100644
> > --- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
> > +++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
> > @@ -42,6 +42,7 @@
> >  #include "intel_dp.h"
> >  #include "intel_dp_hdcp.h"
> >  #include "intel_dp_mst.h"
> > +#include "intel_dp_tunnel.h"
> >  #include "intel_dpio_phy.h"
> >  #include "intel_hdcp.h"
> >  #include "intel_hotplug.h"
> > @@ -523,6 +524,7 @@ static int intel_dp_mst_compute_config(struct intel_encoder *encoder,
> >  				       struct drm_connector_state *conn_state)
> >  {
> >  	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
> > +	struct intel_atomic_state *state = to_intel_atomic_state(conn_state->state);
> >  	struct intel_dp_mst_encoder *intel_mst = enc_to_mst(encoder);
> >  	struct intel_dp *intel_dp = &intel_mst->primary->dp;
> >  	const struct intel_connector *connector =
> > @@ -619,6 +621,9 @@ static int intel_dp_mst_compute_config(struct intel_encoder *encoder,
> >  
> >  	intel_psr_compute_config(intel_dp, pipe_config, conn_state);
> >  
> > +	intel_dp_tunnel_atomic_compute_stream_bw(state, intel_dp, connector,
> > +						 pipe_config);
> > +
> >  	return 0;
> >  }
> >  
> > @@ -876,6 +881,14 @@ intel_dp_mst_atomic_check(struct drm_connector *connector,
> >  	if (ret)
> >  		return ret;
> >  
> > +	if (intel_connector_needs_modeset(state, connector)) {
> > +		ret = intel_dp_tunnel_atomic_check_state(state,
> > +							 intel_connector->mst_port,
> > +							 intel_connector);
> > +		if (ret)
> > +			return ret;
> > +	}
> > +
> >  	return drm_dp_atomic_release_time_slots(&state->base,
> >  						&intel_connector->mst_port->mst_mgr,
> >  						intel_connector->port);
> > -- 
> > 2.39.2
> 
> -- 
> Ville Syrjälä
> Intel

* Re: [PATCH 02/19] drm/dp: Add support for DP tunneling
  2024-01-23 10:28 ` [PATCH 02/19] drm/dp: Add support for DP tunneling Imre Deak
  2024-01-31 12:50   ` Hogander, Jouni
  2024-01-31 16:09   ` Ville Syrjälä
@ 2024-02-07 20:02   ` Ville Syrjälä
  2024-02-07 20:48     ` Imre Deak
  2 siblings, 1 reply; 61+ messages in thread
From: Ville Syrjälä @ 2024-02-07 20:02 UTC (permalink / raw)
  To: Imre Deak; +Cc: intel-gfx, dri-devel

On Tue, Jan 23, 2024 at 12:28:33PM +0200, Imre Deak wrote:
> +static char yes_no_chr(int val)
> +{
> +	return val ? 'Y' : 'N';
> +}

We have str_yes_no() already.
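
E.g. something like this (just a sketch; assumes the format specifiers
are switched to %s and that str_yes_no() from <linux/string_helpers.h>
is visible here):

	tun_dbg(tunnel,
		"BW alloc support has changed %s -> %s\n",
		str_yes_no(tunnel->bw_alloc_supported),
		str_yes_no(tunnel_reg_bw_alloc_supported(regs)));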

> +
> +#define SKIP_DPRX_CAPS_CHECK		BIT(0)
> +#define ALLOW_ALLOCATED_BW_CHANGE	BIT(1)
> +
> +static bool tunnel_regs_are_valid(struct drm_dp_tunnel_mgr *mgr,
> +				  const struct drm_dp_tunnel_regs *regs,
> +				  unsigned int flags)
> +{
> +	int drv_group_id = tunnel_reg_drv_group_id(regs);
> +	bool check_dprx = !(flags & SKIP_DPRX_CAPS_CHECK);
> +	bool ret = true;
> +
> +	if (!tunnel_reg_bw_alloc_supported(regs)) {
> +		if (tunnel_group_id(drv_group_id)) {
> +			drm_dbg_kms(mgr->dev,
> +				    "DPTUN: A non-zero group ID is only allowed with BWA support\n");
> +			ret = false;
> +		}
> +
> +		if (tunnel_reg(regs, DP_ALLOCATED_BW)) {
> +			drm_dbg_kms(mgr->dev,
> +				    "DPTUN: BW is allocated without BWA support\n");
> +			ret = false;
> +		}
> +
> +		return ret;
> +	}
> +
> +	if (!tunnel_group_id(drv_group_id)) {
> +		drm_dbg_kms(mgr->dev,
> +			    "DPTUN: BWA support requires a non-zero group ID\n");
> +		ret = false;
> +	}
> +
> +	if (check_dprx && hweight8(tunnel_reg_max_dprx_lane_count(regs)) != 1) {
> +		drm_dbg_kms(mgr->dev,
> +			    "DPTUN: Invalid DPRX lane count: %d\n",
> +			    tunnel_reg_max_dprx_lane_count(regs));
> +
> +		ret = false;
> +	}
> +
> +	if (check_dprx && !tunnel_reg_max_dprx_rate(regs)) {
> +		drm_dbg_kms(mgr->dev,
> +			    "DPTUN: DPRX rate is 0\n");
> +
> +		ret = false;
> +	}
> +
> +	if (tunnel_reg(regs, DP_ALLOCATED_BW) > tunnel_reg(regs, DP_ESTIMATED_BW)) {
> +		drm_dbg_kms(mgr->dev,
> +			    "DPTUN: Allocated BW %d > estimated BW %d Mb/s\n",
> +			    DPTUN_BW_ARG(tunnel_reg(regs, DP_ALLOCATED_BW) *
> +					 tunnel_reg_bw_granularity(regs)),
> +			    DPTUN_BW_ARG(tunnel_reg(regs, DP_ESTIMATED_BW) *
> +					 tunnel_reg_bw_granularity(regs)));
> +
> +		ret = false;
> +	}
> +
> +	return ret;
> +}
> +
> +static bool tunnel_info_changes_are_valid(struct drm_dp_tunnel *tunnel,
> +					  const struct drm_dp_tunnel_regs *regs,
> +					  unsigned int flags)
> +{
> +	int new_drv_group_id = tunnel_reg_drv_group_id(regs);
> +	bool ret = true;
> +
> +	if (tunnel->bw_alloc_supported != tunnel_reg_bw_alloc_supported(regs)) {
> +		tun_dbg(tunnel,
> +			"BW alloc support has changed %c -> %c\n",
> +			yes_no_chr(tunnel->bw_alloc_supported),
> +			yes_no_chr(tunnel_reg_bw_alloc_supported(regs)));
> +
> +		ret = false;
> +	}
> +
> +	if (tunnel->group->drv_group_id != new_drv_group_id) {
> +		tun_dbg(tunnel,
> +			"Driver/group ID has changed %d:%d:* -> %d:%d:*\n",
> +			tunnel_group_drv_id(tunnel->group->drv_group_id),
> +			tunnel_group_id(tunnel->group->drv_group_id),
> +			tunnel_group_drv_id(new_drv_group_id),
> +			tunnel_group_id(new_drv_group_id));
> +
> +		ret = false;
> +	}
> +
> +	if (!tunnel->bw_alloc_supported)
> +		return ret;
> +
> +	if (tunnel->bw_granularity != tunnel_reg_bw_granularity(regs)) {
> +		tun_dbg(tunnel,
> +			"BW granularity has changed: %d -> %d Mb/s\n",
> +			DPTUN_BW_ARG(tunnel->bw_granularity),
> +			DPTUN_BW_ARG(tunnel_reg_bw_granularity(regs)));
> +
> +		ret = false;
> +	}
> +
> +	/*
> +	 * On some devices at least the BW alloc mode enabled status is always
> +	 * reported as 0, so skip checking that here.
> +	 */

So it's reported as supported and we enable it, but it's never
reported back as being enabled?

> +
> +	if (!(flags & ALLOW_ALLOCATED_BW_CHANGE) &&
> +	    tunnel->allocated_bw !=
> +	    tunnel_reg(regs, DP_ALLOCATED_BW) * tunnel->bw_granularity) {
> +		tun_dbg(tunnel,
> +			"Allocated BW has changed: %d -> %d Mb/s\n",
> +			DPTUN_BW_ARG(tunnel->allocated_bw),
> +			DPTUN_BW_ARG(tunnel_reg(regs, DP_ALLOCATED_BW) * tunnel->bw_granularity));
> +
> +		ret = false;
> +	}
> +
> +	return ret;
> +}
> +
> +static int
> +read_and_verify_tunnel_regs(struct drm_dp_tunnel *tunnel,
> +			    struct drm_dp_tunnel_regs *regs,
> +			    unsigned int flags)
> +{
> +	int err;
> +
> +	err = read_tunnel_regs(tunnel->aux, regs);
> +	if (err < 0) {
> +		drm_dp_tunnel_set_io_error(tunnel);
> +
> +		return err;
> +	}
> +
> +	if (!tunnel_regs_are_valid(tunnel->group->mgr, regs, flags))
> +		return -EINVAL;
> +
> +	if (!tunnel_info_changes_are_valid(tunnel, regs, flags))
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +
> +static bool update_dprx_caps(struct drm_dp_tunnel *tunnel, const struct drm_dp_tunnel_regs *regs)
> +{
> +	bool changed = false;
> +
> +	if (tunnel_reg_max_dprx_rate(regs) != tunnel->max_dprx_rate) {
> +		tunnel->max_dprx_rate = tunnel_reg_max_dprx_rate(regs);
> +		changed = true;
> +	}
> +
> +	if (tunnel_reg_max_dprx_lane_count(regs) != tunnel->max_dprx_lane_count) {
> +		tunnel->max_dprx_lane_count = tunnel_reg_max_dprx_lane_count(regs);
> +		changed = true;
> +	}
> +
> +	return changed;
> +}
> +
> +static int dev_id_len(const u8 *dev_id, int max_len)
> +{
> +	while (max_len && dev_id[max_len - 1] == '\0')
> +		max_len--;
> +
> +	return max_len;
> +}
> +
> +static int get_max_dprx_bw(const struct drm_dp_tunnel *tunnel)
> +{
> +	int bw = drm_dp_max_dprx_data_rate(tunnel->max_dprx_rate,
> +					   tunnel->max_dprx_lane_count);
> +
> +	return min(roundup(bw, tunnel->bw_granularity),

Should this round down?

> +		   MAX_DP_REQUEST_BW * tunnel->bw_granularity);
> +}
> +
> +static int get_max_tunnel_bw(const struct drm_dp_tunnel *tunnel)
> +{
> +	return min(get_max_dprx_bw(tunnel), tunnel->group->available_bw);
> +}
> +
> +/**
> + * drm_dp_tunnel_detect - Detect DP tunnel on the link
> + * @mgr: Tunnel manager
> + * @aux: DP AUX on which the tunnel will be detected
> + *
> + * Detect if there is any DP tunnel on the link and add it to the tunnel
> + * group's tunnel list.
> + *
> + * Returns 0 on success, negative error code on failure.
> + */
> +struct drm_dp_tunnel *
> +drm_dp_tunnel_detect(struct drm_dp_tunnel_mgr *mgr,
> +		       struct drm_dp_aux *aux)
> +{
> +	struct drm_dp_tunnel_regs regs;
> +	struct drm_dp_tunnel *tunnel;
> +	int err;
> +
> +	err = read_tunnel_regs(aux, &regs);
> +	if (err)
> +		return ERR_PTR(err);
> +
> +	if (!(tunnel_reg(&regs, DP_TUNNELING_CAPABILITIES) &
> +	      DP_TUNNELING_SUPPORT))
> +		return ERR_PTR(-ENODEV);
> +
> +	/* The DPRX caps are valid only after enabling BW alloc mode. */
> +	if (!tunnel_regs_are_valid(mgr, &regs, SKIP_DPRX_CAPS_CHECK))
> +		return ERR_PTR(-EINVAL);
> +
> +	tunnel = create_tunnel(mgr, aux, &regs);
> +	if (!tunnel)
> +		return ERR_PTR(-ENOMEM);
> +
> +	tun_dbg(tunnel,
> +		"OUI:%*phD DevID:%*pE Rev-HW:%d.%d SW:%d.%d PR-Sup:%c BWA-Sup:%c BWA-En:%c\n",
> +		DP_TUNNELING_OUI_BYTES,
> +			tunnel_reg_ptr(&regs, DP_TUNNELING_OUI),
> +		dev_id_len(tunnel_reg_ptr(&regs, DP_TUNNELING_DEV_ID), DP_TUNNELING_DEV_ID_BYTES),
> +			tunnel_reg_ptr(&regs, DP_TUNNELING_DEV_ID),
> +		(tunnel_reg(&regs, DP_TUNNELING_HW_REV) & DP_TUNNELING_HW_REV_MAJOR_MASK) >>
> +			DP_TUNNELING_HW_REV_MAJOR_SHIFT,
> +		(tunnel_reg(&regs, DP_TUNNELING_HW_REV) & DP_TUNNELING_HW_REV_MINOR_MASK) >>
> +			DP_TUNNELING_HW_REV_MINOR_SHIFT,
> +		tunnel_reg(&regs, DP_TUNNELING_SW_REV_MAJOR),
> +		tunnel_reg(&regs, DP_TUNNELING_SW_REV_MINOR),
> +		yes_no_chr(tunnel_reg(&regs, DP_TUNNELING_CAPABILITIES) &
> +			   DP_PANEL_REPLAY_OPTIMIZATION_SUPPORT),
> +		yes_no_chr(tunnel->bw_alloc_supported),
> +		yes_no_chr(tunnel->bw_alloc_enabled));
> +
> +	return tunnel;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_detect);
> +
> +/**
> + * drm_dp_tunnel_destroy - Destroy tunnel object
> + * @tunnel: Tunnel object
> + *
> + * Remove the tunnel from the tunnel topology and destroy it.
> + */
> +int drm_dp_tunnel_destroy(struct drm_dp_tunnel *tunnel)
> +{
> +	if (drm_WARN_ON(tunnel->group->mgr->dev, tunnel->destroyed))
> +		return -ENODEV;
> +
> +	tun_dbg(tunnel, "destroying\n");
> +
> +	tunnel->destroyed = true;
> +	destroy_tunnel(tunnel);
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_destroy);
> +
> +static int check_tunnel(const struct drm_dp_tunnel *tunnel)
> +{
> +	if (tunnel->destroyed)
> +		return -ENODEV;
> +
> +	if (tunnel->has_io_error)
> +		return -EIO;
> +
> +	return 0;
> +}
> +
> +static int group_allocated_bw(struct drm_dp_tunnel_group *group)
> +{
> +	struct drm_dp_tunnel *tunnel;
> +	int group_allocated_bw = 0;
> +
> +	for_each_tunnel_in_group(group, tunnel) {
> +		if (check_tunnel(tunnel) == 0 &&
> +		    tunnel->bw_alloc_enabled)
> +			group_allocated_bw += tunnel->allocated_bw;
> +	}
> +
> +	return group_allocated_bw;
> +}
> +
> +static int calc_group_available_bw(const struct drm_dp_tunnel *tunnel)
> +{
> +	return group_allocated_bw(tunnel->group) -
> +	       tunnel->allocated_bw +
> +	       tunnel->estimated_bw;

Hmm. So the estimated_bw=actually_free_bw + tunnel->allocated_bw?
Ie. how much bw might be available for this tunnel right now?
And here we're trying to deduce the total bandwidth available by
adding in the allocated_bw of all the other tunnels in the group?
Rather weird that we can't just get that number directly...
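
(Numbers picked only for illustration: with two tunnels A and B in the
group, A allocated 2000 Mb/s, B allocated 3000 Mb/s and 1000 Mb/s still
free, A's estimated_bw (DP_ESTIMATED_BW * bw_granularity) would be
3000 Mb/s, so the sum above gives (2000 + 3000) - 2000 + 3000 =
6000 Mb/s, i.e. everything allocated plus everything free.)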

> +}
> +
> +static int update_group_available_bw(struct drm_dp_tunnel *tunnel,
> +				     const struct drm_dp_tunnel_regs *regs)
> +{
> +	struct drm_dp_tunnel *tunnel_iter;
> +	int group_available_bw;
> +	bool changed;
> +
> +	tunnel->estimated_bw = tunnel_reg(regs, DP_ESTIMATED_BW) * tunnel->bw_granularity;
> +
> +	if (calc_group_available_bw(tunnel) == tunnel->group->available_bw)
> +		return 0;
> +
> +	for_each_tunnel_in_group(tunnel->group, tunnel_iter) {
> +		int err;
> +
> +		if (tunnel_iter == tunnel)
> +			continue;
> +
> +		if (check_tunnel(tunnel_iter) != 0 ||
> +		    !tunnel_iter->bw_alloc_enabled)
> +			continue;
> +
> +		err = drm_dp_dpcd_probe(tunnel_iter->aux, DP_DPCD_REV);
> +		if (err) {
> +			tun_dbg(tunnel_iter,
> +				"Probe failed, assume disconnected (err %pe)\n",
> +				ERR_PTR(err));
> +			drm_dp_tunnel_set_io_error(tunnel_iter);
> +		}
> +	}
> +
> +	group_available_bw = calc_group_available_bw(tunnel);
> +
> +	tun_dbg(tunnel, "Updated group available BW: %d->%d\n",
> +		DPTUN_BW_ARG(tunnel->group->available_bw),
> +		DPTUN_BW_ARG(group_available_bw));
> +
> +	changed = tunnel->group->available_bw != group_available_bw;
> +
> +	tunnel->group->available_bw = group_available_bw;
> +
> +	return changed ? 1 : 0;
> +}
> +
> +static int set_bw_alloc_mode(struct drm_dp_tunnel *tunnel, bool enable)
> +{
> +	u8 mask = DP_DISPLAY_DRIVER_BW_ALLOCATION_MODE_ENABLE | DP_UNMASK_BW_ALLOCATION_IRQ;
> +	u8 val;
> +
> +	if (drm_dp_dpcd_readb(tunnel->aux, DP_DPTX_BW_ALLOCATION_MODE_CONTROL, &val) < 0)
> +		goto out_err;
> +
> +	if (enable)
> +		val |= mask;
> +	else
> +		val &= ~mask;
> +
> +	if (drm_dp_dpcd_writeb(tunnel->aux, DP_DPTX_BW_ALLOCATION_MODE_CONTROL, val) < 0)
> +		goto out_err;
> +
> +	tunnel->bw_alloc_enabled = enable;
> +
> +	return 0;
> +
> +out_err:
> +	drm_dp_tunnel_set_io_error(tunnel);
> +
> +	return -EIO;
> +}
> +
> +/**
> + * drm_dp_tunnel_enable_bw_alloc: Enable DP tunnel BW allocation mode
> + * @tunnel: Tunnel object
> + *
> + * Enable the DP tunnel BW allocation mode on @tunnel if it supports it.
> + *
> + * Returns 0 in case of success, negative error code otherwise.
> + */
> +int drm_dp_tunnel_enable_bw_alloc(struct drm_dp_tunnel *tunnel)
> +{
> +	struct drm_dp_tunnel_regs regs;
> +	int err = check_tunnel(tunnel);
> +
> +	if (err)
> +		return err;
> +
> +	if (!tunnel->bw_alloc_supported)
> +		return -EOPNOTSUPP;
> +
> +	if (!tunnel_group_id(tunnel->group->drv_group_id))
> +		return -EINVAL;
> +
> +	err = set_bw_alloc_mode(tunnel, true);
> +	if (err)
> +		goto out;
> +
> +	err = read_and_verify_tunnel_regs(tunnel, &regs, 0);
> +	if (err) {
> +		set_bw_alloc_mode(tunnel, false);
> +
> +		goto out;
> +	}
> +
> +	if (!tunnel->max_dprx_rate)
> +		update_dprx_caps(tunnel, &regs);
> +
> +	if (tunnel->group->available_bw == -1) {
> +		err = update_group_available_bw(tunnel, &regs);
> +		if (err > 0)
> +			err = 0;
> +	}
> +out:
> +	tun_dbg_stat(tunnel, err,
> +		     "Enabling BW alloc mode: DPRX:%dx%d Group alloc:%d/%d Mb/s",
> +		     tunnel->max_dprx_rate / 100, tunnel->max_dprx_lane_count,
> +		     DPTUN_BW_ARG(group_allocated_bw(tunnel->group)),
> +		     DPTUN_BW_ARG(tunnel->group->available_bw));
> +
> +	return err;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_enable_bw_alloc);
> +
> +/**
> + * drm_dp_tunnel_disable_bw_alloc: Disable DP tunnel BW allocation mode
> + * @tunnel: Tunnel object
> + *
> + * Disable the DP tunnel BW allocation mode on @tunnel.
> + *
> + * Returns 0 in case of success, negative error code otherwise.
> + */
> +int drm_dp_tunnel_disable_bw_alloc(struct drm_dp_tunnel *tunnel)
> +{
> +	int err = check_tunnel(tunnel);
> +
> +	if (err)
> +		return err;
> +
> +	err = set_bw_alloc_mode(tunnel, false);
> +
> +	tun_dbg_stat(tunnel, err, "Disabling BW alloc mode");
> +
> +	return err;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_disable_bw_alloc);
> +
> +bool drm_dp_tunnel_bw_alloc_is_enabled(const struct drm_dp_tunnel *tunnel)
> +{
> +	return tunnel->bw_alloc_enabled;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_bw_alloc_is_enabled);
> +
> +static int bw_req_complete(struct drm_dp_aux *aux, bool *status_changed)
> +{
> +	u8 bw_req_mask = DP_BW_REQUEST_SUCCEEDED | DP_BW_REQUEST_FAILED;
> +	u8 status_change_mask = DP_BW_ALLOCATION_CAPABILITY_CHANGED | DP_ESTIMATED_BW_CHANGED;
> +	u8 val;
> +
> +	if (drm_dp_dpcd_readb(aux, DP_TUNNELING_STATUS, &val) < 0)
> +		return -EIO;
> +
> +	*status_changed = val & status_change_mask;
> +
> +	val &= bw_req_mask;
> +
> +	if (!val)
> +		return -EAGAIN;
> +
> +	if (drm_dp_dpcd_writeb(aux, DP_TUNNELING_STATUS, val) < 0)
> +		return -EIO;
> +
> +	return val == DP_BW_REQUEST_SUCCEEDED ? 0 : -ENOSPC;
> +}
> +
> +static int allocate_tunnel_bw(struct drm_dp_tunnel *tunnel, int bw)
> +{
> +	struct drm_dp_tunnel_mgr *mgr = tunnel->group->mgr;
> +	int request_bw = DIV_ROUND_UP(bw, tunnel->bw_granularity);
> +	unsigned long wait_expires;
> +	DEFINE_WAIT(wait);
> +	int err;
> +
> +	/* Atomic check should prevent the following. */
> +	if (drm_WARN_ON(mgr->dev, request_bw > MAX_DP_REQUEST_BW)) {
> +		err = -EINVAL;
> +		goto out;
> +	}
> +
> +	if (drm_dp_dpcd_writeb(tunnel->aux, DP_REQUEST_BW, request_bw) < 0) {
> +		err = -EIO;
> +		goto out;
> +	}
> +
> +	wait_expires = jiffies + msecs_to_jiffies(3000);
> +
> +	for (;;) {
> +		bool status_changed;
> +
> +		err = bw_req_complete(tunnel->aux, &status_changed);
> +		if (err != -EAGAIN)
> +			break;
> +
> +		if (status_changed) {
> +			struct drm_dp_tunnel_regs regs;
> +
> +			err = read_and_verify_tunnel_regs(tunnel, &regs,
> +							  ALLOW_ALLOCATED_BW_CHANGE);
> +			if (err)
> +				break;
> +		}
> +
> +		if (time_after(jiffies, wait_expires)) {
> +			err = -ETIMEDOUT;
> +			break;
> +		}
> +
> +		prepare_to_wait(&mgr->bw_req_queue, &wait, TASK_UNINTERRUPTIBLE);
> +		schedule_timeout(msecs_to_jiffies(200));
> +	};
> +
> +	finish_wait(&mgr->bw_req_queue, &wait);
> +
> +	if (err)
> +		goto out;
> +
> +	tunnel->allocated_bw = request_bw * tunnel->bw_granularity;
> +
> +out:
> +	tun_dbg_stat(tunnel, err, "Allocating %d/%d Mb/s for tunnel: Group alloc:%d/%d Mb/s",
> +		     DPTUN_BW_ARG(request_bw * tunnel->bw_granularity),
> +		     DPTUN_BW_ARG(get_max_tunnel_bw(tunnel)),
> +		     DPTUN_BW_ARG(group_allocated_bw(tunnel->group)),
> +		     DPTUN_BW_ARG(tunnel->group->available_bw));
> +
> +	if (err == -EIO)
> +		drm_dp_tunnel_set_io_error(tunnel);
> +
> +	return err;
> +}
> +
> +int drm_dp_tunnel_alloc_bw(struct drm_dp_tunnel *tunnel, int bw)
> +{
> +	int err = check_tunnel(tunnel);
> +
> +	if (err)
> +		return err;
> +
> +	return allocate_tunnel_bw(tunnel, bw);
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_alloc_bw);
> +
> +static int check_and_clear_status_change(struct drm_dp_tunnel *tunnel)
> +{
> +	u8 mask = DP_BW_ALLOCATION_CAPABILITY_CHANGED | DP_ESTIMATED_BW_CHANGED;
> +	u8 val;
> +
> +	if (drm_dp_dpcd_readb(tunnel->aux, DP_TUNNELING_STATUS, &val) < 0)
> +		goto out_err;
> +
> +	val &= mask;
> +
> +	if (val) {
> +		if (drm_dp_dpcd_writeb(tunnel->aux, DP_TUNNELING_STATUS, val) < 0)
> +			goto out_err;
> +
> +		return 1;
> +	}
> +
> +	if (!drm_dp_tunnel_bw_alloc_is_enabled(tunnel))
> +		return 0;
> +
> +	/*
> +	 * Check for estimated BW changes explicitly to account for lost
> +	 * BW change notifications.
> +	 */
> +	if (drm_dp_dpcd_readb(tunnel->aux, DP_ESTIMATED_BW, &val) < 0)
> +		goto out_err;
> +
> +	if (val * tunnel->bw_granularity != tunnel->estimated_bw)
> +		return 1;
> +
> +	return 0;
> +
> +out_err:
> +	drm_dp_tunnel_set_io_error(tunnel);
> +
> +	return -EIO;
> +}
> +
> +/**
> + * drm_dp_tunnel_update_state: Update DP tunnel SW state with the HW state
> + * @tunnel: Tunnel object
> + *
> + * Update the SW state of @tunnel with the HW state.
> + *
> + * Returns 0 if the state has not changed, 1 if it has changed and got updated
> + * successfully and a negative error code otherwise.
> + */
> +int drm_dp_tunnel_update_state(struct drm_dp_tunnel *tunnel)
> +{
> +	struct drm_dp_tunnel_regs regs;
> +	bool changed = false;
> +	int ret = check_tunnel(tunnel);
> +
> +	if (ret < 0)
> +		return ret;
> +
> +	ret = check_and_clear_status_change(tunnel);
> +	if (ret < 0)
> +		goto out;
> +
> +	if (!ret)
> +		return 0;
> +
> +	ret = read_and_verify_tunnel_regs(tunnel, &regs, 0);
> +	if (ret)
> +		goto out;
> +
> +	if (update_dprx_caps(tunnel, &regs))
> +		changed = true;
> +
> +	ret = update_group_available_bw(tunnel, &regs);
> +	if (ret == 1)
> +		changed = true;
> +
> +out:
> +	tun_dbg_stat(tunnel, ret < 0 ? ret : 0,
> +		     "State update: Changed:%c DPRX:%dx%d Tunnel alloc:%d/%d Group alloc:%d/%d Mb/s",
> +		     yes_no_chr(changed),
> +		     tunnel->max_dprx_rate / 100, tunnel->max_dprx_lane_count,
> +		     DPTUN_BW_ARG(tunnel->allocated_bw),
> +		     DPTUN_BW_ARG(get_max_tunnel_bw(tunnel)),
> +		     DPTUN_BW_ARG(group_allocated_bw(tunnel->group)),
> +		     DPTUN_BW_ARG(tunnel->group->available_bw));
> +
> +	if (ret < 0)
> +		return ret;
> +
> +	if (changed)
> +		return 1;
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_update_state);
> +
> +/*
> + * Returns 0 if no re-probe is needed, 1 if a re-probe is needed,
> + * a negative error code otherwise.
> + */
> +int drm_dp_tunnel_handle_irq(struct drm_dp_tunnel_mgr *mgr, struct drm_dp_aux *aux)
> +{
> +	u8 val;
> +
> +	if (drm_dp_dpcd_readb(aux, DP_TUNNELING_STATUS, &val) < 0)
> +		return -EIO;
> +
> +	if (val & (DP_BW_REQUEST_SUCCEEDED | DP_BW_REQUEST_FAILED))
> +		wake_up_all(&mgr->bw_req_queue);
> +
> +	if (val & (DP_BW_ALLOCATION_CAPABILITY_CHANGED | DP_ESTIMATED_BW_CHANGED))
> +		return 1;
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_handle_irq);
> +
> +/**
> + * drm_dp_tunnel_max_dprx_rate - Query the maximum rate of the tunnel's DPRX
> + * @tunnel: Tunnel object
> + *
> + * The function is used to query the maximum link rate of the DPRX connected
> + * to @tunnel. Note that this rate will not be limited by the BW limit of the
> + * tunnel, as opposed to the standard and extended DP_MAX_LINK_RATE DPCD
> + * registers.
> + *
> + * Returns the maximum link rate in 10 kbit/s units.
> + */
> +int drm_dp_tunnel_max_dprx_rate(const struct drm_dp_tunnel *tunnel)
> +{
> +	return tunnel->max_dprx_rate;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_max_dprx_rate);
> +
> +/**
> + * drm_dp_tunnel_max_dprx_lane_count - Query the maximum lane count of the tunnel's DPRX
> + * @tunnel: Tunnel object
> + *
> + * The function is used to query the maximum lane count of the DPRX connected
> + * to @tunnel. Note that this lane count will not be limited by the BW limit of
> + * the tunnel, as opposed to the standard and extended DP_MAX_LANE_COUNT DPCD
> + * registers.
> + *
> + * Returns the maximum lane count.
> + */
> +int drm_dp_tunnel_max_dprx_lane_count(const struct drm_dp_tunnel *tunnel)
> +{
> +	return tunnel->max_dprx_lane_count;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_max_dprx_lane_count);
> +
> +/**
> + * drm_dp_tunnel_available_bw - Query the estimated total available BW of the tunnel
> + * @tunnel: Tunnel object
> + *
> + * This function is used to query the estimated total available BW of the
> + * tunnel. This includes the currently allocated and free BW for all the
> + * tunnels in @tunnel's group. The available BW is valid only after the BW
> + * allocation mode has been enabled for the tunnel and its state got updated
> + * calling drm_dp_tunnel_update_state().
> + *
> + * Returns the @tunnel group's estimated total available bandwidth in kB/s
> + * units, or -1 if the available BW isn't valid (the BW allocation mode is
> + * not enabled or the tunnel's state hasn't been updated).
> + */
> +int drm_dp_tunnel_available_bw(const struct drm_dp_tunnel *tunnel)
> +{
> +	return tunnel->group->available_bw;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_available_bw);
> +
> +static struct drm_dp_tunnel_group_state *
> +drm_dp_tunnel_atomic_get_group_state(struct drm_atomic_state *state,
> +				     const struct drm_dp_tunnel *tunnel)
> +{
> +	return (struct drm_dp_tunnel_group_state *)
> +		drm_atomic_get_private_obj_state(state,
> +						 &tunnel->group->base);
> +}
> +
> +static struct drm_dp_tunnel_state *
> +add_tunnel_state(struct drm_dp_tunnel_group_state *group_state,
> +		 struct drm_dp_tunnel *tunnel)
> +{
> +	struct drm_dp_tunnel_state *tunnel_state;
> +
> +	tun_dbg_atomic(tunnel,
> +		       "Adding state for tunnel %p to group state %p\n",
> +		       tunnel, group_state);
> +
> +	tunnel_state = kzalloc(sizeof(*tunnel_state), GFP_KERNEL);
> +	if (!tunnel_state)
> +		return NULL;
> +
> +	tunnel_state->group_state = group_state;
> +
> +	drm_dp_tunnel_ref_get(tunnel, &tunnel_state->tunnel_ref);
> +
> +	INIT_LIST_HEAD(&tunnel_state->node);
> +	list_add(&tunnel_state->node, &group_state->tunnel_states);
> +
> +	return tunnel_state;
> +}
> +
> +void drm_dp_tunnel_atomic_clear_state(struct drm_dp_tunnel_state *tunnel_state)
> +{
> +	tun_dbg_atomic(tunnel_state->tunnel_ref.tunnel,
> +		       "Clearing state for tunnel %p\n",
> +		       tunnel_state->tunnel_ref.tunnel);
> +
> +	list_del(&tunnel_state->node);
> +
> +	kfree(tunnel_state->stream_bw);
> +	drm_dp_tunnel_ref_put(&tunnel_state->tunnel_ref);
> +
> +	kfree(tunnel_state);
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_atomic_clear_state);
> +
> +static void clear_tunnel_group_state(struct drm_dp_tunnel_group_state *group_state)
> +{
> +	struct drm_dp_tunnel_state *tunnel_state;
> +	struct drm_dp_tunnel_state *tunnel_state_tmp;
> +
> +	for_each_tunnel_state_safe(group_state, tunnel_state, tunnel_state_tmp)
> +		drm_dp_tunnel_atomic_clear_state(tunnel_state);
> +}
> +
> +static struct drm_dp_tunnel_state *
> +get_tunnel_state(struct drm_dp_tunnel_group_state *group_state,
> +		 const struct drm_dp_tunnel *tunnel)
> +{
> +	struct drm_dp_tunnel_state *tunnel_state;
> +
> +	for_each_tunnel_state(group_state, tunnel_state)
> +		if (tunnel_state->tunnel_ref.tunnel == tunnel)
> +			return tunnel_state;
> +
> +	return NULL;
> +}
> +
> +static struct drm_dp_tunnel_state *
> +get_or_add_tunnel_state(struct drm_dp_tunnel_group_state *group_state,
> +			struct drm_dp_tunnel *tunnel)
> +{
> +	struct drm_dp_tunnel_state *tunnel_state;
> +
> +	tunnel_state = get_tunnel_state(group_state, tunnel);
> +	if (tunnel_state)
> +		return tunnel_state;
> +
> +	return add_tunnel_state(group_state, tunnel);
> +}
> +
> +static struct drm_private_state *
> +tunnel_group_duplicate_state(struct drm_private_obj *obj)
> +{
> +	struct drm_dp_tunnel_group_state *group_state = to_group_state(obj->state);
> +	struct drm_dp_tunnel_state *tunnel_state;
> +
> +	group_state = kzalloc(sizeof(*group_state), GFP_KERNEL);
> +	if (!group_state)
> +		return NULL;
> +
> +	INIT_LIST_HEAD(&group_state->tunnel_states);
> +
> +	__drm_atomic_helper_private_obj_duplicate_state(obj, &group_state->base);
> +
> +	for_each_tunnel_state(to_group_state(obj->state), tunnel_state) {
> +		struct drm_dp_tunnel_state *new_tunnel_state;
> +
> +		new_tunnel_state = get_or_add_tunnel_state(group_state,
> +							   tunnel_state->tunnel_ref.tunnel);
> +		if (!new_tunnel_state)
> +			goto out_free_state;
> +
> +		new_tunnel_state->stream_mask = tunnel_state->stream_mask;
> +		new_tunnel_state->stream_bw = kmemdup(tunnel_state->stream_bw,
> +						      sizeof(*tunnel_state->stream_bw) *
> +							hweight32(tunnel_state->stream_mask),
> +						      GFP_KERNEL);
> +
> +		if (!new_tunnel_state->stream_bw)
> +			goto out_free_state;
> +	}
> +
> +	return &group_state->base;
> +
> +out_free_state:
> +	clear_tunnel_group_state(group_state);
> +	kfree(group_state);
> +
> +	return NULL;
> +}
> +
> +static void tunnel_group_destroy_state(struct drm_private_obj *obj, struct drm_private_state *state)
> +{
> +	struct drm_dp_tunnel_group_state *group_state = to_group_state(state);
> +
> +	clear_tunnel_group_state(group_state);
> +	kfree(group_state);
> +}
> +
> +static const struct drm_private_state_funcs tunnel_group_funcs = {
> +	.atomic_duplicate_state = tunnel_group_duplicate_state,
> +	.atomic_destroy_state = tunnel_group_destroy_state,
> +};
> +
> +struct drm_dp_tunnel_state *
> +drm_dp_tunnel_atomic_get_state(struct drm_atomic_state *state,
> +			       struct drm_dp_tunnel *tunnel)
> +{
> +	struct drm_dp_tunnel_group_state *group_state =
> +		drm_dp_tunnel_atomic_get_group_state(state, tunnel);
> +	struct drm_dp_tunnel_state *tunnel_state;
> +
> +	if (IS_ERR(group_state))
> +		return ERR_CAST(group_state);
> +
> +	tunnel_state = get_or_add_tunnel_state(group_state, tunnel);
> +	if (!tunnel_state)
> +		return ERR_PTR(-ENOMEM);
> +
> +	return tunnel_state;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_state);
> +
> +struct drm_dp_tunnel_state *
> +drm_dp_tunnel_atomic_get_new_state(struct drm_atomic_state *state,
> +				   const struct drm_dp_tunnel *tunnel)
> +{
> +	struct drm_dp_tunnel_group_state *new_group_state;
> +	int i;
> +
> +	for_each_new_group_in_state(state, new_group_state, i)
> +		if (to_group(new_group_state->base.obj) == tunnel->group)
> +			return get_tunnel_state(new_group_state, tunnel);
> +
> +	return NULL;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_new_state);
> +
> +static bool init_group(struct drm_dp_tunnel_mgr *mgr, struct drm_dp_tunnel_group *group)
> +{
> +	struct drm_dp_tunnel_group_state *group_state = kzalloc(sizeof(*group_state), GFP_KERNEL);
> +
> +	if (!group_state)
> +		return false;
> +
> +	INIT_LIST_HEAD(&group_state->tunnel_states);
> +
> +	group->mgr = mgr;
> +	group->available_bw = -1;
> +	INIT_LIST_HEAD(&group->tunnels);
> +
> +	drm_atomic_private_obj_init(mgr->dev, &group->base, &group_state->base,
> +				    &tunnel_group_funcs);
> +
> +	return true;
> +}
> +
> +static void cleanup_group(struct drm_dp_tunnel_group *group)
> +{
> +	drm_atomic_private_obj_fini(&group->base);
> +}
> +
> +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> +static void check_unique_stream_ids(const struct drm_dp_tunnel_group_state *group_state)
> +{
> +	const struct drm_dp_tunnel_state *tunnel_state;
> +	u32 stream_mask = 0;
> +
> +	for_each_tunnel_state(group_state, tunnel_state) {
> +		drm_WARN(to_group(group_state->base.obj)->mgr->dev,
> +			 tunnel_state->stream_mask & stream_mask,
> +			 "[DPTUN %s]: conflicting stream IDs %x (IDs in other tunnels %x)\n",
> +			 tunnel_state->tunnel_ref.tunnel->name,
> +			 tunnel_state->stream_mask,
> +			 stream_mask);
> +
> +		stream_mask |= tunnel_state->stream_mask;
> +	}
> +}
> +#else
> +static void check_unique_stream_ids(const struct drm_dp_tunnel_group_state *group_state)
> +{
> +}
> +#endif
> +
> +static int stream_id_to_idx(u32 stream_mask, u8 stream_id)
> +{
> +	return hweight32(stream_mask & (BIT(stream_id) - 1));
> +}
> +
> +static int resize_bw_array(struct drm_dp_tunnel_state *tunnel_state,
> +			   unsigned long old_mask, unsigned long new_mask)
> +{
> +	unsigned long move_mask = old_mask & new_mask;
> +	int *new_bws = NULL;
> +	int id;
> +
> +	WARN_ON(!new_mask);
> +
> +	if (old_mask == new_mask)
> +		return 0;
> +
> +	new_bws = kcalloc(hweight32(new_mask), sizeof(*new_bws), GFP_KERNEL);
> +	if (!new_bws)
> +		return -ENOMEM;
> +
> +	for_each_set_bit(id, &move_mask, BITS_PER_TYPE(move_mask))
> +		new_bws[stream_id_to_idx(new_mask, id)] =
> +			tunnel_state->stream_bw[stream_id_to_idx(old_mask, id)];
> +
> +	kfree(tunnel_state->stream_bw);
> +	tunnel_state->stream_bw = new_bws;
> +	tunnel_state->stream_mask = new_mask;
> +
> +	return 0;
> +}
> +
> +static int set_stream_bw(struct drm_dp_tunnel_state *tunnel_state,
> +			 u8 stream_id, int bw)
> +{
> +	int err;
> +
> +	err = resize_bw_array(tunnel_state,
> +			      tunnel_state->stream_mask,
> +			      tunnel_state->stream_mask | BIT(stream_id));
> +	if (err)
> +		return err;
> +
> +	tunnel_state->stream_bw[stream_id_to_idx(tunnel_state->stream_mask, stream_id)] = bw;
> +
> +	return 0;
> +}
> +
> +static int clear_stream_bw(struct drm_dp_tunnel_state *tunnel_state,
> +			   u8 stream_id)
> +{
> +	if (!(tunnel_state->stream_mask & ~BIT(stream_id))) {
> +		drm_dp_tunnel_atomic_clear_state(tunnel_state);
> +		return 0;
> +	}
> +
> +	return resize_bw_array(tunnel_state,
> +			       tunnel_state->stream_mask,
> +			       tunnel_state->stream_mask & ~BIT(stream_id));
> +}
> +
> +int drm_dp_tunnel_atomic_set_stream_bw(struct drm_atomic_state *state,
> +					 struct drm_dp_tunnel *tunnel,
> +					 u8 stream_id, int bw)
> +{
> +	struct drm_dp_tunnel_group_state *new_group_state =
> +		drm_dp_tunnel_atomic_get_group_state(state, tunnel);
> +	struct drm_dp_tunnel_state *tunnel_state;
> +	int err;
> +
> +	if (drm_WARN_ON(tunnel->group->mgr->dev,
> +			stream_id > BITS_PER_TYPE(tunnel_state->stream_mask)))
> +		return -EINVAL;
> +
> +	tun_dbg(tunnel,
> +		"Setting %d Mb/s for stream %d\n",
> +		DPTUN_BW_ARG(bw), stream_id);
> +
> +	if (bw == 0) {
> +		tunnel_state = get_tunnel_state(new_group_state, tunnel);
> +		if (!tunnel_state)
> +			return 0;
> +
> +		return clear_stream_bw(tunnel_state, stream_id);
> +	}
> +
> +	tunnel_state = get_or_add_tunnel_state(new_group_state, tunnel);
> +	if (drm_WARN_ON(state->dev, !tunnel_state))
> +		return -EINVAL;
> +
> +	err = set_stream_bw(tunnel_state, stream_id, bw);
> +	if (err)
> +		return err;
> +
> +	check_unique_stream_ids(new_group_state);
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_atomic_set_stream_bw);
> +
> +int drm_dp_tunnel_atomic_get_tunnel_bw(const struct drm_dp_tunnel_state *tunnel_state)
> +{
> +	int tunnel_bw = 0;
> +	int i;
> +
> +	for (i = 0; i < hweight32(tunnel_state->stream_mask); i++)
> +		tunnel_bw += tunnel_state->stream_bw[i];
> +
> +	return tunnel_bw;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_tunnel_bw);
> +
> +int drm_dp_tunnel_atomic_get_group_streams_in_state(struct drm_atomic_state *state,
> +						    const struct drm_dp_tunnel *tunnel,
> +						    u32 *stream_mask)
> +{
> +	struct drm_dp_tunnel_group_state *group_state =
> +		drm_dp_tunnel_atomic_get_group_state(state, tunnel);
> +	struct drm_dp_tunnel_state *tunnel_state;
> +
> +	if (IS_ERR(group_state))
> +		return PTR_ERR(group_state);
> +
> +	*stream_mask = 0;
> +	for_each_tunnel_state(group_state, tunnel_state)
> +		*stream_mask |= tunnel_state->stream_mask;
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_group_streams_in_state);
> +
> +static int
> +drm_dp_tunnel_atomic_check_group_bw(struct drm_dp_tunnel_group_state *new_group_state,
> +				    u32 *failed_stream_mask)
> +{
> +	struct drm_dp_tunnel_group *group = to_group(new_group_state->base.obj);
> +	struct drm_dp_tunnel_state *new_tunnel_state;
> +	u32 group_stream_mask = 0;
> +	int group_bw = 0;
> +
> +	for_each_tunnel_state(new_group_state, new_tunnel_state) {
> +		struct drm_dp_tunnel *tunnel = new_tunnel_state->tunnel_ref.tunnel;
> +		int max_dprx_bw = get_max_dprx_bw(tunnel);
> +		int tunnel_bw = drm_dp_tunnel_atomic_get_tunnel_bw(new_tunnel_state);
> +
> +		tun_dbg(tunnel,
> +			"%sRequired %d/%d Mb/s total for tunnel.\n",
> +			tunnel_bw > max_dprx_bw ? "Not enough BW: " : "",
> +			DPTUN_BW_ARG(tunnel_bw),
> +			DPTUN_BW_ARG(max_dprx_bw));
> +
> +		if (tunnel_bw > max_dprx_bw) {

I'm a bit confused why we're checking this here. Aren't we already
checking this somewhere else?

> +			*failed_stream_mask = new_tunnel_state->stream_mask;
> +			return -ENOSPC;
> +		}
> +
> +		group_bw += min(roundup(tunnel_bw, tunnel->bw_granularity),
> +				max_dprx_bw);
> +		group_stream_mask |= new_tunnel_state->stream_mask;
> +	}
> +
> +	tun_grp_dbg(group,
> +		    "%sRequired %d/%d Mb/s total for tunnel group.\n",
> +		    group_bw > group->available_bw ? "Not enough BW: " : "",
> +		    DPTUN_BW_ARG(group_bw),
> +		    DPTUN_BW_ARG(group->available_bw));
> +
> +	if (group_bw > group->available_bw) {
> +		*failed_stream_mask = group_stream_mask;
> +		return -ENOSPC;
> +	}
> +
> +	return 0;
> +}
> +
> +int drm_dp_tunnel_atomic_check_stream_bws(struct drm_atomic_state *state,
> +					  u32 *failed_stream_mask)
> +{
> +	struct drm_dp_tunnel_group_state *new_group_state;
> +	int i;
> +
> +	for_each_new_group_in_state(state, new_group_state, i) {
> +		int ret;
> +
> +		ret = drm_dp_tunnel_atomic_check_group_bw(new_group_state,
> +							  failed_stream_mask);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_atomic_check_stream_bws);
> +
> +static void destroy_mgr(struct drm_dp_tunnel_mgr *mgr)
> +{
> +	int i;
> +
> +	for (i = 0; i < mgr->group_count; i++) {
> +		cleanup_group(&mgr->groups[i]);
> +		drm_WARN_ON(mgr->dev, !list_empty(&mgr->groups[i].tunnels));
> +	}
> +
> +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> +	ref_tracker_dir_exit(&mgr->ref_tracker);
> +#endif
> +
> +	kfree(mgr->groups);
> +	kfree(mgr);
> +}
> +
> +/**
> + * drm_dp_tunnel_mgr_create - Create a DP tunnel manager
> + * @i915: i915 driver object
> + *
> + * Creates a DP tunnel manager.
> + *
> + * Returns a pointer to the tunnel manager if created successfully or NULL in
> + * case of an error.
> + */
> +struct drm_dp_tunnel_mgr *
> +drm_dp_tunnel_mgr_create(struct drm_device *dev, int max_group_count)
> +{
> +	struct drm_dp_tunnel_mgr *mgr = kzalloc(sizeof(*mgr), GFP_KERNEL);
> +	int i;
> +
> +	if (!mgr)
> +		return NULL;
> +
> +	mgr->dev = dev;
> +	init_waitqueue_head(&mgr->bw_req_queue);
> +
> +	mgr->groups = kcalloc(max_group_count, sizeof(*mgr->groups), GFP_KERNEL);
> +	if (!mgr->groups) {
> +		kfree(mgr);
> +
> +		return NULL;
> +	}
> +
> +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> +	ref_tracker_dir_init(&mgr->ref_tracker, 16, "dptun");
> +#endif
> +
> +	for (i = 0; i < max_group_count; i++) {
> +		if (!init_group(mgr, &mgr->groups[i])) {
> +			destroy_mgr(mgr);
> +
> +			return NULL;
> +		}
> +
> +		mgr->group_count++;
> +	}
> +
> +	return mgr;
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_mgr_create);
> +
> +/**
> + * drm_dp_tunnel_mgr_destroy - Destroy DP tunnel manager
> + * @mgr: Tunnel manager object
> + *
> + * Destroy the tunnel manager.
> + */
> +void drm_dp_tunnel_mgr_destroy(struct drm_dp_tunnel_mgr *mgr)
> +{
> +	destroy_mgr(mgr);
> +}
> +EXPORT_SYMBOL(drm_dp_tunnel_mgr_destroy);
> diff --git a/include/drm/display/drm_dp.h b/include/drm/display/drm_dp.h
> index 281afff6ee4e5..8bfd5d007be8d 100644
> --- a/include/drm/display/drm_dp.h
> +++ b/include/drm/display/drm_dp.h
> @@ -1382,6 +1382,66 @@
>  #define DP_HDCP_2_2_REG_STREAM_TYPE_OFFSET	0x69494
>  #define DP_HDCP_2_2_REG_DBG_OFFSET		0x69518
>  
> +/* DP-tunneling */
> +#define DP_TUNNELING_OUI				0xe0000
> +#define  DP_TUNNELING_OUI_BYTES				3
> +
> +#define DP_TUNNELING_DEV_ID				0xe0003
> +#define  DP_TUNNELING_DEV_ID_BYTES			6
> +
> +#define DP_TUNNELING_HW_REV				0xe0009
> +#define  DP_TUNNELING_HW_REV_MAJOR_SHIFT		4
> +#define  DP_TUNNELING_HW_REV_MAJOR_MASK			(0xf << DP_TUNNELING_HW_REV_MAJOR_SHIFT)
> +#define  DP_TUNNELING_HW_REV_MINOR_SHIFT		0
> +#define  DP_TUNNELING_HW_REV_MINOR_MASK			(0xf << DP_TUNNELING_HW_REV_MINOR_SHIFT)
> +
> +#define DP_TUNNELING_SW_REV_MAJOR			0xe000a
> +#define DP_TUNNELING_SW_REV_MINOR			0xe000b
> +
> +#define DP_TUNNELING_CAPABILITIES			0xe000d
> +#define  DP_IN_BW_ALLOCATION_MODE_SUPPORT		(1 << 7)
> +#define  DP_PANEL_REPLAY_OPTIMIZATION_SUPPORT		(1 << 6)
> +#define  DP_TUNNELING_SUPPORT				(1 << 0)
> +
> +#define DP_IN_ADAPTER_INFO				0xe000e
> +#define  DP_IN_ADAPTER_NUMBER_BITS			7
> +#define  DP_IN_ADAPTER_NUMBER_MASK			((1 << DP_IN_ADAPTER_NUMBER_BITS) - 1)
> +
> +#define DP_USB4_DRIVER_ID				0xe000f
> +#define  DP_USB4_DRIVER_ID_BITS				4
> +#define  DP_USB4_DRIVER_ID_MASK				((1 << DP_USB4_DRIVER_ID_BITS) - 1)
> +
> +#define DP_USB4_DRIVER_BW_CAPABILITY			0xe0020
> +#define  DP_USB4_DRIVER_BW_ALLOCATION_MODE_SUPPORT	(1 << 7)
> +
> +#define DP_IN_ADAPTER_TUNNEL_INFORMATION		0xe0021
> +#define  DP_GROUP_ID_BITS				3
> +#define  DP_GROUP_ID_MASK				((1 << DP_GROUP_ID_BITS) - 1)
> +
> +#define DP_BW_GRANULARITY				0xe0022
> +#define  DP_BW_GRANULARITY_MASK				0x3
> +
> +#define DP_ESTIMATED_BW					0xe0023
> +#define DP_ALLOCATED_BW					0xe0024
> +
> +#define DP_TUNNELING_STATUS				0xe0025
> +#define  DP_BW_ALLOCATION_CAPABILITY_CHANGED		(1 << 3)
> +#define  DP_ESTIMATED_BW_CHANGED			(1 << 2)
> +#define  DP_BW_REQUEST_SUCCEEDED			(1 << 1)
> +#define  DP_BW_REQUEST_FAILED				(1 << 0)
> +
> +#define DP_TUNNELING_MAX_LINK_RATE			0xe0028
> +
> +#define DP_TUNNELING_MAX_LANE_COUNT			0xe0029
> +#define  DP_TUNNELING_MAX_LANE_COUNT_MASK		0x1f
> +
> +#define DP_DPTX_BW_ALLOCATION_MODE_CONTROL		0xe0030
> +#define  DP_DISPLAY_DRIVER_BW_ALLOCATION_MODE_ENABLE	(1 << 7)
> +#define  DP_UNMASK_BW_ALLOCATION_IRQ			(1 << 6)
> +
> +#define DP_REQUEST_BW					0xe0031
> +#define  MAX_DP_REQUEST_BW				255
> +
>  /* LTTPR: Link Training (LT)-tunable PHY Repeaters */
>  #define DP_LT_TUNABLE_PHY_REPEATER_FIELD_DATA_STRUCTURE_REV 0xf0000 /* 1.3 */
>  #define DP_MAX_LINK_RATE_PHY_REPEATER			    0xf0001 /* 1.4a */
> diff --git a/include/drm/display/drm_dp_tunnel.h b/include/drm/display/drm_dp_tunnel.h
> new file mode 100644
> index 0000000000000..f6449b1b4e6e9
> --- /dev/null
> +++ b/include/drm/display/drm_dp_tunnel.h
> @@ -0,0 +1,270 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright © 2023 Intel Corporation
> + */
> +
> +#ifndef __DRM_DP_TUNNEL_H__
> +#define __DRM_DP_TUNNEL_H__
> +
> +#include <linux/err.h>
> +#include <linux/errno.h>
> +#include <linux/types.h>
> +
> +struct drm_dp_aux;
> +
> +struct drm_device;
> +
> +struct drm_atomic_state;
> +struct drm_dp_tunnel_mgr;
> +struct drm_dp_tunnel_state;
> +
> +struct ref_tracker;
> +
> +struct drm_dp_tunnel_ref {
> +	struct drm_dp_tunnel *tunnel;
> +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> +	struct ref_tracker *tracker;
> +#endif
> +};
> +
> +#ifdef CONFIG_DRM_DISPLAY_DP_TUNNEL
> +
> +struct drm_dp_tunnel *
> +drm_dp_tunnel_get_untracked(struct drm_dp_tunnel *tunnel);
> +void drm_dp_tunnel_put_untracked(struct drm_dp_tunnel *tunnel);
> +
> +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> +struct drm_dp_tunnel *
> +drm_dp_tunnel_get(struct drm_dp_tunnel *tunnel, struct ref_tracker **tracker);
> +
> +void
> +drm_dp_tunnel_put(struct drm_dp_tunnel *tunnel, struct ref_tracker **tracker);
> +#else
> +#define drm_dp_tunnel_get(tunnel, tracker) \
> +	drm_dp_tunnel_get_untracked(tunnel)
> +
> +#define drm_dp_tunnel_put(tunnel, tracker) \
> +	drm_dp_tunnel_put_untracked(tunnel)
> +
> +#endif
> +
> +static inline void drm_dp_tunnel_ref_get(struct drm_dp_tunnel *tunnel,
> +					   struct drm_dp_tunnel_ref *tunnel_ref)
> +{
> +	tunnel_ref->tunnel = drm_dp_tunnel_get(tunnel, &tunnel_ref->tracker);
> +}
> +
> +static inline void drm_dp_tunnel_ref_put(struct drm_dp_tunnel_ref *tunnel_ref)
> +{
> +	drm_dp_tunnel_put(tunnel_ref->tunnel, &tunnel_ref->tracker);
> +}
> +
> +struct drm_dp_tunnel *
> +drm_dp_tunnel_detect(struct drm_dp_tunnel_mgr *mgr,
> +		     struct drm_dp_aux *aux);
> +int drm_dp_tunnel_destroy(struct drm_dp_tunnel *tunnel);
> +
> +int drm_dp_tunnel_enable_bw_alloc(struct drm_dp_tunnel *tunnel);
> +int drm_dp_tunnel_disable_bw_alloc(struct drm_dp_tunnel *tunnel);
> +bool drm_dp_tunnel_bw_alloc_is_enabled(const struct drm_dp_tunnel *tunnel);
> +int drm_dp_tunnel_alloc_bw(struct drm_dp_tunnel *tunnel, int bw);
> +int drm_dp_tunnel_check_state(struct drm_dp_tunnel *tunnel);
> +int drm_dp_tunnel_update_state(struct drm_dp_tunnel *tunnel);
> +
> +void drm_dp_tunnel_set_io_error(struct drm_dp_tunnel *tunnel);
> +
> +int drm_dp_tunnel_handle_irq(struct drm_dp_tunnel_mgr *mgr,
> +			     struct drm_dp_aux *aux);
> +
> +int drm_dp_tunnel_max_dprx_rate(const struct drm_dp_tunnel *tunnel);
> +int drm_dp_tunnel_max_dprx_lane_count(const struct drm_dp_tunnel *tunnel);
> +int drm_dp_tunnel_available_bw(const struct drm_dp_tunnel *tunnel);
> +
> +const char *drm_dp_tunnel_name(const struct drm_dp_tunnel *tunnel);
> +
> +struct drm_dp_tunnel_state *
> +drm_dp_tunnel_atomic_get_state(struct drm_atomic_state *state,
> +			       struct drm_dp_tunnel *tunnel);
> +struct drm_dp_tunnel_state *
> +drm_dp_tunnel_atomic_get_new_state(struct drm_atomic_state *state,
> +				   const struct drm_dp_tunnel *tunnel);
> +
> +void drm_dp_tunnel_atomic_clear_state(struct drm_dp_tunnel_state *tunnel_state);
> +
> +int drm_dp_tunnel_atomic_set_stream_bw(struct drm_atomic_state *state,
> +				       struct drm_dp_tunnel *tunnel,
> +				       u8 stream_id, int bw);
> +int drm_dp_tunnel_atomic_get_group_streams_in_state(struct drm_atomic_state *state,
> +						    const struct drm_dp_tunnel *tunnel,
> +						    u32 *stream_mask);
> +
> +int drm_dp_tunnel_atomic_check_stream_bws(struct drm_atomic_state *state,
> +					  u32 *failed_stream_mask);
> +
> +int drm_dp_tunnel_atomic_get_tunnel_bw(const struct drm_dp_tunnel_state *tunnel_state);
> +
> +struct drm_dp_tunnel_mgr *
> +drm_dp_tunnel_mgr_create(struct drm_device *dev, int max_group_count);
> +void drm_dp_tunnel_mgr_destroy(struct drm_dp_tunnel_mgr *mgr);
> +
> +#else
> +
> +static inline struct drm_dp_tunnel *
> +drm_dp_tunnel_get_untracked(struct drm_dp_tunnel *tunnel)
> +{
> +	return NULL;
> +}
> +
> +static inline void
> +drm_dp_tunnel_put_untracked(struct drm_dp_tunnel *tunnel) {}
> +
> +static inline struct drm_dp_tunnel *
> +drm_dp_tunnel_get(struct drm_dp_tunnel *tunnel, struct ref_tracker **tracker)
> +{
> +	return NULL;
> +}
> +
> +static inline void
> +drm_dp_tunnel_put(struct drm_dp_tunnel *tunnel, struct ref_tracker **tracker) {}
> +
> +static inline void drm_dp_tunnel_ref_get(struct drm_dp_tunnel *tunnel,
> +					   struct drm_dp_tunnel_ref *tunnel_ref) {}
> +static inline void drm_dp_tunnel_ref_put(struct drm_dp_tunnel_ref *tunnel_ref) {}
> +
> +static inline struct drm_dp_tunnel *
> +drm_dp_tunnel_detect(struct drm_dp_tunnel_mgr *mgr,
> +		     struct drm_dp_aux *aux)
> +{
> +	return ERR_PTR(-EOPNOTSUPP);
> +}
> +
> +static inline int
> +drm_dp_tunnel_destroy(struct drm_dp_tunnel *tunnel)
> +{
> +	return 0;
> +}
> +
> +static inline int drm_dp_tunnel_enable_bw_alloc(struct drm_dp_tunnel *tunnel)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
> +static inline int drm_dp_tunnel_disable_bw_alloc(struct drm_dp_tunnel *tunnel)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
> +static inline bool drm_dp_tunnel_bw_alloc_is_enabled(const struct drm_dp_tunnel *tunnel)
> +{
> +	return false;
> +}
> +
> +static inline int
> +drm_dp_tunnel_alloc_bw(struct drm_dp_tunnel *tunnel, int bw)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
> +static inline int
> +drm_dp_tunnel_check_state(struct drm_dp_tunnel *tunnel)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
> +static inline int
> +drm_dp_tunnel_update_state(struct drm_dp_tunnel *tunnel)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
> +static inline void drm_dp_tunnel_set_io_error(struct drm_dp_tunnel *tunnel) {}
> +static inline int
> +drm_dp_tunnel_handle_irq(struct drm_dp_tunnel_mgr *mgr,
> +			 struct drm_dp_aux *aux)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
> +static inline int
> +drm_dp_tunnel_max_dprx_rate(const struct drm_dp_tunnel *tunnel)
> +{
> +	return 0;
> +}
> +
> +static inline int
> +drm_dp_tunnel_max_dprx_lane_count(const struct drm_dp_tunnel *tunnel)
> +{
> +	return 0;
> +}
> +
> +static inline int
> +drm_dp_tunnel_available_bw(const struct drm_dp_tunnel *tunnel)
> +{
> +	return -1;
> +}
> +
> +static inline const char *
> +drm_dp_tunnel_name(const struct drm_dp_tunnel *tunnel)
> +{
> +	return NULL;
> +}
> +
> +static inline struct drm_dp_tunnel_state *
> +drm_dp_tunnel_atomic_get_state(struct drm_atomic_state *state,
> +			       struct drm_dp_tunnel *tunnel)
> +{
> +	return ERR_PTR(-EOPNOTSUPP);
> +}
> +
> +static inline struct drm_dp_tunnel_state *
> +drm_dp_tunnel_atomic_get_new_state(struct drm_atomic_state *state,
> +				   const struct drm_dp_tunnel *tunnel)
> +{
> +	return ERR_PTR(-EOPNOTSUPP);
> +}
> +
> +static inline void
> +drm_dp_tunnel_atomic_clear_state(struct drm_dp_tunnel_state *tunnel_state) {}
> +
> +static inline int
> +drm_dp_tunnel_atomic_set_stream_bw(struct drm_atomic_state *state,
> +				   struct drm_dp_tunnel *tunnel,
> +				   u8 stream_id, int bw)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
> +static inline int
> +drm_dp_tunnel_atomic_get_group_streams_in_state(struct drm_atomic_state *state,
> +						const struct drm_dp_tunnel *tunnel,
> +						u32 *stream_mask)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
> +static inline int
> +drm_dp_tunnel_atomic_check_stream_bws(struct drm_atomic_state *state,
> +				      u32 *failed_stream_mask)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
> +static inline int
> +drm_dp_tunnel_atomic_get_tunnel_bw(const struct drm_dp_tunnel_state *tunnel_state)
> +{
> +	return 0;
> +}
> +
> +static inline struct drm_dp_tunnel_mgr *
> +drm_dp_tunnel_mgr_create(struct drm_device *dev, int max_group_count)
> +{
> +	return ERR_PTR(-EOPNOTSUPP);
> +}
> +
> +static inline
> +void drm_dp_tunnel_mgr_destroy(struct drm_dp_tunnel_mgr *mgr) {}
> +
> +
> +#endif /* CONFIG_DRM_DISPLAY_DP_TUNNEL */
> +
> +#endif /* __DRM_DP_TUNNEL_H__ */
> -- 
> 2.39.2

-- 
Ville Syrjälä
Intel

* Re: [PATCH 02/19] drm/dp: Add support for DP tunneling
  2024-02-07 20:02   ` Ville Syrjälä
@ 2024-02-07 20:48     ` Imre Deak
  2024-02-07 21:02       ` Imre Deak
  2024-02-07 22:04       ` Imre Deak
  0 siblings, 2 replies; 61+ messages in thread
From: Imre Deak @ 2024-02-07 20:48 UTC (permalink / raw)
  To: Ville Syrjälä; +Cc: intel-gfx, dri-devel, Mika Westerberg

On Wed, Feb 07, 2024 at 10:02:18PM +0200, Ville Syrjälä wrote:
> On Tue, Jan 23, 2024 at 12:28:33PM +0200, Imre Deak wrote:
> > +static char yes_no_chr(int val)
> > +{
> > +	return val ? 'Y' : 'N';
> > +}
> 
> We have str_yes_no() already.

Ok, will use this.

> > +
> > +#define SKIP_DPRX_CAPS_CHECK		BIT(0)
> > +#define ALLOW_ALLOCATED_BW_CHANGE	BIT(1)
> > +
> > +static bool tunnel_regs_are_valid(struct drm_dp_tunnel_mgr *mgr,
> > +				  const struct drm_dp_tunnel_regs *regs,
> > +				  unsigned int flags)
> > +{
> > +	int drv_group_id = tunnel_reg_drv_group_id(regs);
> > +	bool check_dprx = !(flags & SKIP_DPRX_CAPS_CHECK);
> > +	bool ret = true;
> > +
> > +	if (!tunnel_reg_bw_alloc_supported(regs)) {
> > +		if (tunnel_group_id(drv_group_id)) {
> > +			drm_dbg_kms(mgr->dev,
> > +				    "DPTUN: A non-zero group ID is only allowed with BWA support\n");
> > +			ret = false;
> > +		}
> > +
> > +		if (tunnel_reg(regs, DP_ALLOCATED_BW)) {
> > +			drm_dbg_kms(mgr->dev,
> > +				    "DPTUN: BW is allocated without BWA support\n");
> > +			ret = false;
> > +		}
> > +
> > +		return ret;
> > +	}
> > +
> > +	if (!tunnel_group_id(drv_group_id)) {
> > +		drm_dbg_kms(mgr->dev,
> > +			    "DPTUN: BWA support requires a non-zero group ID\n");
> > +		ret = false;
> > +	}
> > +
> > +	if (check_dprx && hweight8(tunnel_reg_max_dprx_lane_count(regs)) != 1) {
> > +		drm_dbg_kms(mgr->dev,
> > +			    "DPTUN: Invalid DPRX lane count: %d\n",
> > +			    tunnel_reg_max_dprx_lane_count(regs));
> > +
> > +		ret = false;
> > +	}
> > +
> > +	if (check_dprx && !tunnel_reg_max_dprx_rate(regs)) {
> > +		drm_dbg_kms(mgr->dev,
> > +			    "DPTUN: DPRX rate is 0\n");
> > +
> > +		ret = false;
> > +	}
> > +
> > +	if (tunnel_reg(regs, DP_ALLOCATED_BW) > tunnel_reg(regs, DP_ESTIMATED_BW)) {
> > +		drm_dbg_kms(mgr->dev,
> > +			    "DPTUN: Allocated BW %d > estimated BW %d Mb/s\n",
> > +			    DPTUN_BW_ARG(tunnel_reg(regs, DP_ALLOCATED_BW) *
> > +					 tunnel_reg_bw_granularity(regs)),
> > +			    DPTUN_BW_ARG(tunnel_reg(regs, DP_ESTIMATED_BW) *
> > +					 tunnel_reg_bw_granularity(regs)));
> > +
> > +		ret = false;
> > +	}
> > +
> > +	return ret;
> > +}
> > +
> > +static bool tunnel_info_changes_are_valid(struct drm_dp_tunnel *tunnel,
> > +					  const struct drm_dp_tunnel_regs *regs,
> > +					  unsigned int flags)
> > +{
> > +	int new_drv_group_id = tunnel_reg_drv_group_id(regs);
> > +	bool ret = true;
> > +
> > +	if (tunnel->bw_alloc_supported != tunnel_reg_bw_alloc_supported(regs)) {
> > +		tun_dbg(tunnel,
> > +			"BW alloc support has changed %c -> %c\n",
> > +			yes_no_chr(tunnel->bw_alloc_supported),
> > +			yes_no_chr(tunnel_reg_bw_alloc_supported(regs)));
> > +
> > +		ret = false;
> > +	}
> > +
> > +	if (tunnel->group->drv_group_id != new_drv_group_id) {
> > +		tun_dbg(tunnel,
> > +			"Driver/group ID has changed %d:%d:* -> %d:%d:*\n",
> > +			tunnel_group_drv_id(tunnel->group->drv_group_id),
> > +			tunnel_group_id(tunnel->group->drv_group_id),
> > +			tunnel_group_drv_id(new_drv_group_id),
> > +			tunnel_group_id(new_drv_group_id));
> > +
> > +		ret = false;
> > +	}
> > +
> > +	if (!tunnel->bw_alloc_supported)
> > +		return ret;
> > +
> > +	if (tunnel->bw_granularity != tunnel_reg_bw_granularity(regs)) {
> > +		tun_dbg(tunnel,
> > +			"BW granularity has changed: %d -> %d Mb/s\n",
> > +			DPTUN_BW_ARG(tunnel->bw_granularity),
> > +			DPTUN_BW_ARG(tunnel_reg_bw_granularity(regs)));
> > +
> > +		ret = false;
> > +	}
> > +
> > +	/*
> > +	 * On some devices at least the BW alloc mode enabled status is always
> > +	 * reported as 0, so skip checking that here.
> > +	 */
> 
> So it's reported as supported and we enable it, but it's never
> reported back as being enabled?

Yes, at least when using an engineering TBT (DP adapter) FW. I'll check
if this is already fixed on released platforms/FWs.

> > +
> > +	if (!(flags & ALLOW_ALLOCATED_BW_CHANGE) &&
> > +	    tunnel->allocated_bw !=
> > +	    tunnel_reg(regs, DP_ALLOCATED_BW) * tunnel->bw_granularity) {
> > +		tun_dbg(tunnel,
> > +			"Allocated BW has changed: %d -> %d Mb/s\n",
> > +			DPTUN_BW_ARG(tunnel->allocated_bw),
> > +			DPTUN_BW_ARG(tunnel_reg(regs, DP_ALLOCATED_BW) * tunnel->bw_granularity));
> > +
> > +		ret = false;
> > +	}
> > +
> > +	return ret;
> > +}
> > +
> > +static int
> > +read_and_verify_tunnel_regs(struct drm_dp_tunnel *tunnel,
> > +			    struct drm_dp_tunnel_regs *regs,
> > +			    unsigned int flags)
> > +{
> > +	int err;
> > +
> > +	err = read_tunnel_regs(tunnel->aux, regs);
> > +	if (err < 0) {
> > +		drm_dp_tunnel_set_io_error(tunnel);
> > +
> > +		return err;
> > +	}
> > +
> > +	if (!tunnel_regs_are_valid(tunnel->group->mgr, regs, flags))
> > +		return -EINVAL;
> > +
> > +	if (!tunnel_info_changes_are_valid(tunnel, regs, flags))
> > +		return -EINVAL;
> > +
> > +	return 0;
> > +}
> > +
> > +static bool update_dprx_caps(struct drm_dp_tunnel *tunnel, const struct drm_dp_tunnel_regs *regs)
> > +{
> > +	bool changed = false;
> > +
> > +	if (tunnel_reg_max_dprx_rate(regs) != tunnel->max_dprx_rate) {
> > +		tunnel->max_dprx_rate = tunnel_reg_max_dprx_rate(regs);
> > +		changed = true;
> > +	}
> > +
> > +	if (tunnel_reg_max_dprx_lane_count(regs) != tunnel->max_dprx_lane_count) {
> > +		tunnel->max_dprx_lane_count = tunnel_reg_max_dprx_lane_count(regs);
> > +		changed = true;
> > +	}
> > +
> > +	return changed;
> > +}
> > +
> > +static int dev_id_len(const u8 *dev_id, int max_len)
> > +{
> > +	while (max_len && dev_id[max_len - 1] == '\0')
> > +		max_len--;
> > +
> > +	return max_len;
> > +}
> > +
> > +static int get_max_dprx_bw(const struct drm_dp_tunnel *tunnel)
> > +{
> > +	int bw = drm_dp_max_dprx_data_rate(tunnel->max_dprx_rate,
> > +					   tunnel->max_dprx_lane_count);
> > +
> > +	return min(roundup(bw, tunnel->bw_granularity),
> 
> Should this round down?

This should round up: in general the TBT CM (thunderbolt driver)
allocates exactly bw = C * bw_granularity in response to the value C
written to DP_REQUEST_BW, but if this exceeds the max_dprx_bw =
max_dprx_rate * max_dprx_lane_count limit (which the CM also knows
about), the CM will allocate only max_dprx_bw. (This is the only way
max_dprx_bw can be allocated even when it isn't aligned to
bw_granularity.) This needs a code comment.
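
Something like the sketch below perhaps (only an illustration of the
comment, reusing the names already in this patch):

	/*
	 * A request of C granularity units written to DP_REQUEST_BW normally
	 * makes the CM allocate exactly C * bw_granularity. If that would
	 * exceed max_dprx_rate * max_dprx_lane_count (a limit the CM also
	 * knows about), the CM allocates only max_dprx_bw instead. Rounding
	 * up here is what lets max_dprx_bw itself be allocated even when it
	 * isn't aligned to bw_granularity.
	 */
	return min(roundup(bw, tunnel->bw_granularity),
		   MAX_DP_REQUEST_BW * tunnel->bw_granularity);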

> > +		   MAX_DP_REQUEST_BW * tunnel->bw_granularity);
> > +}
> > +
> > +static int get_max_tunnel_bw(const struct drm_dp_tunnel *tunnel)
> > +{
> > +	return min(get_max_dprx_bw(tunnel), tunnel->group->available_bw);
> > +}
> > +
> > +/**
> > + * drm_dp_tunnel_detect - Detect DP tunnel on the link
> > + * @mgr: Tunnel manager
> > + * @aux: DP AUX on which the tunnel will be detected
> > + *
> > + * Detect if there is any DP tunnel on the link and add it to the tunnel
> > + * group's tunnel list.
> > + *
> > + * Returns 0 on success, negative error code on failure.
> > + */
> > +struct drm_dp_tunnel *
> > +drm_dp_tunnel_detect(struct drm_dp_tunnel_mgr *mgr,
> > +		       struct drm_dp_aux *aux)
> > +{
> > +	struct drm_dp_tunnel_regs regs;
> > +	struct drm_dp_tunnel *tunnel;
> > +	int err;
> > +
> > +	err = read_tunnel_regs(aux, &regs);
> > +	if (err)
> > +		return ERR_PTR(err);
> > +
> > +	if (!(tunnel_reg(&regs, DP_TUNNELING_CAPABILITIES) &
> > +	      DP_TUNNELING_SUPPORT))
> > +		return ERR_PTR(-ENODEV);
> > +
> > +	/* The DPRX caps are valid only after enabling BW alloc mode. */
> > +	if (!tunnel_regs_are_valid(mgr, &regs, SKIP_DPRX_CAPS_CHECK))
> > +		return ERR_PTR(-EINVAL);
> > +
> > +	tunnel = create_tunnel(mgr, aux, &regs);
> > +	if (!tunnel)
> > +		return ERR_PTR(-ENOMEM);
> > +
> > +	tun_dbg(tunnel,
> > +		"OUI:%*phD DevID:%*pE Rev-HW:%d.%d SW:%d.%d PR-Sup:%c BWA-Sup:%c BWA-En:%c\n",
> > +		DP_TUNNELING_OUI_BYTES,
> > +			tunnel_reg_ptr(&regs, DP_TUNNELING_OUI),
> > +		dev_id_len(tunnel_reg_ptr(&regs, DP_TUNNELING_DEV_ID), DP_TUNNELING_DEV_ID_BYTES),
> > +			tunnel_reg_ptr(&regs, DP_TUNNELING_DEV_ID),
> > +		(tunnel_reg(&regs, DP_TUNNELING_HW_REV) & DP_TUNNELING_HW_REV_MAJOR_MASK) >>
> > +			DP_TUNNELING_HW_REV_MAJOR_SHIFT,
> > +		(tunnel_reg(&regs, DP_TUNNELING_HW_REV) & DP_TUNNELING_HW_REV_MINOR_MASK) >>
> > +			DP_TUNNELING_HW_REV_MINOR_SHIFT,
> > +		tunnel_reg(&regs, DP_TUNNELING_SW_REV_MAJOR),
> > +		tunnel_reg(&regs, DP_TUNNELING_SW_REV_MINOR),
> > +		yes_no_chr(tunnel_reg(&regs, DP_TUNNELING_CAPABILITIES) &
> > +			   DP_PANEL_REPLAY_OPTIMIZATION_SUPPORT),
> > +		yes_no_chr(tunnel->bw_alloc_supported),
> > +		yes_no_chr(tunnel->bw_alloc_enabled));
> > +
> > +	return tunnel;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_detect);
> > +
> > +/**
> > + * drm_dp_tunnel_destroy - Destroy tunnel object
> > + * @tunnel: Tunnel object
> > + *
> > + * Remove the tunnel from the tunnel topology and destroy it.
> > + */
> > +int drm_dp_tunnel_destroy(struct drm_dp_tunnel *tunnel)
> > +{
> > +	if (drm_WARN_ON(tunnel->group->mgr->dev, tunnel->destroyed))
> > +		return -ENODEV;
> > +
> > +	tun_dbg(tunnel, "destroying\n");
> > +
> > +	tunnel->destroyed = true;
> > +	destroy_tunnel(tunnel);
> > +
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_destroy);
> > +
> > +static int check_tunnel(const struct drm_dp_tunnel *tunnel)
> > +{
> > +	if (tunnel->destroyed)
> > +		return -ENODEV;
> > +
> > +	if (tunnel->has_io_error)
> > +		return -EIO;
> > +
> > +	return 0;
> > +}
> > +
> > +static int group_allocated_bw(struct drm_dp_tunnel_group *group)
> > +{
> > +	struct drm_dp_tunnel *tunnel;
> > +	int group_allocated_bw = 0;
> > +
> > +	for_each_tunnel_in_group(group, tunnel) {
> > +		if (check_tunnel(tunnel) == 0 &&
> > +		    tunnel->bw_alloc_enabled)
> > +			group_allocated_bw += tunnel->allocated_bw;
> > +	}
> > +
> > +	return group_allocated_bw;
> > +}
> > +
> > +static int calc_group_available_bw(const struct drm_dp_tunnel *tunnel)
> > +{
> > +	return group_allocated_bw(tunnel->group) -
> > +	       tunnel->allocated_bw +
> > +	       tunnel->estimated_bw;
> 
> Hmm. So the estimated_bw=actually_free_bw + tunnel->allocated_bw?

Yes.

> Ie. how much bw might be available for this tunnel right now?

Correct.

> And here we're trying to deduce the total bandwidth available by
> adding in the allocated_bw of all the other tunnels in the group?

Yes.

> Rather weird that we can't just get that number directly...

It is. Imo this could simply be communicated via a DPCD register
dedicated for this purpose. Perhaps adding one should be requested from
the TBT architects.

I assume this could also use a code comment.
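
A hypothetical example of the calculation (made-up numbers): with
tunnels A and B in the group, A having allocated 2000 Mb/s and B
1000 Mb/s, and B's DP_ESTIMATED_BW corresponding to 3000 Mb/s (2000
Mb/s actually free plus B's own 1000 Mb/s), calc_group_available_bw(B)
= (2000 + 1000) - 1000 + 3000 = 5000 Mb/s, that is the group's total
allocated + free BW.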

> > +}
> > +
> > +static int update_group_available_bw(struct drm_dp_tunnel *tunnel,
> > +				     const struct drm_dp_tunnel_regs *regs)
> > +{
> > +	struct drm_dp_tunnel *tunnel_iter;
> > +	int group_available_bw;
> > +	bool changed;
> > +
> > +	tunnel->estimated_bw = tunnel_reg(regs, DP_ESTIMATED_BW) * tunnel->bw_granularity;
> > +
> > +	if (calc_group_available_bw(tunnel) == tunnel->group->available_bw)
> > +		return 0;
> > +
> > +	for_each_tunnel_in_group(tunnel->group, tunnel_iter) {
> > +		int err;
> > +
> > +		if (tunnel_iter == tunnel)
> > +			continue;
> > +
> > +		if (check_tunnel(tunnel_iter) != 0 ||
> > +		    !tunnel_iter->bw_alloc_enabled)
> > +			continue;
> > +
> > +		err = drm_dp_dpcd_probe(tunnel_iter->aux, DP_DPCD_REV);
> > +		if (err) {
> > +			tun_dbg(tunnel_iter,
> > +				"Probe failed, assume disconnected (err %pe)\n",
> > +				ERR_PTR(err));
> > +			drm_dp_tunnel_set_io_error(tunnel_iter);
> > +		}
> > +	}
> > +
> > +	group_available_bw = calc_group_available_bw(tunnel);
> > +
> > +	tun_dbg(tunnel, "Updated group available BW: %d->%d\n",
> > +		DPTUN_BW_ARG(tunnel->group->available_bw),
> > +		DPTUN_BW_ARG(group_available_bw));
> > +
> > +	changed = tunnel->group->available_bw != group_available_bw;
> > +
> > +	tunnel->group->available_bw = group_available_bw;
> > +
> > +	return changed ? 1 : 0;
> > +}
> > +
> > +static int set_bw_alloc_mode(struct drm_dp_tunnel *tunnel, bool enable)
> > +{
> > +	u8 mask = DP_DISPLAY_DRIVER_BW_ALLOCATION_MODE_ENABLE | DP_UNMASK_BW_ALLOCATION_IRQ;
> > +	u8 val;
> > +
> > +	if (drm_dp_dpcd_readb(tunnel->aux, DP_DPTX_BW_ALLOCATION_MODE_CONTROL, &val) < 0)
> > +		goto out_err;
> > +
> > +	if (enable)
> > +		val |= mask;
> > +	else
> > +		val &= ~mask;
> > +
> > +	if (drm_dp_dpcd_writeb(tunnel->aux, DP_DPTX_BW_ALLOCATION_MODE_CONTROL, val) < 0)
> > +		goto out_err;
> > +
> > +	tunnel->bw_alloc_enabled = enable;
> > +
> > +	return 0;
> > +
> > +out_err:
> > +	drm_dp_tunnel_set_io_error(tunnel);
> > +
> > +	return -EIO;
> > +}
> > +
> > +/**
> > + * drm_dp_tunnel_enable_bw_alloc: Enable DP tunnel BW allocation mode
> > + * @tunnel: Tunnel object
> > + *
> > + * Enable the DP tunnel BW allocation mode on @tunnel if it supports it.
> > + *
> > + * Returns 0 in case of success, negative error code otherwise.
> > + */
> > +int drm_dp_tunnel_enable_bw_alloc(struct drm_dp_tunnel *tunnel)
> > +{
> > +	struct drm_dp_tunnel_regs regs;
> > +	int err = check_tunnel(tunnel);
> > +
> > +	if (err)
> > +		return err;
> > +
> > +	if (!tunnel->bw_alloc_supported)
> > +		return -EOPNOTSUPP;
> > +
> > +	if (!tunnel_group_id(tunnel->group->drv_group_id))
> > +		return -EINVAL;
> > +
> > +	err = set_bw_alloc_mode(tunnel, true);
> > +	if (err)
> > +		goto out;
> > +
> > +	err = read_and_verify_tunnel_regs(tunnel, &regs, 0);
> > +	if (err) {
> > +		set_bw_alloc_mode(tunnel, false);
> > +
> > +		goto out;
> > +	}
> > +
> > +	if (!tunnel->max_dprx_rate)
> > +		update_dprx_caps(tunnel, &regs);
> > +
> > +	if (tunnel->group->available_bw == -1) {
> > +		err = update_group_available_bw(tunnel, &regs);
> > +		if (err > 0)
> > +			err = 0;
> > +	}
> > +out:
> > +	tun_dbg_stat(tunnel, err,
> > +		     "Enabling BW alloc mode: DPRX:%dx%d Group alloc:%d/%d Mb/s",
> > +		     tunnel->max_dprx_rate / 100, tunnel->max_dprx_lane_count,
> > +		     DPTUN_BW_ARG(group_allocated_bw(tunnel->group)),
> > +		     DPTUN_BW_ARG(tunnel->group->available_bw));
> > +
> > +	return err;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_enable_bw_alloc);
> > +
> > +/**
> > + * drm_dp_tunnel_disable_bw_alloc: Disable DP tunnel BW allocation mode
> > + * @tunnel: Tunnel object
> > + *
> > + * Disable the DP tunnel BW allocation mode on @tunnel.
> > + *
> > + * Returns 0 in case of success, negative error code otherwise.
> > + */
> > +int drm_dp_tunnel_disable_bw_alloc(struct drm_dp_tunnel *tunnel)
> > +{
> > +	int err = check_tunnel(tunnel);
> > +
> > +	if (err)
> > +		return err;
> > +
> > +	err = set_bw_alloc_mode(tunnel, false);
> > +
> > +	tun_dbg_stat(tunnel, err, "Disabling BW alloc mode");
> > +
> > +	return err;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_disable_bw_alloc);
> > +
> > +bool drm_dp_tunnel_bw_alloc_is_enabled(const struct drm_dp_tunnel *tunnel)
> > +{
> > +	return tunnel->bw_alloc_enabled;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_bw_alloc_is_enabled);
> > +
> > +static int bw_req_complete(struct drm_dp_aux *aux, bool *status_changed)
> > +{
> > +	u8 bw_req_mask = DP_BW_REQUEST_SUCCEEDED | DP_BW_REQUEST_FAILED;
> > +	u8 status_change_mask = DP_BW_ALLOCATION_CAPABILITY_CHANGED | DP_ESTIMATED_BW_CHANGED;
> > +	u8 val;
> > +
> > +	if (drm_dp_dpcd_readb(aux, DP_TUNNELING_STATUS, &val) < 0)
> > +		return -EIO;
> > +
> > +	*status_changed = val & status_change_mask;
> > +
> > +	val &= bw_req_mask;
> > +
> > +	if (!val)
> > +		return -EAGAIN;
> > +
> > +	if (drm_dp_dpcd_writeb(aux, DP_TUNNELING_STATUS, val) < 0)
> > +		return -EIO;
> > +
> > +	return val == DP_BW_REQUEST_SUCCEEDED ? 0 : -ENOSPC;
> > +}
> > +
> > +static int allocate_tunnel_bw(struct drm_dp_tunnel *tunnel, int bw)
> > +{
> > +	struct drm_dp_tunnel_mgr *mgr = tunnel->group->mgr;
> > +	int request_bw = DIV_ROUND_UP(bw, tunnel->bw_granularity);
> > +	unsigned long wait_expires;
> > +	DEFINE_WAIT(wait);
> > +	int err;
> > +
> > +	/* Atomic check should prevent the following. */
> > +	if (drm_WARN_ON(mgr->dev, request_bw > MAX_DP_REQUEST_BW)) {
> > +		err = -EINVAL;
> > +		goto out;
> > +	}
> > +
> > +	if (drm_dp_dpcd_writeb(tunnel->aux, DP_REQUEST_BW, request_bw) < 0) {
> > +		err = -EIO;
> > +		goto out;
> > +	}
> > +
> > +	wait_expires = jiffies + msecs_to_jiffies(3000);
> > +
> > +	for (;;) {
> > +		bool status_changed;
> > +
> > +		err = bw_req_complete(tunnel->aux, &status_changed);
> > +		if (err != -EAGAIN)
> > +			break;
> > +
> > +		if (status_changed) {
> > +			struct drm_dp_tunnel_regs regs;
> > +
> > +			err = read_and_verify_tunnel_regs(tunnel, &regs,
> > +							  ALLOW_ALLOCATED_BW_CHANGE);
> > +			if (err)
> > +				break;
> > +		}
> > +
> > +		if (time_after(jiffies, wait_expires)) {
> > +			err = -ETIMEDOUT;
> > +			break;
> > +		}
> > +
> > +		prepare_to_wait(&mgr->bw_req_queue, &wait, TASK_UNINTERRUPTIBLE);
> > +		schedule_timeout(msecs_to_jiffies(200));
> > +	};
> > +
> > +	finish_wait(&mgr->bw_req_queue, &wait);
> > +
> > +	if (err)
> > +		goto out;
> > +
> > +	tunnel->allocated_bw = request_bw * tunnel->bw_granularity;
> > +
> > +out:
> > +	tun_dbg_stat(tunnel, err, "Allocating %d/%d Mb/s for tunnel: Group alloc:%d/%d Mb/s",
> > +		     DPTUN_BW_ARG(request_bw * tunnel->bw_granularity),
> > +		     DPTUN_BW_ARG(get_max_tunnel_bw(tunnel)),
> > +		     DPTUN_BW_ARG(group_allocated_bw(tunnel->group)),
> > +		     DPTUN_BW_ARG(tunnel->group->available_bw));
> > +
> > +	if (err == -EIO)
> > +		drm_dp_tunnel_set_io_error(tunnel);
> > +
> > +	return err;
> > +}
> > +
> > +int drm_dp_tunnel_alloc_bw(struct drm_dp_tunnel *tunnel, int bw)
> > +{
> > +	int err = check_tunnel(tunnel);
> > +
> > +	if (err)
> > +		return err;
> > +
> > +	return allocate_tunnel_bw(tunnel, bw);
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_alloc_bw);
> > +
> > +static int check_and_clear_status_change(struct drm_dp_tunnel *tunnel)
> > +{
> > +	u8 mask = DP_BW_ALLOCATION_CAPABILITY_CHANGED | DP_ESTIMATED_BW_CHANGED;
> > +	u8 val;
> > +
> > +	if (drm_dp_dpcd_readb(tunnel->aux, DP_TUNNELING_STATUS, &val) < 0)
> > +		goto out_err;
> > +
> > +	val &= mask;
> > +
> > +	if (val) {
> > +		if (drm_dp_dpcd_writeb(tunnel->aux, DP_TUNNELING_STATUS, val) < 0)
> > +			goto out_err;
> > +
> > +		return 1;
> > +	}
> > +
> > +	if (!drm_dp_tunnel_bw_alloc_is_enabled(tunnel))
> > +		return 0;
> > +
> > +	/*
> > +	 * Check for estimated BW changes explicitly to account for lost
> > +	 * BW change notifications.
> > +	 */
> > +	if (drm_dp_dpcd_readb(tunnel->aux, DP_ESTIMATED_BW, &val) < 0)
> > +		goto out_err;
> > +
> > +	if (val * tunnel->bw_granularity != tunnel->estimated_bw)
> > +		return 1;
> > +
> > +	return 0;
> > +
> > +out_err:
> > +	drm_dp_tunnel_set_io_error(tunnel);
> > +
> > +	return -EIO;
> > +}
> > +
> > +/**
> > + * drm_dp_tunnel_update_state: Update DP tunnel SW state with the HW state
> > + * @tunnel: Tunnel object
> > + *
> > + * Update the SW state of @tunnel with the HW state.
> > + *
> > + * Returns 0 if the state has not changed, 1 if it has changed and got updated
> > + * successfully and a negative error code otherwise.
> > + */
> > +int drm_dp_tunnel_update_state(struct drm_dp_tunnel *tunnel)
> > +{
> > +	struct drm_dp_tunnel_regs regs;
> > +	bool changed = false;
> > +	int ret = check_tunnel(tunnel);
> > +
> > +	if (ret < 0)
> > +		return ret;
> > +
> > +	ret = check_and_clear_status_change(tunnel);
> > +	if (ret < 0)
> > +		goto out;
> > +
> > +	if (!ret)
> > +		return 0;
> > +
> > +	ret = read_and_verify_tunnel_regs(tunnel, &regs, 0);
> > +	if (ret)
> > +		goto out;
> > +
> > +	if (update_dprx_caps(tunnel, &regs))
> > +		changed = true;
> > +
> > +	ret = update_group_available_bw(tunnel, &regs);
> > +	if (ret == 1)
> > +		changed = true;
> > +
> > +out:
> > +	tun_dbg_stat(tunnel, ret < 0 ? ret : 0,
> > +		     "State update: Changed:%c DPRX:%dx%d Tunnel alloc:%d/%d Group alloc:%d/%d Mb/s",
> > +		     yes_no_chr(changed),
> > +		     tunnel->max_dprx_rate / 100, tunnel->max_dprx_lane_count,
> > +		     DPTUN_BW_ARG(tunnel->allocated_bw),
> > +		     DPTUN_BW_ARG(get_max_tunnel_bw(tunnel)),
> > +		     DPTUN_BW_ARG(group_allocated_bw(tunnel->group)),
> > +		     DPTUN_BW_ARG(tunnel->group->available_bw));
> > +
> > +	if (ret < 0)
> > +		return ret;
> > +
> > +	if (changed)
> > +		return 1;
> > +
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_update_state);
> > +
> > +/*
> > + * Returns 0 if no re-probe is needed, 1 if a re-probe is needed,
> > + * a negative error code otherwise.
> > + */
> > +int drm_dp_tunnel_handle_irq(struct drm_dp_tunnel_mgr *mgr, struct drm_dp_aux *aux)
> > +{
> > +	u8 val;
> > +
> > +	if (drm_dp_dpcd_readb(aux, DP_TUNNELING_STATUS, &val) < 0)
> > +		return -EIO;
> > +
> > +	if (val & (DP_BW_REQUEST_SUCCEEDED | DP_BW_REQUEST_FAILED))
> > +		wake_up_all(&mgr->bw_req_queue);
> > +
> > +	if (val & (DP_BW_ALLOCATION_CAPABILITY_CHANGED | DP_ESTIMATED_BW_CHANGED))
> > +		return 1;
> > +
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_handle_irq);
> > +
> > +/**
> > + * drm_dp_tunnel_max_dprx_rate - Query the maximum rate of the tunnel's DPRX
> > + * @tunnel: Tunnel object
> > + *
> > + * The function is used to query the maximum link rate of the DPRX connected
> > + * to @tunnel. Note that this rate will not be limited by the BW limit of the
> > + * tunnel, as opposed to the standard and extended DP_MAX_LINK_RATE DPCD
> > + * registers.
> > + *
> > + * Returns the maximum link rate in 10 kbit/s units.
> > + */
> > +int drm_dp_tunnel_max_dprx_rate(const struct drm_dp_tunnel *tunnel)
> > +{
> > +	return tunnel->max_dprx_rate;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_max_dprx_rate);
> > +
> > +/**
> > + * drm_dp_tunnel_max_dprx_lane_count - Query the maximum lane count of the tunnel's DPRX
> > + * @tunnel: Tunnel object
> > + *
> > + * The function is used to query the maximum lane count of the DPRX connected
> > + * to @tunnel. Note that this lane count will not be limited by the BW limit of
> > + * the tunnel, as opposed to the standard and extended DP_MAX_LANE_COUNT DPCD
> > + * registers.
> > + *
> > + * Returns the maximum lane count.
> > + */
> > +int drm_dp_tunnel_max_dprx_lane_count(const struct drm_dp_tunnel *tunnel)
> > +{
> > +	return tunnel->max_dprx_lane_count;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_max_dprx_lane_count);
> > +
> > +/**
> > + * drm_dp_tunnel_available_bw - Query the estimated total available BW of the tunnel
> > + * @tunnel: Tunnel object
> > + *
> > + * This function is used to query the estimated total available BW of the
> > + * tunnel. This includes the currently allocated and free BW for all the
> > + * tunnels in @tunnel's group. The available BW is valid only after the BW
> > + * allocation mode has been enabled for the tunnel and its state got updated
> > + * calling drm_dp_tunnel_update_state().
> > + *
> > + * Returns the @tunnel group's estimated total available bandwidth in kB/s
> > + * units, or -1 if the available BW isn't valid (the BW allocation mode is
> > + * not enabled or the tunnel's state hasn't been updated).
> > + */
> > +int drm_dp_tunnel_available_bw(const struct drm_dp_tunnel *tunnel)
> > +{
> > +	return tunnel->group->available_bw;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_available_bw);
> > +
> > +static struct drm_dp_tunnel_group_state *
> > +drm_dp_tunnel_atomic_get_group_state(struct drm_atomic_state *state,
> > +				     const struct drm_dp_tunnel *tunnel)
> > +{
> > +	return (struct drm_dp_tunnel_group_state *)
> > +		drm_atomic_get_private_obj_state(state,
> > +						 &tunnel->group->base);
> > +}
> > +
> > +static struct drm_dp_tunnel_state *
> > +add_tunnel_state(struct drm_dp_tunnel_group_state *group_state,
> > +		 struct drm_dp_tunnel *tunnel)
> > +{
> > +	struct drm_dp_tunnel_state *tunnel_state;
> > +
> > +	tun_dbg_atomic(tunnel,
> > +		       "Adding state for tunnel %p to group state %p\n",
> > +		       tunnel, group_state);
> > +
> > +	tunnel_state = kzalloc(sizeof(*tunnel_state), GFP_KERNEL);
> > +	if (!tunnel_state)
> > +		return NULL;
> > +
> > +	tunnel_state->group_state = group_state;
> > +
> > +	drm_dp_tunnel_ref_get(tunnel, &tunnel_state->tunnel_ref);
> > +
> > +	INIT_LIST_HEAD(&tunnel_state->node);
> > +	list_add(&tunnel_state->node, &group_state->tunnel_states);
> > +
> > +	return tunnel_state;
> > +}
> > +
> > +void drm_dp_tunnel_atomic_clear_state(struct drm_dp_tunnel_state *tunnel_state)
> > +{
> > +	tun_dbg_atomic(tunnel_state->tunnel_ref.tunnel,
> > +		       "Clearing state for tunnel %p\n",
> > +		       tunnel_state->tunnel_ref.tunnel);
> > +
> > +	list_del(&tunnel_state->node);
> > +
> > +	kfree(tunnel_state->stream_bw);
> > +	drm_dp_tunnel_ref_put(&tunnel_state->tunnel_ref);
> > +
> > +	kfree(tunnel_state);
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_atomic_clear_state);
> > +
> > +static void clear_tunnel_group_state(struct drm_dp_tunnel_group_state *group_state)
> > +{
> > +	struct drm_dp_tunnel_state *tunnel_state;
> > +	struct drm_dp_tunnel_state *tunnel_state_tmp;
> > +
> > +	for_each_tunnel_state_safe(group_state, tunnel_state, tunnel_state_tmp)
> > +		drm_dp_tunnel_atomic_clear_state(tunnel_state);
> > +}
> > +
> > +static struct drm_dp_tunnel_state *
> > +get_tunnel_state(struct drm_dp_tunnel_group_state *group_state,
> > +		 const struct drm_dp_tunnel *tunnel)
> > +{
> > +	struct drm_dp_tunnel_state *tunnel_state;
> > +
> > +	for_each_tunnel_state(group_state, tunnel_state)
> > +		if (tunnel_state->tunnel_ref.tunnel == tunnel)
> > +			return tunnel_state;
> > +
> > +	return NULL;
> > +}
> > +
> > +static struct drm_dp_tunnel_state *
> > +get_or_add_tunnel_state(struct drm_dp_tunnel_group_state *group_state,
> > +			struct drm_dp_tunnel *tunnel)
> > +{
> > +	struct drm_dp_tunnel_state *tunnel_state;
> > +
> > +	tunnel_state = get_tunnel_state(group_state, tunnel);
> > +	if (tunnel_state)
> > +		return tunnel_state;
> > +
> > +	return add_tunnel_state(group_state, tunnel);
> > +}
> > +
> > +static struct drm_private_state *
> > +tunnel_group_duplicate_state(struct drm_private_obj *obj)
> > +{
> > +	struct drm_dp_tunnel_group_state *group_state = to_group_state(obj->state);
> > +	struct drm_dp_tunnel_state *tunnel_state;
> > +
> > +	group_state = kzalloc(sizeof(*group_state), GFP_KERNEL);
> > +	if (!group_state)
> > +		return NULL;
> > +
> > +	INIT_LIST_HEAD(&group_state->tunnel_states);
> > +
> > +	__drm_atomic_helper_private_obj_duplicate_state(obj, &group_state->base);
> > +
> > +	for_each_tunnel_state(to_group_state(obj->state), tunnel_state) {
> > +		struct drm_dp_tunnel_state *new_tunnel_state;
> > +
> > +		new_tunnel_state = get_or_add_tunnel_state(group_state,
> > +							   tunnel_state->tunnel_ref.tunnel);
> > +		if (!new_tunnel_state)
> > +			goto out_free_state;
> > +
> > +		new_tunnel_state->stream_mask = tunnel_state->stream_mask;
> > +		new_tunnel_state->stream_bw = kmemdup(tunnel_state->stream_bw,
> > +						      sizeof(*tunnel_state->stream_bw) *
> > +							hweight32(tunnel_state->stream_mask),
> > +						      GFP_KERNEL);
> > +
> > +		if (!new_tunnel_state->stream_bw)
> > +			goto out_free_state;
> > +	}
> > +
> > +	return &group_state->base;
> > +
> > +out_free_state:
> > +	clear_tunnel_group_state(group_state);
> > +	kfree(group_state);
> > +
> > +	return NULL;
> > +}
> > +
> > +static void tunnel_group_destroy_state(struct drm_private_obj *obj, struct drm_private_state *state)
> > +{
> > +	struct drm_dp_tunnel_group_state *group_state = to_group_state(state);
> > +
> > +	clear_tunnel_group_state(group_state);
> > +	kfree(group_state);
> > +}
> > +
> > +static const struct drm_private_state_funcs tunnel_group_funcs = {
> > +	.atomic_duplicate_state = tunnel_group_duplicate_state,
> > +	.atomic_destroy_state = tunnel_group_destroy_state,
> > +};
> > +
> > +struct drm_dp_tunnel_state *
> > +drm_dp_tunnel_atomic_get_state(struct drm_atomic_state *state,
> > +			       struct drm_dp_tunnel *tunnel)
> > +{
> > +	struct drm_dp_tunnel_group_state *group_state =
> > +		drm_dp_tunnel_atomic_get_group_state(state, tunnel);
> > +	struct drm_dp_tunnel_state *tunnel_state;
> > +
> > +	if (IS_ERR(group_state))
> > +		return ERR_CAST(group_state);
> > +
> > +	tunnel_state = get_or_add_tunnel_state(group_state, tunnel);
> > +	if (!tunnel_state)
> > +		return ERR_PTR(-ENOMEM);
> > +
> > +	return tunnel_state;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_state);
> > +
> > +struct drm_dp_tunnel_state *
> > +drm_dp_tunnel_atomic_get_new_state(struct drm_atomic_state *state,
> > +				   const struct drm_dp_tunnel *tunnel)
> > +{
> > +	struct drm_dp_tunnel_group_state *new_group_state;
> > +	int i;
> > +
> > +	for_each_new_group_in_state(state, new_group_state, i)
> > +		if (to_group(new_group_state->base.obj) == tunnel->group)
> > +			return get_tunnel_state(new_group_state, tunnel);
> > +
> > +	return NULL;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_new_state);
> > +
> > +static bool init_group(struct drm_dp_tunnel_mgr *mgr, struct drm_dp_tunnel_group *group)
> > +{
> > +	struct drm_dp_tunnel_group_state *group_state = kzalloc(sizeof(*group_state), GFP_KERNEL);
> > +
> > +	if (!group_state)
> > +		return false;
> > +
> > +	INIT_LIST_HEAD(&group_state->tunnel_states);
> > +
> > +	group->mgr = mgr;
> > +	group->available_bw = -1;
> > +	INIT_LIST_HEAD(&group->tunnels);
> > +
> > +	drm_atomic_private_obj_init(mgr->dev, &group->base, &group_state->base,
> > +				    &tunnel_group_funcs);
> > +
> > +	return true;
> > +}
> > +
> > +static void cleanup_group(struct drm_dp_tunnel_group *group)
> > +{
> > +	drm_atomic_private_obj_fini(&group->base);
> > +}
> > +
> > +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> > +static void check_unique_stream_ids(const struct drm_dp_tunnel_group_state *group_state)
> > +{
> > +	const struct drm_dp_tunnel_state *tunnel_state;
> > +	u32 stream_mask = 0;
> > +
> > +	for_each_tunnel_state(group_state, tunnel_state) {
> > +		drm_WARN(to_group(group_state->base.obj)->mgr->dev,
> > +			 tunnel_state->stream_mask & stream_mask,
> > +			 "[DPTUN %s]: conflicting stream IDs %x (IDs in other tunnels %x)\n",
> > +			 tunnel_state->tunnel_ref.tunnel->name,
> > +			 tunnel_state->stream_mask,
> > +			 stream_mask);
> > +
> > +		stream_mask |= tunnel_state->stream_mask;
> > +	}
> > +}
> > +#else
> > +static void check_unique_stream_ids(const struct drm_dp_tunnel_group_state *group_state)
> > +{
> > +}
> > +#endif
> > +
> > +static int stream_id_to_idx(u32 stream_mask, u8 stream_id)
> > +{
> > +	return hweight32(stream_mask & (BIT(stream_id) - 1));
> > +}
> > +
> > +static int resize_bw_array(struct drm_dp_tunnel_state *tunnel_state,
> > +			   unsigned long old_mask, unsigned long new_mask)
> > +{
> > +	unsigned long move_mask = old_mask & new_mask;
> > +	int *new_bws = NULL;
> > +	int id;
> > +
> > +	WARN_ON(!new_mask);
> > +
> > +	if (old_mask == new_mask)
> > +		return 0;
> > +
> > +	new_bws = kcalloc(hweight32(new_mask), sizeof(*new_bws), GFP_KERNEL);
> > +	if (!new_bws)
> > +		return -ENOMEM;
> > +
> > +	for_each_set_bit(id, &move_mask, BITS_PER_TYPE(move_mask))
> > +		new_bws[stream_id_to_idx(new_mask, id)] =
> > +			tunnel_state->stream_bw[stream_id_to_idx(old_mask, id)];
> > +
> > +	kfree(tunnel_state->stream_bw);
> > +	tunnel_state->stream_bw = new_bws;
> > +	tunnel_state->stream_mask = new_mask;
> > +
> > +	return 0;
> > +}
> > +
> > +static int set_stream_bw(struct drm_dp_tunnel_state *tunnel_state,
> > +			 u8 stream_id, int bw)
> > +{
> > +	int err;
> > +
> > +	err = resize_bw_array(tunnel_state,
> > +			      tunnel_state->stream_mask,
> > +			      tunnel_state->stream_mask | BIT(stream_id));
> > +	if (err)
> > +		return err;
> > +
> > +	tunnel_state->stream_bw[stream_id_to_idx(tunnel_state->stream_mask, stream_id)] = bw;
> > +
> > +	return 0;
> > +}
> > +
> > +static int clear_stream_bw(struct drm_dp_tunnel_state *tunnel_state,
> > +			   u8 stream_id)
> > +{
> > +	if (!(tunnel_state->stream_mask & ~BIT(stream_id))) {
> > +		drm_dp_tunnel_atomic_clear_state(tunnel_state);
> > +		return 0;
> > +	}
> > +
> > +	return resize_bw_array(tunnel_state,
> > +			       tunnel_state->stream_mask,
> > +			       tunnel_state->stream_mask & ~BIT(stream_id));
> > +}
> > +
> > +int drm_dp_tunnel_atomic_set_stream_bw(struct drm_atomic_state *state,
> > +					 struct drm_dp_tunnel *tunnel,
> > +					 u8 stream_id, int bw)
> > +{
> > +	struct drm_dp_tunnel_group_state *new_group_state =
> > +		drm_dp_tunnel_atomic_get_group_state(state, tunnel);
> > +	struct drm_dp_tunnel_state *tunnel_state;
> > +	int err;
> > +
> > +	if (drm_WARN_ON(tunnel->group->mgr->dev,
> > +			stream_id > BITS_PER_TYPE(tunnel_state->stream_mask)))
> > +		return -EINVAL;
> > +
> > +	tun_dbg(tunnel,
> > +		"Setting %d Mb/s for stream %d\n",
> > +		DPTUN_BW_ARG(bw), stream_id);
> > +
> > +	if (bw == 0) {
> > +		tunnel_state = get_tunnel_state(new_group_state, tunnel);
> > +		if (!tunnel_state)
> > +			return 0;
> > +
> > +		return clear_stream_bw(tunnel_state, stream_id);
> > +	}
> > +
> > +	tunnel_state = get_or_add_tunnel_state(new_group_state, tunnel);
> > +	if (drm_WARN_ON(state->dev, !tunnel_state))
> > +		return -EINVAL;
> > +
> > +	err = set_stream_bw(tunnel_state, stream_id, bw);
> > +	if (err)
> > +		return err;
> > +
> > +	check_unique_stream_ids(new_group_state);
> > +
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_atomic_set_stream_bw);
> > +
> > +int drm_dp_tunnel_atomic_get_tunnel_bw(const struct drm_dp_tunnel_state *tunnel_state)
> > +{
> > +	int tunnel_bw = 0;
> > +	int i;
> > +
> > +	for (i = 0; i < hweight32(tunnel_state->stream_mask); i++)
> > +		tunnel_bw += tunnel_state->stream_bw[i];
> > +
> > +	return tunnel_bw;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_tunnel_bw);
> > +
> > +int drm_dp_tunnel_atomic_get_group_streams_in_state(struct drm_atomic_state *state,
> > +						    const struct drm_dp_tunnel *tunnel,
> > +						    u32 *stream_mask)
> > +{
> > +	struct drm_dp_tunnel_group_state *group_state =
> > +		drm_dp_tunnel_atomic_get_group_state(state, tunnel);
> > +	struct drm_dp_tunnel_state *tunnel_state;
> > +
> > +	if (IS_ERR(group_state))
> > +		return PTR_ERR(group_state);
> > +
> > +	*stream_mask = 0;
> > +	for_each_tunnel_state(group_state, tunnel_state)
> > +		*stream_mask |= tunnel_state->stream_mask;
> > +
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_group_streams_in_state);
> > +
> > +static int
> > +drm_dp_tunnel_atomic_check_group_bw(struct drm_dp_tunnel_group_state *new_group_state,
> > +				    u32 *failed_stream_mask)
> > +{
> > +	struct drm_dp_tunnel_group *group = to_group(new_group_state->base.obj);
> > +	struct drm_dp_tunnel_state *new_tunnel_state;
> > +	u32 group_stream_mask = 0;
> > +	int group_bw = 0;
> > +
> > +	for_each_tunnel_state(new_group_state, new_tunnel_state) {
> > +		struct drm_dp_tunnel *tunnel = new_tunnel_state->tunnel_ref.tunnel;
> > +		int max_dprx_bw = get_max_dprx_bw(tunnel);
> > +		int tunnel_bw = drm_dp_tunnel_atomic_get_tunnel_bw(new_tunnel_state);
> > +
> > +		tun_dbg(tunnel,
> > +			"%sRequired %d/%d Mb/s total for tunnel.\n",
> > +			tunnel_bw > max_dprx_bw ? "Not enough BW: " : "",
> > +			DPTUN_BW_ARG(tunnel_bw),
> > +			DPTUN_BW_ARG(max_dprx_bw));
> > +
> > +		if (tunnel_bw > max_dprx_bw) {
> 
> I'm a bit confused why we're checking this here. Aren't we already
> checking this somewhere else?

Ah, yes, this should already be checked by the encoder compute config +
the MST link BW check. It can be removed, thanks.

> > +			*failed_stream_mask = new_tunnel_state->stream_mask;
> > +			return -ENOSPC;
> > +		}
> > +
> > +		group_bw += min(roundup(tunnel_bw, tunnel->bw_granularity),
> > +				max_dprx_bw);
> > +		group_stream_mask |= new_tunnel_state->stream_mask;
> > +	}
> > +
> > +	tun_grp_dbg(group,
> > +		    "%sRequired %d/%d Mb/s total for tunnel group.\n",
> > +		    group_bw > group->available_bw ? "Not enough BW: " : "",
> > +		    DPTUN_BW_ARG(group_bw),
> > +		    DPTUN_BW_ARG(group->available_bw));
> > +
> > +	if (group_bw > group->available_bw) {
> > +		*failed_stream_mask = group_stream_mask;
> > +		return -ENOSPC;
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +int drm_dp_tunnel_atomic_check_stream_bws(struct drm_atomic_state *state,
> > +					  u32 *failed_stream_mask)
> > +{
> > +	struct drm_dp_tunnel_group_state *new_group_state;
> > +	int i;
> > +
> > +	for_each_new_group_in_state(state, new_group_state, i) {
> > +		int ret;
> > +
> > +		ret = drm_dp_tunnel_atomic_check_group_bw(new_group_state,
> > +							  failed_stream_mask);
> > +		if (ret)
> > +			return ret;
> > +	}
> > +
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_atomic_check_stream_bws);
> > +
> > +static void destroy_mgr(struct drm_dp_tunnel_mgr *mgr)
> > +{
> > +	int i;
> > +
> > +	for (i = 0; i < mgr->group_count; i++) {
> > +		cleanup_group(&mgr->groups[i]);
> > +		drm_WARN_ON(mgr->dev, !list_empty(&mgr->groups[i].tunnels));
> > +	}
> > +
> > +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> > +	ref_tracker_dir_exit(&mgr->ref_tracker);
> > +#endif
> > +
> > +	kfree(mgr->groups);
> > +	kfree(mgr);
> > +}
> > +
> > +/**
> > + * drm_dp_tunnel_mgr_create - Create a DP tunnel manager
> > + * @i915: i915 driver object
> > + *
> > + * Creates a DP tunnel manager.
> > + *
> > + * Returns a pointer to the tunnel manager if created successfully or NULL in
> > + * case of an error.
> > + */
> > +struct drm_dp_tunnel_mgr *
> > +drm_dp_tunnel_mgr_create(struct drm_device *dev, int max_group_count)
> > +{
> > +	struct drm_dp_tunnel_mgr *mgr = kzalloc(sizeof(*mgr), GFP_KERNEL);
> > +	int i;
> > +
> > +	if (!mgr)
> > +		return NULL;
> > +
> > +	mgr->dev = dev;
> > +	init_waitqueue_head(&mgr->bw_req_queue);
> > +
> > +	mgr->groups = kcalloc(max_group_count, sizeof(*mgr->groups), GFP_KERNEL);
> > +	if (!mgr->groups) {
> > +		kfree(mgr);
> > +
> > +		return NULL;
> > +	}
> > +
> > +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> > +	ref_tracker_dir_init(&mgr->ref_tracker, 16, "dptun");
> > +#endif
> > +
> > +	for (i = 0; i < max_group_count; i++) {
> > +		if (!init_group(mgr, &mgr->groups[i])) {
> > +			destroy_mgr(mgr);
> > +
> > +			return NULL;
> > +		}
> > +
> > +		mgr->group_count++;
> > +	}
> > +
> > +	return mgr;
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_mgr_create);
> > +
> > +/**
> > + * drm_dp_tunnel_mgr_destroy - Destroy DP tunnel manager
> > + * @mgr: Tunnel manager object
> > + *
> > + * Destroy the tunnel manager.
> > + */
> > +void drm_dp_tunnel_mgr_destroy(struct drm_dp_tunnel_mgr *mgr)
> > +{
> > +	destroy_mgr(mgr);
> > +}
> > +EXPORT_SYMBOL(drm_dp_tunnel_mgr_destroy);
> > diff --git a/include/drm/display/drm_dp.h b/include/drm/display/drm_dp.h
> > index 281afff6ee4e5..8bfd5d007be8d 100644
> > --- a/include/drm/display/drm_dp.h
> > +++ b/include/drm/display/drm_dp.h
> > @@ -1382,6 +1382,66 @@
> >  #define DP_HDCP_2_2_REG_STREAM_TYPE_OFFSET	0x69494
> >  #define DP_HDCP_2_2_REG_DBG_OFFSET		0x69518
> >  
> > +/* DP-tunneling */
> > +#define DP_TUNNELING_OUI				0xe0000
> > +#define  DP_TUNNELING_OUI_BYTES				3
> > +
> > +#define DP_TUNNELING_DEV_ID				0xe0003
> > +#define  DP_TUNNELING_DEV_ID_BYTES			6
> > +
> > +#define DP_TUNNELING_HW_REV				0xe0009
> > +#define  DP_TUNNELING_HW_REV_MAJOR_SHIFT		4
> > +#define  DP_TUNNELING_HW_REV_MAJOR_MASK			(0xf << DP_TUNNELING_HW_REV_MAJOR_SHIFT)
> > +#define  DP_TUNNELING_HW_REV_MINOR_SHIFT		0
> > +#define  DP_TUNNELING_HW_REV_MINOR_MASK			(0xf << DP_TUNNELING_HW_REV_MINOR_SHIFT)
> > +
> > +#define DP_TUNNELING_SW_REV_MAJOR			0xe000a
> > +#define DP_TUNNELING_SW_REV_MINOR			0xe000b
> > +
> > +#define DP_TUNNELING_CAPABILITIES			0xe000d
> > +#define  DP_IN_BW_ALLOCATION_MODE_SUPPORT		(1 << 7)
> > +#define  DP_PANEL_REPLAY_OPTIMIZATION_SUPPORT		(1 << 6)
> > +#define  DP_TUNNELING_SUPPORT				(1 << 0)
> > +
> > +#define DP_IN_ADAPTER_INFO				0xe000e
> > +#define  DP_IN_ADAPTER_NUMBER_BITS			7
> > +#define  DP_IN_ADAPTER_NUMBER_MASK			((1 << DP_IN_ADAPTER_NUMBER_BITS) - 1)
> > +
> > +#define DP_USB4_DRIVER_ID				0xe000f
> > +#define  DP_USB4_DRIVER_ID_BITS				4
> > +#define  DP_USB4_DRIVER_ID_MASK				((1 << DP_USB4_DRIVER_ID_BITS) - 1)
> > +
> > +#define DP_USB4_DRIVER_BW_CAPABILITY			0xe0020
> > +#define  DP_USB4_DRIVER_BW_ALLOCATION_MODE_SUPPORT	(1 << 7)
> > +
> > +#define DP_IN_ADAPTER_TUNNEL_INFORMATION		0xe0021
> > +#define  DP_GROUP_ID_BITS				3
> > +#define  DP_GROUP_ID_MASK				((1 << DP_GROUP_ID_BITS) - 1)
> > +
> > +#define DP_BW_GRANULARITY				0xe0022
> > +#define  DP_BW_GRANULARITY_MASK				0x3
> > +
> > +#define DP_ESTIMATED_BW					0xe0023
> > +#define DP_ALLOCATED_BW					0xe0024
> > +
> > +#define DP_TUNNELING_STATUS				0xe0025
> > +#define  DP_BW_ALLOCATION_CAPABILITY_CHANGED		(1 << 3)
> > +#define  DP_ESTIMATED_BW_CHANGED			(1 << 2)
> > +#define  DP_BW_REQUEST_SUCCEEDED			(1 << 1)
> > +#define  DP_BW_REQUEST_FAILED				(1 << 0)
> > +
> > +#define DP_TUNNELING_MAX_LINK_RATE			0xe0028
> > +
> > +#define DP_TUNNELING_MAX_LANE_COUNT			0xe0029
> > +#define  DP_TUNNELING_MAX_LANE_COUNT_MASK		0x1f
> > +
> > +#define DP_DPTX_BW_ALLOCATION_MODE_CONTROL		0xe0030
> > +#define  DP_DISPLAY_DRIVER_BW_ALLOCATION_MODE_ENABLE	(1 << 7)
> > +#define  DP_UNMASK_BW_ALLOCATION_IRQ			(1 << 6)
> > +
> > +#define DP_REQUEST_BW					0xe0031
> > +#define  MAX_DP_REQUEST_BW				255
> > +
> >  /* LTTPR: Link Training (LT)-tunable PHY Repeaters */
> >  #define DP_LT_TUNABLE_PHY_REPEATER_FIELD_DATA_STRUCTURE_REV 0xf0000 /* 1.3 */
> >  #define DP_MAX_LINK_RATE_PHY_REPEATER			    0xf0001 /* 1.4a */
> > diff --git a/include/drm/display/drm_dp_tunnel.h b/include/drm/display/drm_dp_tunnel.h
> > new file mode 100644
> > index 0000000000000..f6449b1b4e6e9
> > --- /dev/null
> > +++ b/include/drm/display/drm_dp_tunnel.h
> > @@ -0,0 +1,270 @@
> > +/* SPDX-License-Identifier: MIT */
> > +/*
> > + * Copyright © 2023 Intel Corporation
> > + */
> > +
> > +#ifndef __DRM_DP_TUNNEL_H__
> > +#define __DRM_DP_TUNNEL_H__
> > +
> > +#include <linux/err.h>
> > +#include <linux/errno.h>
> > +#include <linux/types.h>
> > +
> > +struct drm_dp_aux;
> > +
> > +struct drm_device;
> > +
> > +struct drm_atomic_state;
> > +struct drm_dp_tunnel_mgr;
> > +struct drm_dp_tunnel_state;
> > +
> > +struct ref_tracker;
> > +
> > +struct drm_dp_tunnel_ref {
> > +	struct drm_dp_tunnel *tunnel;
> > +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> > +	struct ref_tracker *tracker;
> > +#endif
> > +};
> > +
> > +#ifdef CONFIG_DRM_DISPLAY_DP_TUNNEL
> > +
> > +struct drm_dp_tunnel *
> > +drm_dp_tunnel_get_untracked(struct drm_dp_tunnel *tunnel);
> > +void drm_dp_tunnel_put_untracked(struct drm_dp_tunnel *tunnel);
> > +
> > +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> > +struct drm_dp_tunnel *
> > +drm_dp_tunnel_get(struct drm_dp_tunnel *tunnel, struct ref_tracker **tracker);
> > +
> > +void
> > +drm_dp_tunnel_put(struct drm_dp_tunnel *tunnel, struct ref_tracker **tracker);
> > +#else
> > +#define drm_dp_tunnel_get(tunnel, tracker) \
> > +	drm_dp_tunnel_get_untracked(tunnel)
> > +
> > +#define drm_dp_tunnel_put(tunnel, tracker) \
> > +	drm_dp_tunnel_put_untracked(tunnel)
> > +
> > +#endif
> > +
> > +static inline void drm_dp_tunnel_ref_get(struct drm_dp_tunnel *tunnel,
> > +					   struct drm_dp_tunnel_ref *tunnel_ref)
> > +{
> > +	tunnel_ref->tunnel = drm_dp_tunnel_get(tunnel, &tunnel_ref->tracker);
> > +}
> > +
> > +static inline void drm_dp_tunnel_ref_put(struct drm_dp_tunnel_ref *tunnel_ref)
> > +{
> > +	drm_dp_tunnel_put(tunnel_ref->tunnel, &tunnel_ref->tracker);
> > +}
> > +
> > +struct drm_dp_tunnel *
> > +drm_dp_tunnel_detect(struct drm_dp_tunnel_mgr *mgr,
> > +		     struct drm_dp_aux *aux);
> > +int drm_dp_tunnel_destroy(struct drm_dp_tunnel *tunnel);
> > +
> > +int drm_dp_tunnel_enable_bw_alloc(struct drm_dp_tunnel *tunnel);
> > +int drm_dp_tunnel_disable_bw_alloc(struct drm_dp_tunnel *tunnel);
> > +bool drm_dp_tunnel_bw_alloc_is_enabled(const struct drm_dp_tunnel *tunnel);
> > +int drm_dp_tunnel_alloc_bw(struct drm_dp_tunnel *tunnel, int bw);
> > +int drm_dp_tunnel_check_state(struct drm_dp_tunnel *tunnel);
> > +int drm_dp_tunnel_update_state(struct drm_dp_tunnel *tunnel);
> > +
> > +void drm_dp_tunnel_set_io_error(struct drm_dp_tunnel *tunnel);
> > +
> > +int drm_dp_tunnel_handle_irq(struct drm_dp_tunnel_mgr *mgr,
> > +			     struct drm_dp_aux *aux);
> > +
> > +int drm_dp_tunnel_max_dprx_rate(const struct drm_dp_tunnel *tunnel);
> > +int drm_dp_tunnel_max_dprx_lane_count(const struct drm_dp_tunnel *tunnel);
> > +int drm_dp_tunnel_available_bw(const struct drm_dp_tunnel *tunnel);
> > +
> > +const char *drm_dp_tunnel_name(const struct drm_dp_tunnel *tunnel);
> > +
> > +struct drm_dp_tunnel_state *
> > +drm_dp_tunnel_atomic_get_state(struct drm_atomic_state *state,
> > +			       struct drm_dp_tunnel *tunnel);
> > +struct drm_dp_tunnel_state *
> > +drm_dp_tunnel_atomic_get_new_state(struct drm_atomic_state *state,
> > +				   const struct drm_dp_tunnel *tunnel);
> > +
> > +void drm_dp_tunnel_atomic_clear_state(struct drm_dp_tunnel_state *tunnel_state);
> > +
> > +int drm_dp_tunnel_atomic_set_stream_bw(struct drm_atomic_state *state,
> > +				       struct drm_dp_tunnel *tunnel,
> > +				       u8 stream_id, int bw);
> > +int drm_dp_tunnel_atomic_get_group_streams_in_state(struct drm_atomic_state *state,
> > +						    const struct drm_dp_tunnel *tunnel,
> > +						    u32 *stream_mask);
> > +
> > +int drm_dp_tunnel_atomic_check_stream_bws(struct drm_atomic_state *state,
> > +					  u32 *failed_stream_mask);
> > +
> > +int drm_dp_tunnel_atomic_get_tunnel_bw(const struct drm_dp_tunnel_state *tunnel_state);
> > +
> > +struct drm_dp_tunnel_mgr *
> > +drm_dp_tunnel_mgr_create(struct drm_device *dev, int max_group_count);
> > +void drm_dp_tunnel_mgr_destroy(struct drm_dp_tunnel_mgr *mgr);
> > +
> > +#else
> > +
> > +static inline struct drm_dp_tunnel *
> > +drm_dp_tunnel_get_untracked(struct drm_dp_tunnel *tunnel)
> > +{
> > +	return NULL;
> > +}
> > +
> > +static inline void
> > +drm_dp_tunnel_put_untracked(struct drm_dp_tunnel *tunnel) {}
> > +
> > +static inline struct drm_dp_tunnel *
> > +drm_dp_tunnel_get(struct drm_dp_tunnel *tunnel, struct ref_tracker **tracker)
> > +{
> > +	return NULL;
> > +}
> > +
> > +static inline void
> > +drm_dp_tunnel_put(struct drm_dp_tunnel *tunnel, struct ref_tracker **tracker) {}
> > +
> > +static inline void drm_dp_tunnel_ref_get(struct drm_dp_tunnel *tunnel,
> > +					   struct drm_dp_tunnel_ref *tunnel_ref) {}
> > +static inline void drm_dp_tunnel_ref_put(struct drm_dp_tunnel_ref *tunnel_ref) {}
> > +
> > +static inline struct drm_dp_tunnel *
> > +drm_dp_tunnel_detect(struct drm_dp_tunnel_mgr *mgr,
> > +		     struct drm_dp_aux *aux)
> > +{
> > +	return ERR_PTR(-EOPNOTSUPP);
> > +}
> > +
> > +static inline int
> > +drm_dp_tunnel_destroy(struct drm_dp_tunnel *tunnel)
> > +{
> > +	return 0;
> > +}
> > +
> > +static inline int drm_dp_tunnel_enable_bw_alloc(struct drm_dp_tunnel *tunnel)
> > +{
> > +	return -EOPNOTSUPP;
> > +}
> > +
> > +static inline int drm_dp_tunnel_disable_bw_alloc(struct drm_dp_tunnel *tunnel)
> > +{
> > +	return -EOPNOTSUPP;
> > +}
> > +
> > +static inline bool drm_dp_tunnel_bw_alloc_is_enabled(const struct drm_dp_tunnel *tunnel)
> > +{
> > +	return false;
> > +}
> > +
> > +static inline int
> > +drm_dp_tunnel_alloc_bw(struct drm_dp_tunnel *tunnel, int bw)
> > +{
> > +	return -EOPNOTSUPP;
> > +}
> > +
> > +static inline int
> > +drm_dp_tunnel_check_state(struct drm_dp_tunnel *tunnel)
> > +{
> > +	return -EOPNOTSUPP;
> > +}
> > +
> > +static inline int
> > +drm_dp_tunnel_update_state(struct drm_dp_tunnel *tunnel)
> > +{
> > +	return -EOPNOTSUPP;
> > +}
> > +
> > +static inline void drm_dp_tunnel_set_io_error(struct drm_dp_tunnel *tunnel) {}
> > +static inline int
> > +drm_dp_tunnel_handle_irq(struct drm_dp_tunnel_mgr *mgr,
> > +			 struct drm_dp_aux *aux)
> > +{
> > +	return -EOPNOTSUPP;
> > +}
> > +
> > +static inline int
> > +drm_dp_tunnel_max_dprx_rate(const struct drm_dp_tunnel *tunnel)
> > +{
> > +	return 0;
> > +}
> > +
> > +static inline int
> > +drm_dp_tunnel_max_dprx_lane_count(const struct drm_dp_tunnel *tunnel)
> > +{
> > +	return 0;
> > +}
> > +
> > +static inline int
> > +drm_dp_tunnel_available_bw(const struct drm_dp_tunnel *tunnel)
> > +{
> > +	return -1;
> > +}
> > +
> > +static inline const char *
> > +drm_dp_tunnel_name(const struct drm_dp_tunnel *tunnel)
> > +{
> > +	return NULL;
> > +}
> > +
> > +static inline struct drm_dp_tunnel_state *
> > +drm_dp_tunnel_atomic_get_state(struct drm_atomic_state *state,
> > +			       struct drm_dp_tunnel *tunnel)
> > +{
> > +	return ERR_PTR(-EOPNOTSUPP);
> > +}
> > +
> > +static inline struct drm_dp_tunnel_state *
> > +drm_dp_tunnel_atomic_get_new_state(struct drm_atomic_state *state,
> > +				   const struct drm_dp_tunnel *tunnel)
> > +{
> > +	return ERR_PTR(-EOPNOTSUPP);
> > +}
> > +
> > +static inline void
> > +drm_dp_tunnel_atomic_clear_state(struct drm_dp_tunnel_state *tunnel_state) {}
> > +
> > +static inline int
> > +drm_dp_tunnel_atomic_set_stream_bw(struct drm_atomic_state *state,
> > +				   struct drm_dp_tunnel *tunnel,
> > +				   u8 stream_id, int bw)
> > +{
> > +	return -EOPNOTSUPP;
> > +}
> > +
> > +static inline int
> > +drm_dp_tunnel_atomic_get_group_streams_in_state(struct drm_atomic_state *state,
> > +						const struct drm_dp_tunnel *tunnel,
> > +						u32 *stream_mask)
> > +{
> > +	return -EOPNOTSUPP;
> > +}
> > +
> > +static inline int
> > +drm_dp_tunnel_atomic_check_stream_bws(struct drm_atomic_state *state,
> > +				      u32 *failed_stream_mask)
> > +{
> > +	return -EOPNOTSUPP;
> > +}
> > +
> > +static inline int
> > +drm_dp_tunnel_atomic_get_tunnel_bw(const struct drm_dp_tunnel_state *tunnel_state)
> > +{
> > +	return 0;
> > +}
> > +
> > +static inline struct drm_dp_tunnel_mgr *
> > +drm_dp_tunnel_mgr_create(struct drm_device *dev, int max_group_count)
> > +{
> > +	return ERR_PTR(-EOPNOTSUPP);
> > +}
> > +
> > +static inline
> > +void drm_dp_tunnel_mgr_destroy(struct drm_dp_tunnel_mgr *mgr) {}
> > +
> > +
> > +#endif /* CONFIG_DRM_DISPLAY_DP_TUNNEL */
> > +
> > +#endif /* __DRM_DP_TUNNEL_H__ */
> > -- 
> > 2.39.2
> 
> -- 
> Ville Syrjälä
> Intel

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 02/19] drm/dp: Add support for DP tunneling
  2024-02-07 20:48     ` Imre Deak
@ 2024-02-07 21:02       ` Imre Deak
  2024-02-08 15:18         ` Ville Syrjälä
  2024-02-07 22:04       ` Imre Deak
  1 sibling, 1 reply; 61+ messages in thread
From: Imre Deak @ 2024-02-07 21:02 UTC (permalink / raw)
  To: Ville Syrjälä, intel-gfx, dri-devel, Mika Westerberg

On Wed, Feb 07, 2024 at 10:48:53PM +0200, Imre Deak wrote:
> On Wed, Feb 07, 2024 at 10:02:18PM +0200, Ville Syrjälä wrote:
> > > [...]
> > > +static int
> > > +drm_dp_tunnel_atomic_check_group_bw(struct drm_dp_tunnel_group_state *new_group_state,
> > > +				    u32 *failed_stream_mask)
> > > +{
> > > +	struct drm_dp_tunnel_group *group = to_group(new_group_state->base.obj);
> > > +	struct drm_dp_tunnel_state *new_tunnel_state;
> > > +	u32 group_stream_mask = 0;
> > > +	int group_bw = 0;
> > > +
> > > +	for_each_tunnel_state(new_group_state, new_tunnel_state) {
> > > +		struct drm_dp_tunnel *tunnel = new_tunnel_state->tunnel_ref.tunnel;
> > > +		int max_dprx_bw = get_max_dprx_bw(tunnel);
> > > +		int tunnel_bw = drm_dp_tunnel_atomic_get_tunnel_bw(new_tunnel_state);
> > > +
> > > +		tun_dbg(tunnel,
> > > +			"%sRequired %d/%d Mb/s total for tunnel.\n",
> > > +			tunnel_bw > max_dprx_bw ? "Not enough BW: " : "",
> > > +			DPTUN_BW_ARG(tunnel_bw),
> > > +			DPTUN_BW_ARG(max_dprx_bw));
> > > +
> > > +		if (tunnel_bw > max_dprx_bw) {
> > 
> > I'm a bit confused why we're checking this here. Aren't we already
> > checking this somewhere else?
> 
> Ah, yes this should be checked already by the encoder compute config +
> the MST link BW check. It can be removed, thanks.

Though neither of those is guaranteed for drivers in general, so
shouldn't the check stay here?

> > > +			*failed_stream_mask = new_tunnel_state->stream_mask;
> > > +			return -ENOSPC;
> > > +		}
> > > +
> > > +		group_bw += min(roundup(tunnel_bw, tunnel->bw_granularity),
> > > +				max_dprx_bw);
> > > +		group_stream_mask |= new_tunnel_state->stream_mask;
> > > +	}
> > > +
> > > +	tun_grp_dbg(group,
> > > +		    "%sRequired %d/%d Mb/s total for tunnel group.\n",
> > > +		    group_bw > group->available_bw ? "Not enough BW: " : "",
> > > +		    DPTUN_BW_ARG(group_bw),
> > > +		    DPTUN_BW_ARG(group->available_bw));
> > > +
> > > +	if (group_bw > group->available_bw) {
> > > +		*failed_stream_mask = group_stream_mask;
> > > +		return -ENOSPC;
> > > +	}
> > > +
> > > +	return 0;
> > > +}
> > > +

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 02/19] drm/dp: Add support for DP tunneling
  2024-02-07 20:48     ` Imre Deak
  2024-02-07 21:02       ` Imre Deak
@ 2024-02-07 22:04       ` Imre Deak
  1 sibling, 0 replies; 61+ messages in thread
From: Imre Deak @ 2024-02-07 22:04 UTC (permalink / raw)
  To: Ville Syrjälä, intel-gfx, dri-devel, Mika Westerberg

On Wed, Feb 07, 2024 at 10:48:43PM +0200, Imre Deak wrote:
> On Wed, Feb 07, 2024 at 10:02:18PM +0200, Ville Syrjälä wrote:
> > On Tue, Jan 23, 2024 at 12:28:33PM +0200, Imre Deak wrote:
> > > + [...]
> > > +static int group_allocated_bw(struct drm_dp_tunnel_group *group)
> > > +{
> > > +	struct drm_dp_tunnel *tunnel;
> > > +	int group_allocated_bw = 0;
> > > +
> > > +	for_each_tunnel_in_group(group, tunnel) {
> > > +		if (check_tunnel(tunnel) == 0 &&
> > > +		    tunnel->bw_alloc_enabled)
> > > +			group_allocated_bw += tunnel->allocated_bw;
> > > +	}
> > > +
> > > +	return group_allocated_bw;
> > > +}
> > > +
> > > +static int calc_group_available_bw(const struct drm_dp_tunnel *tunnel)
> > > +{
> > > +	return group_allocated_bw(tunnel->group) -
> > > +	       tunnel->allocated_bw +
> > > +	       tunnel->estimated_bw;
> > 
> > Hmm. So the estimated_bw=actually_free_bw + tunnel->allocated_bw?
> 
> Yes.
> 
> > Ie. how much bw might be available for this tunnel right now?
> 
> Correct.
> 
> > And here we're trying to deduce the total bandwidth available by
> > adding in the allocated_bw of all the other tunnels in the group?
> 
> Yes.
> 
> > Rather weird that we can't just get that number directly...
> 
> It is. Imo this could be simply communicated via a DPCD register
> dedicated for this. Perhaps adding this should be requested from TBT
> architects.

One reason for this design may be that a host/driver cannot necessarily
see all the tunnels in the group. In that case the tunnel's currently
usable BW is only its estimated_bw (that is, it can't use the BW
already allocated by the other tunnels in the group until those are
released by the other host/driver).
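
To put hypothetical numbers on that: if another host has 2000 Mb/s
allocated in the same group, this driver's group_allocated_bw() won't
include it, so calc_group_available_bw() ends up being just this
tunnel's estimated_bw; the other 2000 Mb/s become usable only once the
other host releases them.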

> I assume this could also use a code comment.
> 
> > > +}
> > > +
> > > +static int update_group_available_bw(struct drm_dp_tunnel *tunnel,
> > > +				     const struct drm_dp_tunnel_regs *regs)
> > > +{
> > > +	struct drm_dp_tunnel *tunnel_iter;
> > > +	int group_available_bw;
> > > +	bool changed;
> > > +
> > > +	tunnel->estimated_bw = tunnel_reg(regs, DP_ESTIMATED_BW) * tunnel->bw_granularity;
> > > +
> > > +	if (calc_group_available_bw(tunnel) == tunnel->group->available_bw)
> > > +		return 0;
> > > +
> > > +	for_each_tunnel_in_group(tunnel->group, tunnel_iter) {
> > > +		int err;
> > > +
> > > +		if (tunnel_iter == tunnel)
> > > +			continue;
> > > +
> > > +		if (check_tunnel(tunnel_iter) != 0 ||
> > > +		    !tunnel_iter->bw_alloc_enabled)
> > > +			continue;
> > > +
> > > +		err = drm_dp_dpcd_probe(tunnel_iter->aux, DP_DPCD_REV);
> > > +		if (err) {
> > > +			tun_dbg(tunnel_iter,
> > > +				"Probe failed, assume disconnected (err %pe)\n",
> > > +				ERR_PTR(err));
> > > +			drm_dp_tunnel_set_io_error(tunnel_iter);
> > > +		}
> > > +	}
> > > +
> > > +	group_available_bw = calc_group_available_bw(tunnel);
> > > +
> > > +	tun_dbg(tunnel, "Updated group available BW: %d->%d\n",
> > > +		DPTUN_BW_ARG(tunnel->group->available_bw),
> > > +		DPTUN_BW_ARG(group_available_bw));
> > > +
> > > +	changed = tunnel->group->available_bw != group_available_bw;
> > > +
> > > +	tunnel->group->available_bw = group_available_bw;
> > > +
> > > +	return changed ? 1 : 0;
> > > +}
> > > +
> > > +static int set_bw_alloc_mode(struct drm_dp_tunnel *tunnel, bool enable)
> > > +{
> > > +	u8 mask = DP_DISPLAY_DRIVER_BW_ALLOCATION_MODE_ENABLE | DP_UNMASK_BW_ALLOCATION_IRQ;
> > > +	u8 val;
> > > +
> > > +	if (drm_dp_dpcd_readb(tunnel->aux, DP_DPTX_BW_ALLOCATION_MODE_CONTROL, &val) < 0)
> > > +		goto out_err;
> > > +
> > > +	if (enable)
> > > +		val |= mask;
> > > +	else
> > > +		val &= ~mask;
> > > +
> > > +	if (drm_dp_dpcd_writeb(tunnel->aux, DP_DPTX_BW_ALLOCATION_MODE_CONTROL, val) < 0)
> > > +		goto out_err;
> > > +
> > > +	tunnel->bw_alloc_enabled = enable;
> > > +
> > > +	return 0;
> > > +
> > > +out_err:
> > > +	drm_dp_tunnel_set_io_error(tunnel);
> > > +
> > > +	return -EIO;
> > > +}
> > > +
> > > +/**
> > > + * drm_dp_tunnel_enable_bw_alloc: Enable DP tunnel BW allocation mode
> > > + * @tunnel: Tunnel object
> > > + *
> > > + * Enable the DP tunnel BW allocation mode on @tunnel if it supports it.
> > > + *
> > > + * Returns 0 in case of success, negative error code otherwise.
> > > + */
> > > +int drm_dp_tunnel_enable_bw_alloc(struct drm_dp_tunnel *tunnel)
> > > +{
> > > +	struct drm_dp_tunnel_regs regs;
> > > +	int err = check_tunnel(tunnel);
> > > +
> > > +	if (err)
> > > +		return err;
> > > +
> > > +	if (!tunnel->bw_alloc_supported)
> > > +		return -EOPNOTSUPP;
> > > +
> > > +	if (!tunnel_group_id(tunnel->group->drv_group_id))
> > > +		return -EINVAL;
> > > +
> > > +	err = set_bw_alloc_mode(tunnel, true);
> > > +	if (err)
> > > +		goto out;
> > > +
> > > +	err = read_and_verify_tunnel_regs(tunnel, &regs, 0);
> > > +	if (err) {
> > > +		set_bw_alloc_mode(tunnel, false);
> > > +
> > > +		goto out;
> > > +	}
> > > +
> > > +	if (!tunnel->max_dprx_rate)
> > > +		update_dprx_caps(tunnel, &regs);
> > > +
> > > +	if (tunnel->group->available_bw == -1) {
> > > +		err = update_group_available_bw(tunnel, &regs);
> > > +		if (err > 0)
> > > +			err = 0;
> > > +	}
> > > +out:
> > > +	tun_dbg_stat(tunnel, err,
> > > +		     "Enabling BW alloc mode: DPRX:%dx%d Group alloc:%d/%d Mb/s",
> > > +		     tunnel->max_dprx_rate / 100, tunnel->max_dprx_lane_count,
> > > +		     DPTUN_BW_ARG(group_allocated_bw(tunnel->group)),
> > > +		     DPTUN_BW_ARG(tunnel->group->available_bw));
> > > +
> > > +	return err;
> > > +}
> > > +EXPORT_SYMBOL(drm_dp_tunnel_enable_bw_alloc);
> > > +
> > > +/**
> > > + * drm_dp_tunnel_disable_bw_alloc: Disable DP tunnel BW allocation mode
> > > + * @tunnel: Tunnel object
> > > + *
> > > + * Disable the DP tunnel BW allocation mode on @tunnel.
> > > + *
> > > + * Returns 0 in case of success, negative error code otherwise.
> > > + */
> > > +int drm_dp_tunnel_disable_bw_alloc(struct drm_dp_tunnel *tunnel)
> > > +{
> > > +	int err = check_tunnel(tunnel);
> > > +
> > > +	if (err)
> > > +		return err;
> > > +
> > > +	err = set_bw_alloc_mode(tunnel, false);
> > > +
> > > +	tun_dbg_stat(tunnel, err, "Disabling BW alloc mode");
> > > +
> > > +	return err;
> > > +}
> > > +EXPORT_SYMBOL(drm_dp_tunnel_disable_bw_alloc);
> > > +
> > > +bool drm_dp_tunnel_bw_alloc_is_enabled(const struct drm_dp_tunnel *tunnel)
> > > +{
> > > +	return tunnel->bw_alloc_enabled;
> > > +}
> > > +EXPORT_SYMBOL(drm_dp_tunnel_bw_alloc_is_enabled);
> > > +
> > > +static int bw_req_complete(struct drm_dp_aux *aux, bool *status_changed)
> > > +{
> > > +	u8 bw_req_mask = DP_BW_REQUEST_SUCCEEDED | DP_BW_REQUEST_FAILED;
> > > +	u8 status_change_mask = DP_BW_ALLOCATION_CAPABILITY_CHANGED | DP_ESTIMATED_BW_CHANGED;
> > > +	u8 val;
> > > +
> > > +	if (drm_dp_dpcd_readb(aux, DP_TUNNELING_STATUS, &val) < 0)
> > > +		return -EIO;
> > > +
> > > +	*status_changed = val & status_change_mask;
> > > +
> > > +	val &= bw_req_mask;
> > > +
> > > +	if (!val)
> > > +		return -EAGAIN;
> > > +
> > > +	if (drm_dp_dpcd_writeb(aux, DP_TUNNELING_STATUS, val) < 0)
> > > +		return -EIO;
> > > +
> > > +	return val == DP_BW_REQUEST_SUCCEEDED ? 0 : -ENOSPC;
> > > +}
> > > +
> > > +static int allocate_tunnel_bw(struct drm_dp_tunnel *tunnel, int bw)
> > > +{
> > > +	struct drm_dp_tunnel_mgr *mgr = tunnel->group->mgr;
> > > +	int request_bw = DIV_ROUND_UP(bw, tunnel->bw_granularity);
> > > +	unsigned long wait_expires;
> > > +	DEFINE_WAIT(wait);
> > > +	int err;
> > > +
> > > +	/* Atomic check should prevent the following. */
> > > +	if (drm_WARN_ON(mgr->dev, request_bw > MAX_DP_REQUEST_BW)) {
> > > +		err = -EINVAL;
> > > +		goto out;
> > > +	}
> > > +
> > > +	if (drm_dp_dpcd_writeb(tunnel->aux, DP_REQUEST_BW, request_bw) < 0) {
> > > +		err = -EIO;
> > > +		goto out;
> > > +	}
> > > +
> > > +	wait_expires = jiffies + msecs_to_jiffies(3000);
> > > +
> > > +	for (;;) {
> > > +		bool status_changed;
> > > +
> > > +		err = bw_req_complete(tunnel->aux, &status_changed);
> > > +		if (err != -EAGAIN)
> > > +			break;
> > > +
> > > +		if (status_changed) {
> > > +			struct drm_dp_tunnel_regs regs;
> > > +
> > > +			err = read_and_verify_tunnel_regs(tunnel, &regs,
> > > +							  ALLOW_ALLOCATED_BW_CHANGE);
> > > +			if (err)
> > > +				break;
> > > +		}
> > > +
> > > +		if (time_after(jiffies, wait_expires)) {
> > > +			err = -ETIMEDOUT;
> > > +			break;
> > > +		}
> > > +
> > > +		prepare_to_wait(&mgr->bw_req_queue, &wait, TASK_UNINTERRUPTIBLE);
> > > +		schedule_timeout(msecs_to_jiffies(200));
> > > +	}
> > > +
> > > +	finish_wait(&mgr->bw_req_queue, &wait);
> > > +
> > > +	if (err)
> > > +		goto out;
> > > +
> > > +	tunnel->allocated_bw = request_bw * tunnel->bw_granularity;
> > > +
> > > +out:
> > > +	tun_dbg_stat(tunnel, err, "Allocating %d/%d Mb/s for tunnel: Group alloc:%d/%d Mb/s",
> > > +		     DPTUN_BW_ARG(request_bw * tunnel->bw_granularity),
> > > +		     DPTUN_BW_ARG(get_max_tunnel_bw(tunnel)),
> > > +		     DPTUN_BW_ARG(group_allocated_bw(tunnel->group)),
> > > +		     DPTUN_BW_ARG(tunnel->group->available_bw));
> > > +
> > > +	if (err == -EIO)
> > > +		drm_dp_tunnel_set_io_error(tunnel);
> > > +
> > > +	return err;
> > > +}
> > > +
> > > +int drm_dp_tunnel_alloc_bw(struct drm_dp_tunnel *tunnel, int bw)
> > > +{
> > > +	int err = check_tunnel(tunnel);
> > > +
> > > +	if (err)
> > > +		return err;
> > > +
> > > +	return allocate_tunnel_bw(tunnel, bw);
> > > +}
> > > +EXPORT_SYMBOL(drm_dp_tunnel_alloc_bw);
> > > +
> > > +static int check_and_clear_status_change(struct drm_dp_tunnel *tunnel)
> > > +{
> > > +	u8 mask = DP_BW_ALLOCATION_CAPABILITY_CHANGED | DP_ESTIMATED_BW_CHANGED;
> > > +	u8 val;
> > > +
> > > +	if (drm_dp_dpcd_readb(tunnel->aux, DP_TUNNELING_STATUS, &val) < 0)
> > > +		goto out_err;
> > > +
> > > +	val &= mask;
> > > +
> > > +	if (val) {
> > > +		if (drm_dp_dpcd_writeb(tunnel->aux, DP_TUNNELING_STATUS, val) < 0)
> > > +			goto out_err;
> > > +
> > > +		return 1;
> > > +	}
> > > +
> > > +	if (!drm_dp_tunnel_bw_alloc_is_enabled(tunnel))
> > > +		return 0;
> > > +
> > > +	/*
> > > +	 * Check for estimated BW changes explicitly to account for lost
> > > +	 * BW change notifications.
> > > +	 */
> > > +	if (drm_dp_dpcd_readb(tunnel->aux, DP_ESTIMATED_BW, &val) < 0)
> > > +		goto out_err;
> > > +
> > > +	if (val * tunnel->bw_granularity != tunnel->estimated_bw)
> > > +		return 1;
> > > +
> > > +	return 0;
> > > +
> > > +out_err:
> > > +	drm_dp_tunnel_set_io_error(tunnel);
> > > +
> > > +	return -EIO;
> > > +}
> > > +
> > > +/**
> > > + * drm_dp_tunnel_update_state - Update DP tunnel SW state with the HW state
> > > + * @tunnel: Tunnel object
> > > + *
> > > + * Update the SW state of @tunnel with the HW state.
> > > + *
> > > + * Returns 0 if the state has not changed, 1 if it has changed and was
> > > + * updated successfully, and a negative error code otherwise.
> > > + */
> > > +int drm_dp_tunnel_update_state(struct drm_dp_tunnel *tunnel)
> > > +{
> > > +	struct drm_dp_tunnel_regs regs;
> > > +	bool changed = false;
> > > +	int ret = check_tunnel(tunnel);
> > > +
> > > +	if (ret < 0)
> > > +		return ret;
> > > +
> > > +	ret = check_and_clear_status_change(tunnel);
> > > +	if (ret < 0)
> > > +		goto out;
> > > +
> > > +	if (!ret)
> > > +		return 0;
> > > +
> > > +	ret = read_and_verify_tunnel_regs(tunnel, &regs, 0);
> > > +	if (ret)
> > > +		goto out;
> > > +
> > > +	if (update_dprx_caps(tunnel, &regs))
> > > +		changed = true;
> > > +
> > > +	ret = update_group_available_bw(tunnel, &regs);
> > > +	if (ret == 1)
> > > +		changed = true;
> > > +
> > > +out:
> > > +	tun_dbg_stat(tunnel, ret < 0 ? ret : 0,
> > > +		     "State update: Changed:%c DPRX:%dx%d Tunnel alloc:%d/%d Group alloc:%d/%d Mb/s",
> > > +		     yes_no_chr(changed),
> > > +		     tunnel->max_dprx_rate / 100, tunnel->max_dprx_lane_count,
> > > +		     DPTUN_BW_ARG(tunnel->allocated_bw),
> > > +		     DPTUN_BW_ARG(get_max_tunnel_bw(tunnel)),
> > > +		     DPTUN_BW_ARG(group_allocated_bw(tunnel->group)),
> > > +		     DPTUN_BW_ARG(tunnel->group->available_bw));
> > > +
> > > +	if (ret < 0)
> > > +		return ret;
> > > +
> > > +	if (changed)
> > > +		return 1;
> > > +
> > > +	return 0;
> > > +}
> > > +EXPORT_SYMBOL(drm_dp_tunnel_update_state);
> > > +
> > > +/*
> > > + * Returns 0 if no re-probe is needed, 1 if a re-probe is needed,
> > > + * a negative error code otherwise.
> > > + */
> > > +int drm_dp_tunnel_handle_irq(struct drm_dp_tunnel_mgr *mgr, struct drm_dp_aux *aux)
> > > +{
> > > +	u8 val;
> > > +
> > > +	if (drm_dp_dpcd_readb(aux, DP_TUNNELING_STATUS, &val) < 0)
> > > +		return -EIO;
> > > +
> > > +	if (val & (DP_BW_REQUEST_SUCCEEDED | DP_BW_REQUEST_FAILED))
> > > +		wake_up_all(&mgr->bw_req_queue);
> > > +
> > > +	if (val & (DP_BW_ALLOCATION_CAPABILITY_CHANGED | DP_ESTIMATED_BW_CHANGED))
> > > +		return 1;
> > > +
> > > +	return 0;
> > > +}
> > > +EXPORT_SYMBOL(drm_dp_tunnel_handle_irq);
> > > +
> > > +/**
> > > + * drm_dp_tunnel_max_dprx_rate - Query the maximum rate of the tunnel's DPRX
> > > + * @tunnel: Tunnel object
> > > + *
> > > + * The function is used to query the maximum link rate of the DPRX connected
> > > + * to @tunnel. Note that this rate will not be limited by the BW limit of the
> > > + * tunnel, as opposed to the standard and extended DP_MAX_LINK_RATE DPCD
> > > + * registers.
> > > + *
> > > + * Returns the maximum link rate in 10 kbit/s units.
> > > + */
> > > +int drm_dp_tunnel_max_dprx_rate(const struct drm_dp_tunnel *tunnel)
> > > +{
> > > +	return tunnel->max_dprx_rate;
> > > +}
> > > +EXPORT_SYMBOL(drm_dp_tunnel_max_dprx_rate);
> > > +
> > > +/**
> > > + * drm_dp_tunnel_max_dprx_lane_count - Query the maximum lane count of the tunnel's DPRX
> > > + * @tunnel: Tunnel object
> > > + *
> > > + * The function is used to query the maximum lane count of the DPRX connected
> > > + * to @tunnel. Note that this lane count will not be limited by the BW limit of
> > > + * the tunnel, as opposed to the standard and extended DP_MAX_LANE_COUNT DPCD
> > > + * registers.
> > > + *
> > > + * Returns the maximum lane count.
> > > + */
> > > +int drm_dp_tunnel_max_dprx_lane_count(const struct drm_dp_tunnel *tunnel)
> > > +{
> > > +	return tunnel->max_dprx_lane_count;
> > > +}
> > > +EXPORT_SYMBOL(drm_dp_tunnel_max_dprx_lane_count);
> > > +
> > > +/**
> > > + * drm_dp_tunnel_available_bw - Query the estimated total available BW of the tunnel
> > > + * @tunnel: Tunnel object
> > > + *
> > > + * This function is used to query the estimated total available BW of the
> > > + * tunnel. This includes the currently allocated and free BW for all the
> > > + * tunnels in @tunnel's group. The available BW is valid only after the BW
> > > + * allocation mode has been enabled for the tunnel and its state has been
> > > + * updated by calling drm_dp_tunnel_update_state().
> > > + *
> > > + * Returns the @tunnel group's estimated total available bandwidth in kB/s
> > > + * units, or -1 if the available BW isn't valid (the BW allocation mode is
> > > + * not enabled or the tunnel's state hasn't been updated).
> > > + */
> > > +int drm_dp_tunnel_available_bw(const struct drm_dp_tunnel *tunnel)
> > > +{
> > > +	return tunnel->group->available_bw;
> > > +}
> > > +EXPORT_SYMBOL(drm_dp_tunnel_available_bw);
> > > +
> > > +static struct drm_dp_tunnel_group_state *
> > > +drm_dp_tunnel_atomic_get_group_state(struct drm_atomic_state *state,
> > > +				     const struct drm_dp_tunnel *tunnel)
> > > +{
> > > +	return (struct drm_dp_tunnel_group_state *)
> > > +		drm_atomic_get_private_obj_state(state,
> > > +						 &tunnel->group->base);
> > > +}
> > > +
> > > +static struct drm_dp_tunnel_state *
> > > +add_tunnel_state(struct drm_dp_tunnel_group_state *group_state,
> > > +		 struct drm_dp_tunnel *tunnel)
> > > +{
> > > +	struct drm_dp_tunnel_state *tunnel_state;
> > > +
> > > +	tun_dbg_atomic(tunnel,
> > > +		       "Adding state for tunnel %p to group state %p\n",
> > > +		       tunnel, group_state);
> > > +
> > > +	tunnel_state = kzalloc(sizeof(*tunnel_state), GFP_KERNEL);
> > > +	if (!tunnel_state)
> > > +		return NULL;
> > > +
> > > +	tunnel_state->group_state = group_state;
> > > +
> > > +	drm_dp_tunnel_ref_get(tunnel, &tunnel_state->tunnel_ref);
> > > +
> > > +	INIT_LIST_HEAD(&tunnel_state->node);
> > > +	list_add(&tunnel_state->node, &group_state->tunnel_states);
> > > +
> > > +	return tunnel_state;
> > > +}
> > > +
> > > +void drm_dp_tunnel_atomic_clear_state(struct drm_dp_tunnel_state *tunnel_state)
> > > +{
> > > +	tun_dbg_atomic(tunnel_state->tunnel_ref.tunnel,
> > > +		       "Clearing state for tunnel %p\n",
> > > +		       tunnel_state->tunnel_ref.tunnel);
> > > +
> > > +	list_del(&tunnel_state->node);
> > > +
> > > +	kfree(tunnel_state->stream_bw);
> > > +	drm_dp_tunnel_ref_put(&tunnel_state->tunnel_ref);
> > > +
> > > +	kfree(tunnel_state);
> > > +}
> > > +EXPORT_SYMBOL(drm_dp_tunnel_atomic_clear_state);
> > > +
> > > +static void clear_tunnel_group_state(struct drm_dp_tunnel_group_state *group_state)
> > > +{
> > > +	struct drm_dp_tunnel_state *tunnel_state;
> > > +	struct drm_dp_tunnel_state *tunnel_state_tmp;
> > > +
> > > +	for_each_tunnel_state_safe(group_state, tunnel_state, tunnel_state_tmp)
> > > +		drm_dp_tunnel_atomic_clear_state(tunnel_state);
> > > +}
> > > +
> > > +static struct drm_dp_tunnel_state *
> > > +get_tunnel_state(struct drm_dp_tunnel_group_state *group_state,
> > > +		 const struct drm_dp_tunnel *tunnel)
> > > +{
> > > +	struct drm_dp_tunnel_state *tunnel_state;
> > > +
> > > +	for_each_tunnel_state(group_state, tunnel_state)
> > > +		if (tunnel_state->tunnel_ref.tunnel == tunnel)
> > > +			return tunnel_state;
> > > +
> > > +	return NULL;
> > > +}
> > > +
> > > +static struct drm_dp_tunnel_state *
> > > +get_or_add_tunnel_state(struct drm_dp_tunnel_group_state *group_state,
> > > +			struct drm_dp_tunnel *tunnel)
> > > +{
> > > +	struct drm_dp_tunnel_state *tunnel_state;
> > > +
> > > +	tunnel_state = get_tunnel_state(group_state, tunnel);
> > > +	if (tunnel_state)
> > > +		return tunnel_state;
> > > +
> > > +	return add_tunnel_state(group_state, tunnel);
> > > +}
> > > +
> > > +static struct drm_private_state *
> > > +tunnel_group_duplicate_state(struct drm_private_obj *obj)
> > > +{
> > > +	struct drm_dp_tunnel_group_state *group_state = to_group_state(obj->state);
> > > +	struct drm_dp_tunnel_state *tunnel_state;
> > > +
> > > +	group_state = kzalloc(sizeof(*group_state), GFP_KERNEL);
> > > +	if (!group_state)
> > > +		return NULL;
> > > +
> > > +	INIT_LIST_HEAD(&group_state->tunnel_states);
> > > +
> > > +	__drm_atomic_helper_private_obj_duplicate_state(obj, &group_state->base);
> > > +
> > > +	for_each_tunnel_state(to_group_state(obj->state), tunnel_state) {
> > > +		struct drm_dp_tunnel_state *new_tunnel_state;
> > > +
> > > +		new_tunnel_state = get_or_add_tunnel_state(group_state,
> > > +							   tunnel_state->tunnel_ref.tunnel);
> > > +		if (!new_tunnel_state)
> > > +			goto out_free_state;
> > > +
> > > +		new_tunnel_state->stream_mask = tunnel_state->stream_mask;
> > > +		new_tunnel_state->stream_bw = kmemdup(tunnel_state->stream_bw,
> > > +						      sizeof(*tunnel_state->stream_bw) *
> > > +							hweight32(tunnel_state->stream_mask),
> > > +						      GFP_KERNEL);
> > > +
> > > +		if (!new_tunnel_state->stream_bw)
> > > +			goto out_free_state;
> > > +	}
> > > +
> > > +	return &group_state->base;
> > > +
> > > +out_free_state:
> > > +	clear_tunnel_group_state(group_state);
> > > +	kfree(group_state);
> > > +
> > > +	return NULL;
> > > +}
> > > +
> > > +static void tunnel_group_destroy_state(struct drm_private_obj *obj, struct drm_private_state *state)
> > > +{
> > > +	struct drm_dp_tunnel_group_state *group_state = to_group_state(state);
> > > +
> > > +	clear_tunnel_group_state(group_state);
> > > +	kfree(group_state);
> > > +}
> > > +
> > > +static const struct drm_private_state_funcs tunnel_group_funcs = {
> > > +	.atomic_duplicate_state = tunnel_group_duplicate_state,
> > > +	.atomic_destroy_state = tunnel_group_destroy_state,
> > > +};
> > > +
> > > +struct drm_dp_tunnel_state *
> > > +drm_dp_tunnel_atomic_get_state(struct drm_atomic_state *state,
> > > +			       struct drm_dp_tunnel *tunnel)
> > > +{
> > > +	struct drm_dp_tunnel_group_state *group_state =
> > > +		drm_dp_tunnel_atomic_get_group_state(state, tunnel);
> > > +	struct drm_dp_tunnel_state *tunnel_state;
> > > +
> > > +	if (IS_ERR(group_state))
> > > +		return ERR_CAST(group_state);
> > > +
> > > +	tunnel_state = get_or_add_tunnel_state(group_state, tunnel);
> > > +	if (!tunnel_state)
> > > +		return ERR_PTR(-ENOMEM);
> > > +
> > > +	return tunnel_state;
> > > +}
> > > +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_state);
> > > +
> > > +struct drm_dp_tunnel_state *
> > > +drm_dp_tunnel_atomic_get_new_state(struct drm_atomic_state *state,
> > > +				   const struct drm_dp_tunnel *tunnel)
> > > +{
> > > +	struct drm_dp_tunnel_group_state *new_group_state;
> > > +	int i;
> > > +
> > > +	for_each_new_group_in_state(state, new_group_state, i)
> > > +		if (to_group(new_group_state->base.obj) == tunnel->group)
> > > +			return get_tunnel_state(new_group_state, tunnel);
> > > +
> > > +	return NULL;
> > > +}
> > > +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_new_state);
> > > +
> > > +static bool init_group(struct drm_dp_tunnel_mgr *mgr, struct drm_dp_tunnel_group *group)
> > > +{
> > > +	struct drm_dp_tunnel_group_state *group_state = kzalloc(sizeof(*group_state), GFP_KERNEL);
> > > +
> > > +	if (!group_state)
> > > +		return false;
> > > +
> > > +	INIT_LIST_HEAD(&group_state->tunnel_states);
> > > +
> > > +	group->mgr = mgr;
> > > +	group->available_bw = -1;
> > > +	INIT_LIST_HEAD(&group->tunnels);
> > > +
> > > +	drm_atomic_private_obj_init(mgr->dev, &group->base, &group_state->base,
> > > +				    &tunnel_group_funcs);
> > > +
> > > +	return true;
> > > +}
> > > +
> > > +static void cleanup_group(struct drm_dp_tunnel_group *group)
> > > +{
> > > +	drm_atomic_private_obj_fini(&group->base);
> > > +}
> > > +
> > > +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> > > +static void check_unique_stream_ids(const struct drm_dp_tunnel_group_state *group_state)
> > > +{
> > > +	const struct drm_dp_tunnel_state *tunnel_state;
> > > +	u32 stream_mask = 0;
> > > +
> > > +	for_each_tunnel_state(group_state, tunnel_state) {
> > > +		drm_WARN(to_group(group_state->base.obj)->mgr->dev,
> > > +			 tunnel_state->stream_mask & stream_mask,
> > > +			 "[DPTUN %s]: conflicting stream IDs %x (IDs in other tunnels %x)\n",
> > > +			 tunnel_state->tunnel_ref.tunnel->name,
> > > +			 tunnel_state->stream_mask,
> > > +			 stream_mask);
> > > +
> > > +		stream_mask |= tunnel_state->stream_mask;
> > > +	}
> > > +}
> > > +#else
> > > +static void check_unique_stream_ids(const struct drm_dp_tunnel_group_state *group_state)
> > > +{
> > > +}
> > > +#endif
> > > +
> > > +static int stream_id_to_idx(u32 stream_mask, u8 stream_id)
> > > +{
> > > +	return hweight32(stream_mask & (BIT(stream_id) - 1));
> > > +}
> > > +
> > > +static int resize_bw_array(struct drm_dp_tunnel_state *tunnel_state,
> > > +			   unsigned long old_mask, unsigned long new_mask)
> > > +{
> > > +	unsigned long move_mask = old_mask & new_mask;
> > > +	int *new_bws = NULL;
> > > +	int id;
> > > +
> > > +	WARN_ON(!new_mask);
> > > +
> > > +	if (old_mask == new_mask)
> > > +		return 0;
> > > +
> > > +	new_bws = kcalloc(hweight32(new_mask), sizeof(*new_bws), GFP_KERNEL);
> > > +	if (!new_bws)
> > > +		return -ENOMEM;
> > > +
> > > +	for_each_set_bit(id, &move_mask, BITS_PER_TYPE(move_mask))
> > > +		new_bws[stream_id_to_idx(new_mask, id)] =
> > > +			tunnel_state->stream_bw[stream_id_to_idx(old_mask, id)];
> > > +
> > > +	kfree(tunnel_state->stream_bw);
> > > +	tunnel_state->stream_bw = new_bws;
> > > +	tunnel_state->stream_mask = new_mask;
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +static int set_stream_bw(struct drm_dp_tunnel_state *tunnel_state,
> > > +			 u8 stream_id, int bw)
> > > +{
> > > +	int err;
> > > +
> > > +	err = resize_bw_array(tunnel_state,
> > > +			      tunnel_state->stream_mask,
> > > +			      tunnel_state->stream_mask | BIT(stream_id));
> > > +	if (err)
> > > +		return err;
> > > +
> > > +	tunnel_state->stream_bw[stream_id_to_idx(tunnel_state->stream_mask, stream_id)] = bw;
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +static int clear_stream_bw(struct drm_dp_tunnel_state *tunnel_state,
> > > +			   u8 stream_id)
> > > +{
> > > +	if (!(tunnel_state->stream_mask & ~BIT(stream_id))) {
> > > +		drm_dp_tunnel_atomic_clear_state(tunnel_state);
> > > +		return 0;
> > > +	}
> > > +
> > > +	return resize_bw_array(tunnel_state,
> > > +			       tunnel_state->stream_mask,
> > > +			       tunnel_state->stream_mask & ~BIT(stream_id));
> > > +}
> > > +
> > > +int drm_dp_tunnel_atomic_set_stream_bw(struct drm_atomic_state *state,
> > > +					 struct drm_dp_tunnel *tunnel,
> > > +					 u8 stream_id, int bw)
> > > +{
> > > +	struct drm_dp_tunnel_group_state *new_group_state =
> > > +		drm_dp_tunnel_atomic_get_group_state(state, tunnel);
> > > +	struct drm_dp_tunnel_state *tunnel_state;
> > > +	int err;
> > > +
> > > +	if (drm_WARN_ON(tunnel->group->mgr->dev,
> > > +			stream_id >= BITS_PER_TYPE(tunnel_state->stream_mask)))
> > > +		return -EINVAL;
> > > +
> > > +	tun_dbg(tunnel,
> > > +		"Setting %d Mb/s for stream %d\n",
> > > +		DPTUN_BW_ARG(bw), stream_id);
> > > +
> > > +	if (bw == 0) {
> > > +		tunnel_state = get_tunnel_state(new_group_state, tunnel);
> > > +		if (!tunnel_state)
> > > +			return 0;
> > > +
> > > +		return clear_stream_bw(tunnel_state, stream_id);
> > > +	}
> > > +
> > > +	tunnel_state = get_or_add_tunnel_state(new_group_state, tunnel);
> > > +	if (drm_WARN_ON(state->dev, !tunnel_state))
> > > +		return -EINVAL;
> > > +
> > > +	err = set_stream_bw(tunnel_state, stream_id, bw);
> > > +	if (err)
> > > +		return err;
> > > +
> > > +	check_unique_stream_ids(new_group_state);
> > > +
> > > +	return 0;
> > > +}
> > > +EXPORT_SYMBOL(drm_dp_tunnel_atomic_set_stream_bw);
> > > +
> > > +int drm_dp_tunnel_atomic_get_tunnel_bw(const struct drm_dp_tunnel_state *tunnel_state)
> > > +{
> > > +	int tunnel_bw = 0;
> > > +	int i;
> > > +
> > > +	for (i = 0; i < hweight32(tunnel_state->stream_mask); i++)
> > > +		tunnel_bw += tunnel_state->stream_bw[i];
> > > +
> > > +	return tunnel_bw;
> > > +}
> > > +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_tunnel_bw);
> > > +
> > > +int drm_dp_tunnel_atomic_get_group_streams_in_state(struct drm_atomic_state *state,
> > > +						    const struct drm_dp_tunnel *tunnel,
> > > +						    u32 *stream_mask)
> > > +{
> > > +	struct drm_dp_tunnel_group_state *group_state =
> > > +		drm_dp_tunnel_atomic_get_group_state(state, tunnel);
> > > +	struct drm_dp_tunnel_state *tunnel_state;
> > > +
> > > +	if (IS_ERR(group_state))
> > > +		return PTR_ERR(group_state);
> > > +
> > > +	*stream_mask = 0;
> > > +	for_each_tunnel_state(group_state, tunnel_state)
> > > +		*stream_mask |= tunnel_state->stream_mask;
> > > +
> > > +	return 0;
> > > +}
> > > +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_group_streams_in_state);
> > > +
> > > +static int
> > > +drm_dp_tunnel_atomic_check_group_bw(struct drm_dp_tunnel_group_state *new_group_state,
> > > +				    u32 *failed_stream_mask)
> > > +{
> > > +	struct drm_dp_tunnel_group *group = to_group(new_group_state->base.obj);
> > > +	struct drm_dp_tunnel_state *new_tunnel_state;
> > > +	u32 group_stream_mask = 0;
> > > +	int group_bw = 0;
> > > +
> > > +	for_each_tunnel_state(new_group_state, new_tunnel_state) {
> > > +		struct drm_dp_tunnel *tunnel = new_tunnel_state->tunnel_ref.tunnel;
> > > +		int max_dprx_bw = get_max_dprx_bw(tunnel);
> > > +		int tunnel_bw = drm_dp_tunnel_atomic_get_tunnel_bw(new_tunnel_state);
> > > +
> > > +		tun_dbg(tunnel,
> > > +			"%sRequired %d/%d Mb/s total for tunnel.\n",
> > > +			tunnel_bw > max_dprx_bw ? "Not enough BW: " : "",
> > > +			DPTUN_BW_ARG(tunnel_bw),
> > > +			DPTUN_BW_ARG(max_dprx_bw));
> > > +
> > > +		if (tunnel_bw > max_dprx_bw) {
> > 
> > I'm a bit confused why we're checking this here. Aren't we already
> > checking this somewhere else?
> 
> Ah, yes this should be checked already by the encoder compute config +
> the MST link BW check. It can be removed, thanks.
> 
> > > +			*failed_stream_mask = new_tunnel_state->stream_mask;
> > > +			return -ENOSPC;
> > > +		}
> > > +
> > > +		group_bw += min(roundup(tunnel_bw, tunnel->bw_granularity),
> > > +				max_dprx_bw);
> > > +		group_stream_mask |= new_tunnel_state->stream_mask;
> > > +	}
> > > +
> > > +	tun_grp_dbg(group,
> > > +		    "%sRequired %d/%d Mb/s total for tunnel group.\n",
> > > +		    group_bw > group->available_bw ? "Not enough BW: " : "",
> > > +		    DPTUN_BW_ARG(group_bw),
> > > +		    DPTUN_BW_ARG(group->available_bw));
> > > +
> > > +	if (group_bw > group->available_bw) {
> > > +		*failed_stream_mask = group_stream_mask;
> > > +		return -ENOSPC;
> > > +	}
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +int drm_dp_tunnel_atomic_check_stream_bws(struct drm_atomic_state *state,
> > > +					  u32 *failed_stream_mask)
> > > +{
> > > +	struct drm_dp_tunnel_group_state *new_group_state;
> > > +	int i;
> > > +
> > > +	for_each_new_group_in_state(state, new_group_state, i) {
> > > +		int ret;
> > > +
> > > +		ret = drm_dp_tunnel_atomic_check_group_bw(new_group_state,
> > > +							  failed_stream_mask);
> > > +		if (ret)
> > > +			return ret;
> > > +	}
> > > +
> > > +	return 0;
> > > +}
> > > +EXPORT_SYMBOL(drm_dp_tunnel_atomic_check_stream_bws);
> > > +
> > > +static void destroy_mgr(struct drm_dp_tunnel_mgr *mgr)
> > > +{
> > > +	int i;
> > > +
> > > +	for (i = 0; i < mgr->group_count; i++) {
> > > +		cleanup_group(&mgr->groups[i]);
> > > +		drm_WARN_ON(mgr->dev, !list_empty(&mgr->groups[i].tunnels));
> > > +	}
> > > +
> > > +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> > > +	ref_tracker_dir_exit(&mgr->ref_tracker);
> > > +#endif
> > > +
> > > +	kfree(mgr->groups);
> > > +	kfree(mgr);
> > > +}
> > > +
> > > +/**
> > > + * drm_dp_tunnel_mgr_create - Create a DP tunnel manager
> > > + * @dev: DRM device the tunnel manager is created for
> > > + * @max_group_count: Maximum number of tunnel groups to reserve
> > > + *
> > > + * Creates a DP tunnel manager.
> > > + *
> > > + * Returns a pointer to the tunnel manager if created successfully or NULL in
> > > + * case of an error.
> > > + */
> > > +struct drm_dp_tunnel_mgr *
> > > +drm_dp_tunnel_mgr_create(struct drm_device *dev, int max_group_count)
> > > +{
> > > +	struct drm_dp_tunnel_mgr *mgr = kzalloc(sizeof(*mgr), GFP_KERNEL);
> > > +	int i;
> > > +
> > > +	if (!mgr)
> > > +		return NULL;
> > > +
> > > +	mgr->dev = dev;
> > > +	init_waitqueue_head(&mgr->bw_req_queue);
> > > +
> > > +	mgr->groups = kcalloc(max_group_count, sizeof(*mgr->groups), GFP_KERNEL);
> > > +	if (!mgr->groups) {
> > > +		kfree(mgr);
> > > +
> > > +		return NULL;
> > > +	}
> > > +
> > > +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> > > +	ref_tracker_dir_init(&mgr->ref_tracker, 16, "dptun");
> > > +#endif
> > > +
> > > +	for (i = 0; i < max_group_count; i++) {
> > > +		if (!init_group(mgr, &mgr->groups[i])) {
> > > +			destroy_mgr(mgr);
> > > +
> > > +			return NULL;
> > > +		}
> > > +
> > > +		mgr->group_count++;
> > > +	}
> > > +
> > > +	return mgr;
> > > +}
> > > +EXPORT_SYMBOL(drm_dp_tunnel_mgr_create);
> > > +
> > > +/**
> > > + * drm_dp_tunnel_mgr_destroy - Destroy DP tunnel manager
> > > + * @mgr: Tunnel manager object
> > > + *
> > > + * Destroy the tunnel manager.
> > > + */
> > > +void drm_dp_tunnel_mgr_destroy(struct drm_dp_tunnel_mgr *mgr)
> > > +{
> > > +	destroy_mgr(mgr);
> > > +}
> > > +EXPORT_SYMBOL(drm_dp_tunnel_mgr_destroy);
> > > diff --git a/include/drm/display/drm_dp.h b/include/drm/display/drm_dp.h
> > > index 281afff6ee4e5..8bfd5d007be8d 100644
> > > --- a/include/drm/display/drm_dp.h
> > > +++ b/include/drm/display/drm_dp.h
> > > @@ -1382,6 +1382,66 @@
> > >  #define DP_HDCP_2_2_REG_STREAM_TYPE_OFFSET	0x69494
> > >  #define DP_HDCP_2_2_REG_DBG_OFFSET		0x69518
> > >  
> > > +/* DP-tunneling */
> > > +#define DP_TUNNELING_OUI				0xe0000
> > > +#define  DP_TUNNELING_OUI_BYTES				3
> > > +
> > > +#define DP_TUNNELING_DEV_ID				0xe0003
> > > +#define  DP_TUNNELING_DEV_ID_BYTES			6
> > > +
> > > +#define DP_TUNNELING_HW_REV				0xe0009
> > > +#define  DP_TUNNELING_HW_REV_MAJOR_SHIFT		4
> > > +#define  DP_TUNNELING_HW_REV_MAJOR_MASK			(0xf << DP_TUNNELING_HW_REV_MAJOR_SHIFT)
> > > +#define  DP_TUNNELING_HW_REV_MINOR_SHIFT		0
> > > +#define  DP_TUNNELING_HW_REV_MINOR_MASK			(0xf << DP_TUNNELING_HW_REV_MINOR_SHIFT)
> > > +
> > > +#define DP_TUNNELING_SW_REV_MAJOR			0xe000a
> > > +#define DP_TUNNELING_SW_REV_MINOR			0xe000b
> > > +
> > > +#define DP_TUNNELING_CAPABILITIES			0xe000d
> > > +#define  DP_IN_BW_ALLOCATION_MODE_SUPPORT		(1 << 7)
> > > +#define  DP_PANEL_REPLAY_OPTIMIZATION_SUPPORT		(1 << 6)
> > > +#define  DP_TUNNELING_SUPPORT				(1 << 0)
> > > +
> > > +#define DP_IN_ADAPTER_INFO				0xe000e
> > > +#define  DP_IN_ADAPTER_NUMBER_BITS			7
> > > +#define  DP_IN_ADAPTER_NUMBER_MASK			((1 << DP_IN_ADAPTER_NUMBER_BITS) - 1)
> > > +
> > > +#define DP_USB4_DRIVER_ID				0xe000f
> > > +#define  DP_USB4_DRIVER_ID_BITS				4
> > > +#define  DP_USB4_DRIVER_ID_MASK				((1 << DP_USB4_DRIVER_ID_BITS) - 1)
> > > +
> > > +#define DP_USB4_DRIVER_BW_CAPABILITY			0xe0020
> > > +#define  DP_USB4_DRIVER_BW_ALLOCATION_MODE_SUPPORT	(1 << 7)
> > > +
> > > +#define DP_IN_ADAPTER_TUNNEL_INFORMATION		0xe0021
> > > +#define  DP_GROUP_ID_BITS				3
> > > +#define  DP_GROUP_ID_MASK				((1 << DP_GROUP_ID_BITS) - 1)
> > > +
> > > +#define DP_BW_GRANULARITY				0xe0022
> > > +#define  DP_BW_GRANULARITY_MASK				0x3
> > > +
> > > +#define DP_ESTIMATED_BW					0xe0023
> > > +#define DP_ALLOCATED_BW					0xe0024
> > > +
> > > +#define DP_TUNNELING_STATUS				0xe0025
> > > +#define  DP_BW_ALLOCATION_CAPABILITY_CHANGED		(1 << 3)
> > > +#define  DP_ESTIMATED_BW_CHANGED			(1 << 2)
> > > +#define  DP_BW_REQUEST_SUCCEEDED			(1 << 1)
> > > +#define  DP_BW_REQUEST_FAILED				(1 << 0)
> > > +
> > > +#define DP_TUNNELING_MAX_LINK_RATE			0xe0028
> > > +
> > > +#define DP_TUNNELING_MAX_LANE_COUNT			0xe0029
> > > +#define  DP_TUNNELING_MAX_LANE_COUNT_MASK		0x1f
> > > +
> > > +#define DP_DPTX_BW_ALLOCATION_MODE_CONTROL		0xe0030
> > > +#define  DP_DISPLAY_DRIVER_BW_ALLOCATION_MODE_ENABLE	(1 << 7)
> > > +#define  DP_UNMASK_BW_ALLOCATION_IRQ			(1 << 6)
> > > +
> > > +#define DP_REQUEST_BW					0xe0031
> > > +#define  MAX_DP_REQUEST_BW				255
> > > +
> > >  /* LTTPR: Link Training (LT)-tunable PHY Repeaters */
> > >  #define DP_LT_TUNABLE_PHY_REPEATER_FIELD_DATA_STRUCTURE_REV 0xf0000 /* 1.3 */
> > >  #define DP_MAX_LINK_RATE_PHY_REPEATER			    0xf0001 /* 1.4a */
> > > diff --git a/include/drm/display/drm_dp_tunnel.h b/include/drm/display/drm_dp_tunnel.h
> > > new file mode 100644
> > > index 0000000000000..f6449b1b4e6e9
> > > --- /dev/null
> > > +++ b/include/drm/display/drm_dp_tunnel.h
> > > @@ -0,0 +1,270 @@
> > > +/* SPDX-License-Identifier: MIT */
> > > +/*
> > > + * Copyright © 2023 Intel Corporation
> > > + */
> > > +
> > > +#ifndef __DRM_DP_TUNNEL_H__
> > > +#define __DRM_DP_TUNNEL_H__
> > > +
> > > +#include <linux/err.h>
> > > +#include <linux/errno.h>
> > > +#include <linux/types.h>
> > > +
> > > +struct drm_dp_aux;
> > > +
> > > +struct drm_device;
> > > +
> > > +struct drm_atomic_state;
> > > +struct drm_dp_tunnel_mgr;
> > > +struct drm_dp_tunnel_state;
> > > +
> > > +struct ref_tracker;
> > > +
> > > +struct drm_dp_tunnel_ref {
> > > +	struct drm_dp_tunnel *tunnel;
> > > +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> > > +	struct ref_tracker *tracker;
> > > +#endif
> > > +};
> > > +
> > > +#ifdef CONFIG_DRM_DISPLAY_DP_TUNNEL
> > > +
> > > +struct drm_dp_tunnel *
> > > +drm_dp_tunnel_get_untracked(struct drm_dp_tunnel *tunnel);
> > > +void drm_dp_tunnel_put_untracked(struct drm_dp_tunnel *tunnel);
> > > +
> > > +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
> > > +struct drm_dp_tunnel *
> > > +drm_dp_tunnel_get(struct drm_dp_tunnel *tunnel, struct ref_tracker **tracker);
> > > +
> > > +void
> > > +drm_dp_tunnel_put(struct drm_dp_tunnel *tunnel, struct ref_tracker **tracker);
> > > +#else
> > > +#define drm_dp_tunnel_get(tunnel, tracker) \
> > > +	drm_dp_tunnel_get_untracked(tunnel)
> > > +
> > > +#define drm_dp_tunnel_put(tunnel, tracker) \
> > > +	drm_dp_tunnel_put_untracked(tunnel)
> > > +
> > > +#endif
> > > +
> > > +static inline void drm_dp_tunnel_ref_get(struct drm_dp_tunnel *tunnel,
> > > +					   struct drm_dp_tunnel_ref *tunnel_ref)
> > > +{
> > > +	tunnel_ref->tunnel = drm_dp_tunnel_get(tunnel, &tunnel_ref->tracker);
> > > +}
> > > +
> > > +static inline void drm_dp_tunnel_ref_put(struct drm_dp_tunnel_ref *tunnel_ref)
> > > +{
> > > +	drm_dp_tunnel_put(tunnel_ref->tunnel, &tunnel_ref->tracker);
> > > +}
> > > +
> > > +struct drm_dp_tunnel *
> > > +drm_dp_tunnel_detect(struct drm_dp_tunnel_mgr *mgr,
> > > +		     struct drm_dp_aux *aux);
> > > +int drm_dp_tunnel_destroy(struct drm_dp_tunnel *tunnel);
> > > +
> > > +int drm_dp_tunnel_enable_bw_alloc(struct drm_dp_tunnel *tunnel);
> > > +int drm_dp_tunnel_disable_bw_alloc(struct drm_dp_tunnel *tunnel);
> > > +bool drm_dp_tunnel_bw_alloc_is_enabled(const struct drm_dp_tunnel *tunnel);
> > > +int drm_dp_tunnel_alloc_bw(struct drm_dp_tunnel *tunnel, int bw);
> > > +int drm_dp_tunnel_check_state(struct drm_dp_tunnel *tunnel);
> > > +int drm_dp_tunnel_update_state(struct drm_dp_tunnel *tunnel);
> > > +
> > > +void drm_dp_tunnel_set_io_error(struct drm_dp_tunnel *tunnel);
> > > +
> > > +int drm_dp_tunnel_handle_irq(struct drm_dp_tunnel_mgr *mgr,
> > > +			     struct drm_dp_aux *aux);
> > > +
> > > +int drm_dp_tunnel_max_dprx_rate(const struct drm_dp_tunnel *tunnel);
> > > +int drm_dp_tunnel_max_dprx_lane_count(const struct drm_dp_tunnel *tunnel);
> > > +int drm_dp_tunnel_available_bw(const struct drm_dp_tunnel *tunnel);
> > > +
> > > +const char *drm_dp_tunnel_name(const struct drm_dp_tunnel *tunnel);
> > > +
> > > +struct drm_dp_tunnel_state *
> > > +drm_dp_tunnel_atomic_get_state(struct drm_atomic_state *state,
> > > +			       struct drm_dp_tunnel *tunnel);
> > > +struct drm_dp_tunnel_state *
> > > +drm_dp_tunnel_atomic_get_new_state(struct drm_atomic_state *state,
> > > +				   const struct drm_dp_tunnel *tunnel);
> > > +
> > > +void drm_dp_tunnel_atomic_clear_state(struct drm_dp_tunnel_state *tunnel_state);
> > > +
> > > +int drm_dp_tunnel_atomic_set_stream_bw(struct drm_atomic_state *state,
> > > +				       struct drm_dp_tunnel *tunnel,
> > > +				       u8 stream_id, int bw);
> > > +int drm_dp_tunnel_atomic_get_group_streams_in_state(struct drm_atomic_state *state,
> > > +						    const struct drm_dp_tunnel *tunnel,
> > > +						    u32 *stream_mask);
> > > +
> > > +int drm_dp_tunnel_atomic_check_stream_bws(struct drm_atomic_state *state,
> > > +					  u32 *failed_stream_mask);
> > > +
> > > +int drm_dp_tunnel_atomic_get_tunnel_bw(const struct drm_dp_tunnel_state *tunnel_state);
> > > +
> > > +struct drm_dp_tunnel_mgr *
> > > +drm_dp_tunnel_mgr_create(struct drm_device *dev, int max_group_count);
> > > +void drm_dp_tunnel_mgr_destroy(struct drm_dp_tunnel_mgr *mgr);
> > > +
> > > +#else
> > > +
> > > +static inline struct drm_dp_tunnel *
> > > +drm_dp_tunnel_get_untracked(struct drm_dp_tunnel *tunnel)
> > > +{
> > > +	return NULL;
> > > +}
> > > +
> > > +static inline void
> > > +drm_dp_tunnel_put_untracked(struct drm_dp_tunnel *tunnel) {}
> > > +
> > > +static inline struct drm_dp_tunnel *
> > > +drm_dp_tunnel_get(struct drm_dp_tunnel *tunnel, struct ref_tracker **tracker)
> > > +{
> > > +	return NULL;
> > > +}
> > > +
> > > +static inline void
> > > +drm_dp_tunnel_put(struct drm_dp_tunnel *tunnel, struct ref_tracker **tracker) {}
> > > +
> > > +static inline void drm_dp_tunnel_ref_get(struct drm_dp_tunnel *tunnel,
> > > +					   struct drm_dp_tunnel_ref *tunnel_ref) {}
> > > +static inline void drm_dp_tunnel_ref_put(struct drm_dp_tunnel_ref *tunnel_ref) {}
> > > +
> > > +static inline struct drm_dp_tunnel *
> > > +drm_dp_tunnel_detect(struct drm_dp_tunnel_mgr *mgr,
> > > +		     struct drm_dp_aux *aux)
> > > +{
> > > +	return ERR_PTR(-EOPNOTSUPP);
> > > +}
> > > +
> > > +static inline int
> > > +drm_dp_tunnel_destroy(struct drm_dp_tunnel *tunnel)
> > > +{
> > > +	return 0;
> > > +}
> > > +
> > > +static inline int drm_dp_tunnel_enable_bw_alloc(struct drm_dp_tunnel *tunnel)
> > > +{
> > > +	return -EOPNOTSUPP;
> > > +}
> > > +
> > > +static inline int drm_dp_tunnel_disable_bw_alloc(struct drm_dp_tunnel *tunnel)
> > > +{
> > > +	return -EOPNOTSUPP;
> > > +}
> > > +
> > > +static inline bool drm_dp_tunnel_bw_alloc_is_enabled(const struct drm_dp_tunnel *tunnel)
> > > +{
> > > +	return false;
> > > +}
> > > +
> > > +static inline int
> > > +drm_dp_tunnel_alloc_bw(struct drm_dp_tunnel *tunnel, int bw)
> > > +{
> > > +	return -EOPNOTSUPP;
> > > +}
> > > +
> > > +static inline int
> > > +drm_dp_tunnel_check_state(struct drm_dp_tunnel *tunnel)
> > > +{
> > > +	return -EOPNOTSUPP;
> > > +}
> > > +
> > > +static inline int
> > > +drm_dp_tunnel_update_state(struct drm_dp_tunnel *tunnel)
> > > +{
> > > +	return -EOPNOTSUPP;
> > > +}
> > > +
> > > +static inline void drm_dp_tunnel_set_io_error(struct drm_dp_tunnel *tunnel) {}
> > > +static inline int
> > > +drm_dp_tunnel_handle_irq(struct drm_dp_tunnel_mgr *mgr,
> > > +			 struct drm_dp_aux *aux)
> > > +{
> > > +	return -EOPNOTSUPP;
> > > +}
> > > +
> > > +static inline int
> > > +drm_dp_tunnel_max_dprx_rate(const struct drm_dp_tunnel *tunnel)
> > > +{
> > > +	return 0;
> > > +}
> > > +
> > > +static inline int
> > > +drm_dp_tunnel_max_dprx_lane_count(const struct drm_dp_tunnel *tunnel)
> > > +{
> > > +	return 0;
> > > +}
> > > +
> > > +static inline int
> > > +drm_dp_tunnel_available_bw(const struct drm_dp_tunnel *tunnel)
> > > +{
> > > +	return -1;
> > > +}
> > > +
> > > +static inline const char *
> > > +drm_dp_tunnel_name(const struct drm_dp_tunnel *tunnel)
> > > +{
> > > +	return NULL;
> > > +}
> > > +
> > > +static inline struct drm_dp_tunnel_state *
> > > +drm_dp_tunnel_atomic_get_state(struct drm_atomic_state *state,
> > > +			       struct drm_dp_tunnel *tunnel)
> > > +{
> > > +	return ERR_PTR(-EOPNOTSUPP);
> > > +}
> > > +
> > > +static inline struct drm_dp_tunnel_state *
> > > +drm_dp_tunnel_atomic_get_new_state(struct drm_atomic_state *state,
> > > +				   const struct drm_dp_tunnel *tunnel)
> > > +{
> > > +	return ERR_PTR(-EOPNOTSUPP);
> > > +}
> > > +
> > > +static inline void
> > > +drm_dp_tunnel_atomic_clear_state(struct drm_dp_tunnel_state *tunnel_state) {}
> > > +
> > > +static inline int
> > > +drm_dp_tunnel_atomic_set_stream_bw(struct drm_atomic_state *state,
> > > +				   struct drm_dp_tunnel *tunnel,
> > > +				   u8 stream_id, int bw)
> > > +{
> > > +	return -EOPNOTSUPP;
> > > +}
> > > +
> > > +static inline int
> > > +drm_dp_tunnel_atomic_get_group_streams_in_state(struct drm_atomic_state *state,
> > > +						const struct drm_dp_tunnel *tunnel,
> > > +						u32 *stream_mask)
> > > +{
> > > +	return -EOPNOTSUPP;
> > > +}
> > > +
> > > +static inline int
> > > +drm_dp_tunnel_atomic_check_stream_bws(struct drm_atomic_state *state,
> > > +				      u32 *failed_stream_mask)
> > > +{
> > > +	return -EOPNOTSUPP;
> > > +}
> > > +
> > > +static inline int
> > > +drm_dp_tunnel_atomic_get_tunnel_bw(const struct drm_dp_tunnel_state *tunnel_state)
> > > +{
> > > +	return 0;
> > > +}
> > > +
> > > +static inline struct drm_dp_tunnel_mgr *
> > > +drm_dp_tunnel_mgr_create(struct drm_device *dev, int max_group_count)
> > > +{
> > > +	return ERR_PTR(-EOPNOTSUPP);
> > > +}
> > > +
> > > +static inline
> > > +void drm_dp_tunnel_mgr_destroy(struct drm_dp_tunnel_mgr *mgr) {}
> > > +
> > > +
> > > +#endif /* CONFIG_DRM_DISPLAY_DP_TUNNEL */
> > > +
> > > +#endif /* __DRM_DP_TUNNEL_H__ */
> > > -- 
> > > 2.39.2
> > 
> > -- 
> > Ville Syrjälä
> > Intel

^ permalink raw reply	[flat|nested] 61+ messages in thread
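
A minimal sketch of how a driver's atomic check phase could feed the
helpers quoted above; the example_* function name and its parameters are
illustrative only, not taken from the patch:

	#include <drm/display/drm_dp_tunnel.h>

	static int example_check_tunnel_bw(struct drm_atomic_state *state,
					   struct drm_dp_tunnel *tunnel,
					   u8 stream_id, int stream_bw)
	{
		u32 failed_streams;
		int ret;

		/* Record this stream's BW requirement in the tunnel's atomic state. */
		ret = drm_dp_tunnel_atomic_set_stream_bw(state, tunnel,
							 stream_id, stream_bw);
		if (ret)
			return ret;

		/*
		 * Fails with -ENOSPC if the streams in a tunnel group need
		 * more than the group's estimated available BW.
		 */
		return drm_dp_tunnel_atomic_check_stream_bws(state, &failed_streams);
	}

On -ENOSPC the failed_streams mask tells the caller which streams would
have to be recomputed with a lower link rate or bpp before retrying the
check.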

* Re: [PATCH 02/19] drm/dp: Add support for DP tunneling
  2024-02-07 21:02       ` Imre Deak
@ 2024-02-08 15:18         ` Ville Syrjälä
  0 siblings, 0 replies; 61+ messages in thread
From: Ville Syrjälä @ 2024-02-08 15:18 UTC (permalink / raw)
  To: Imre Deak; +Cc: intel-gfx, dri-devel, Mika Westerberg

On Wed, Feb 07, 2024 at 11:02:27PM +0200, Imre Deak wrote:
> On Wed, Feb 07, 2024 at 10:48:53PM +0200, Imre Deak wrote:
> > On Wed, Feb 07, 2024 at 10:02:18PM +0200, Ville Syrjälä wrote:
> > > > [...]
> > > > +static int
> > > > +drm_dp_tunnel_atomic_check_group_bw(struct drm_dp_tunnel_group_state *new_group_state,
> > > > +				    u32 *failed_stream_mask)
> > > > +{
> > > > +	struct drm_dp_tunnel_group *group = to_group(new_group_state->base.obj);
> > > > +	struct drm_dp_tunnel_state *new_tunnel_state;
> > > > +	u32 group_stream_mask = 0;
> > > > +	int group_bw = 0;
> > > > +
> > > > +	for_each_tunnel_state(new_group_state, new_tunnel_state) {
> > > > +		struct drm_dp_tunnel *tunnel = new_tunnel_state->tunnel_ref.tunnel;
> > > > +		int max_dprx_bw = get_max_dprx_bw(tunnel);
> > > > +		int tunnel_bw = drm_dp_tunnel_atomic_get_tunnel_bw(new_tunnel_state);
> > > > +
> > > > +		tun_dbg(tunnel,
> > > > +			"%sRequired %d/%d Mb/s total for tunnel.\n",
> > > > +			tunnel_bw > max_dprx_bw ? "Not enough BW: " : "",
> > > > +			DPTUN_BW_ARG(tunnel_bw),
> > > > +			DPTUN_BW_ARG(max_dprx_bw));
> > > > +
> > > > +		if (tunnel_bw > max_dprx_bw) {
> > > 
> > > I'm a bit confused why we're checking this here. Aren't we already
> > > checking this somewhere else?
> > 
> > Ah, yes this should be checked already by the encoder compute config +
> > the MST link BW check. It can be removed, thanks.
> 
> Though neither of those is guaranteed for drivers in general, so
> shouldn't it still be here?

I suppose there isn't any real harm in doing it here too.

> 
> > > > +			*failed_stream_mask = new_tunnel_state->stream_mask;
> > > > +			return -ENOSPC;
> > > > +		}
> > > > +
> > > > +		group_bw += min(roundup(tunnel_bw, tunnel->bw_granularity),
> > > > +				max_dprx_bw);
> > > > +		group_stream_mask |= new_tunnel_state->stream_mask;
> > > > +	}
> > > > +
> > > > +	tun_grp_dbg(group,
> > > > +		    "%sRequired %d/%d Mb/s total for tunnel group.\n",
> > > > +		    group_bw > group->available_bw ? "Not enough BW: " : "",
> > > > +		    DPTUN_BW_ARG(group_bw),
> > > > +		    DPTUN_BW_ARG(group->available_bw));
> > > > +
> > > > +	if (group_bw > group->available_bw) {
> > > > +		*failed_stream_mask = group_stream_mask;
> > > > +		return -ENOSPC;
> > > > +	}
> > > > +
> > > > +	return 0;
> > > > +}
> > > > +

-- 
Ville Syrjälä
Intel

^ permalink raw reply	[flat|nested] 61+ messages in thread
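
A corresponding sketch for the non-atomic side of the API, again with
hypothetical example_* names and error handling trimmed; it assumes the
same <drm/display/drm_dp_tunnel.h> header and that drm_dp_tunnel_detect()
reports errors via ERR_PTR, as its CONFIG-disabled stub does:

	static struct drm_dp_tunnel *
	example_detect_tunnel(struct drm_dp_tunnel_mgr *mgr, struct drm_dp_aux *aux)
	{
		struct drm_dp_tunnel *tunnel = drm_dp_tunnel_detect(mgr, aux);

		if (IS_ERR(tunnel))
			return tunnel;

		/* Returns -EOPNOTSUPP if the tunnel lacks BW allocation support. */
		drm_dp_tunnel_enable_bw_alloc(tunnel);

		return tunnel;
	}

	static void example_handle_tunnel_irq(struct drm_dp_tunnel_mgr *mgr,
					      struct drm_dp_tunnel *tunnel,
					      struct drm_dp_aux *aux)
	{
		/* A return value of 1 means the tunnel status changed. */
		if (drm_dp_tunnel_handle_irq(mgr, aux) != 1)
			return;

		/* 1 means DPRX caps or the group's available BW changed. */
		if (drm_dp_tunnel_update_state(tunnel) == 1) {
			/* e.g. schedule a modeset retry for the affected streams */
		}
	}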

end of thread

Thread overview: 61+ messages
2024-01-23 10:28 [PATCH 00/19] drm/i915: Add Display Port tunnel BW allocation support Imre Deak
2024-01-23 10:28 ` [PATCH 01/19] drm/dp: Add drm_dp_max_dprx_data_rate() Imre Deak
2024-01-26 11:36   ` Ville Syrjälä
2024-01-26 13:28     ` Imre Deak
2024-02-06 20:23       ` Shankar, Uma
2024-01-23 10:28 ` [PATCH 02/19] drm/dp: Add support for DP tunneling Imre Deak
2024-01-31 12:50   ` Hogander, Jouni
2024-01-31 13:58     ` Imre Deak
2024-01-31 16:09   ` Ville Syrjälä
2024-01-31 18:49     ` Imre Deak
2024-02-05 16:13       ` Ville Syrjälä
2024-02-05 17:15         ` Imre Deak
2024-02-05 22:17           ` Ville Syrjälä
2024-02-07 20:02   ` Ville Syrjälä
2024-02-07 20:48     ` Imre Deak
2024-02-07 21:02       ` Imre Deak
2024-02-08 15:18         ` Ville Syrjälä
2024-02-07 22:04       ` Imre Deak
2024-01-23 10:28 ` [PATCH 03/19] drm/i915/dp: Add support to notify MST connectors to retry modesets Imre Deak
2024-01-29 10:36   ` Hogander, Jouni
2024-01-29 11:00     ` Imre Deak
2024-01-23 10:28 ` [PATCH 04/19] drm/i915/dp: Use drm_dp_max_dprx_data_rate() Imre Deak
2024-02-06 20:27   ` Shankar, Uma
2024-01-23 10:28 ` [PATCH 05/19] drm/i915/dp: Factor out intel_dp_config_required_rate() Imre Deak
2024-02-06 20:32   ` Shankar, Uma
2024-01-23 10:28 ` [PATCH 06/19] drm/i915/dp: Export intel_dp_max_common_rate/lane_count() Imre Deak
2024-02-06 20:34   ` Shankar, Uma
2024-01-23 10:28 ` [PATCH 07/19] drm/i915/dp: Factor out intel_dp_update_sink_caps() Imre Deak
2024-02-06 20:35   ` Shankar, Uma
2024-01-23 10:28 ` [PATCH 08/19] drm/i915/dp: Factor out intel_dp_read_dprx_caps() Imre Deak
2024-02-06 20:36   ` Shankar, Uma
2024-01-23 10:28 ` [PATCH 09/19] drm/i915/dp: Add intel_dp_max_link_data_rate() Imre Deak
2024-02-06 20:37   ` Shankar, Uma
2024-01-23 10:28 ` [PATCH 10/19] drm/i915/dp: Add way to get active pipes with syncing commits Imre Deak
2024-01-23 10:28 ` [PATCH 11/19] drm/i915/dp: Add support for DP tunnel BW allocation Imre Deak
2024-02-05 22:47   ` Ville Syrjälä
2024-02-06 11:58     ` Imre Deak
2024-02-06 23:08   ` Ville Syrjälä
2024-02-07 12:09     ` Imre Deak
2024-01-23 10:28 ` [PATCH 12/19] drm/i915/dp: Add DP tunnel atomic state and check BW limit Imre Deak
2024-02-05 16:11   ` Ville Syrjälä
2024-02-05 17:52     ` Imre Deak
2024-01-23 10:28 ` [PATCH 13/19] drm/i915/dp: Account for tunnel BW limit in intel_dp_max_link_data_rate() Imre Deak
2024-02-06 20:42   ` Shankar, Uma
2024-01-23 10:28 ` [PATCH 14/19] drm/i915/dp: Compute DP tunel BW during encoder state computation Imre Deak
2024-02-06 20:44   ` Shankar, Uma
2024-02-06 23:25   ` Ville Syrjälä
2024-02-07 14:25     ` Imre Deak
2024-01-23 10:28 ` [PATCH 15/19] drm/i915/dp: Allocate/free DP tunnel BW in the encoder enable/disable hooks Imre Deak
2024-02-06 20:45   ` Shankar, Uma
2024-01-23 10:28 ` [PATCH 16/19] drm/i915/dp: Handle DP tunnel IRQs Imre Deak
2024-01-23 10:28 ` [PATCH 17/19] drm/i915/dp: Call intel_dp_sync_state() always for DDI DP encoders Imre Deak
2024-02-06 20:46   ` Shankar, Uma
2024-01-23 10:28 ` [PATCH 18/19] drm/i915/dp: Suspend/resume DP tunnels Imre Deak
2024-01-31 16:18   ` Ville Syrjälä
2024-01-31 16:59     ` Imre Deak
2024-01-23 10:28 ` [PATCH 19/19] drm/i915/dp: Enable DP tunnel BW allocation mode Imre Deak
2024-01-23 18:52 ` ✗ Fi.CI.CHECKPATCH: warning for drm/i915: Add Display Port tunnel BW allocation support Patchwork
2024-01-23 18:52 ` ✗ Fi.CI.SPARSE: " Patchwork
2024-01-23 19:05 ` ✓ Fi.CI.BAT: success " Patchwork
2024-01-24  3:31 ` ✓ Fi.CI.IGT: " Patchwork
