* [PATCH v12 0/5] Add data-lanes and link-frequencies to dp_out endpoint
@ 2022-12-13 21:44 ` Kuogee Hsieh
  0 siblings, 0 replies; 46+ messages in thread
From: Kuogee Hsieh @ 2022-12-13 21:44 UTC (permalink / raw)
  To: dri-devel, robdclark, sean, swboyd, dianders, vkoul, daniel,
	agross, dmitry.baryshkov, andersson, konrad.dybcio, robh+dt,
	krzysztof.kozlowski+dt, devicetree, airlied
  Cc: Kuogee Hsieh, quic_abhinavk, quic_sbillaka, freedreno,
	linux-arm-msm, linux-kernel

Add both the data-lanes and link-frequencies properties to the dp_out
endpoint, along with the supporting functions in the DP driver.

Kuogee Hsieh (5):
  arm64: dts: qcom: add data-lanes and link-frequencies into dp_out
    endpoint
  dt-bindings: msm/dp: add data-lanes and link-frequencies property
  drm/msm/dp: parser data-lanes as property of dp_out endpoint
  drm/msm/dp: parser link-frequencies as property of dp_out endpoint
  drm/msm/dp: add support of max dp link rate

 .../bindings/display/msm/dp-controller.yaml        | 30 ++++++++++++-
 arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi       |  6 ++-
 arch/arm64/boot/dts/qcom/sc7280-herobrine.dtsi     |  6 ++-
 drivers/gpu/drm/msm/dp/dp_display.c                |  4 ++
 drivers/gpu/drm/msm/dp/dp_panel.c                  |  7 +--
 drivers/gpu/drm/msm/dp/dp_panel.h                  |  1 +
 drivers/gpu/drm/msm/dp/dp_parser.c                 | 52 ++++++++++++++++++----
 drivers/gpu/drm/msm/dp/dp_parser.h                 |  2 +
 8 files changed, 93 insertions(+), 15 deletions(-)

-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


* [PATCH v12 1/5] arm64: dts: qcom: add data-lanes and link-frequencies into dp_out endpoint
  2022-12-13 21:44 ` Kuogee Hsieh
@ 2022-12-13 21:44   ` Kuogee Hsieh
  -1 siblings, 0 replies; 46+ messages in thread
From: Kuogee Hsieh @ 2022-12-13 21:44 UTC (permalink / raw)
  To: dri-devel, robdclark, sean, swboyd, dianders, vkoul, daniel,
	agross, dmitry.baryshkov, andersson, konrad.dybcio, robh+dt,
	krzysztof.kozlowski+dt, devicetree, airlied
  Cc: Kuogee Hsieh, quic_abhinavk, quic_sbillaka, freedreno,
	linux-arm-msm, linux-kernel

Move the data-lanes property from the mdss_dp node to the dp_out endpoint,
and add a link-frequencies property to the dp_out endpoint as well. The last
frequency listed in link-frequencies is the maximum link rate supported by
the DP controller.
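
For reference, the four values used in the dtsi hunks below are the standard
DisplayPort per-lane link rates written in Hz, so the sc7180-trogdor list caps
the link at HBR2 while the sc7280-herobrine list allows up to HBR3. A minimal
illustrative sketch (not part of the patch) of that mapping:

#include <stdio.h>

/* Standard DP link rates in Hz, as listed in link-frequencies; the last
 * entry of the property is taken as the board's maximum link rate.
 */
static const unsigned long long dp_rates_hz[] = {
	1620000000ULL,	/* RBR,  1.62 Gbps */
	2700000000ULL,	/* HBR,  2.70 Gbps */
	5400000000ULL,	/* HBR2, 5.40 Gbps */
	8100000000ULL,	/* HBR3, 8.10 Gbps */
};

int main(void)
{
	printf("max supported link rate: %llu Hz\n",
	       dp_rates_hz[sizeof(dp_rates_hz) / sizeof(dp_rates_hz[0]) - 1]);
	return 0;
}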

Changes in v5:
-- revert changes at sc7180.dtsi and sc7280.dtsi
-- add &dp_out to sc7180-trogdor.dtsi and sc7280-herobrine.dtsi

Changes in v6:
-- add data-lanes and link-frequencies to yaml

Changes in v7:
-- change 160000000 to 1620000000
-- separate yaml to different patch

Changes in v8:
-- correct Bjorn mail address to kernel.org

Changes in v9:
-- use symbol rate (Hz) for link-frequencies at dp_out in sc7180-trogdor.dtsi

Signed-off-by: Kuogee Hsieh <quic_khsieh@quicinc.com>
---
 arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi   | 6 +++++-
 arch/arm64/boot/dts/qcom/sc7280-herobrine.dtsi | 6 +++++-
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi b/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi
index eae22e6..93b0cde 100644
--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi
+++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi
@@ -814,7 +814,11 @@ hp_i2c: &i2c9 {
 	status = "okay";
 	pinctrl-names = "default";
 	pinctrl-0 = <&dp_hot_plug_det>;
-	data-lanes = <0 1>;
+};
+
+&dp_out {
+    data-lanes = <0  1>;
+    link-frequencies = /bits/ 64 <1620000000 2700000000 5400000000>;
 };
 
 &pm6150_adc {
diff --git a/arch/arm64/boot/dts/qcom/sc7280-herobrine.dtsi b/arch/arm64/boot/dts/qcom/sc7280-herobrine.dtsi
index c11e371..3c7a9d8 100644
--- a/arch/arm64/boot/dts/qcom/sc7280-herobrine.dtsi
+++ b/arch/arm64/boot/dts/qcom/sc7280-herobrine.dtsi
@@ -442,7 +442,11 @@ ap_i2c_tpm: &i2c14 {
 	status = "okay";
 	pinctrl-names = "default";
 	pinctrl-0 = <&dp_hot_plug_det>;
-	data-lanes = <0 1>;
+};
+
+&dp_out {
+	data-lanes = <0  1>;
+	link-frequencies = /bits/ 64 <1620000000 2700000000 5400000000 8100000000>;
 };
 
 &mdss_mdp {
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


* [PATCH v12 2/5] dt-bindings: msm/dp: add data-lanes and link-frequencies property
  2022-12-13 21:44 ` Kuogee Hsieh
@ 2022-12-13 21:44   ` Kuogee Hsieh
  -1 siblings, 0 replies; 46+ messages in thread
From: Kuogee Hsieh @ 2022-12-13 21:44 UTC (permalink / raw)
  To: dri-devel, robdclark, sean, swboyd, dianders, vkoul, daniel,
	agross, dmitry.baryshkov, andersson, konrad.dybcio, robh+dt,
	krzysztof.kozlowski+dt, devicetree, airlied
  Cc: Kuogee Hsieh, quic_abhinavk, quic_sbillaka, freedreno,
	linux-arm-msm, linux-kernel

Add both the data-lanes and link-frequencies properties to the output
(dp_out) endpoint of the DP controller binding.

Changes in v7:
-- split yaml out of dtsi patch
-- link-frequencies from link rate to symbol rate
-- deprecation of old data-lanes property

Changes in v8:
-- correct Bjorn mail address to kernel.org

Changes in v10:
-- add menu item to data-lanes and link-frequencies

Changes in v11:
-- add endpoint property at port@1

Changes in v12:
-- use enum for item at data-lanes and link-frequencies

Signed-off-by: Kuogee Hsieh <quic_khsieh@quicinc.com>
---
 .../bindings/display/msm/dp-controller.yaml        | 30 ++++++++++++++++++++--
 1 file changed, 28 insertions(+), 2 deletions(-)

diff --git a/Documentation/devicetree/bindings/display/msm/dp-controller.yaml b/Documentation/devicetree/bindings/display/msm/dp-controller.yaml
index f2515af..8fb9fa5 100644
--- a/Documentation/devicetree/bindings/display/msm/dp-controller.yaml
+++ b/Documentation/devicetree/bindings/display/msm/dp-controller.yaml
@@ -81,6 +81,7 @@ properties:
 
   data-lanes:
     $ref: /schemas/types.yaml#/definitions/uint32-array
+    deprecated: true
     minItems: 1
     maxItems: 4
     items:
@@ -96,14 +97,37 @@ properties:
 
   ports:
     $ref: /schemas/graph.yaml#/properties/ports
+
     properties:
       port@0:
-        $ref: /schemas/graph.yaml#/properties/port
+        $ref: "/schemas/graph.yaml#/$defs/port-base"
         description: Input endpoint of the controller
+        properties:
+          endpoint:
+            $ref: /schemas/media/video-interfaces.yaml#
 
       port@1:
-        $ref: /schemas/graph.yaml#/properties/port
+        $ref: "/schemas/graph.yaml#/$defs/port-base"
         description: Output endpoint of the controller
+        properties:
+          endpoint:
+            $ref: /schemas/media/video-interfaces.yaml#
+            properties:
+              data-lanes:
+                minItems: 1
+                maxItems: 4
+                items:
+                  enum: [ 0, 1, 2, 3 ]
+
+              link-frequencies:
+                minItems: 1
+                maxItems: 4
+                items:
+                  enum: [ 1620000000, 2700000000, 5400000000, 8100000000 ]
+
+    required:
+      - port@0
+      - port@1
 
 required:
   - compatible
@@ -193,6 +217,8 @@ examples:
                 reg = <1>;
                 endpoint {
                     remote-endpoint = <&typec>;
+                    data-lanes = <0 1>;
+                    link-frequencies = /bits/ 64 <1620000000 2700000000 5400000000 8100000000>; 
                 };
             };
         };
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


* [PATCH v12 3/5] drm/msm/dp: parser data-lanes as property of dp_out endpoint
  2022-12-13 21:44 ` Kuogee Hsieh
@ 2022-12-13 21:44   ` Kuogee Hsieh
  -1 siblings, 0 replies; 46+ messages in thread
From: Kuogee Hsieh @ 2022-12-13 21:44 UTC (permalink / raw)
  To: dri-devel, robdclark, sean, swboyd, dianders, vkoul, daniel,
	agross, dmitry.baryshkov, andersson, konrad.dybcio, robh+dt,
	krzysztof.kozlowski+dt, devicetree, airlied
  Cc: Kuogee Hsieh, quic_abhinavk, quic_sbillaka, freedreno,
	linux-arm-msm, linux-kernel

Add the capability to parse data-lanes as a property of the dp_out endpoint.
Also retain the original capability to parse data-lanes as a property of the
mdss_dp node, to handle the legacy case.
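
An illustrative sketch (not the driver code; the hypothetical lanes_from_*
values stand in for the results of drm_of_get_data_lanes_count_ep() and
drm_of_get_data_lanes_count()) of the resulting lookup order: endpoint
property first, legacy mdss_dp property next, hardware maximum of 4 lanes
as the default:

#include <stdio.h>

#define DP_MAX_NUM_DP_LANES	4

int main(void)
{
	int lanes_from_endpoint = 2;	/* e.g. data-lanes = <0 1> in dp_out */
	int lanes_from_node = -1;	/* legacy property absent */
	int max_dp_lanes;

	if (lanes_from_endpoint > 0)
		max_dp_lanes = lanes_from_endpoint;
	else if (lanes_from_node > 0)
		max_dp_lanes = lanes_from_node;
	else
		max_dp_lanes = DP_MAX_NUM_DP_LANES;

	printf("max_dp_lanes = %d\n", max_dp_lanes);	/* prints 2 */
	return 0;
}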

Changes in v6:
-- first patch after split parser patch into two

Changes in v7:
-- check "data-lanes" from endpoint first

Signed-off-by: Kuogee Hsieh <quic_khsieh@quicinc.com>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
---
 drivers/gpu/drm/msm/dp/dp_parser.c | 25 +++++++++++++++++--------
 1 file changed, 17 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/msm/dp/dp_parser.c b/drivers/gpu/drm/msm/dp/dp_parser.c
index dd73221..b5f7e70 100644
--- a/drivers/gpu/drm/msm/dp/dp_parser.c
+++ b/drivers/gpu/drm/msm/dp/dp_parser.c
@@ -94,16 +94,25 @@ static int dp_parser_ctrl_res(struct dp_parser *parser)
 static int dp_parser_misc(struct dp_parser *parser)
 {
 	struct device_node *of_node = parser->pdev->dev.of_node;
-	int len;
-
-	len = drm_of_get_data_lanes_count(of_node, 1, DP_MAX_NUM_DP_LANES);
-	if (len < 0) {
-		DRM_WARN("Invalid property \"data-lanes\", default max DP lanes = %d\n",
-			 DP_MAX_NUM_DP_LANES);
-		len = DP_MAX_NUM_DP_LANES;
+	int cnt;
+
+	/*
+	 * data-lanes is the property of dp_out endpoint
+	 */
+	cnt = drm_of_get_data_lanes_count_ep(of_node, 1, 0, 1, DP_MAX_NUM_DP_LANES);
+	if (cnt > 0)
+		parser->max_dp_lanes = cnt;
+	else {
+		/*
+		 * legacy code, data-lanes is the property of mdss_dp node
+		 */
+		cnt = drm_of_get_data_lanes_count(of_node, 1, DP_MAX_NUM_DP_LANES);
+		if (cnt > 0)
+			parser->max_dp_lanes = cnt;
+		else
+			parser->max_dp_lanes = DP_MAX_NUM_DP_LANES; /* 4 lanes */
 	}
 
-	parser->max_dp_lanes = len;
 	return 0;
 }
 
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


* [PATCH v12 4/5] drm/msm/dp: parser link-frequencies as property of dp_out endpoint
  2022-12-13 21:44 ` Kuogee Hsieh
@ 2022-12-13 21:44   ` Kuogee Hsieh
  -1 siblings, 0 replies; 46+ messages in thread
From: Kuogee Hsieh @ 2022-12-13 21:44 UTC (permalink / raw)
  To: dri-devel, robdclark, sean, swboyd, dianders, vkoul, daniel,
	agross, dmitry.baryshkov, andersson, konrad.dybcio, robh+dt,
	krzysztof.kozlowski+dt, devicetree, airlied
  Cc: Kuogee Hsieh, quic_abhinavk, quic_sbillaka, freedreno,
	linux-arm-msm, linux-kernel

Add the capability to parse and retrieve the maximum supported DP link rate
from the link-frequencies property of the dp_out endpoint.
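
For illustration, a standalone sketch (not part of the patch; the 5400000000
value is assumed from the dtsi examples) of the conversion performed in
dp_parser_link_frequencies(): the Hz value of the last link-frequencies entry,
divided by 10 and then by 1000, ends up on the same scale as the values
returned by drm_dp_bw_code_to_link_rate(), e.g. 540000 for HBR2:

#include <stdio.h>

int main(void)
{
	/* Last link-frequencies entry from DT, in Hz (HBR2 here). */
	unsigned long long frequency = 5400000000ULL;

	frequency /= 10;	/* 540000000 */
	frequency /= 1000;	/* 540000, matches drm_dp_bw_code_to_link_rate(DP_LINK_BW_5_4) */

	printf("max_dp_link_rate = %llu\n", frequency);
	return 0;
}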

Changes in v6:
-- second patch after split parser patch into two patches

Changes in v7:
-- without checking cnt against DP_MAX_NUM_DP_LANES to retrieve link rate

Changes in v9:
-- separate parser link-frequencies out of data-lanes

Changes in v10:
-- add dp_parser_link_frequencies()

Changes in v11:
-- return 0 if(!endpoint)

Changes in v12:
-- replace khz with kbytes at dp_parser.h

Signed-off-by: Kuogee Hsieh <quic_khsieh@quicinc.com>
---
 drivers/gpu/drm/msm/dp/dp_parser.c | 27 +++++++++++++++++++++++++++
 drivers/gpu/drm/msm/dp/dp_parser.h |  2 ++
 2 files changed, 29 insertions(+)

diff --git a/drivers/gpu/drm/msm/dp/dp_parser.c b/drivers/gpu/drm/msm/dp/dp_parser.c
index b5f7e70..5549495 100644
--- a/drivers/gpu/drm/msm/dp/dp_parser.c
+++ b/drivers/gpu/drm/msm/dp/dp_parser.c
@@ -91,6 +91,29 @@ static int dp_parser_ctrl_res(struct dp_parser *parser)
 	return 0;
 }
 
+static u32 dp_parser_link_frequencies(struct device_node *of_node)
+{
+	struct device_node *endpoint;
+	u64 frequency = 0;
+	int cnt = 0;
+
+	endpoint = of_graph_get_endpoint_by_regs(of_node, 1, 0); /* port@1 */
+	if (!endpoint)
+		return 0;
+
+	cnt = of_property_count_u64_elems(endpoint, "link-frequencies");
+
+	if (cnt > 0)
+		of_property_read_u64_index(endpoint, "link-frequencies",
+						cnt - 1, &frequency);
+	of_node_put(endpoint);
+
+	frequency /= 10;	/* from symbol rate to link rate */
+	frequency /= 1000;	/* kbytes */
+
+	return frequency;
+}
+
 static int dp_parser_misc(struct dp_parser *parser)
 {
 	struct device_node *of_node = parser->pdev->dev.of_node;
@@ -113,6 +136,10 @@ static int dp_parser_misc(struct dp_parser *parser)
 			parser->max_dp_lanes = DP_MAX_NUM_DP_LANES; /* 4 lanes */
 	}
 
+	parser->max_dp_link_rate = dp_parser_link_frequencies(of_node);
+	if (!parser->max_dp_link_rate)
+                parser->max_dp_link_rate = DP_LINK_RATE_HBR2;
+
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/msm/dp/dp_parser.h b/drivers/gpu/drm/msm/dp/dp_parser.h
index 866c1a8..526131ee 100644
--- a/drivers/gpu/drm/msm/dp/dp_parser.h
+++ b/drivers/gpu/drm/msm/dp/dp_parser.h
@@ -15,6 +15,7 @@
 #define DP_LABEL "MDSS DP DISPLAY"
 #define DP_MAX_PIXEL_CLK_KHZ	675000
 #define DP_MAX_NUM_DP_LANES	4
+#define DP_LINK_RATE_HBR2	540000 /* kbytes */
 
 enum dp_pm_type {
 	DP_CORE_PM,
@@ -119,6 +120,7 @@ struct dp_parser {
 	struct dp_io io;
 	struct dp_display_data disp_data;
 	u32 max_dp_lanes;
+	u32 max_dp_link_rate;
 	struct drm_bridge *next_bridge;
 
 	int (*parse)(struct dp_parser *parser);
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


* [PATCH v12 5/5] drm/msm/dp: add support of max dp link rate
  2022-12-13 21:44 ` Kuogee Hsieh
@ 2022-12-13 21:44   ` Kuogee Hsieh
  -1 siblings, 0 replies; 46+ messages in thread
From: Kuogee Hsieh @ 2022-12-13 21:44 UTC (permalink / raw)
  To: dri-devel, robdclark, sean, swboyd, dianders, vkoul, daniel,
	agross, dmitry.baryshkov, andersson, konrad.dybcio, robh+dt,
	krzysztof.kozlowski+dt, devicetree, airlied
  Cc: Kuogee Hsieh, quic_abhinavk, quic_sbillaka, freedreno,
	linux-arm-msm, linux-kernel

By default, HBR2 (5.4 Gbps) is the maximum link rate supported. This patch
uses the actual limit specified by DT and removes the artificial cap at
5.4 Gbps; HBR3 support follows as a consequence.
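
A rough sketch of the resulting behaviour (illustrative only, mirroring the
check added below rather than quoting it): a sink advertising HBR3 in its
DPCD is limited to whatever the dtsi allows via max_dp_link_rate, on the same
code-times-27000 scale used by drm_dp_bw_code_to_link_rate():

#include <stdio.h>

static unsigned int bw_code_to_link_rate(unsigned int bw_code)
{
	return bw_code * 27000;	/* 0x06 -> 162000 ... 0x1e -> 810000 */
}

int main(void)
{
	unsigned int rate = bw_code_to_link_rate(0x1e);	/* sink max: HBR3 */
	unsigned int max_dp_link_rate = 540000;		/* from link-frequencies */

	if (rate > max_dp_link_rate)
		rate = max_dp_link_rate;

	printf("negotiated link rate: %u\n", rate);	/* 540000 (HBR2) */
	return 0;
}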

Changes in v2:
-- add max link rate from dtsi

Changes in v3:
-- parser max_data_lanes and max_dp_link_rate from dp_out endpoint

Changes in v4:
-- delete unnecessary pr_err

Changes in v5:
-- split parser function into different patch

Changes in v9:
-- revised commit text

Signed-off-by: Kuogee Hsieh <quic_khsieh@quicinc.com>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
---
 drivers/gpu/drm/msm/dp/dp_display.c | 4 ++++
 drivers/gpu/drm/msm/dp/dp_panel.c   | 7 ++++---
 drivers/gpu/drm/msm/dp/dp_panel.h   | 1 +
 3 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
index bfd0aef..edee550 100644
--- a/drivers/gpu/drm/msm/dp/dp_display.c
+++ b/drivers/gpu/drm/msm/dp/dp_display.c
@@ -390,6 +390,10 @@ static int dp_display_process_hpd_high(struct dp_display_private *dp)
 	struct edid *edid;
 
 	dp->panel->max_dp_lanes = dp->parser->max_dp_lanes;
+	dp->panel->max_dp_link_rate = dp->parser->max_dp_link_rate;
+
+	drm_dbg_dp(dp->drm_dev, "max_lanes=%d max_link_rate=%d\n",
+		dp->panel->max_dp_lanes, dp->panel->max_dp_link_rate);
 
 	rc = dp_panel_read_sink_caps(dp->panel, dp->dp_display.connector);
 	if (rc)
diff --git a/drivers/gpu/drm/msm/dp/dp_panel.c b/drivers/gpu/drm/msm/dp/dp_panel.c
index 5149ceb..933fa9c 100644
--- a/drivers/gpu/drm/msm/dp/dp_panel.c
+++ b/drivers/gpu/drm/msm/dp/dp_panel.c
@@ -75,12 +75,13 @@ static int dp_panel_read_dpcd(struct dp_panel *dp_panel)
 	link_info->rate = drm_dp_bw_code_to_link_rate(dpcd[DP_MAX_LINK_RATE]);
 	link_info->num_lanes = dpcd[DP_MAX_LANE_COUNT] & DP_MAX_LANE_COUNT_MASK;
 
+	/* Limit data lanes from data-lanes of endpoint properity of dtsi */
 	if (link_info->num_lanes > dp_panel->max_dp_lanes)
 		link_info->num_lanes = dp_panel->max_dp_lanes;
 
-	/* Limit support upto HBR2 until HBR3 support is added */
-	if (link_info->rate >= (drm_dp_bw_code_to_link_rate(DP_LINK_BW_5_4)))
-		link_info->rate = drm_dp_bw_code_to_link_rate(DP_LINK_BW_5_4);
+	/* Limit link rate from link-frequencies of endpoint properity of dtsi */
+	if (link_info->rate > dp_panel->max_dp_link_rate)
+		link_info->rate = dp_panel->max_dp_link_rate;
 
 	drm_dbg_dp(panel->drm_dev, "version: %d.%d\n", major, minor);
 	drm_dbg_dp(panel->drm_dev, "link_rate=%d\n", link_info->rate);
diff --git a/drivers/gpu/drm/msm/dp/dp_panel.h b/drivers/gpu/drm/msm/dp/dp_panel.h
index d861197a..f04d021 100644
--- a/drivers/gpu/drm/msm/dp/dp_panel.h
+++ b/drivers/gpu/drm/msm/dp/dp_panel.h
@@ -50,6 +50,7 @@ struct dp_panel {
 
 	u32 vic;
 	u32 max_dp_lanes;
+	u32 max_dp_link_rate;
 
 	u32 max_bw_code;
 };
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


* Re: [PATCH v12 2/5] dt-bindings: msm/dp: add data-lanes and link-frequencies property
  2022-12-13 21:44   ` Kuogee Hsieh
@ 2022-12-13 22:02     ` Dmitry Baryshkov
  -1 siblings, 0 replies; 46+ messages in thread
From: Dmitry Baryshkov @ 2022-12-13 22:02 UTC (permalink / raw)
  To: Kuogee Hsieh, dri-devel, robdclark, sean, swboyd, dianders,
	vkoul, daniel, agross, andersson, konrad.dybcio, robh+dt,
	krzysztof.kozlowski+dt, devicetree, airlied
  Cc: quic_abhinavk, quic_sbillaka, freedreno, linux-arm-msm, linux-kernel



On 13 December 2022 23:44:05 EET, Kuogee Hsieh <quic_khsieh@quicinc.com> wrote:
>Add both data-lanes and link-frequencies property into endpoint
>
>Changes in v7:
>-- split yaml out of dtsi patch
>-- link-frequencies from link rate to symbol rate
>-- deprecation of old data-lanes property
>
>Changes in v8:
>-- correct Bjorn mail address to kernel.org
>
>Changes in v10:
>-- add menu item to data-lanes and link-frequecnis
>
>Changes in v11:
>-- add endpoint property at port@1
>
>Changes in v12:
>-- use enum for item at data-lanes and link-frequencies

This is not a full list of changes

>
>Signed-off-by: Kuogee Hsieh <quic_khsieh@quicinc.com>`
>---
> .../bindings/display/msm/dp-controller.yaml        | 30 ++++++++++++++++++++--
> 1 file changed, 28 insertions(+), 2 deletions(-)
>
>diff --git a/Documentation/devicetree/bindings/display/msm/dp-controller.yaml b/Documentation/devicetree/bindings/display/msm/dp-controller.yaml
>index f2515af..8fb9fa5 100644
>--- a/Documentation/devicetree/bindings/display/msm/dp-controller.yaml
>+++ b/Documentation/devicetree/bindings/display/msm/dp-controller.yaml
>@@ -81,6 +81,7 @@ properties:
> 
>   data-lanes:
>     $ref: /schemas/types.yaml#/definitions/uint32-array
>+    deprecated: true
>     minItems: 1
>     maxItems: 4
>     items:
>@@ -96,14 +97,37 @@ properties:
> 
>   ports:
>     $ref: /schemas/graph.yaml#/properties/ports
>+
>     properties:
>       port@0:
>-        $ref: /schemas/graph.yaml#/properties/port
>+        $ref: "/schemas/graph.yaml#/$defs/port-base"
>         description: Input endpoint of the controller
>+        properties:
>+          endpoint:
>+            $ref: /schemas/media/video-interfaces.yaml#


I'd keep it as is. There are no video properties at this side of the graph.

> 
>       port@1:
>-        $ref: /schemas/graph.yaml#/properties/port
>+        $ref: "/schemas/graph.yaml#/$defs/port-base"
>         description: Output endpoint of the controller
>+        properties:
>+          endpoint:
>+            $ref: /schemas/media/video-interfaces.yaml#
>+            properties:
>+              data-lanes:
>+                minItems: 1
>+                maxItems: 4
>+                items:
>+                  enum: [ 0, 1, 2, 3 ]
>+
>+              link-frequencies:
>+                minItems: 1
>+                maxItems: 4
>+                items:
>+                  enum: [ 1620000000, 2700000000, 5400000000, 8100000000 ]
>+
>+    required:
>+      - port@0
>+      - port@1
> 
> required:
>   - compatible
>@@ -193,6 +217,8 @@ examples:
>                 reg = <1>;
>                 endpoint {
>                     remote-endpoint = <&typec>;
>+                    data-lanes = <0 1>;
>+                    link-frequencies = /bits/ 64 <1620000000 2700000000 5400000000 8100000000>; 
>                 };
>             };
>         };

-- 
With best wishes
Dmitry

* Re: [PATCH v12 5/5] drm/msm/dp: add support of max dp link rate
  2022-12-13 21:44   ` Kuogee Hsieh
@ 2022-12-13 22:06     ` Dmitry Baryshkov
  -1 siblings, 0 replies; 46+ messages in thread
From: Dmitry Baryshkov @ 2022-12-13 22:06 UTC (permalink / raw)
  To: Kuogee Hsieh, dri-devel, robdclark, sean, swboyd, dianders,
	vkoul, daniel, agross, andersson, konrad.dybcio, robh+dt,
	krzysztof.kozlowski+dt, devicetree, airlied
  Cc: quic_abhinavk, quic_sbillaka, freedreno, linux-arm-msm, linux-kernel



On 13 December 2022 23:44:08 EET, Kuogee Hsieh <quic_khsieh@quicinc.com> wrote:
>By default, HBR2 (5.4G) is the max link link be supported. This patch uses the
>actual limit specified by DT and removes the artificial limitation to 5.4 Gbps.
>Supporting HBR3 is a consequence of that.
>
>Changes in v2:
>-- add max link rate from dtsi
>
>Changes in v3:
>-- parser max_data_lanes and max_dp_link_rate from dp_out endpoint
>
>Changes in v4:
>-- delete unnecessary pr_err
>
>Changes in v5:
>-- split parser function into different patch
>
>Changes in v9:
>-- revised commit test
>
>Signed-off-by: Kuogee Hsieh <quic_khsieh@quicinc.com>
>Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
>---
> drivers/gpu/drm/msm/dp/dp_display.c | 4 ++++
> drivers/gpu/drm/msm/dp/dp_panel.c   | 7 ++++---
> drivers/gpu/drm/msm/dp/dp_panel.h   | 1 +
> 3 files changed, 9 insertions(+), 3 deletions(-)
>
>diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
>index bfd0aef..edee550 100644
>--- a/drivers/gpu/drm/msm/dp/dp_display.c
>+++ b/drivers/gpu/drm/msm/dp/dp_display.c
>@@ -390,6 +390,10 @@ static int dp_display_process_hpd_high(struct dp_display_private *dp)
> 	struct edid *edid;
> 
> 	dp->panel->max_dp_lanes = dp->parser->max_dp_lanes;
>+	dp->panel->max_dp_link_rate = dp->parser->max_dp_link_rate;
>+
>+	drm_dbg_dp(dp->drm_dev, "max_lanes=%d max_link_rate=%d\n",
>+		dp->panel->max_dp_lanes, dp->panel->max_dp_link_rate);
> 
> 	rc = dp_panel_read_sink_caps(dp->panel, dp->dp_display.connector);
> 	if (rc)
>diff --git a/drivers/gpu/drm/msm/dp/dp_panel.c b/drivers/gpu/drm/msm/dp/dp_panel.c
>index 5149ceb..933fa9c 100644
>--- a/drivers/gpu/drm/msm/dp/dp_panel.c
>+++ b/drivers/gpu/drm/msm/dp/dp_panel.c
>@@ -75,12 +75,13 @@ static int dp_panel_read_dpcd(struct dp_panel *dp_panel)
> 	link_info->rate = drm_dp_bw_code_to_link_rate(dpcd[DP_MAX_LINK_RATE]);
> 	link_info->num_lanes = dpcd[DP_MAX_LANE_COUNT] & DP_MAX_LANE_COUNT_MASK;
> 
>+	/* Limit data lanes from data-lanes of endpoint properity of dtsi */

Nit: property. And below too.

> 	if (link_info->num_lanes > dp_panel->max_dp_lanes)
> 		link_info->num_lanes = dp_panel->max_dp_lanes;
> 
>-	/* Limit support upto HBR2 until HBR3 support is added */
>-	if (link_info->rate >= (drm_dp_bw_code_to_link_rate(DP_LINK_BW_5_4)))
>-		link_info->rate = drm_dp_bw_code_to_link_rate(DP_LINK_BW_5_4);
>+	/* Limit link rate from link-frequencies of endpoint properity of dtsi */
>+	if (link_info->rate > dp_panel->max_dp_link_rate)
>+		link_info->rate = dp_panel->max_dp_link_rate;
> 
> 	drm_dbg_dp(panel->drm_dev, "version: %d.%d\n", major, minor);
> 	drm_dbg_dp(panel->drm_dev, "link_rate=%d\n", link_info->rate);
>diff --git a/drivers/gpu/drm/msm/dp/dp_panel.h b/drivers/gpu/drm/msm/dp/dp_panel.h
>index d861197a..f04d021 100644
>--- a/drivers/gpu/drm/msm/dp/dp_panel.h
>+++ b/drivers/gpu/drm/msm/dp/dp_panel.h
>@@ -50,6 +50,7 @@ struct dp_panel {
> 
> 	u32 vic;
> 	u32 max_dp_lanes;
>+	u32 max_dp_link_rate;
> 
> 	u32 max_bw_code;
> };

-- 
With best wishes
Dmitry

* Re: [PATCH v12 4/5] drm/msm/dp: parser link-frequencies as property of dp_out endpoint
  2022-12-13 21:44   ` Kuogee Hsieh
@ 2022-12-13 22:07     ` Dmitry Baryshkov
  -1 siblings, 0 replies; 46+ messages in thread
From: Dmitry Baryshkov @ 2022-12-13 22:07 UTC (permalink / raw)
  To: Kuogee Hsieh, dri-devel, robdclark, sean, swboyd, dianders,
	vkoul, daniel, agross, andersson, konrad.dybcio, robh+dt,
	krzysztof.kozlowski+dt, devicetree, airlied
  Cc: quic_abhinavk, quic_sbillaka, freedreno, linux-arm-msm, linux-kernel



On 13 December 2022 23:44:07 EET, Kuogee Hsieh <quic_khsieh@quicinc.com> wrote:
>Add capability to parser and retrieve max DP link supported rate from
>link-frequencies property of dp_out endpoint.
>
>Changes in v6:
>-- second patch after split parser patch into two patches
>
>Changes in v7:
>-- without checking cnt against DP_MAX_NUM_DP_LANES to retrieve link rate
>
>Changes in v9:
>-- separate parser link-frequencies out of data-lanes
>
>Changes in v10:
>-- add dp_parser_link_frequencies()
>
>Changes in v11:
>-- return 0 if(!endpoint)
>
>Changes in v12:
>-- replace khz with kbytes at dp_parser.h
>
>Signed-off-by: Kuogee Hsieh <quic_khsieh@quicinc.com>


Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>


>---
> drivers/gpu/drm/msm/dp/dp_parser.c | 27 +++++++++++++++++++++++++++
> drivers/gpu/drm/msm/dp/dp_parser.h |  2 ++
> 2 files changed, 29 insertions(+)


-- 
With best wishes
Dmitry

* Re: [PATCH v12 2/5] dt-bindings: msm/dp: add data-lanes and link-frequencies property
  2022-12-13 21:44   ` Kuogee Hsieh
@ 2022-12-13 23:06     ` Stephen Boyd
  -1 siblings, 0 replies; 46+ messages in thread
From: Stephen Boyd @ 2022-12-13 23:06 UTC (permalink / raw)
  To: Kuogee Hsieh, agross, airlied, andersson, daniel, devicetree,
	dianders, dmitry.baryshkov, dri-devel, konrad.dybcio,
	krzysztof.kozlowski+dt, robdclark, robh+dt, sean, vkoul
  Cc: quic_abhinavk, quic_sbillaka, freedreno, linux-arm-msm, linux-kernel

Quoting Kuogee Hsieh (2022-12-13 13:44:05)
> Add both data-lanes and link-frequencies property into endpoint

Why do we care? Please tell us why it's important.

>
> Changes in v7:
> -- split yaml out of dtsi patch
> -- link-frequencies from link rate to symbol rate
> -- deprecation of old data-lanes property
>
> Changes in v8:
> -- correct Bjorn mail address to kernel.org
>
> Changes in v10:
> -- add menu item to data-lanes and link-frequecnis
>
> Changes in v11:
> -- add endpoint property at port@1
>
> Changes in v12:
> -- use enum for item at data-lanes and link-frequencies
>
> Signed-off-by: Kuogee Hsieh <quic_khsieh@quicinc.com>`
                                                       ^
Stray ` here? -----------------------------------------/

> ---
>  .../bindings/display/msm/dp-controller.yaml        | 30 ++++++++++++++++++++--
>  1 file changed, 28 insertions(+), 2 deletions(-)
>
> diff --git a/Documentation/devicetree/bindings/display/msm/dp-controller.yaml b/Documentation/devicetree/bindings/display/msm/dp-controller.yaml
> index f2515af..8fb9fa5 100644
> --- a/Documentation/devicetree/bindings/display/msm/dp-controller.yaml
> +++ b/Documentation/devicetree/bindings/display/msm/dp-controller.yaml
> @@ -96,14 +97,37 @@ properties:
>
>    ports:
>      $ref: /schemas/graph.yaml#/properties/ports
> +
>      properties:
>        port@0:
> -        $ref: /schemas/graph.yaml#/properties/port
> +        $ref: "/schemas/graph.yaml#/$defs/port-base"
>          description: Input endpoint of the controller
> +        properties:
> +          endpoint:
> +            $ref: /schemas/media/video-interfaces.yaml#
>
>        port@1:
> -        $ref: /schemas/graph.yaml#/properties/port
> +        $ref: "/schemas/graph.yaml#/$defs/port-base"

I thought the quotes weren't needed?

>          description: Output endpoint of the controller
> +        properties:
> +          endpoint:
> +            $ref: /schemas/media/video-interfaces.yaml#

Does this need 'unevaluatedProperties: false' here?

> +            properties:
> +              data-lanes:
> +                minItems: 1
> +                maxItems: 4
> +                items:
> +                  enum: [ 0, 1, 2, 3 ]
> +
> +              link-frequencies:
> +                minItems: 1
> +                maxItems: 4
> +                items:
> +                  enum: [ 1620000000, 2700000000, 5400000000, 8100000000 ]
> +
> +    required:
> +      - port@0
> +      - port@1
>
>  required:
>    - compatible
> @@ -193,6 +217,8 @@ examples:
>                  reg = <1>;
>                  endpoint {
>                      remote-endpoint = <&typec>;
> +                    data-lanes = <0 1>;
> +                    link-frequencies = /bits/ 64 <1620000000 2700000000 5400000000 8100000000>;
>                  };

So far we haven't used the output port on the DP controller in DT.

I'm still not clear on what we should do in general for DP because
there's a PHY that actually controls a lane count and lane mapping. In
my mental model of the SoC, this DP controller's output port is
connected to the DP PHY, which then sends the DP lanes out of the SoC to
the next downstream device (i.e. a DP connector or type-c muxer). Having
a remote-endpoint property with a phandle to typec doesn't fit my mental
model. I'd expect it to be the typec PHY.

That brings up the question: when we have 2 lanes vs. 4 lanes will we
duplicate the data-lanes property in the PHY binding? I suspect we'll
have to. Hopefully that sort of duplication is OK?

Similarly, we may have a redriver that limits the link-frequencies
property further (e.g. only support <= 2.7GHz). Having multiple
link-frequencies along the graph is OK, right? And isn't the
link-frequencies property known here by fact that the DP controller
tells us which SoC this controller is for, and thus we already know the
supported link frequencies?

Finally, I wonder if we should put any of this in the DP controller's
output endpoint, or if we can put these sorts of properties in the DP
PHY binding directly? Can't we do that and then when the DP controller
tries to set 4 lanes, the PHY immediately fails the call and the link
training algorithm does its thing and tries fewer lanes? And similarly,
if link-frequencies were in the PHY's binding, the PHY could fail to set
those frequencies during link training, returning an error to the DP
controller, letting the training move on to a lower frequency. If we did
that this patch series would largely be about modifying the PHY binding,
updating the PHY driver to enforce constraints, and handling errors
during link training in the DP controller (which may already be done? I
didn't check).

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v12 3/5] drm/msm/dp: parser data-lanes as property of dp_out endpoint
  2022-12-13 21:44   ` Kuogee Hsieh
@ 2022-12-13 23:11     ` Stephen Boyd
  -1 siblings, 0 replies; 46+ messages in thread
From: Stephen Boyd @ 2022-12-13 23:11 UTC (permalink / raw)
  To: Kuogee Hsieh, agross, airlied, andersson, daniel, devicetree,
	dianders, dmitry.baryshkov, dri-devel, konrad.dybcio,
	krzysztof.kozlowski+dt, robdclark, robh+dt, sean, vkoul
  Cc: quic_abhinavk, quic_sbillaka, freedreno, linux-arm-msm, linux-kernel

Quoting Kuogee Hsieh (2022-12-13 13:44:06)
> Add capability to parser data-lanes as property of dp_out endpoint.
> Also retain the original capability to parser data-lanes as property
> of mdss_dp node to handle legacy case.
>
> Changes in v6:
> -- first patch after split parser patch into two
>
> Changes in v7:
> -- check "data-lanes" from endpoint first
>
> Signed-off-by: Kuogee Hsieh <quic_khsieh@quicinc.com>
> Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>

Subject says "parser" when it probably should say "parse"?

> ---
>  drivers/gpu/drm/msm/dp/dp_parser.c | 25 +++++++++++++++++--------
>  1 file changed, 17 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/msm/dp/dp_parser.c b/drivers/gpu/drm/msm/dp/dp_parser.c
> index dd73221..b5f7e70 100644
> --- a/drivers/gpu/drm/msm/dp/dp_parser.c
> +++ b/drivers/gpu/drm/msm/dp/dp_parser.c
> @@ -94,16 +94,25 @@ static int dp_parser_ctrl_res(struct dp_parser *parser)
>  static int dp_parser_misc(struct dp_parser *parser)
>  {
>         struct device_node *of_node = parser->pdev->dev.of_node;
> -       int len;
> -
> -       len = drm_of_get_data_lanes_count(of_node, 1, DP_MAX_NUM_DP_LANES);
> -       if (len < 0) {
> -               DRM_WARN("Invalid property \"data-lanes\", default max DP lanes = %d\n",
> -                        DP_MAX_NUM_DP_LANES);
> -               len = DP_MAX_NUM_DP_LANES;
> +       int cnt;
> +
> +       /*
> +        * data-lanes is the property of dp_out endpoint
> +        */
> +       cnt = drm_of_get_data_lanes_count_ep(of_node, 1, 0, 1, DP_MAX_NUM_DP_LANES);
> +       if (cnt > 0)
> +               parser->max_dp_lanes = cnt;
> +       else {

Please add brackets to the above if to match the else.

> +               /*
> +                * legacy code, data-lanes is the property of mdss_dp node
> +                */
> +               cnt = drm_of_get_data_lanes_count(of_node, 1, DP_MAX_NUM_DP_LANES);
> +               if (cnt > 0)
> +                       parser->max_dp_lanes = cnt;
> +               else
> +                       parser->max_dp_lanes = DP_MAX_NUM_DP_LANES; /* 4 lanes */
>         }
>
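
I.e. roughly this (only a sketch of the requested brace style, same
logic as the hunk above):

	cnt = drm_of_get_data_lanes_count_ep(of_node, 1, 0, 1, DP_MAX_NUM_DP_LANES);
	if (cnt > 0) {
		parser->max_dp_lanes = cnt;
	} else {
		/* legacy code, data-lanes is the property of mdss_dp node */
		cnt = drm_of_get_data_lanes_count(of_node, 1, DP_MAX_NUM_DP_LANES);
		if (cnt > 0)
			parser->max_dp_lanes = cnt;
		else
			parser->max_dp_lanes = DP_MAX_NUM_DP_LANES; /* 4 lanes */
	}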

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v12 4/5] drm/msm/dp: parser link-frequencies as property of dp_out endpoint
  2022-12-13 21:44   ` Kuogee Hsieh
@ 2022-12-13 23:18     ` Stephen Boyd
  -1 siblings, 0 replies; 46+ messages in thread
From: Stephen Boyd @ 2022-12-13 23:18 UTC (permalink / raw)
  To: Kuogee Hsieh, agross, airlied, andersson, daniel, devicetree,
	dianders, dmitry.baryshkov, dri-devel, konrad.dybcio,
	krzysztof.kozlowski+dt, robdclark, robh+dt, sean, vkoul
  Cc: quic_abhinavk, quic_sbillaka, freedreno, linux-arm-msm, linux-kernel

Quoting Kuogee Hsieh (2022-12-13 13:44:07)
> Add capability to parser and retrieve max DP link supported rate from

to parse

> link-frequencies property of dp_out endpoint.
>
> Changes in v6:
> -- second patch after split parser patch into two patches
>
> Changes in v7:
> -- without checking cnt against DP_MAX_NUM_DP_LANES to retrieve link rate
>
> Changes in v9:
> -- separate parser link-frequencies out of data-lanes
>
> Changes in v10:
> -- add dp_parser_link_frequencies()
>
> Changes in v11:
> -- return 0 if(!endpoint)
>
> Changes in v12:
> -- replace khz with kbytes at dp_parser.h
>
> Signed-off-by: Kuogee Hsieh <quic_khsieh@quicinc.com>
> ---

Same parser in subject.

>  drivers/gpu/drm/msm/dp/dp_parser.c | 27 +++++++++++++++++++++++++++
>  drivers/gpu/drm/msm/dp/dp_parser.h |  2 ++
>  2 files changed, 29 insertions(+)
>
> diff --git a/drivers/gpu/drm/msm/dp/dp_parser.c b/drivers/gpu/drm/msm/dp/dp_parser.c
> index b5f7e70..5549495 100644
> --- a/drivers/gpu/drm/msm/dp/dp_parser.c
> +++ b/drivers/gpu/drm/msm/dp/dp_parser.c
> @@ -91,6 +91,29 @@ static int dp_parser_ctrl_res(struct dp_parser *parser)
>         return 0;
>  }
>
> +static u32 dp_parser_link_frequencies(struct device_node *of_node)
> +{
> +       struct device_node *endpoint;
> +       u64 frequency = 0;
> +       int cnt = 0;

'cnt' doesn't need to be initialized here.

> +
> +       endpoint = of_graph_get_endpoint_by_regs(of_node, 1, 0); /* port@1 */
> +       if (!endpoint)
> +               return 0;
> +
> +       cnt = of_property_count_u64_elems(endpoint, "link-frequencies");
> +
> +       if (cnt > 0)
> +               of_property_read_u64_index(endpoint, "link-frequencies",
> +                                               cnt - 1, &frequency);
> +       of_node_put(endpoint);
> +
> +       frequency /= 10;        /* from symbol rate to link rate */
> +       frequency /= 1000;      /* kbytes */

Use do_div(frequency, 10 * 1000)? If you want comments it could maybe be
like this:

	do_div(frequency,
	       10 * /* from symbol rate to link rate */
	       1000); /* kbytes */

> +
> +       return frequency;
> +}
> +
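
Folded together, the helper would then look roughly like this (untested
sketch of the hunk above with the do_div() suggestion applied):

	static u32 dp_parser_link_frequencies(struct device_node *of_node)
	{
		struct device_node *endpoint;
		u64 frequency = 0;
		int cnt;

		endpoint = of_graph_get_endpoint_by_regs(of_node, 1, 0); /* port@1 */
		if (!endpoint)
			return 0;

		cnt = of_property_count_u64_elems(endpoint, "link-frequencies");
		if (cnt > 0)
			of_property_read_u64_index(endpoint, "link-frequencies",
						   cnt - 1, &frequency);
		of_node_put(endpoint);

		do_div(frequency,
		       10 *    /* from symbol rate to link rate */
		       1000);  /* kbytes */

		return frequency;
	}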

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v12 1/5] arm64: dts: qcom: add data-lanes and link-freuencies into dp_out endpoint
  2022-12-13 21:44   ` Kuogee Hsieh
@ 2022-12-13 23:31     ` Dmitry Baryshkov
  -1 siblings, 0 replies; 46+ messages in thread
From: Dmitry Baryshkov @ 2022-12-13 23:31 UTC (permalink / raw)
  To: Kuogee Hsieh, dri-devel, robdclark, sean, swboyd, dianders,
	vkoul, daniel, agross, andersson, konrad.dybcio, robh+dt,
	krzysztof.kozlowski+dt, devicetree, airlied
  Cc: quic_abhinavk, quic_sbillaka, freedreno, linux-arm-msm, linux-kernel



On 13 December 2022 23:44:04 EET, Kuogee Hsieh <quic_khsieh@quicinc.com> wrote:
>Move data-lanes property from mdss_dp node to dp_out endpoint. Also
>add link-frequencies property into dp_out endpoint as well. The last
>frequency specified at link-frequencies will be the max link rate
>supported by DP.
>
>Changes in v5:
>-- revert changes at sc7180.dtsi and sc7280.dtsi
>-- add &dp_out to sc7180-trogdor.dtsi and sc7280-herobrine.dtsi
>
>Changes in v6:
>-- add data-lanes and link-frequencies to yaml
>
>Changes in v7:
>-- change 160000000 to 1620000000
>-- separate yaml to different patch
>
>Changes in v8:
>-- correct Bjorn mail address to kernel.org
>
>Changes in v9:
>-- use symbol rate (hz) for link-frequencies at dp_out at sc7180_trogdor.dtsi
>
>Signed-off-by: Kuogee Hsieh <quic_khsieh@quicinc.com>
>---
> arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi   | 6 +++++-
> arch/arm64/boot/dts/qcom/sc7280-herobrine.dtsi | 6 +++++-
> 2 files changed, 10 insertions(+), 2 deletions(-)
>
>diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi b/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi
>index eae22e6..93b0cde 100644
>--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi
>+++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi
>@@ -814,7 +814,11 @@ hp_i2c: &i2c9 {
> 	status = "okay";
> 	pinctrl-names = "default";
> 	pinctrl-0 = <&dp_hot_plug_det>;
>-	data-lanes = <0 1>;
>+};
>+
>+&dp_out {
>+    data-lanes = <0  1>;

Quoting Krzysztof from v12:


Why adding two spaces? Just cut previous line and paste it, don't change it.

>+    link-frequencies = /bits/ 64 <1620000000 2700000000 5400000000>;
> };
> 
> &pm6150_adc {
>diff --git a/arch/arm64/boot/dts/qcom/sc7280-herobrine.dtsi b/arch/arm64/boot/dts/qcom/sc7280-herobrine.dtsi
>index c11e371..3c7a9d8 100644
>--- a/arch/arm64/boot/dts/qcom/sc7280-herobrine.dtsi
>+++ b/arch/arm64/boot/dts/qcom/sc7280-herobrine.dtsi
>@@ -442,7 +442,11 @@ ap_i2c_tpm: &i2c14 {
> 	status = "okay";
> 	pinctrl-names = "default";
> 	pinctrl-0 = <&dp_hot_plug_det>;
>-	data-lanes = <0 1>;
>+};
>+
>+&dp_out {
>+	data-lanes = <0  1>;
>+	link-frequencies = /bits/ 64 <1620000000 2700000000 5400000000 8100000000>;
> };
> 
> &mdss_mdp {
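
With the stray space dropped, the sc7180-trogdor hunk would presumably
end up as:

	&dp_out {
		data-lanes = <0 1>;
		link-frequencies = /bits/ 64 <1620000000 2700000000 5400000000>;
	};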

-- 
With best wishes
Dmitry

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v12 2/5] dt-bindings: msm/dp: add data-lanes and link-frequencies property
  2022-12-13 23:06     ` Stephen Boyd
@ 2022-12-14 22:56       ` Kuogee Hsieh
  -1 siblings, 0 replies; 46+ messages in thread
From: Kuogee Hsieh @ 2022-12-14 22:56 UTC (permalink / raw)
  To: Stephen Boyd, agross, airlied, andersson, daniel, devicetree,
	dianders, dmitry.baryshkov, dri-devel, konrad.dybcio,
	krzysztof.kozlowski+dt, robdclark, robh+dt, sean, vkoul
  Cc: quic_abhinavk, quic_sbillaka, freedreno, linux-arm-msm, linux-kernel


On 12/13/2022 3:06 PM, Stephen Boyd wrote:
> Quoting Kuogee Hsieh (2022-12-13 13:44:05)
>> Add both data-lanes and link-frequencies property into endpoint
> Why do we care? Please tell us why it's important.
>
>> Changes in v7:
>> -- split yaml out of dtsi patch
>> -- link-frequencies from link rate to symbol rate
>> -- deprecation of old data-lanes property
>>
>> Changes in v8:
>> -- correct Bjorn mail address to kernel.org
>>
>> Changes in v10:
>> -- add menu item to data-lanes and link-frequecnis
>>
>> Changes in v11:
>> -- add endpoint property at port@1
>>
>> Changes in v12:
>> -- use enum for item at data-lanes and link-frequencies
>>
>> Signed-off-by: Kuogee Hsieh <quic_khsieh@quicinc.com>`
>                                                         ^
> Stray ` here? -----------------------------------------/
>
>> ---
>>   .../bindings/display/msm/dp-controller.yaml        | 30 ++++++++++++++++++++--
>>   1 file changed, 28 insertions(+), 2 deletions(-)
>>
>> diff --git a/Documentation/devicetree/bindings/display/msm/dp-controller.yaml b/Documentation/devicetree/bindings/display/msm/dp-controller.yaml
>> index f2515af..8fb9fa5 100644
>> --- a/Documentation/devicetree/bindings/display/msm/dp-controller.yaml
>> +++ b/Documentation/devicetree/bindings/display/msm/dp-controller.yaml
>> @@ -96,14 +97,37 @@ properties:
>>
>>     ports:
>>       $ref: /schemas/graph.yaml#/properties/ports
>> +
>>       properties:
>>         port@0:
>> -        $ref: /schemas/graph.yaml#/properties/port
>> +        $ref: "/schemas/graph.yaml#/$defs/port-base"
>>           description: Input endpoint of the controller
>> +        properties:
>> +          endpoint:
>> +            $ref: /schemas/media/video-interfaces.yaml#
>>
>>         port@1:
>> -        $ref: /schemas/graph.yaml#/properties/port
>> +        $ref: "/schemas/graph.yaml#/$defs/port-base"
> I thought the quotes weren't needed?
>
>>           description: Output endpoint of the controller
>> +        properties:
>> +          endpoint:
>> +            $ref: /schemas/media/video-interfaces.yaml#
> Does this need 'unevaluatedProperties: false' here?
>
>> +            properties:
>> +              data-lanes:
>> +                minItems: 1
>> +                maxItems: 4
>> +                items:
>> +                  enum: [ 0, 1, 2, 3 ]
>> +
>> +              link-frequencies:
>> +                minItems: 1
>> +                maxItems: 4
>> +                items:
>> +                  enum: [ 1620000000, 2700000000, 5400000000, 8100000000 ]
>> +
>> +    required:
>> +      - port@0
>> +      - port@1
>>
>>   required:
>>     - compatible
>> @@ -193,6 +217,8 @@ examples:
>>                   reg = <1>;
>>                   endpoint {
>>                       remote-endpoint = <&typec>;
>> +                    data-lanes = <0 1>;
>> +                    link-frequencies = /bits/ 64 <1620000000 2700000000 5400000000 8100000000>;
>>                   };
> So far we haven't used the output port on the DP controller in DT.
>
> I'm still not clear on what we should do in general for DP because
> there's a PHY that actually controls a lane count and lane mapping. In
> my mental model of the SoC, this DP controller's output port is
> connected to the DP PHY, which then sends the DP lanes out of the SoC to
> the next downstream device (i.e. a DP connector or type-c muxer). Having
> a remote-endpoint property with a phandle to typec doesn't fit my mental
> model. I'd expect it to be the typec PHY.
ack
>
> That brings up the question: when we have 2 lanes vs. 4 lanes will we
> duplicate the data-lanes property in the PHY binding? I suspect we'll
> have to. Hopefully that sort of duplication is OK?
Currently we have a limitation of reserving 2 data lanes for usb2; I am
not sure that duplicating the property for 4 lanes will work
automatically.
>
> Similarly, we may have a redriver that limits the link-frequencies
> property further (e.g. only support <= 2.7GHz). Having multiple
> link-frequencies along the graph is OK, right? And isn't the
> link-frequencies property known here by fact that the DP controller
> tells us which SoC this controller is for, and thus we already know the
> supported link frequencies?
>
> Finally, I wonder if we should put any of this in the DP controller's
> output endpoint, or if we can put these sorts of properties in the DP
> PHY binding directly? Can't we do that and then when the DP controller
> tries to set 4 lanes, the PHY immediately fails the call and the link
> training algorithm does its thing and tries fewer lanes? And similarly,
> if link-frequencies were in the PHY's binding, the PHY could fail to set
> those frequencies during link training, returning an error to the DP
> controller, letting the training move on to a lower frequency. If we did
> that this patch series would largely be about modifying the PHY binding,
> updating the PHY driver to enforce constraints, and handling errors
> during link training in the DP controller (which may already be done? I
> didn't check).


The PHY/PLL has a different configuration depending on the link lane
count and rate.

It has to be set up before link training can start.

Once link training starts, there is no further interaction between the
controller and the PHY during the link training session.

Link training only happens between the DP controller and the sink,
since the link status is reported by the sink (read back directly from
the sink's DPCD registers).

To achieve link symbol lock, link training starts by reducing the link
rate down to the lowest rate; if it still fails, it reduces the lane
count, goes back to the highest rate, and starts training again.

It repeats the same process down to the lowest lane count (one lane);
if it still fails, it gives up and declares link training failed.

Therefore I think adding the data-lanes and link-frequencies properties
to the DP PHY binding directly will not help.
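
In other words, the fallback order described above is roughly (pseudo
code only, the helper names here are made up):

	for (lanes = max_lanes; lanes >= 1; lanes = lower_lane_count(lanes)) {
		for (rate = max_rate; rate >= min_rate; rate = lower_link_rate(rate)) {
			if (link_train(lanes, rate) == 0)
				return 0;	/* link symbol lock achieved */
		}
	}
	return -1;	/* give up at one lane, lowest rate */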



^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v12 2/5] dt-bindings: msm/dp: add data-lanes and link-frequencies property
  2022-12-14 22:56       ` Kuogee Hsieh
@ 2022-12-15  0:38         ` Stephen Boyd
  -1 siblings, 0 replies; 46+ messages in thread
From: Stephen Boyd @ 2022-12-15  0:38 UTC (permalink / raw)
  To: Kuogee Hsieh, agross, airlied, andersson, daniel, devicetree,
	dianders, dmitry.baryshkov, dri-devel, konrad.dybcio,
	krzysztof.kozlowski+dt, robdclark, robh+dt, sean, vkoul
  Cc: quic_abhinavk, quic_sbillaka, freedreno, linux-arm-msm, linux-kernel

Quoting Kuogee Hsieh (2022-12-14 14:56:23)
>
> On 12/13/2022 3:06 PM, Stephen Boyd wrote:
> > Quoting Kuogee Hsieh (2022-12-13 13:44:05)
> >> Add both data-lanes and link-frequencies property into endpoint
> > Why do we care? Please tell us why it's important.

Any response?

> >> @@ -193,6 +217,8 @@ examples:
> >>                   reg = <1>;
> >>                   endpoint {
> >>                       remote-endpoint = <&typec>;
> >> +                    data-lanes = <0 1>;
> >> +                    link-frequencies = /bits/ 64 <1620000000 2700000000 5400000000 8100000000>;
> >>                   };
> > So far we haven't used the output port on the DP controller in DT.
> >
> > I'm still not clear on what we should do in general for DP because
> > there's a PHY that actually controls a lane count and lane mapping. In
> > my mental model of the SoC, this DP controller's output port is
> > connected to the DP PHY, which then sends the DP lanes out of the SoC to
> > the next downstream device (i.e. a DP connector or type-c muxer). Having
> > a remote-endpoint property with a phandle to typec doesn't fit my mental
> > model. I'd expect it to be the typec PHY.
> ack
> >
> > That brings up the question: when we have 2 lanes vs. 4 lanes will we
> > duplicate the data-lanes property in the PHY binding? I suspect we'll
> > have to. Hopefully that sort of duplication is OK?
> Current we have limitation by reserve 2 data lanes for usb2, i am not
> sure duplication to 4 lanes will work automatically.
> >
> > Similarly, we may have a redriver that limits the link-frequencies
> > property further (e.g. only support <= 2.7GHz). Having multiple
> > link-frequencies along the graph is OK, right? And isn't the
> > link-frequencies property known here by fact that the DP controller
> > tells us which SoC this controller is for, and thus we already know the
> > supported link frequencies?
> >
> > Finally, I wonder if we should put any of this in the DP controller's
> > output endpoint, or if we can put these sorts of properties in the DP
> > PHY binding directly? Can't we do that and then when the DP controller
> > tries to set 4 lanes, the PHY immediately fails the call and the link
> > training algorithm does its thing and tries fewer lanes? And similarly,
> > if link-frequencies were in the PHY's binding, the PHY could fail to set
> > those frequencies during link training, returning an error to the DP
> > controller, letting the training move on to a lower frequency. If we did
> > that this patch series would largely be about modifying the PHY binding,
> > updating the PHY driver to enforce constraints, and handling errors
> > during link training in the DP controller (which may already be done? I
> > didn't check).
>
>
> phy/pll have different configuration base on link lanes and rate.
>
> it has to be set up before link training can start.
>
> Once link training start, then there are no any interactions between
> controller and phy during link training session.

What do you mean? The DP controller calls phy_configure() and changes
the link rate. The return value from phy_configure() should be checked
and link training should skip link rates that aren't supported and/or
number of lanes that aren't supported.

>
> Link training only happen between dp controller and sink since link
> status is reported by sink (read back from sink's dpcd register directly).
>
> T achieve link symbol locked, link training will start from reduce link
> rate until lowest rate, if it still failed, then it will reduce lanes
> with highest rate and start training  again.
>
> it will repeat same process until lowest lane (one lane), if it still
> failed, then it will give up and declare link training failed.

Yes, that describes the link training algorithm. I don't see why
phy_configure() return value can't be checked and either number of lanes
or link frequencies be checked. If only two lanes are supported, then
phy_configure() will fail for the 4 link rates and the algorithm will
reduce the number of lanes and go back to the highest rate. Then when
the highest rate isn't supported it will drop link rate until the link
rate is supported.
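
Concretely, something like this (local names are illustrative;
phy_configure() and struct phy_configure_opts_dp are the existing
generic PHY API):

	union phy_configure_opts opts = { };
	int ret;

	opts.dp.set_lanes = 1;
	opts.dp.lanes = lane_count;		/* 4, 2 or 1 */
	opts.dp.set_rate = 1;
	opts.dp.link_rate = link_rate_mbps;	/* 8100, 5400, 2700 or 1620 */

	ret = phy_configure(dp_phy, &opts);
	if (ret < 0) {
		/* PHY can't do this combination: drop the link rate (or
		 * the lane count) and retry before training the link.
		 */
	}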

>
> Therefore I think add data-lanes and link-frequencies properties in the
> DP PHY binding directly will not helps.
>

I didn't follow your logic. Sorry.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v12 2/5] dt-bindings: msm/dp: add data-lanes and link-frequencies property
  2022-12-15  0:38         ` Stephen Boyd
@ 2022-12-15 17:08           ` Kuogee Hsieh
  -1 siblings, 0 replies; 46+ messages in thread
From: Kuogee Hsieh @ 2022-12-15 17:08 UTC (permalink / raw)
  To: Stephen Boyd, agross, airlied, andersson, daniel, devicetree,
	dianders, dmitry.baryshkov, dri-devel, konrad.dybcio,
	krzysztof.kozlowski+dt, robdclark, robh+dt, sean, vkoul
  Cc: linux-arm-msm, quic_sbillaka, freedreno, quic_abhinavk, linux-kernel


On 12/14/2022 4:38 PM, Stephen Boyd wrote:
> Quoting Kuogee Hsieh (2022-12-14 14:56:23)
>> On 12/13/2022 3:06 PM, Stephen Boyd wrote:
>>> Quoting Kuogee Hsieh (2022-12-13 13:44:05)
>>>> Add both data-lanes and link-frequencies property into endpoint
>>> Why do we care? Please tell us why it's important.
> Any response?
Yes, I have already done that in my local patch.
>
>>>> @@ -193,6 +217,8 @@ examples:
>>>>                    reg = <1>;
>>>>                    endpoint {
>>>>                        remote-endpoint = <&typec>;
>>>> +                    data-lanes = <0 1>;
>>>> +                    link-frequencies = /bits/ 64 <1620000000 2700000000 5400000000 8100000000>;
>>>>                    };
>>> So far we haven't used the output port on the DP controller in DT.
>>>
>>> I'm still not clear on what we should do in general for DP because
>>> there's a PHY that actually controls a lane count and lane mapping. In
>>> my mental model of the SoC, this DP controller's output port is
>>> connected to the DP PHY, which then sends the DP lanes out of the SoC to
>>> the next downstream device (i.e. a DP connector or type-c muxer). Having
>>> a remote-endpoint property with a phandle to typec doesn't fit my mental
>>> model. I'd expect it to be the typec PHY.
>> ack
>>> That brings up the question: when we have 2 lanes vs. 4 lanes will we
>>> duplicate the data-lanes property in the PHY binding? I suspect we'll
>>> have to. Hopefully that sort of duplication is OK?
>> Current we have limitation by reserve 2 data lanes for usb2, i am not
>> sure duplication to 4 lanes will work automatically.
>>> Similarly, we may have a redriver that limits the link-frequencies
>>> property further (e.g. only support <= 2.7GHz). Having multiple
>>> link-frequencies along the graph is OK, right? And isn't the
>>> link-frequencies property known here by fact that the DP controller
>>> tells us which SoC this controller is for, and thus we already know the
>>> supported link frequencies?
>>>
>>> Finally, I wonder if we should put any of this in the DP controller's
>>> output endpoint, or if we can put these sorts of properties in the DP
>>> PHY binding directly? Can't we do that and then when the DP controller
>>> tries to set 4 lanes, the PHY immediately fails the call and the link
>>> training algorithm does its thing and tries fewer lanes? And similarly,
>>> if link-frequencies were in the PHY's binding, the PHY could fail to set
>>> those frequencies during link training, returning an error to the DP
>>> controller, letting the training move on to a lower frequency. If we did
>>> that this patch series would largely be about modifying the PHY binding,
>>> updating the PHY driver to enforce constraints, and handling errors
>>> during link training in the DP controller (which may already be done? I
>>> didn't check).
>>
>> phy/pll have different configuration base on link lanes and rate.
>>
>> it has to be set up before link training can start.
>>
>> Once link training start, then there are no any interactions between
>> controller and phy during link training session.
> What do you mean? The DP controller calls phy_configure() and changes
> the link rate. The return value from phy_configure() should be checked
> and link training should skip link rates that aren't supported and/or
> number of lanes that aren't supported.
>
>> Link training only happen between dp controller and sink since link
>> status is reported by sink (read back from sink's dpcd register directly).
>>
>> T achieve link symbol locked, link training will start from reduce link
>> rate until lowest rate, if it still failed, then it will reduce lanes
>> with highest rate and start training  again.
>>
>> it will repeat same process until lowest lane (one lane), if it still
>> failed, then it will give up and declare link training failed.
> Yes, that describes the link training algorithm. I don't see why
> phy_configure() return value can't be checked and either number of lanes
> or link frequencies be checked. If only two lanes are supported, then
> phy_configure() will fail for the 4 link rates and the algorithm will
> reduce the number of lanes and go back to the highest rate. Then when
> the highest rate isn't supported it will drop link rate until the link
> rate is supported.
>
>> Therefore I think add data-lanes and link-frequencies properties in the
>> DP PHY binding directly will not helps.
>>
> I didn't follow your logic. Sorry.

Sorry, I probably did not understand your proposal clearly. To
summarize:

1) Move both the data-lanes and link-frequencies properties from the dp
controller endpoint to the PHY.

2) phy_configure() returns success if both the data-lanes and
link-frequencies values are supported; otherwise it returns failure.

Are the above two summary items correct?

Currently phy_configure() is part of the link training process and is
called when the link lane count or rate changes.

However, since the current phy_configure() implementation always
returns 0, the return value is not checked.
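
For reference, on the PHY side that check would presumably be only a
few lines in the .configure op (the limit fields below are
hypothetical, parsed by the PHY driver from its own node):

	static int dp_phy_configure(struct phy *phy, union phy_configure_opts *opts)
	{
		struct dp_phy_priv *priv = phy_get_drvdata(phy); /* hypothetical */

		if (opts->dp.set_lanes && opts->dp.lanes > priv->max_lanes)
			return -EINVAL;	/* fewer lanes wired up per DT */
		if (opts->dp.set_rate && opts->dp.link_rate > priv->max_link_rate)
			return -EINVAL;	/* link frequency capped per DT */

		/* ... program the PLL and lane configuration as today ... */
		return 0;
	}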

This proposal is new; can we discuss it in more detail at the meeting
and decide whether or not to implement it?

Meanwhile, can we merge the current implementation (both data-lanes and
link-frequencies at the dp controller endpoint) first?





^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v12 2/5] dt-bindings: msm/dp: add data-lanes and link-frequencies property
  2022-12-15 17:08           ` Kuogee Hsieh
@ 2022-12-15 20:12             ` Stephen Boyd
  -1 siblings, 0 replies; 46+ messages in thread
From: Stephen Boyd @ 2022-12-15 20:12 UTC (permalink / raw)
  To: Kuogee Hsieh, agross, airlied, andersson, daniel, devicetree,
	dianders, dmitry.baryshkov, dri-devel, konrad.dybcio,
	krzysztof.kozlowski+dt, robdclark, robh+dt, sean, vkoul
  Cc: quic_abhinavk, quic_sbillaka, freedreno, linux-arm-msm, linux-kernel

Quoting Kuogee Hsieh (2022-12-15 09:08:04)
>
> On 12/14/2022 4:38 PM, Stephen Boyd wrote:
> > Quoting Kuogee Hsieh (2022-12-14 14:56:23)
> >> On 12/13/2022 3:06 PM, Stephen Boyd wrote:
> >>> Quoting Kuogee Hsieh (2022-12-13 13:44:05)
> >
> >> Therefore I think add data-lanes and link-frequencies properties in the
> >> DP PHY binding directly will not helps.
> >>
> > I didn't follow your logic. Sorry.
>
> Sorry, probably i did not understand your proposal clearly.
>
> 1) move both data-lanes and link-frequencies property from dp controller
> endpoint to phy
>
> 2) phy_configure() return succeed if both data-lanes and link
> frequencies are supported. otherwise return failed.
>
> is above two summary items correct?

Yes.

>
> Currently phy_configure()  is part of link training process and called
> if link lanes or rate changes.
>
> however, since current phy_configure() implementation always return 0,
> the return value is not checking.
>
> This proposal is new, can we discuss more detail at meeting and decide
> to implement it or not.
>
> Meanwhile can we merge current implementation (both data-lanes and
> link-frequqncies at dp controller end point) first?
>

I don't think we can merge this patch because it depends on a DT binding
change. If the PHY approach works then I'd prefer we just go with that.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v12 2/5] dt-bindings: msm/dp: add data-lanes and link-frequencies property
  2022-12-15  0:38         ` Stephen Boyd
@ 2022-12-15 21:12           ` Dmitry Baryshkov
  -1 siblings, 0 replies; 46+ messages in thread
From: Dmitry Baryshkov @ 2022-12-15 21:12 UTC (permalink / raw)
  To: Stephen Boyd, Kuogee Hsieh, agross, airlied, andersson, daniel,
	devicetree, dianders, dri-devel, konrad.dybcio,
	krzysztof.kozlowski+dt, robdclark, robh+dt, sean, vkoul
  Cc: quic_abhinavk, quic_sbillaka, freedreno, linux-arm-msm, linux-kernel

On 15/12/2022 02:38, Stephen Boyd wrote:
> Quoting Kuogee Hsieh (2022-12-14 14:56:23)
>>
>> On 12/13/2022 3:06 PM, Stephen Boyd wrote:
>>> Quoting Kuogee Hsieh (2022-12-13 13:44:05)
>>>> Add both data-lanes and link-frequencies property into endpoint
>>> Why do we care? Please tell us why it's important.
> 
> Any response?
> 
>>>> @@ -193,6 +217,8 @@ examples:
>>>>                    reg = <1>;
>>>>                    endpoint {
>>>>                        remote-endpoint = <&typec>;
>>>> +                    data-lanes = <0 1>;
>>>> +                    link-frequencies = /bits/ 64 <1620000000 2700000000 5400000000 8100000000>;
>>>>                    };
>>> So far we haven't used the output port on the DP controller in DT.
>>>
>>> I'm still not clear on what we should do in general for DP because
>>> there's a PHY that actually controls a lane count and lane mapping. In
>>> my mental model of the SoC, this DP controller's output port is
>>> connected to the DP PHY, which then sends the DP lanes out of the SoC to
>>> the next downstream device (i.e. a DP connector or type-c muxer). Having
>>> a remote-endpoint property with a phandle to typec doesn't fit my mental
>>> model. I'd expect it to be the typec PHY.
>> ack
>>>
>>> That brings up the question: when we have 2 lanes vs. 4 lanes will we
>>> duplicate the data-lanes property in the PHY binding? I suspect we'll
>>> have to. Hopefully that sort of duplication is OK?
>> Current we have limitation by reserve 2 data lanes for usb2, i am not
>> sure duplication to 4 lanes will work automatically.
>>>
>>> Similarly, we may have a redriver that limits the link-frequencies
>>> property further (e.g. only support <= 2.7GHz). Having multiple
>>> link-frequencies along the graph is OK, right? And isn't the
>>> link-frequencies property known here by fact that the DP controller
>>> tells us which SoC this controller is for, and thus we already know the
>>> supported link frequencies?
>>>
>>> Finally, I wonder if we should put any of this in the DP controller's
>>> output endpoint, or if we can put these sorts of properties in the DP
>>> PHY binding directly? Can't we do that and then when the DP controller
>>> tries to set 4 lanes, the PHY immediately fails the call and the link
>>> training algorithm does its thing and tries fewer lanes? And similarly,
>>> if link-frequencies were in the PHY's binding, the PHY could fail to set
>>> those frequencies during link training, returning an error to the DP
>>> controller, letting the training move on to a lower frequency. If we did
>>> that this patch series would largely be about modifying the PHY binding,
>>> updating the PHY driver to enforce constraints, and handling errors
>>> during link training in the DP controller (which may already be done? I
>>> didn't check).
>>
>>
>> phy/pll have different configuration base on link lanes and rate.
>>
>> it has to be set up before link training can start.
>>
>> Once link training start, then there are no any interactions between
>> controller and phy during link training session.
> 
> What do you mean? The DP controller calls phy_configure() and changes
> the link rate. The return value from phy_configure() should be checked
> and link training should skip link rates that aren't supported and/or
> number of lanes that aren't supported.

I'd toss another coin into the argument. We have previously discussed
using the link-frequencies property in the context of limiting link
speeds for DSI. There we have both hardware (SoC) limitations and
board limitations, since in some cases the DSI lanes cannot sustain the
higher rates. I still hope for these patches to materialize at some point.

For DP this is more or less the same story. We have the hardware
(SoC, PHY, etc.) limitations, but we also have board/device
limitations. For example, some boards might not be able to support
HBR3, e.g. because of the PCB design. And while it might be logical to
also add 'max bit rate' support to the eDP & combo PHYs, it
definitely makes sense to be able to limit the rate on the DP <->
`something' link.

Now, for all the practical purposes this `something' for the DP is the 
DP connector, the eDP panel or the USB-C mux (with the possible 
redrivers in the middle).

Thus I'd support Kuogee's proposal to have link-frequencies in the DP's 
outbound endpoint. This is the link which will be driven by the data 
stream from the Linux point of view. The PHY is linked through the 
'phys' property, but it doesn't participate in the USB-C (or in the 
connector/panel) graph.
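
For what it's worth, a rough sketch of how a driver could consume such
endpoint properties. This is not the code in Kuogee's series; only the
of_graph_/of_property_ helpers are the real API, the function and
parameter names are hypothetical:

#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/minmax.h>
#include <linux/of.h>
#include <linux/of_graph.h>

/* Sketch: read the board limits from the controller's output endpoint. */
static int dp_parse_out_endpoint_limits(struct device_node *dev_node,
                                        u32 *max_lanes, u64 *max_link_freq)
{
    struct device_node *ep;
    u64 freqs[4];
    int cnt, i;

    /* port@1 / endpoint@0 is assumed to be the dp_out endpoint */
    ep = of_graph_get_endpoint_by_regs(dev_node, 1, 0);
    if (!ep)
        return -ENODEV;

    /* number of entries in data-lanes == number of wired lanes */
    cnt = of_property_count_u32_elems(ep, "data-lanes");
    if (cnt > 0)
        *max_lanes = cnt;

    /* highest entry in link-frequencies == board's maximum link rate */
    cnt = of_property_count_u64_elems(ep, "link-frequencies");
    if (cnt > 0 && cnt <= (int)ARRAY_SIZE(freqs) &&
        !of_property_read_u64_array(ep, "link-frequencies", freqs, cnt)) {
        *max_link_freq = 0;
        for (i = 0; i < cnt; i++)
            *max_link_freq = max(*max_link_freq, freqs[i]);
    }

    of_node_put(ep);
    return 0;
}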

Now let's discuss the data lanes. Currently we have them in the DP
property itself. Please correct me if I'm wrong, but I think that we can
drop that for all practical purposes. Judging by the DP compat string,
the driver can determine whether it uses 2 lanes (eDP) or 4 lanes
(full-featured DP). In the case of USB-C, where the altmode dictates
whether to use 2 or 4 lanes, the TCPM (Type-C Port Manager) will
negotiate the mode and pin configuration, then inform the DP controller
about the selected number of lanes. Then DP informs the PHY about the
selection (note, the PHY doesn't have any control in this scenario).

The only problematic case is the mixed-mode ports, which, if I understand
correctly, can be configured to either eDP or DP mode. I'm not sure who
specifies and limits the number of lanes available to the DP controller.

-- 
With best wishes
Dmitry


^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v12 2/5] dt-bindings: msm/dp: add data-lanes and link-frequencies property
  2022-12-15 21:12           ` Dmitry Baryshkov
@ 2022-12-15 22:57             ` Doug Anderson
  -1 siblings, 0 replies; 46+ messages in thread
From: Doug Anderson @ 2022-12-15 22:57 UTC (permalink / raw)
  To: Dmitry Baryshkov
  Cc: Stephen Boyd, Kuogee Hsieh, agross, airlied, andersson, daniel,
	devicetree, dri-devel, konrad.dybcio, krzysztof.kozlowski+dt,
	robdclark, robh+dt, sean, vkoul, quic_abhinavk, quic_sbillaka,
	freedreno, linux-arm-msm, linux-kernel

Hi,

On Thu, Dec 15, 2022 at 1:12 PM Dmitry Baryshkov
<dmitry.baryshkov@linaro.org> wrote:
>
> On 15/12/2022 02:38, Stephen Boyd wrote:
> > Quoting Kuogee Hsieh (2022-12-14 14:56:23)
> >>
> >> On 12/13/2022 3:06 PM, Stephen Boyd wrote:
> >>> Quoting Kuogee Hsieh (2022-12-13 13:44:05)
> >>>> Add both data-lanes and link-frequencies property into endpoint
> >>> Why do we care? Please tell us why it's important.
> >
> > Any response?
> >
> >>>> @@ -193,6 +217,8 @@ examples:
> >>>>                    reg = <1>;
> >>>>                    endpoint {
> >>>>                        remote-endpoint = <&typec>;
> >>>> +                    data-lanes = <0 1>;
> >>>> +                    link-frequencies = /bits/ 64 <1620000000 2700000000 5400000000 8100000000>;
> >>>>                    };
> >>> So far we haven't used the output port on the DP controller in DT.
> >>>
> >>> I'm still not clear on what we should do in general for DP because
> >>> there's a PHY that actually controls a lane count and lane mapping. In
> >>> my mental model of the SoC, this DP controller's output port is
> >>> connected to the DP PHY, which then sends the DP lanes out of the SoC to
> >>> the next downstream device (i.e. a DP connector or type-c muxer). Having
> >>> a remote-endpoint property with a phandle to typec doesn't fit my mental
> >>> model. I'd expect it to be the typec PHY.
> >> ack
> >>>
> >>> That brings up the question: when we have 2 lanes vs. 4 lanes will we
> >>> duplicate the data-lanes property in the PHY binding? I suspect we'll
> >>> have to. Hopefully that sort of duplication is OK?
> >> Current we have limitation by reserve 2 data lanes for usb2, i am not
> >> sure duplication to 4 lanes will work automatically.
> >>>
> >>> Similarly, we may have a redriver that limits the link-frequencies
> >>> property further (e.g. only support <= 2.7GHz). Having multiple
> >>> link-frequencies along the graph is OK, right? And isn't the
> >>> link-frequencies property known here by fact that the DP controller
> >>> tells us which SoC this controller is for, and thus we already know the
> >>> supported link frequencies?
> >>>
> >>> Finally, I wonder if we should put any of this in the DP controller's
> >>> output endpoint, or if we can put these sorts of properties in the DP
> >>> PHY binding directly? Can't we do that and then when the DP controller
> >>> tries to set 4 lanes, the PHY immediately fails the call and the link
> >>> training algorithm does its thing and tries fewer lanes? And similarly,
> >>> if link-frequencies were in the PHY's binding, the PHY could fail to set
> >>> those frequencies during link training, returning an error to the DP
> >>> controller, letting the training move on to a lower frequency. If we did
> >>> that this patch series would largely be about modifying the PHY binding,
> >>> updating the PHY driver to enforce constraints, and handling errors
> >>> during link training in the DP controller (which may already be done? I
> >>> didn't check).
> >>
> >>
> >> phy/pll have different configuration base on link lanes and rate.
> >>
> >> it has to be set up before link training can start.
> >>
> >> Once link training start, then there are no any interactions between
> >> controller and phy during link training session.
> >
> > What do you mean? The DP controller calls phy_configure() and changes
> > the link rate. The return value from phy_configure() should be checked
> > and link training should skip link rates that aren't supported and/or
> > number of lanes that aren't supported.
>
> I'd toss another coin into the argument. We have previously discussed
> using the link-frequencies property in the context of limiting link
> speeds for the DSI. There we have both hardware (SoC) limitations and
> the board limitations as in some cases the DSI lanes can not sustain
> some high rate. I still hope for these patches to materialize at some point.
>
> For the DP this is more or less the same story. We have the hardware
> (SoC, PHY, etc) limitations, but also we have the board/device
> limitations. For example some of the board might not be able to support
> HBR3 e.g. because of the PCB design. And while it might be logical to
> also add the 'max bit rate' support to the eDP & combo PHYs, it
> definitely makes sense to be able to limit the rate on the DP <->
> `something' link.
>
> Now, for all the practical purposes this `something' for the DP is the
> DP connector, the eDP panel or the USB-C mux (with the possible
> redrivers in the middle).
>
> Thus I'd support Kuogee's proposal to have link-frequencies in the DP's
> outbound endpoint. This is the link which will be driven by the data
> stream from the Linux point of view. The PHY is linked through the
> 'phys' property, but it doesn't participate in the USB-C (or in the
> connector/panel) graph.
>
> Now let's discuss the data lanes. Currently we have them in the DP
> property itself. Please correct me if I'm wrong, but I think that we can
> drop it for all the practical purposes. Judging by the DP compat string
> the driver can determine if it uses 2 lanes (eDP) or 4 lanes
> (full-featured DP). In case of USB-C when the altmode dictates whether
> to use 2 or 4 lanes, the TCPM (Type-C Port Manager) will negotiate the
> mode and pin configuration, then inform the DP controller about the
> selected amount of lanes. Then DP informs the PHY about the selection
> (note, PHY doesn't have control at all in this scenario).
>
> The only problematic case is the mixed mode ports, which if I understand
> correctly, can be configured either to eDP or DP modes. I'm not sure who
> specifies and limits the amount of lanes available to the DP controller.

For the most part, I'll let others debate the best way to represent
this data, but I'll comment that the above statement isn't really
correct. Specifically it's wrong to say that eDP is 2 lanes and DP is
2/4 lanes. I will say:

* An eDP display could support 1, 2, or 4 lanes.
* An eDP controller could support 1, 2, or 4 lanes.
* A board may wire up 1, 2, or 4 lanes.

Thus if you have an eDP controller that should be capable of 4 lanes
and an eDP panel that says it's capable of 4 lanes, you still might
need to use a 2 lane configuration because a board only wired up 2 of
the lanes. IMO the number of lanes that are wired up should be in the
device tree somewhere because that's where this board limit should be
defined.

Similarly, you could have an eDP controller that supports 4 lanes, you
may wire 4 lanes off the board, but an eDP panel may only support 1 or
2 lanes. This is handled by querying the panel and asking how many
lanes it supports.
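
In code terms the result is just the minimum of those three limits. A
hypothetical sketch, assuming the board limit has already been parsed out
of DT and the panel's DPCD receiver caps have been read (only
drm_dp_max_lane_count() and DP_RECEIVER_CAP_SIZE are the real API):

#include <linux/minmax.h>
#include <drm/display/drm_dp_helper.h> /* <drm/drm_dp_helper.h> on older kernels */

/* Sketch: usable lanes = min(controller limit, board wiring, sink caps). */
static unsigned int dp_usable_lane_count(unsigned int controller_max_lanes,
                                         unsigned int board_wired_lanes,
                                         const u8 dpcd[DP_RECEIVER_CAP_SIZE])
{
    unsigned int lanes = min(controller_max_lanes, board_wired_lanes);

    return min_t(unsigned int, lanes, drm_dp_max_lane_count(dpcd));
}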

-Doug

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v12 2/5] dt-bindings: msm/dp: add data-lanes and link-frequencies property
  2022-12-15 22:57             ` Doug Anderson
@ 2022-12-15 23:02               ` Dmitry Baryshkov
  -1 siblings, 0 replies; 46+ messages in thread
From: Dmitry Baryshkov @ 2022-12-15 23:02 UTC (permalink / raw)
  To: Doug Anderson
  Cc: Stephen Boyd, Kuogee Hsieh, agross, airlied, andersson, daniel,
	devicetree, dri-devel, konrad.dybcio, krzysztof.kozlowski+dt,
	robdclark, robh+dt, sean, vkoul, quic_abhinavk, quic_sbillaka,
	freedreno, linux-arm-msm, linux-kernel

On Fri, 16 Dec 2022 at 00:57, Doug Anderson <dianders@chromium.org> wrote:
>
> Hi,
>
> On Thu, Dec 15, 2022 at 1:12 PM Dmitry Baryshkov
> <dmitry.baryshkov@linaro.org> wrote:
> >
> > On 15/12/2022 02:38, Stephen Boyd wrote:
> > > Quoting Kuogee Hsieh (2022-12-14 14:56:23)
> > >>
> > >> On 12/13/2022 3:06 PM, Stephen Boyd wrote:
> > >>> Quoting Kuogee Hsieh (2022-12-13 13:44:05)
> > >>>> Add both data-lanes and link-frequencies property into endpoint
> > >>> Why do we care? Please tell us why it's important.
> > >
> > > Any response?
> > >
> > >>>> @@ -193,6 +217,8 @@ examples:
> > >>>>                    reg = <1>;
> > >>>>                    endpoint {
> > >>>>                        remote-endpoint = <&typec>;
> > >>>> +                    data-lanes = <0 1>;
> > >>>> +                    link-frequencies = /bits/ 64 <1620000000 2700000000 5400000000 8100000000>;
> > >>>>                    };
> > >>> So far we haven't used the output port on the DP controller in DT.
> > >>>
> > >>> I'm still not clear on what we should do in general for DP because
> > >>> there's a PHY that actually controls a lane count and lane mapping. In
> > >>> my mental model of the SoC, this DP controller's output port is
> > >>> connected to the DP PHY, which then sends the DP lanes out of the SoC to
> > >>> the next downstream device (i.e. a DP connector or type-c muxer). Having
> > >>> a remote-endpoint property with a phandle to typec doesn't fit my mental
> > >>> model. I'd expect it to be the typec PHY.
> > >> ack
> > >>>
> > >>> That brings up the question: when we have 2 lanes vs. 4 lanes will we
> > >>> duplicate the data-lanes property in the PHY binding? I suspect we'll
> > >>> have to. Hopefully that sort of duplication is OK?
> > >> Current we have limitation by reserve 2 data lanes for usb2, i am not
> > >> sure duplication to 4 lanes will work automatically.
> > >>>
> > >>> Similarly, we may have a redriver that limits the link-frequencies
> > >>> property further (e.g. only support <= 2.7GHz). Having multiple
> > >>> link-frequencies along the graph is OK, right? And isn't the
> > >>> link-frequencies property known here by fact that the DP controller
> > >>> tells us which SoC this controller is for, and thus we already know the
> > >>> supported link frequencies?
> > >>>
> > >>> Finally, I wonder if we should put any of this in the DP controller's
> > >>> output endpoint, or if we can put these sorts of properties in the DP
> > >>> PHY binding directly? Can't we do that and then when the DP controller
> > >>> tries to set 4 lanes, the PHY immediately fails the call and the link
> > >>> training algorithm does its thing and tries fewer lanes? And similarly,
> > >>> if link-frequencies were in the PHY's binding, the PHY could fail to set
> > >>> those frequencies during link training, returning an error to the DP
> > >>> controller, letting the training move on to a lower frequency. If we did
> > >>> that this patch series would largely be about modifying the PHY binding,
> > >>> updating the PHY driver to enforce constraints, and handling errors
> > >>> during link training in the DP controller (which may already be done? I
> > >>> didn't check).
> > >>
> > >>
> > >> phy/pll have different configuration base on link lanes and rate.
> > >>
> > >> it has to be set up before link training can start.
> > >>
> > >> Once link training start, then there are no any interactions between
> > >> controller and phy during link training session.
> > >
> > > What do you mean? The DP controller calls phy_configure() and changes
> > > the link rate. The return value from phy_configure() should be checked
> > > and link training should skip link rates that aren't supported and/or
> > > number of lanes that aren't supported.
> >
> > I'd toss another coin into the argument. We have previously discussed
> > using the link-frequencies property in the context of limiting link
> > speeds for the DSI. There we have both hardware (SoC) limitations and
> > the board limitations as in some cases the DSI lanes can not sustain
> > some high rate. I still hope for these patches to materialize at some point.
> >
> > For the DP this is more or less the same story. We have the hardware
> > (SoC, PHY, etc) limitations, but also we have the board/device
> > limitations. For example some of the board might not be able to support
> > HBR3 e.g. because of the PCB design. And while it might be logical to
> > also add the 'max bit rate' support to the eDP & combo PHYs, it
> > definitely makes sense to be able to limit the rate on the DP <->
> > `something' link.
> >
> > Now, for all the practical purposes this `something' for the DP is the
> > DP connector, the eDP panel or the USB-C mux (with the possible
> > redrivers in the middle).
> >
> > Thus I'd support Kuogee's proposal to have link-frequencies in the DP's
> > outbound endpoint. This is the link which will be driven by the data
> > stream from the Linux point of view. The PHY is linked through the
> > 'phys' property, but it doesn't participate in the USB-C (or in the
> > connector/panel) graph.
> >
> > Now let's discuss the data lanes. Currently we have them in the DP
> > property itself. Please correct me if I'm wrong, but I think that we can
> > drop it for all the practical purposes. Judging by the DP compat string
> > the driver can determine if it uses 2 lanes (eDP) or 4 lanes
> > (full-featured DP). In case of USB-C when the altmode dictates whether
> > to use 2 or 4 lanes, the TCPM (Type-C Port Manager) will negotiate the
> > mode and pin configuration, then inform the DP controller about the
> > selected amount of lanes. Then DP informs the PHY about the selection
> > (note, PHY doesn't have control at all in this scenario).
> >
> > The only problematic case is the mixed mode ports, which if I understand
> > correctly, can be configured either to eDP or DP modes. I'm not sure who
> > specifies and limits the amount of lanes available to the DP controller.
>
> For the most part, I'll let others debate the best way to represent
> this data, but I'll comment that the above statement isn't really
> correct. Specifically it's wrong to say that eDP is 2 lanes and DP is
> 2/4 lanes. I will say:
>
> * An eDP display could support 1, 2, or 4 lanes.
> * An eDP controller could support 1, 2, or 4 lanes.
> * A board may wire up 1, 2, or 4 lanes.
>
> Thus if you have an eDP controller that should be capable of 4 lanes
> and an eDP panel that says it's capable of 4 lanes, you still might
> need to use a 2 lane configuration because a board only wired up 2 of
> the lanes. IMO the number of lanes that are wired up should be in the
> device tree somewhere because that's where this board limit should be
> defined.
>
> Similarly, you could have an eDP controller that supports 4 lanes, you
> may wire 4 lanes off the board, but an eDP panel may only support 1 or
> 2 lanes. This is handled by querying the panel and asking how many
> lanes it supports.

Thank you for the explanations. So the `data-lanes' should definitely
be a property of the link between the DP controller and the eDP panel.
This is the path that Kuogee has been using.

-- 
With best wishes
Dmitry

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v12 2/5] dt-bindings: msm/dp: add data-lanes and link-frequencies property
  2022-12-15 21:12           ` Dmitry Baryshkov
@ 2022-12-16  2:16             ` Stephen Boyd
  -1 siblings, 0 replies; 46+ messages in thread
From: Stephen Boyd @ 2022-12-16  2:16 UTC (permalink / raw)
  To: Dmitry Baryshkov, Kuogee Hsieh, agross, airlied, andersson,
	daniel, devicetree, dianders, dri-devel, konrad.dybcio,
	krzysztof.kozlowski+dt, robdclark, robh+dt, sean, vkoul
  Cc: quic_abhinavk, quic_sbillaka, freedreno, linux-arm-msm, linux-kernel

Quoting Dmitry Baryshkov (2022-12-15 13:12:49)
> On 15/12/2022 02:38, Stephen Boyd wrote:
> > Quoting Kuogee Hsieh (2022-12-14 14:56:23)
> >>
> >> Once link training start, then there are no any interactions between
> >> controller and phy during link training session.
> >
> > What do you mean? The DP controller calls phy_configure() and changes
> > the link rate. The return value from phy_configure() should be checked
> > and link training should skip link rates that aren't supported and/or
> > number of lanes that aren't supported.
>
> I'd toss another coin into the argument. We have previously discussed
> using the link-frequencies property in the context of limiting link
> speeds for the DSI. There we have both hardware (SoC) limitations and
> the board limitations as in some cases the DSI lanes can not sustain
> some high rate. I still hope for these patches to materialize at some point.
>
> For the DP this is more or less the same story. We have the hardware
> (SoC, PHY, etc) limitations, but also we have the board/device
> limitations. For example some of the board might not be able to support
> HBR3 e.g. because of the PCB design. And while it might be logical to
> also add the 'max bit rate' support to the eDP & combo PHYs, it
> definitely makes sense to be able to limit the rate on the DP <->
> `something' link.

Honestly, I don't think the PHY even makes sense as the place to put the
link rate property. In the case of Trogdor, the DP controller and DP PHY both
support all DP link frequencies. The limiting factor is the TCPC
redriver that is only rated to support HBR2. We don't describe the TCPC
in DT because the EC controls it. This means we have to put the limit
*somewhere*, and putting it in the DP output node is the only place we
have right now. I would really prefer we put it wherever the limit is,
in this case either in the EC node or on the type-c ports.

Another nice to have feature would be to support different TCPCs connected
to the same DP port. We were considering doing this on Trogdor, where we
would have a TCPC rated for HBR2 and another TCPC rated for HBR3 and
then detect which TCPC was in use to adjust the supported link rates.
We didn't do this though, so the idea got back-burnered.

When the SoC is directly wired to a DP connector, I'd expect the
connector to have the link rate property. That's because the connector
or the traces outside of the SoC will be the part that's limiting the
supported frequencies, not the SoC. The graph would need to be walked to
find the link rate of course. The PHY could do this just as much as the
DP controller could.
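
For illustration, that graph walk might look roughly like this
(hypothetical helper; only the of_graph_/of_property_ calls are the real
API, and it assumes link-frequencies may sit on either end of the link):

#include <linux/kernel.h>
#include <linux/limits.h>
#include <linux/minmax.h>
#include <linux/of.h>
#include <linux/of_graph.h>

/* Sketch: lowest "maximum link frequency" seen on the local endpoint and
 * its remote endpoint (connector, panel or TCPC). */
static u64 dp_graph_max_link_frequency(struct device_node *local_ep)
{
    struct device_node *remote_ep = of_graph_get_remote_endpoint(local_ep);
    struct device_node *eps[] = { local_ep, remote_ep };
    u64 limit = U64_MAX;
    int i, j, cnt;

    for (i = 0; i < ARRAY_SIZE(eps); i++) {
        u64 freqs[8], max_here = 0;

        if (!eps[i])
            continue;

        cnt = of_property_count_u64_elems(eps[i], "link-frequencies");
        if (cnt <= 0 || cnt > (int)ARRAY_SIZE(freqs) ||
            of_property_read_u64_array(eps[i], "link-frequencies", freqs, cnt))
            continue;

        for (j = 0; j < cnt; j++)
            max_here = max(max_here, freqs[j]);
        limit = min(limit, max_here);
    }

    of_node_put(remote_ep);
    return limit; /* U64_MAX means no limit was found in DT */
}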

>
> Now, for all the practical purposes this `something' for the DP is the
> DP connector, the eDP panel or the USB-C mux (with the possible
> redrivers in the middle).
>
> Thus I'd support Kuogee's proposal to have link-frequencies in the DP's
> outbound endpoint. This is the link which will be driven by the data
> stream from the Linux point of view. The PHY is linked through the
> 'phys' property, but it doesn't participate in the USB-C (or in the
> connector/panel) graph.

Why doesn't the PHY participate in the graph? The eDP panel could just
as easily be connected to the eDP PHY if the PHY participated in the
graph.

>
> Now let's discuss the data lanes. Currently we have them in the DP
> property itself. Please correct me if I'm wrong, but I think that we can
> drop it for all the practical purposes.

I vaguely recall that the driver was checking data-lanes to figure out
how many lanes are usable. This is another shortcut taken on Trogdor to
work around a lack of complete DP bindings. We only support two lanes of
DP on Trogdor.

> Judging by the DP compat string
> the driver can determine if it uses 2 lanes (eDP) or 4 lanes
> (full-featured DP). In case of USB-C when the altmode dictates whether
> to use 2 or 4 lanes, the TCPM (Type-C Port Manager) will negotiate the
> mode and pin configuration, then inform the DP controller about the
> selected amount of lanes. Then DP informs the PHY about the selection
> (note, PHY doesn't have control at all in this scenario).
>
> The only problematic case is the mixed mode ports, which if I understand
> correctly, can be configured either to eDP or DP modes. I'm not sure who
> specifies and limits the amount of lanes available to the DP controller.
>

This would depend on where we send the type-c message in the kernel. It
really gets to the heart of the question too. Should the PHY be "dumb"
and do whatever the controller tells it to do? Or should the PHY be
aware of what's going on and take action itself? Note that the
data-lanes property is also used to remap lanes. On sc7180 the lane
remapping happens in the DP PHY, and then the type-c PHY can flip that
too, so if we don't involve the PHY(s) in the graph we'll have to
express this information in the DP controller graph and then pass it to
the PHY from the controller. Similarly, when we have more dynamic
configuration of the type-c PHY, where USB may or may not be used
because the TCPM has decided to use 2 or 4 lanes of DP, the data-lanes
property will only indicate lane mappings and not the number of lanes
supported. We'll again have to express the number of lanes to the PHY by
parsing the type-c messages.
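
As a small illustration of that dual role: in the graph binding,
data-lanes is a list of physical lane indices, so its length gives the
lane count and its values give the remapping. A hypothetical sketch,
using only real of_ helpers:

#include <linux/errno.h>
#include <linux/of.h>

/* Sketch: data-lanes = <3 2 1 0> means four lanes in use, fully swapped;
 * data-lanes = <0 1> means two lanes with an identity mapping. */
static int dp_read_lane_map(struct device_node *ep, u8 *map, int max_lanes)
{
    int i, cnt = of_property_count_u32_elems(ep, "data-lanes");

    if (cnt <= 0 || cnt > max_lanes)
        return -EINVAL;

    for (i = 0; i < cnt; i++) {
        u32 phys;

        if (of_property_read_u32_index(ep, "data-lanes", i, &phys))
            return -EINVAL;
        map[i] = phys; /* logical lane i -> physical lane 'phys' */
    }

    return cnt; /* number of lanes actually in use */
}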

It looks simpler to me if the PHY APIs push errors up to the caller for
unsupported configurations. This will hopefully make it easier for the
DP controller when the DP lanes are muxed onto a type-c port so that the
controller doesn't have to parse type-c messages. Instead, the PHY will
get the type-c message, stash away supported number of lanes and link
rates and then notify the DP controller to retrain the link with the
link training algorithm. A few steps of the link training may be
skipped, but the type-c message parsing won't need to be part of the DP
controller code. Said another way, the DP controller can stay focused on
DP instead of navigating type-c in addition to DP.
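
A rough sketch of what that looks like on the PHY side (the struct and
most of the names here are made up; only the phy_ops ->configure()
signature, struct phy_configure_opts_dp and phy_get_drvdata() are the
real API):

#include <linux/errno.h>
#include <linux/phy/phy.h>

struct example_dp_phy {
    unsigned int max_lanes;     /* stashed from DT or from type-c messages */
    unsigned int max_rate_mbps; /* e.g. 5400 when only HBR2 is supported */
};

/* Sketch: reject configurations the PHY (or the board / type-c partner it
 * has learned about) cannot support, so link training just moves on. */
static int example_dp_phy_configure(struct phy *phy,
                                    union phy_configure_opts *opts)
{
    struct example_dp_phy *priv = phy_get_drvdata(phy);

    if (opts->dp.set_lanes && opts->dp.lanes > priv->max_lanes)
        return -EINVAL;
    if (opts->dp.set_rate && opts->dp.link_rate > priv->max_rate_mbps)
        return -EINVAL;

    /* ... program the PLL and lane registers for the accepted config ... */
    return 0;
}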

From a binding perspective, data-lanes/link-frequencies are part of the
graph binding. Having a graph port without a remote-endpoint doesn't
really make any sense. Therefore we should decide to either connect the
PHY into the graph and constrain it via graph properties like
data-lanes, or leave it disconnected and have the controller drive the
PHY (or PHYs when we have type-c). The type-c framework will want the
orientation control (the type-c PHY) to be part of the graph from the
usb-c-connector. That way we can properly map the PHY pins to the
flipped or not-flipped state of the cable. Maybe we don't need to
connect the PHY to the DP graph? Instead there can be a type-c graph for
the PHY, TCPM, etc. and a display graph for the display chain. It feels
like that must not work somehow.

Either way, I don't see how or why these properties should be part of
the DP controller. The controller isn't the limiting part, it's the
redriver or the board/connector/panel that's the limiting factor. Walk
the graph to find the lowest common denominator of link-frequencies and
handle data-lanes either statically in the PHY or dynamically by parsing
type-c messages. How does the eDP panel indicate only two lanes are
supported when all four lanes are wired? I thought that link training
just fails but I don't know.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v12 2/5] dt-bindings: msm/dp: add data-lanes and link-frequencies property
@ 2022-12-16  2:16             ` Stephen Boyd
  0 siblings, 0 replies; 46+ messages in thread
From: Stephen Boyd @ 2022-12-16  2:16 UTC (permalink / raw)
  To: Dmitry Baryshkov, Kuogee Hsieh, agross, airlied, andersson,
	daniel, devicetree, dianders, dri-devel, konrad.dybcio,
	krzysztof.kozlowski+dt, robdclark, robh+dt, sean, vkoul
  Cc: linux-arm-msm, quic_sbillaka, freedreno, quic_abhinavk, linux-kernel

Quoting Dmitry Baryshkov (2022-12-15 13:12:49)
> On 15/12/2022 02:38, Stephen Boyd wrote:
> > Quoting Kuogee Hsieh (2022-12-14 14:56:23)
> >>
> >> Once link training start, then there are no any interactions between
> >> controller and phy during link training session.
> >
> > What do you mean? The DP controller calls phy_configure() and changes
> > the link rate. The return value from phy_configure() should be checked
> > and link training should skip link rates that aren't supported and/or
> > number of lanes that aren't supported.
>
> I'd toss another coin into the argument. We have previously discussed
> using the link-frequencies property in the context of limiting link
> speeds for the DSI. There we have both hardware (SoC) limitations and
> the board limitations as in some cases the DSI lanes can not sustain
> some high rate. I still hope for these patches to materialize at some point.
>
> For the DP this is more or less the same story. We have the hardware
> (SoC, PHY, etc) limitations, but also we have the board/device
> limitations. For example some of the board might not be able to support
> HBR3 e.g. because of the PCB design. And while it might be logical to
> also add the 'max bit rate' support to the eDP & combo PHYs, it
> definitely makes sense to be able to limit the rate on the DP <->
> `something' link.

Honestly I don't think the PHY even makes sense to put the link rate
property. In the case of Trogdor, the DP controller and DP PHY both
support all DP link frequencies. The limiting factor is the TCPC
redriver that is only rated to support HBR2. We don't describe the TCPC
in DT because the EC controls it. This means we have to put the limit
*somewhere*, and putting it in the DP output node is the only place we
have right now. I would really prefer we put it wherever the limit is,
in this case either in the EC node or on the type-c ports.

Another nice to have feature would be to support different TCPCs connected
to the same DP port. We were considering doing this on Trogdor, where we
would have a TCPC rated for HBR2 and another TCPC rated for HBR3 and
then detect which TCPC was in use to adjust the supported link rates.
We didn't do this though, so the idea got back-burnered.

When the SoC is directly wired to a DP connector, I'd expect the
connector to have the link rate property. That's because the connector
or the traces outside of the SoC will be the part that's limiting the
supported frequencies, not the SoC. The graph would need to be walked to
find the link rate of course. The PHY could do this just as much as the
DP controller could.

>
> Now, for all the practical purposes this `something' for the DP is the
> DP connector, the eDP panel or the USB-C mux (with the possible
> redrivers in the middle).
>
> Thus I'd support Kuogee's proposal to have link-frequencies in the DP's
> outbound endpoint. This is the link which will be driven by the data
> stream from the Linux point of view. The PHY is linked through the
> 'phys' property, but it doesn't participate in the USB-C (or in the
> connector/panel) graph.

Why doesn't the PHY participate in the graph? The eDP panel could just
as easily be connected to the eDP PHY if the PHY participated in the
graph.

>
> Now let's discuss the data lanes. Currently we have them in the DP
> property itself. Please correct me if I'm wrong, but I think that we can
> drop it for all the practical purposes.

I vaguely recall that the driver was checking data-lanes to figure out
how many lanes are usable. This is another shortcut taken on Trogdor to
work around a lack of complete DP bindings. We only support two lanes of
DP on Trogdor.

> Judging by the DP compat string
> the driver can determine if it uses 2 lanes (eDP) or 4 lanes
> (full-featured DP). In case of USB-C when the altmode dictates whether
> to use 2 or 4 lanes, the TCPM (Type-C Port Manager) will negotiate the
> mode and pin configuration, then inform the DP controller about the
> selected amount of lanes. Then DP informs the PHY about the selection
> (note, PHY doesn't have control at all in this scenario).
>
> The only problematic case is the mixed mode ports, which if I understand
> correctly, can be configured either to eDP or DP modes. I'm not sure who
> specifies and limits the amount of lanes available to the DP controller.
>

This would depend on where we send the type-c message in the kernel. It
really gets to the heart of the question too. Should the PHY be "dumb"
and do whatever the controller tells it to do? Or should the PHY be
aware of what's going on and take action itself? Note that the
data-lanes property is also used to remap lanes. On sc7180 the lane
remapping happens in the DP PHY, and then the type-c PHY can flip that
too, so if we don't involve the PHY(s) in the graph we'll have to
express this information in the DP controller graph and then pass it to
the PHY from the controller. Similarly, when we have more dynamic
configuration of the type-c PHY, where USB may or may not be used
because the TCPM has decided to use 2 or 4 lanes of DP, the data-lanes
property will only indicate lane mappings and not the number of lanes
supported. We'll again have to express the number of lanes to the PHY by
parsing the type-c messages.

It looks simpler to me if the PHY APIs push errors up to the caller for
unsupported configurations. This will hopefully make it easier for the
DP controller when the DP lanes are muxed onto a type-c port so that the
controller doesn't have to parse type-c messages. Instead, the PHY will
get the type-c message, stash away supported number of lanes and link
rates and then notify the DP controller to retrain the link with the
link training algorithm. A few steps of the link training may be
skipped, but the type-c message parsing won't need to be part of the DP
controller code. Said another way, the DP controller can stay focused on
DP instead of navigating type-c in addition to DP.
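
Concretely, the kind of fallback I have in mind is something like this
(untested sketch; dp_sketch_do_training() is a made-up stand-in for the
usual clock recovery / channel EQ sequence):

#include <linux/kernel.h>
#include <linux/phy/phy.h>

/* Made-up helper: run clock recovery + channel EQ at the configured rate. */
static int dp_sketch_do_training(struct phy *phy,
				 struct phy_configure_opts_dp *dp);

static const u32 dp_rates_mbps[] = { 8100, 5400, 2700, 1620 };

static int dp_sketch_train_link(struct phy *phy, u32 lanes)
{
	union phy_configure_opts opts = { };
	int i, ret;

	for (i = 0; i < ARRAY_SIZE(dp_rates_mbps); i++) {
		opts.dp.link_rate = dp_rates_mbps[i];
		opts.dp.lanes = lanes;
		opts.dp.set_rates = 1;
		opts.dp.set_lanes = 1;

		/* The PHY rejects rates it (or whatever sits behind it) can't do. */
		ret = phy_configure(phy, &opts);
		if (ret == -EINVAL || ret == -EOPNOTSUPP)
			continue;
		if (ret)
			return ret;

		if (!dp_sketch_do_training(phy, &opts.dp))
			return 0;	/* trained successfully at this rate */
	}

	return -ENOLINK;
}

The point is only that the error from phy_configure() drives the loop; the
controller never has to know *why* a particular rate is unsupported.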

From a binding perspective, data-lanes/link-frequencies are part of the
graph binding. Having a graph port without a remote-endpoint doesn't
really make any sense. Therefore we should decide to either connect the
PHY into the graph and constrain it via graph properties like
data-lanes, or leave it disconnected and have the controller drive the
PHY (or PHYs when we have type-c). The type-c framework will want the
orientation control (the type-c PHY) to be part of the graph from the
usb-c-connector. That way we can properly map the PHY pins to the
flipped or not-flipped state of the cable. Maybe we don't need to
connect the PHY to the DP graph? Instead there can be a type-c graph for
the PHY, TCPM, etc. and a display graph for the display chain. It feels
like that must not work somehow.

Either way, I don't see how or why these properties should be part of
the DP controller. The controller isn't the limiting part, it's the
redriver or the board/connector/panel that's the limiting factor. Walk
the graph to find the lowest common denominator of link-frequencies and
handle data-lanes either statically in the PHY or dynamically by parsing
type-c messages. How does the eDP panel indicate only two lanes are
supported when all four lanes are wired? I thought that link training
just fails but I don't know.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v12 2/5] dt-bindings: msm/dp: add data-lanes and link-frequencies property
  2022-12-16  2:16             ` Stephen Boyd
@ 2022-12-16 19:03               ` Kuogee Hsieh
  -1 siblings, 0 replies; 46+ messages in thread
From: Kuogee Hsieh @ 2022-12-16 19:03 UTC (permalink / raw)
  To: Stephen Boyd, Dmitry Baryshkov, agross, airlied, andersson,
	daniel, devicetree, dianders, dri-devel, konrad.dybcio,
	krzysztof.kozlowski+dt, robdclark, robh+dt, sean, vkoul
  Cc: quic_abhinavk, quic_sbillaka, freedreno, linux-arm-msm, linux-kernel


On 12/15/2022 6:16 PM, Stephen Boyd wrote:
> Quoting Dmitry Baryshkov (2022-12-15 13:12:49)
>> On 15/12/2022 02:38, Stephen Boyd wrote:
>>> Quoting Kuogee Hsieh (2022-12-14 14:56:23)
>>>> Once link training start, then there are no any interactions between
>>>> controller and phy during link training session.
>>> What do you mean? The DP controller calls phy_configure() and changes
>>> the link rate. The return value from phy_configure() should be checked
>>> and link training should skip link rates that aren't supported and/or
>>> number of lanes that aren't supported.
>> I'd toss another coin into the argument. We have previously discussed
>> using the link-frequencies property in the context of limiting link
>> speeds for the DSI. There we have both hardware (SoC) limitations and
>> the board limitations as in some cases the DSI lanes can not sustain
>> some high rate. I still hope for these patches to materialize at some point.
>>
>> For the DP this is more or less the same story. We have the hardware
>> (SoC, PHY, etc) limitations, but also we have the board/device
>> limitations. For example some of the board might not be able to support
>> HBR3 e.g. because of the PCB design. And while it might be logical to
>> also add the 'max bit rate' support to the eDP & combo PHYs, it
>> definitely makes sense to be able to limit the rate on the DP <->
>> `something' link.
> Honestly I don't think the PHY even makes sense to put the link rate
> property. In the case of Trogdor, the DP controller and DP PHY both
> support all DP link frequencies. The limiting factor is the TCPC
> redriver that is only rated to support HBR2. We don't describe the TCPC
> in DT because the EC controls it. This means we have to put the limit
> *somewhere*, and putting it in the DP output node is the only place we
> have right now. I would really prefer we put it wherever the limit is,
> in this case either in the EC node or on the type-c ports.
>
> Another nice to have feature would be to support different TCPCs connected
> to the same DP port. We were considering doing this on Trogdor, where we
> would have a TCPC rated for HBR2 and another TCPC rated for HBR3 and
> then detect which TCPC was in use to adjust the supported link rates.
> We didn't do this though, so the idea got back-burnered.
>
> When the SoC is directly wired to a DP connector, I'd expect the
> connector to have the link rate property. That's because the connector
> or the traces outside of the SoC will be the part that's limiting the
> supported frequencies, not the SoC. The graph would need to be walked to
> find the link rate of course. The PHY could do this just as much as the
> DP controller could.
>
>> Now, for all the practical purposes this `something' for the DP is the
>> DP connector, the eDP panel or the USB-C mux (with the possible
>> redrivers in the middle).
>>
>> Thus I'd support Kuogee's proposal to have link-frequencies in the DP's
>> outbound endpoint. This is the link which will be driven by the data
>> stream from the Linux point of view. The PHY is linked through the
>> 'phys' property, but it doesn't participate in the USB-C (or in the
>> connector/panel) graph.
> Why doesn't the PHY participate in the graph? The eDP panel could just
> as easily be connected to the eDP PHY if the PHY participated in the
> graph.
>
>> Now let's discuss the data lanes. Currently we have them in the DP
>> property itself. Please correct me if I'm wrong, but I think that we can
>> drop it for all the practical purposes.
> I vaguely recall that the driver was checking data-lanes to figure out
> how many lanes are usable. This is another shortcut taken on Trogdor to
> work around a lack of complete DP bindings. We only support two lanes of
> DP on Trogdor.
>
>> Judging by the DP compat string
>> the driver can determine if it uses 2 lanes (eDP) or 4 lanes
>> (full-featured DP). In case of USB-C when the altmode dictates whether
>> to use 2 or 4 lanes, the TCPM (Type-C Port Manager) will negotiate the
>> mode and pin configuration, then inform the DP controller about the
>> selected amount of lanes. Then DP informs the PHY about the selection
>> (note, PHY doesn't have control at all in this scenario).
>>
>> The only problematic case is the mixed mode ports, which if I understand
>> correctly, can be configured either to eDP or DP modes. I'm not sure who
>> specifies and limits the amount of lanes available to the DP controller.
>>
> This would depend on where we send the type-c message in the kernel. It
> really gets to the heart of the question too. Should the PHY be "dumb"
> and do whatever the controller tells it to do? Or should the PHY be
> aware of what's going on and take action itself? Note that the
> data-lanes property is also used to remap lanes. On sc7180 the lane
> remapping happens in the DP PHY, and then the type-c PHY can flip that
> too, so if we don't involve the PHY(s) in the graph we'll have to
> express this information in the DP controller graph and then pass it to
> the PHY from the controller. Similarly, when we have more dynamic
> configuration of the type-c PHY, where USB may or may not be used
> because the TCPM has decided to use 2 or 4 lanes of DP, the data-lanes
> property will only indicate lane mappings and not the number of lanes
> supported. We'll again have to express the number of lanes to the PHY by
> parsing the type-c messages.
>
> It looks simpler to me if the PHY APIs push errors up to the caller for
> unsupported configurations. This will hopefully make it easier for the
> DP controller when the DP lanes are muxed onto a type-c port so that the
> controller doesn't have to parse type-c messages. Instead, the PHY will
> get the type-c message, stash away supported number of lanes and link
> rates and then notify the DP controller to retrain the link with the
> link training algorithm. A few steps of the link training may be
> skipped, but the type-c message parsing won't need to be part of the DP
> controller code. Said another way, the DP controller can stay focused on
> DP instead of navigating type-c in addition to DP.
>
>  From a binding perspective, data-lanes/link-frequencies are part of the
> graph binding. Having a graph port without a remote-endpoint doesn't
> really make any sense. Therefore we should decide to either connect the
> PHY into the graph and constrain it via graph properties like
> data-lanes, or leave it disconnected and have the controller drive the
> PHY (or PHYs when we have type-c). The type-c framework will want the
> orientation control (the type-c PHY) to be part of the graph from the
> usb-c-connector. That way we can properly map the PHY pins to the
> flipped or not-flipped state of the cable. Maybe we don't need to
> connect the PHY to the DP graph? Instead there can be a type-c graph for
> the PHY, TCPM, etc. and a display graph for the display chain. It feels
> like that must not work somehow.
>
> Either way, I don't see how or why these properties should be part of
> the DP controller. The controller isn't the limiting part, it's the
> redriver or the board/connector/panel that's the limiting factor. Walk
> the graph to find the lowest common denominator of link-frequencies and
> handle data-lanes either statically in the PHY or dynamically by parsing
> type-c messages. How does the eDP panel indicate only two lanes are
> supported when all four lanes are wired? I thought that link training
> just fails but I don't know.

According to the DP specification, link policy and stream policy are handled
at the link layer, which is the DP controller in our case.

DP CTS (compliance testing) also happens at the link layer.

I think the intelligence should be placed at the link layer.

I will submit a v14 patch and we can discuss further.



^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v12 2/5] dt-bindings: msm/dp: add data-lanes and link-frequencies property
  2022-12-16  2:16             ` Stephen Boyd
@ 2022-12-16 19:43               ` Dmitry Baryshkov
  -1 siblings, 0 replies; 46+ messages in thread
From: Dmitry Baryshkov @ 2022-12-16 19:43 UTC (permalink / raw)
  To: Stephen Boyd, Kuogee Hsieh, agross, airlied, andersson, daniel,
	devicetree, dianders, dri-devel, konrad.dybcio,
	krzysztof.kozlowski+dt, robdclark, robh+dt, sean, vkoul
  Cc: quic_abhinavk, quic_sbillaka, freedreno, linux-arm-msm, linux-kernel

Krzysztof, there is a bunch of DTS code below. If you can comment on it 
(and on my understanding of the existing schemas) that would be great.

On 16/12/2022 04:16, Stephen Boyd wrote:
> Quoting Dmitry Baryshkov (2022-12-15 13:12:49)
>> On 15/12/2022 02:38, Stephen Boyd wrote:
>>> Quoting Kuogee Hsieh (2022-12-14 14:56:23)
>>>>
>>>> Once link training start, then there are no any interactions between
>>>> controller and phy during link training session.
>>>
>>> What do you mean? The DP controller calls phy_configure() and changes
>>> the link rate. The return value from phy_configure() should be checked
>>> and link training should skip link rates that aren't supported and/or
>>> number of lanes that aren't supported.
>>
>> I'd toss another coin into the argument. We have previously discussed
>> using the link-frequencies property in the context of limiting link
>> speeds for the DSI. There we have both hardware (SoC) limitations and
>> the board limitations as in some cases the DSI lanes can not sustain
>> some high rate. I still hope for these patches to materialize at some point.
>>
>> For the DP this is more or less the same story. We have the hardware
>> (SoC, PHY, etc) limitations, but also we have the board/device
>> limitations. For example some of the board might not be able to support
>> HBR3 e.g. because of the PCB design. And while it might be logical to
>> also add the 'max bit rate' support to the eDP & combo PHYs, it
>> definitely makes sense to be able to limit the rate on the DP <->
>> `something' link.


This discussion made me review some parts of the bindings (and note that 
I was wrong in some of the cases in my previous mail).

> Honestly I don't think the PHY even makes sense to put the link rate
> property. In the case of Trogdor, the DP controller and DP PHY both
> support all DP link frequencies. The limiting factor is the TCPC
> redriver that is only rated to support HBR2. We don't describe the TCPC
> in DT because the EC controls it. This means we have to put the limit
> *somewhere*, and putting it in the DP output node is the only place we
> have right now. I would really prefer we put it wherever the limit is,
> in this case either in the EC node or on the type-c ports.

Yeah, the redriver rather complicates the puzzle. We have one on the RB5
board, and I could not find a good way to express it in the DT.

> 
> Another nice to have feature would be to support different TCPCs connected
> to the same DP port. We were considering doing this on Trogdor, where we
> would have a TCPC rated for HBR2 and another TCPC rated for HBR3 and
> then detect which TCPC was in use to adjust the supported link rates.
> We didn't do this though, so the idea got back-burnered.
> 
> When the SoC is directly wired to a DP connector, I'd expect the
> connector to have the link rate property. That's because the connector
> or the traces outside of the SoC will be the part that's limiting the
> supported frequencies, not the SoC. The graph would need to be walked to
> find the link rate of course. The PHY could do this just as much as the
> DP controller could.

Yes, I also thought about putting the frequencies in the connector 
first. But then, the video-interfaces.yaml describes these properties as 
a property of the link.

So the end result can look this way:

displayport_controller@ae900000 {
   phys = <&qmp_combo_phy 1>;


   ports {
     port@0 {
       endpoint {
         remote-endpoint = <&dpu_intf_out>;
       };
     };
     port@1 {
       dp_out: endpoint {
         remote-endpoint = <&dp_con_in>;
       };
     };
   };
};

dp-connector {
   compatible = "dp-connector";
   port {
     dp_con_in: endpoint {
       remote-endpoint = <&dp_out>;
     };
   };
};

In such a case the data-lanes and link-frequencies logically fall into the
dp_out node, in full accordance with video-interfaces.yaml.
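
Just to show the shape of it (the values are board-specific; a
Trogdor-like two-lane, HBR2-limited board is taken as an example):

displayport_controller@ae900000 {
  ports {
    port@1 {
      dp_out: endpoint {
        remote-endpoint = <&dp_con_in>;
        data-lanes = <0 1>;
        /* RBR, HBR, HBR2 only - this board cannot do HBR3 */
        link-frequencies = /bits/ 64 <1620000000 2700000000 5400000000>;
      };
    };
  };
};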

> 
>>
>> Now, for all the practical purposes this `something' for the DP is the
>> DP connector, the eDP panel or the USB-C mux (with the possible
>> redrivers in the middle).
>>
>> Thus I'd support Kuogee's proposal to have link-frequencies in the DP's
>> outbound endpoint. This is the link which will be driven by the data
>> stream from the Linux point of view. The PHY is linked through the
>> 'phys' property, but it doesn't participate in the USB-C (or in the
>> connector/panel) graph.
> 
> Why doesn't the PHY participate in the graph? The eDP panel could just
> as easily be connected to the eDP PHY if the PHY participated in the
> graph.

I think that for eDP we have a much simpler graph, which is more in tune 
with the dp-connector one:

displayport_controller@ae900000 {
   phys = <&qmp_edp_phy>;

   aux-bus {
     panel {
       port {
         panel_in: endpoint {
           remote-endpoint = <&dp_out>;
         };
       };
     };
   };

   ports {
     port@0 {
       endpoint {
         remote-endpoint = <&dpu_intf_out>;
       };
     };
     port@1 {
       dp_out: endpoint {
         remote-endpoint = <&panel_in>;
       };
     };
   };
};


> 
>>
>> Now let's discuss the data lanes. Currently we have them in the DP
>> property itself. Please correct me if I'm wrong, but I think that we can
>> drop it for all the practical purposes.
> 
> I vaguely recall that the driver was checking data-lanes to figure out
> how many lanes are usable.

Yes, I think so. However, this binding doesn't follow
video-interfaces.yaml, so I suggested moving the property to its
proper place, the endpoint.

> This is another shortcut taken on Trogdor to
> work around a lack of complete DP bindings. We only support two lanes of
> DP on Trogdor.

How do you implement the rest of USB-C functionality on the Trogdor? Do 
you support USB role switching? Does the EC use the DP_HPD GPIO to notify 
the DP controller about all the events?

>> Judging by the DP compat string
>> the driver can determine if it uses 2 lanes (eDP) or 4 lanes
>> (full-featured DP). In case of USB-C when the altmode dictates whether
>> to use 2 or 4 lanes, the TCPM (Type-C Port Manager) will negotiate the
>> mode and pin configuration, then inform the DP controller about the
>> selected amount of lanes. Then DP informs the PHY about the selection
>> (note, PHY doesn't have control at all in this scenario).
>>
>> The only problematic case is the mixed mode ports, which if I understand
>> correctly, can be configured either to eDP or DP modes. I'm not sure who
>> specifies and limits the amount of lanes available to the DP controller.
>>
> 
> This would depend on where we send the type-c message in the kernel. It
> really gets to the heart of the question too. Should the PHY be "dumb"
> and do whatever the controller tells it to do? Or should the PHY be
> aware of what's going on and take action itself? Note that the
> data-lanes property is also used to remap lanes. On sc7180 the lane
> remapping happens in the DP PHY, and then the type-c PHY can flip that
> too, so if we don't involve the PHY(s) in the graph we'll have to
> express this information in the DP controller graph and then pass it to
> the PHY from the controller. Similarly, when we have more dynamic
> configuration of the type-c PHY, where USB may or may not be used
> because the TCPM has decided to use 2 or 4 lanes of DP, the data-lanes
> property will only indicate lane mappings and not the number of lanes
> supported. We'll again have to express the number of lanes to the PHY by
> parsing the type-c messages.
> 
> It looks simpler to me if the PHY APIs push errors up to the caller for
> unsupported configurations. This will hopefully make it easier for the
> DP controller when the DP lanes are muxed onto a type-c port so that the
> controller doesn't have to parse type-c messages. Instead, the PHY will
> get the type-c message, stash away supported number of lanes and link
> rates and then notify the DP controller to retrain the link with the
> link training algorithm. A few steps of the link training may be
> skipped, but the type-c message parsing won't need to be part of the DP
> controller code. Said another way, the DP controller can stay focused on
> DP instead of navigating type-c in addition to DP.

Yes, unlike the downstream kernel. We have a separate TCPM instance, so
this should be a part of the USB framework, not the DP.

> 
>  From a binding perspective, data-lanes/link-frequencies are part of the
> graph binding. Having a graph port without a remote-endpoint doesn't
> really make any sense. Therefore we should decide to either connect the
> PHY into the graph and constrain it via graph properties like
> data-lanes, or leave it disconnected and have the controller drive the
> PHY (or PHYs when we have type-c). The type-c framework will want the
> orientation control (the type-c PHY) to be part of the graph from the
> usb-c-connector. That way we can properly map the PHY pins to the
> flipped or not-flipped state of the cable. Maybe we don't need to
> connect the PHY to the DP graph? Instead there can be a type-c graph for
> the PHY, TCPM, etc. and a display graph for the display chain. It feels
> like that must not work somehow.



Just as a reminder:

tcpc: tcpc {
   connector {
     compatible = "usb-c-connector";
     ports {
       port@0 {
         con_hs_ep: endpoint {
           remote-endpoint = <&usb_hs_ep>;
         };
       };
       port@1 {
         con_ss_ep: endpoint {
           remote-endpoint = <&typec_ss_ep>;
         };
       };
       port@2 {
         con_sbu_ep: endpoint {
           remote-endpoint = <&typec_sbu_ep>;
         };
       };
     };
   };
};

Judging from the bindings and all the examples the HS connection should 
be implemented as:

usb@a6f8800 {
   compatible = "qcom,SoC-dwc3", "qcom,dwc3";
   usb@a600000 {
     // existing part
     compatible = "snps,dwc3";
     phys = <&qmp_hs_phy>, <&qmp_combo_phy 0>;

     // new properties per snps,dwc3.yaml
     usb-role-switch;
     port {
       usb_hs_ep: endpoint {
         remote-endpoint = <&con_hs_ep>;
       };
     };
   };
};

qmp_hs_phy: phy@88e3000 {
   #phy-cells = <1>;
};

qmp_combo_phy: phy@88e9000 {
   #phy-cells = <1>;
};

sbu-switch { // GPIO, FSA4480, CrOS EC, etc. See fcs,fsa4480.yaml
   orientation-switch;
   mode-switch;
   port {
     typec_sbu_ep: endpoint {
       remote-endpoint = <&con_sbu_ep>;
     };
   };
};

This is quite logical, as SBU lines from the connector are terminated at 
the switch.

Other devices land the SS lanes at the dwc3 controller (imx8mq-librem5, 
hi3660-hikey960), at the SS lanes switch (r8a774c0-cat874, 
beacon-renesom-baseboard) or at the TypeC PHY itself (rk3399-eaidk-610, 
rk3399-firefly, rk3399-orangepi, rk3399-pinebook-pro).

Thus I'd do the same as rk3399:

&qmp_combo_phy {
   port {
     typec_ss_ep: endpoint {
       remote-endpoint = <&con_ss_ep>;
     };
   };
};

Letting the combo PHY know the orientation would probably require 
additional setup, so we might end up with:

&qmp_combo_phy {
   orientation-switch;
   // maybe also mode-switch; ???
   ports {
     port@0 {
       typec_ss_ep: endpoint {
         remote-endpoint = <&con_ss_ep>;
       };
     };
     port@1 {
       phy_sbu_ep: endpoint {
         remote-endpoint = <&con_phy_sbu_ep>;
       };
     };
   };
};

&tcpc {
   connector {
     ports {
       port@2 {
         con_sbu_ep: endpoint@0 {
           remote-endpoint = <&typec_sbu_ep>;
         };
         con_phy_sbu_ep: endpoint@1 {
           remote-endpoint = <&phy_sbu_ep>;
         };
       };
     };
   };
};

OR:

&qmp_combo_phy {
   orientation-switch;
   // maybe also mode-switch; ???
   ports {
     port@0 {
       typec_ss_ep: endpoint {
         remote-endpoint = <&con_ss_ep>;
       };
     };
     port@1 {
       phy_sbu_ep: endpoint {
         remote-endpoint = <&switch_sbu_ep>;
       };
     };
   };
};

sbu-switch { // GPIO, FSA4480, CrOS EC, etc. See fcs,fsa4480.yaml
   orientation-switch;
   mode-switch;
   ports {
     port@0 {
       typec_sbu_ep: endpoint {
         remote-endpoint = <&con_sbu_ep>;
       };
     };
     port@1 {
       switch_sbu_ep: endpoint {
         remote-endpoint = <&phy_sbu_ep>;
       };
     };
   };
};

I cannot tell which one suits this case better; I have a slight preference
for the second implementation, as it follows the hardware design.

The redriver can then either sit near the endpoint ports, or be
implemented in the following way, replacing the switch.

&qmp_combo_phy {
   orientation-switch;
   // maybe also mode-switch; ???
   ports {
     port@0 {
       phy_ss_ep: endpoint {
         remote-endpoint = <&re_ss_ep>;
       };
     };
     port@1 {
       phy_sbu_ep: endpoint {
         remote-endpoint = <&re_sbu_ep>;
       };
     };
   };
};

redriver {
   orientation-switch;
   mode-switch;
   ports {
     port@0 {
       typec_ss_ep: endpoint@0 {
         remote-endpoint = <&con_ss_ep>;
       };
       typec_sbu_ep: endpoint@1 {
         remote-endpoint = <&con_sbu_ep>;
       };
     };
     port@1 {
       re_ss_ep: endpoint@0 {
         remote-endpoint = <&phy_ss_ep>;
       };
       re_sbu_ep: endpoint@1 {
         remote-endpoint = <&phy_sbu_ep>;
       };
     };
   };
};

However, all these examples leave the dp_out endpoint unconnected; there
is no 'next DRM bridge' yet. The original Intel design suggested putting a
single link from the tcpc node to the DisplayPort controller in the
corresponding altmode definition. The altmode controller would then use
drm_connector_oob_hotplug_event() to notify the DP connector.

Since unlike Intel we use drm bridges, Bjorn's suggestion was to 
implement the drm_bridge inside pmic_glink (typec port manager in his 
case) and to link it to the dp_out.

This again raises a question regarding the redrivers. I do not think 
that we should add additional links from the redriver to the DP. Nor
should the redriver implement a drm_bridge.

> Either way, I don't see how or why these properties should be part of
> the DP controller. The controller isn't the limiting part, it's the
> redriver or the board/connector/panel that's the limiting factor. Walk
> the graph to find the lowest common denominator of link-frequencies and
> handle data-lanes either statically in the PHY or dynamically by parsing
> type-c messages. How does the eDP panel indicate only two lanes are
> supported when all four lanes are wired? I thought that link training
> just fails but I don't know.

I think you would agree that data-lanes is a property of the link. It
might be the link between the DP and the panel, between the DP and the
redriver, or (in the theoretical case) a link between the redriver and the
connector. Compare this with the DSI case, where we put the data-lanes
property on the host-bridge or host-panel graph link.
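
For reference, this is roughly how the DSI case already looks on qcom
boards (label names vary per board, this is only the shape):

&dsi0_out {
  remote-endpoint = <&panel_in>;
  data-lanes = <0 1 2 3>;
};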

For the link rates it's not that obvious, but I also think that 
redrivers can impose different limits on their links.

That said, I think we might end up implementing two different mechanisms 
for such limitations (a rough sketch of how they could be combined follows
below):

- One coming from the dp_out endpoint (and thus from the link to the next 
DP bridge/panel/connector/etc). The link specifies the number of data 
lanes and the maximum link rate.

- Another implemented through the QMP combo PHY, which knows the SoC 
limitations and can follow the link to the redriver or the USB-C 
connector and further query the link properties.
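
A rough sketch of combining the two limits (untested; dt_limit_khz is
assumed to be the endpoint limit already converted from Hz to kHz, and
everything outside the DRM DP helpers is made up):

#include <linux/minmax.h>
#include <drm/display/drm_dp_helper.h>

static u32 dp_sketch_max_rate_khz(u32 dt_limit_khz,
				  const u8 dpcd[DP_RECEIVER_CAP_SIZE])
{
	/* What the sink advertises, e.g. 540000 kHz for HBR2. */
	u32 sink_khz = drm_dp_bw_code_to_link_rate(dpcd[DP_MAX_LINK_RATE]);

	if (!dt_limit_khz)	/* no limit expressed in DT */
		return sink_khz;

	return min(dt_limit_khz, sink_khz);
}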

-- 
With best wishes
Dmitry


^ permalink raw reply	[flat|nested] 46+ messages in thread

end of thread, other threads:[~2022-12-16 19:43 UTC | newest]

Thread overview: 46+ messages
2022-12-13 21:44 [PATCH v12 0/5] Add data-lanes and link-frequencies to dp_out endpoint Kuogee Hsieh
2022-12-13 21:44 ` Kuogee Hsieh
2022-12-13 21:44 ` [PATCH v12 1/5] arm64: dts: qcom: add data-lanes and link-freuencies into " Kuogee Hsieh
2022-12-13 21:44   ` Kuogee Hsieh
2022-12-13 23:31   ` Dmitry Baryshkov
2022-12-13 23:31     ` Dmitry Baryshkov
2022-12-13 21:44 ` [PATCH v12 2/5] dt-bindings: msm/dp: add data-lanes and link-frequencies property Kuogee Hsieh
2022-12-13 21:44   ` Kuogee Hsieh
2022-12-13 22:02   ` Dmitry Baryshkov
2022-12-13 22:02     ` Dmitry Baryshkov
2022-12-13 23:06   ` Stephen Boyd
2022-12-13 23:06     ` Stephen Boyd
2022-12-14 22:56     ` Kuogee Hsieh
2022-12-14 22:56       ` Kuogee Hsieh
2022-12-15  0:38       ` Stephen Boyd
2022-12-15  0:38         ` Stephen Boyd
2022-12-15 17:08         ` Kuogee Hsieh
2022-12-15 17:08           ` Kuogee Hsieh
2022-12-15 20:12           ` Stephen Boyd
2022-12-15 20:12             ` Stephen Boyd
2022-12-15 21:12         ` Dmitry Baryshkov
2022-12-15 21:12           ` Dmitry Baryshkov
2022-12-15 22:57           ` Doug Anderson
2022-12-15 22:57             ` Doug Anderson
2022-12-15 23:02             ` Dmitry Baryshkov
2022-12-15 23:02               ` Dmitry Baryshkov
2022-12-16  2:16           ` Stephen Boyd
2022-12-16  2:16             ` Stephen Boyd
2022-12-16 19:03             ` Kuogee Hsieh
2022-12-16 19:03               ` Kuogee Hsieh
2022-12-16 19:43             ` Dmitry Baryshkov
2022-12-16 19:43               ` Dmitry Baryshkov
2022-12-13 21:44 ` [PATCH v12 3/5] drm/msm/dp: parser data-lanes as property of dp_out endpoint Kuogee Hsieh
2022-12-13 21:44   ` Kuogee Hsieh
2022-12-13 23:11   ` Stephen Boyd
2022-12-13 23:11     ` Stephen Boyd
2022-12-13 21:44 ` [PATCH v12 4/5] drm/msm/dp: parser link-frequencies " Kuogee Hsieh
2022-12-13 21:44   ` Kuogee Hsieh
2022-12-13 22:07   ` Dmitry Baryshkov
2022-12-13 22:07     ` Dmitry Baryshkov
2022-12-13 23:18   ` Stephen Boyd
2022-12-13 23:18     ` Stephen Boyd
2022-12-13 21:44 ` [PATCH v12 5/5] drm/msm/dp: add support of max dp link rate Kuogee Hsieh
2022-12-13 21:44   ` Kuogee Hsieh
2022-12-13 22:06   ` Dmitry Baryshkov
2022-12-13 22:06     ` Dmitry Baryshkov
