* [PATCH] drm/msm/dp: remove limitation of link rate at 5.4G to support HBR3
From: Kuogee Hsieh @ 2022-10-31 17:27 UTC
  To: robdclark, sean, swboyd, dianders, vkoul, daniel, airlied,
	agross, dmitry.baryshkov, bjorn.andersson
  Cc: quic_abhinavk, quic_khsieh, quic_sbillaka, freedreno, dri-devel,
	linux-arm-msm, linux-kernel

An HBR3-capable device shall also support TPS4. Since the TPS4 feature
has already been implemented, it is no longer necessary to cap the link
rate at HBR2 (5.4 Gbps). Remove this limitation so that the HBR3
(8.1 Gbps) link rate can be used.

Signed-off-by: Kuogee Hsieh <quic_khsieh@quicinc.com>
---
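Note: the TPS4 capability mentioned above is advertised in the sink's
DPCD and can be checked with the existing drm helper. A minimal sketch
(illustrative only, assuming the v6.x drm_dp_helper header layout):

    #include <drm/display/drm_dp_helper.h>

    static bool sink_supports_tps4(struct drm_dp_aux *aux)
    {
            u8 dpcd[DP_RECEIVER_CAP_SIZE];

            /* read the sink's receiver capability block over AUX */
            if (drm_dp_read_dpcd_caps(aux, dpcd))
                    return false;

            /* HBR3-capable (DP 1.4+) sinks must also report TPS4 */
            return drm_dp_tps4_supported(dpcd);
    }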
 drivers/gpu/drm/msm/dp/dp_panel.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/drivers/gpu/drm/msm/dp/dp_panel.c b/drivers/gpu/drm/msm/dp/dp_panel.c
index 5149ceb..3344f5a 100644
--- a/drivers/gpu/drm/msm/dp/dp_panel.c
+++ b/drivers/gpu/drm/msm/dp/dp_panel.c
@@ -78,10 +78,6 @@ static int dp_panel_read_dpcd(struct dp_panel *dp_panel)
 	if (link_info->num_lanes > dp_panel->max_dp_lanes)
 		link_info->num_lanes = dp_panel->max_dp_lanes;
 
-	/* Limit support upto HBR2 until HBR3 support is added */
-	if (link_info->rate >= (drm_dp_bw_code_to_link_rate(DP_LINK_BW_5_4)))
-		link_info->rate = drm_dp_bw_code_to_link_rate(DP_LINK_BW_5_4);
-
 	drm_dbg_dp(panel->drm_dev, "version: %d.%d\n", major, minor);
 	drm_dbg_dp(panel->drm_dev, "link_rate=%d\n", link_info->rate);
 	drm_dbg_dp(panel->drm_dev, "lane_count=%d\n", link_info->num_lanes);
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project



* Re: [PATCH] drm/msm/dp: remove limitation of link rate at 5.4G to support HBR3
From: Dmitry Baryshkov @ 2022-10-31 18:46 UTC
  To: Kuogee Hsieh, robdclark, sean, swboyd, dianders, vkoul, daniel,
	airlied, agross, bjorn.andersson
  Cc: quic_abhinavk, quic_sbillaka, freedreno, dri-devel,
	linux-arm-msm, linux-kernel

On 31/10/2022 20:27, Kuogee Hsieh wrote:
> An HBR3-capable device shall also support TPS4. Since the TPS4 feature
> has already been implemented, it is no longer necessary to cap the link
> rate at HBR2 (5.4 Gbps). Remove this limitation so that the HBR3
> (8.1 Gbps) link rate can be used.

The DP driver supports several platforms including sdm845 and can 
support, if I'm not mistaken, platforms up to msm8998/sdm630/660. Could 
you please confirm that all these SoCs have support for HBR3?

With that fact being confirmed:

Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>


> [...]

-- 
With best wishes
Dmitry



* Re: [PATCH] drm/msm/dp: remove limitation of link rate at 5.4G to support HBR3
From: Kuogee Hsieh @ 2022-10-31 21:11 UTC
  To: Dmitry Baryshkov, robdclark, sean, swboyd, dianders, vkoul,
	daniel, airlied, agross, bjorn.andersson
  Cc: quic_abhinavk, quic_sbillaka, freedreno, dri-devel,
	linux-arm-msm, linux-kernel

Hi Dmitry,


The link rate is advertised by the sink, but the host adjusts (reduces)
it during link training.

Therefore it should be fine even if the host does not support the HBR3
rate: the link will simply drop to a lower rate during the link-training
procedure.
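
Roughly, the fallback works like the sketch below (illustrative only;
the actual down-shift logic in dp_ctrl.c is more involved and the name
here is made up):

    /* step down one DP link rate after a failed training attempt;
     * rates are in kHz, matching drm_dp_bw_code_to_link_rate() */
    static int dp_link_rate_down_shift(u32 rate_khz)
    {
            switch (rate_khz) {
            case 810000: return 540000;     /* HBR3 -> HBR2 */
            case 540000: return 270000;     /* HBR2 -> HBR  */
            case 270000: return 162000;     /* HBR  -> RBR  */
            default:     return -EINVAL;    /* nothing lower to try */
            }
    }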

kuogee

On 10/31/2022 11:46 AM, Dmitry Baryshkov wrote:
> On 31/10/2022 20:27, Kuogee Hsieh wrote:
>> An HBR3-capable device shall also support TPS4. Since the TPS4 feature
>> has already been implemented, it is no longer necessary to cap the link
>> rate at HBR2 (5.4 Gbps). Remove this limitation so that the HBR3
>> (8.1 Gbps) link rate can be used.
>
> The DP driver supports several platforms including sdm845 and can 
> support, if I'm not mistaken, platforms up to msm8998/sdm630/660. 
> Could you please confirm that all these SoCs have support for HBR3?
>
> With that fact being confirmed:
>
> Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
>
>
>> [...]


* Re: [PATCH] drm/msm/dp: remove limitation of link rate at 5.4G to support HBR3
From: Doug Anderson @ 2022-11-01  0:08 UTC
  To: Kuogee Hsieh
  Cc: Dmitry Baryshkov, robdclark, sean, swboyd, vkoul, daniel,
	airlied, agross, bjorn.andersson, quic_abhinavk, quic_sbillaka,
	freedreno, dri-devel, linux-arm-msm, linux-kernel

Hi,

On Mon, Oct 31, 2022 at 2:11 PM Kuogee Hsieh <quic_khsieh@quicinc.com> wrote:
>
> Hi Dmitry,
>
>
> The link rate is advertised by the sink, but the host adjusts (reduces)
> it during link training.
>
> Therefore it should be fine even if the host does not support the HBR3
> rate: the link will simply drop to a lower rate during the link-training
> procedure.
>
> kuogee
>
> [...]

Stephen might remember better, but I could have sworn that the problem
was that there might be something in the middle that couldn't support
the higher link rate. In other words, I think we have:

SoC <--> TypeC Port Controller <--> Display

The SoC might support HBR3 and the display might support HBR3, but the
TCPC (Type C Port Controller) might not. I think that the TCPC is a
silent/passive component so it can't really let anyone know about its
limitations.

In theory I guess you could rely on link training to just happen to
fail if you drive the link too fast for the TCPC to handle. Does this
actually work reliably?

I think the other option that was discussed in the past was to add
something in the device tree for this. Either you could somehow model
the TCPC in DRM and thus know that a given model of TCPC limits the
link rate or you could hack in a property in the DP controller to
limit it.

-Doug


* Re: [PATCH] drm/msm/dp: remove limitation of link rate at 5.4G to support HBR3
From: Dmitry Baryshkov @ 2022-11-01  0:15 UTC
  To: Doug Anderson, Kuogee Hsieh
  Cc: robdclark, sean, swboyd, vkoul, daniel, airlied, agross,
	bjorn.andersson, quic_abhinavk, quic_sbillaka, freedreno,
	dri-devel, linux-arm-msm, linux-kernel

On 01/11/2022 03:08, Doug Anderson wrote:
> Hi,
> 
> [...]
> 
> Stephen might remember better, but I could have sworn that the problem
> was that there might be something in the middle that couldn't support
> the higher link rate. In other words, I think we have:
> 
> SoC <--> TypeC Port Controller <--> Display
> 
> The SoC might support HBR3 and the display might support HBR3, but the
> TCPC (Type C Port Controller) might not. I think that the TCPC is a
> silent/passive component so it can't really let anyone know about its
> limitations.
> 
> In theory I guess you could rely on link training to just happen to
> fail if you drive the link too fast for the TCPC to handle. Does this
> actually work reliably?
> 
> I think the other option that was discussed in the past was to add
> something in the device tree for this. Either you could somehow model
> the TCPC in DRM and thus know that a given model of TCPC limits the
> link rate or you could hack in a property in the DP controller to
> limit it.

The latest pmic_glink proposal from Bjorn includes adding a drm_bridge
for the TCPC. Such a bridge can in theory limit the supported modes and
rates.

-- 
With best wishes
Dmitry



* Re: [PATCH] drm/msm/dp: remove limitation of link rate at 5.4G to support HBR3
From: Doug Anderson @ 2022-11-01 14:37 UTC
  To: Dmitry Baryshkov
  Cc: Kuogee Hsieh, robdclark, sean, swboyd, vkoul, daniel, airlied,
	agross, quic_abhinavk, quic_sbillaka, freedreno, dri-devel,
	linux-arm-msm, linux-kernel, Bjorn Andersson

Hi,

On Mon, Oct 31, 2022 at 5:15 PM Dmitry Baryshkov
<dmitry.baryshkov@linaro.org> wrote:
>
> [...]
>
> The latest pmic_glink proposal from Bjorn includes adding a drm_bridge
> for the TCPC. Such a bridge can in theory limit the supported modes and
> rates.

Excellent! Even so, I think this isn't totally a solved problem,
right? Even though a bridge seems like a good place for this, last I
remember checking the bridge API wasn't expressive enough to solve
this problem. A bridge could limit pixel clocks just fine, but here we
need to take into account other considerations to know if a given
pixel clock can work at 5.4 GHz or not. For instance, if we're at 4
lanes we could maybe make a given pixel clock at 5.4 GHz but not if we
only have 2 lanes. I don't think that the DP controller passes the
number of lanes to other parts of the bridge chain, though maybe
there's some trick for it?
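
To make the lane-count point concrete, here is a back-of-the-envelope
check (illustrative only; a real check also has to account for blanking,
FEC, DSC, audio, etc.):

    /* 8b/10b encoding leaves 80% of the raw symbol rate for pixels:
     *   2 lanes @ HBR3: 8.1 Gbps * 2 * 0.8 = 12.96 Gbps
     *   4 lanes @ HBR2: 5.4 Gbps * 4 * 0.8 = 17.28 Gbps
     * A 4K60 CTA mode (594 MHz pixel clock at 24 bpp) needs ~14.3 Gbps,
     * so it fits on 4 lanes at HBR2 but not on 2 lanes even at HBR3;
     * the pixel clock alone can't tell you whether the link works. */
    static bool dp_mode_fits(u64 pixel_clk_hz, u32 bpp,
                             u64 rate_per_lane_hz, u32 lanes)
    {
            return pixel_clk_hz * bpp <= rate_per_lane_hz * lanes * 8 / 10;
    }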

...I guess the other problem is that all existing users aren't
currently modeling their TCPC in this way. What happens to them?

-Doug


* Re: [PATCH] drm/msm/dp: remove limitation of link rate at 5.4G to support HBR3
From: Doug Anderson @ 2022-11-02 15:47 UTC
  To: Dmitry Baryshkov
  Cc: Kuogee Hsieh, robdclark, sean, swboyd, vkoul, daniel, airlied,
	agross, quic_abhinavk, quic_sbillaka, freedreno, dri-devel,
	linux-arm-msm, linux-kernel, Bjorn Andersson

Hi,

On Tue, Nov 1, 2022 at 7:37 AM Doug Anderson <dianders@chromium.org> wrote:
>
> [...]
>
> Excellent! Even so, I think this isn't totally a solved problem,
> right? Even though a bridge seems like a good place for this, last I
> remember checking the bridge API wasn't expressive enough to solve
> this problem. A bridge could limit pixel clocks just fine, but here we
> need to take into account other considerations to know if a given
> pixel clock can work at 5.4 GHz or not. For instance, if we're at 4
> lanes we could maybe make a given pixel clock at 5.4 GHz but not if we
> only have 2 lanes. I don't think that the DP controller passes the
> number of lanes to other parts of the bridge chain, though maybe
> there's some trick for it?
>
> ...I guess the other problem is that all existing users aren't
> currently modeling their TCPC in this way. What happens to them?

FWIW, I did more research on the "let's rely on link training to detect
TCPCs that only support HBR2" idea. I haven't tested it myself, but from
looking at a 1.5-year-old internal bug where we discussed this before,
folks at both Qualcomm and Google were skeptical. Both parties had past
experience where link training would succeed but the display wouldn't be
reliable at the higher link rate.

I guess that leaves us with 3 possible approaches:

1. Someone figures out how to model this with the bridge chain and
then we only allow HBR3 if we detect we've got a TCPC that supports
it. This seems like the cleanest / best but feels like a long pole.
Not only have we been trying to get the TCPC-modeled-as-a-bridge stuff
landed for a long time but even when we do it we still don't have a
solution for how to communicate the number of lanes and other stuff
between the TCPC and the DP controller so we have to enrich the bridge
interface.

2. We add in a DT property to the display controller node that says
the max link rate for use on this board. This feels like a hack, but
maybe it's not too bad. Certainly it would be incredibly simple to
implement. Actually... ...one could argue that even if we later model
the TCPC as a bridge that this property would still be valid / useful!
You could certainly imagine that the SoC supports HBR3 and the TCPC
supports HBR3 but that the board routing between the SoC and the TCPC
is bad and only supports HBR2. In this case the only way out is
essentially a "board constraint" AKA a DT property in the DP
controller.

3. We could do some hack based on the SoC. We could assume that newer
SoCs will have a TCPC that was tested with HBR3. This doesn't require
any DT changes and would work, but feels like it won't stand the test
of time.

I'd vote for #2 but I'm interested in what others say.

-Doug


* Re: [PATCH] drm/msm/dp: remove limitation of link rate at 5.4G to support HBR3
From: Dmitry Baryshkov @ 2022-11-02 17:15 UTC
  To: Doug Anderson
  Cc: Kuogee Hsieh, robdclark, sean, swboyd, vkoul, daniel, airlied,
	agross, quic_abhinavk, quic_sbillaka, freedreno, dri-devel,
	linux-arm-msm, linux-kernel, Bjorn Andersson

On 01/11/2022 17:37, Doug Anderson wrote:
> Hi,
> 
> On Mon, Oct 31, 2022 at 5:15 PM Dmitry Baryshkov
> <dmitry.baryshkov@linaro.org> wrote:
>>
>> [...]
>>
>> The latest pmic_glink proposal from Bjorn includes adding a drm_bridge
>> for the TCPC. Such a bridge can in theory limit the supported modes and
>> rates.
> 
> Excellent! Even so, I think this isn't totally a solved problem,
> right? Even though a bridge seems like a good place for this, last I
> remember checking the bridge API wasn't expressive enough to solve
> this problem. A bridge could limit pixel clocks just fine, but here we
> need to take into account other considerations to know if a given
> pixel clock can work at 5.4 GHz or not. For instance, if we're at 4
> lanes we could maybe make a given pixel clock at 5.4 GHz but not if we
> only have 2 lanes. I don't think that the DP controller passes the
> number of lanes to other parts of the bridge chain, though maybe
> there's some trick for it?

I hope that somebody will fix the MSM DP driver's data-lanes property
usage to follow the usual convention (as part of the DT graph). Then it
would be possible to query the number of lanes from the bridge.

> ...I guess the other problem is that all existing users aren't
> currently modeling their TCPC in this way. What happens to them?

There are no existing users. Bryan implemented TCPM support at some 
point, but we never pushed this upstream.

-- 
With best wishes
Dmitry



* Re: [PATCH] drm/msm/dp: remove limitation of link rate at 5.4G to support HBR3
From: Dmitry Baryshkov @ 2022-11-02 17:23 UTC
  To: Doug Anderson
  Cc: Kuogee Hsieh, robdclark, sean, swboyd, vkoul, daniel, airlied,
	agross, quic_abhinavk, quic_sbillaka, freedreno, dri-devel,
	linux-arm-msm, linux-kernel, Bjorn Andersson

On 02/11/2022 18:47, Doug Anderson wrote:
> Hi,
> 
> [...]
> 
> FWIW, I did more research on the "let's rely on link training to detect
> TCPCs that only support HBR2" idea. I haven't tested it myself, but from
> looking at a 1.5-year-old internal bug where we discussed this before,
> folks at both Qualcomm and Google were skeptical. Both parties had past
> experience where link training would succeed but the display wouldn't be
> reliable at the higher link rate.
> 
> I guess that leaves us with 3 possible approaches:
> 
> 1. Someone figures out how to model this with the bridge chain and
> then we only allow HBR3 if we detect we've got a TCPC that supports
> it. This seems like the cleanest / best but feels like a long pole.
> Not only have we been trying to get the TCPC-modeled-as-a-bridge stuff
> landed for a long time but even when we do it we still don't have a
> solution for how to communicate the number of lanes and other stuff
> between the TCPC and the DP controller so we have to enrich the bridge
> interface.

I think we'd need some OOB interface. For example, for DSI interfaces we
have the mipi_dsi_device struct to communicate such OOB data.

Also take note of the data-lanes point from my previous email.

> 
> 2. We add in a DT property to the display controller node that says
> the max link rate for use on this board. This feels like a hack, but
> maybe it's not too bad. Certainly it would be incredibly simple to
> implement. Actually... ...one could argue that even if we later model
> the TCPC as a bridge that this property would still be valid / useful!
> You could certainly imagine that the SoC supports HBR3 and the TCPC
> supports HBR3 but that the board routing between the SoC and the TCPC
> is bad and only supports HBR2. In this case the only way out is
> essentially a "board constraint" AKA a DT property in the DP
> controller.

We have been discussing similar topics with Abhinav. Krzysztof suggested
using the link-frequencies property to provide the max and min values.

> 
> 3. We could do some hack based on the SoC. We could assume that newer
> SoCs will have a TCPC that was tested with HBR3. This doesn't require
> any DT changes and would work, but feels like it won't stand the test
> of time.
> 
> I'd vote for #2 but I'm interested in what others say.
> 
> -Doug

-- 
With best wishes
Dmitry



* Re: [PATCH] drm/msm/dp: remove limitation of link rate at 5.4G to support HBR3
From: Doug Anderson @ 2022-11-02 17:25 UTC
  To: Dmitry Baryshkov
  Cc: Kuogee Hsieh, robdclark, sean, swboyd, vkoul, daniel, airlied,
	agross, quic_abhinavk, quic_sbillaka, freedreno, dri-devel,
	linux-arm-msm, linux-kernel, Bjorn Andersson

Hi,

On Wed, Nov 2, 2022 at 10:15 AM Dmitry Baryshkov
<dmitry.baryshkov@linaro.org> wrote:
>
> [...]
>
> I hope that somebody will fix the MSM DP driver's data-lanes property
> usage to follow the usual convention (as part of the DT graph). Then it
> would be possible to query the number of lanes from the bridge.

Sorry, can you explain how exactly this works?

I suspect that _somehow_ we need to get info from the TCPC to the DP
controller driver about the maximum link rate. I think anything where
the TCPC uses mode_valid() to reject modes and tries to make decisions
based on "pixel clock" is going to be bad. If nothing else, I think
that during link training the DP controller can try many different
things to see what works. It may try varying the number of lanes, the
BPC, the link rate, etc. I don't think mode_valid() is called each
time through here.


> > ...I guess the other problem is that all existing users aren't
> > currently modeling their TCPC in this way. What happens to them?
>
> There are no existing users. Bryan implemented TCPM support at some
> point, but we never pushed this upstream.

I mean existing DP users, like sc7180-trogdor devices. If the TCPC
isn't modeled, then these need to continue defaulting to HBR2 since at
least some of the boards have HBR2-only TCPCs.


* Re: [PATCH] drm/msm/dp: remove limitation of link rate at 5.4G to support HBR3
From: Doug Anderson @ 2022-11-02 17:28 UTC
  To: Dmitry Baryshkov
  Cc: Kuogee Hsieh, robdclark, sean, swboyd, vkoul, daniel, airlied,
	agross, quic_abhinavk, quic_sbillaka, freedreno, dri-devel,
	linux-arm-msm, linux-kernel, Bjorn Andersson

Hi,

On Wed, Nov 2, 2022 at 10:23 AM Dmitry Baryshkov
<dmitry.baryshkov@linaro.org> wrote:
>
> > 1. Someone figures out how to model this with the bridge chain and
> > then we only allow HBR3 if we detect we've got a TCPC that supports
> > it. This seems like the cleanest / best but feels like a long pole.
> > Not only have we been trying to get the TCPC-modeled-as-a-bridge stuff
> > landed for a long time but even when we do it we still don't have a
> > solution for how to communicate the number of lanes and other stuff
> > between the TCPC and the DP controller so we have to enrich the bridge
> > interface.
>
> I think we'd need some OOB interface. For example, for DSI interfaces we
> have the mipi_dsi_device struct to communicate such OOB data.
>
> Also take note of the data-lanes point from my previous email.

Right, if we can somehow communicate the max link rate through the
bridge chain to the DP controller in an OOB manner, that would work.


> > 2. We add in a DT property to the display controller node that says
> > the max link rate for use on this board. This feels like a hack, but
> > maybe it's not too bad. Certainly it would be incredibly simple to
> > implement. Actually... ...one could argue that even if we later model
> > the TCPC as a bridge that this property would still be valid / useful!
> > You could certainly imagine that the SoC supports HBR3 and the TCPC
> > supports HBR3 but that the board routing between the SoC and the TCPC
> > is bad and only supports HBR2. In this case the only way out is
> > essentially a "board constraint" AKA a DT property in the DP
> > controller.
>
> We have been discussing similar topics with Abhinav. Krzysztof suggested
> using the link-frequencies property to provide the max and min values.

This sounds good to me and seems worth doing even if we eventually do #1.


* Re: [PATCH] drm/msm/dp: remove limitation of link rate at 5.4G to support HBR3
From: Dmitry Baryshkov @ 2022-11-02 17:42 UTC
  To: Doug Anderson
  Cc: Kuogee Hsieh, robdclark, sean, swboyd, vkoul, daniel, airlied,
	agross, quic_abhinavk, quic_sbillaka, freedreno, dri-devel,
	linux-arm-msm, linux-kernel, Bjorn Andersson

On 02/11/2022 20:25, Doug Anderson wrote:
> Hi,
> 
> On Wed, Nov 2, 2022 at 10:15 AM Dmitry Baryshkov
> <dmitry.baryshkov@linaro.org> wrote:
>>
>> On 01/11/2022 17:37, Doug Anderson wrote:
>>> Hi,
>>>
>>> On Mon, Oct 31, 2022 at 5:15 PM Dmitry Baryshkov
>>> <dmitry.baryshkov@linaro.org> wrote:
>>>>
>>>> On 01/11/2022 03:08, Doug Anderson wrote:
>>>>> Hi,
>>>>>
>>>>> On Mon, Oct 31, 2022 at 2:11 PM Kuogee Hsieh <quic_khsieh@quicinc.com> wrote:
>>>>>>
>>>>>> Hi Dmitry,
>>>>>>
>>>>>>
>>>>>> Link rate is advertised by sink, but adjusted (reduced the link rate)
>>>>>> by host during link training.
>>>>>>
>>>>>> Therefore should be fine if host did not support HBR3 rate.
>>>>>>
>>>>>> It will reduce to lower link rate during link training procedures.
>>>>>>
>>>>>> kuogee
>>>>>>
>>>>>> On 10/31/2022 11:46 AM, Dmitry Baryshkov wrote:
>>>>>>> On 31/10/2022 20:27, Kuogee Hsieh wrote:
>>>>>>>> An HBR3-capable device shall also support TPS4. Since TPS4 feature
>>>>>>>> had been implemented already, it is not necessary to limit link
>>>>>>>> rate at HBR2 (5.4G). This patch remove this limitation to support
>>>>>>>> HBR3 (8.1G) link rate.
>>>>>>>
>>>>>>> The DP driver supports several platforms including sdm845 and can
>>>>>>> support, if I'm not mistaken, platforms up to msm8998/sdm630/660.
>>>>>>> Could you please confirm that all these SoCs have support for HBR3?
>>>>>>>
>>>>>>> With that fact being confirmed:
>>>>>>>
>>>>>>> Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> Signed-off-by: Kuogee Hsieh <quic_khsieh@quicinc.com>
>>>>>>>> ---
>>>>>>>>      drivers/gpu/drm/msm/dp/dp_panel.c | 4 ----
>>>>>>>>      1 file changed, 4 deletions(-)
>>>>>>>>
>>>>>>>> diff --git a/drivers/gpu/drm/msm/dp/dp_panel.c
>>>>>>>> b/drivers/gpu/drm/msm/dp/dp_panel.c
>>>>>>>> index 5149ceb..3344f5a 100644
>>>>>>>> --- a/drivers/gpu/drm/msm/dp/dp_panel.c
>>>>>>>> +++ b/drivers/gpu/drm/msm/dp/dp_panel.c
>>>>>>>> @@ -78,10 +78,6 @@ static int dp_panel_read_dpcd(struct dp_panel
>>>>>>>> *dp_panel)
>>>>>>>>          if (link_info->num_lanes > dp_panel->max_dp_lanes)
>>>>>>>>              link_info->num_lanes = dp_panel->max_dp_lanes;
>>>>>>>>      -    /* Limit support upto HBR2 until HBR3 support is added */
>>>>>>>> -    if (link_info->rate >=
>>>>>>>> (drm_dp_bw_code_to_link_rate(DP_LINK_BW_5_4)))
>>>>>>>> -        link_info->rate = drm_dp_bw_code_to_link_rate(DP_LINK_BW_5_4);
>>>>>>>> -
>>>>>>>>          drm_dbg_dp(panel->drm_dev, "version: %d.%d\n", major, minor);
>>>>>>>>          drm_dbg_dp(panel->drm_dev, "link_rate=%d\n", link_info->rate);
>>>>>>>>          drm_dbg_dp(panel->drm_dev, "lane_count=%d\n",
>>>>>>>> link_info->num_lanes);
>>>>>
>>>>> Stephen might remember better, but I could have sworn that the problem
>>>>> was that there might be something in the middle that couldn't support
>>>>> the higher link rate. In other words, I think we have:
>>>>>
>>>>> SoC <--> TypeC Port Controller <--> Display
>>>>>
>>>>> The SoC might support HBR3 and the display might support HBR3, but the
>>>>> TCPC (Type C Port Controller) might not. I think that the TCPC is a
>>>>> silent/passive component so it can't really let anyone know about its
>>>>> limitations.
>>>>>
>>>>> In theory I guess you could rely on link training to just happen to
>>>>> fail if you drive the link too fast for the TCPC to handle. Does this
>>>>> actually work reliably?
>>>>>
>>>>> I think the other option that was discussed in the past was to add
>>>>> something in the device tree for this. Either you could somehow model
>>>>> the TCPC in DRM and thus know that a given model of TCPC limits the
>>>>> link rate or you could hack in a property in the DP controller to
>>>>> limit it.
>>>>
>>>> Latest pmic_glink proposal from Bjorn include adding the drm_bridge for
>>>> the TCPC. Such bridge can in theory limit supported modes and rates.
>>>
>>> Excellent! Even so, I think this isn't totally a solved problem,
>>> right? Even though a bridge seems like a good place for this, last I
>>> remember checking the bridge API wasn't expressive enough to solve
>>> this problem. A bridge could limit pixel clocks just fine, but here we
>>> need to take into account other considerations to know if a given
>>> pixel clock can work at 5.4 GHz or not. For instance, if we're at 4
>>> lanes we could maybe make a given pixel clock at 5.4 GHz but not if we
>>> only have 2 lanes. I don't think that the DP controller passes the
>>> number of lanes to other parts of the bridge chain, though maybe
>>> there's some trick for it?
>>
>> I hope that somebody will fix the MSM DP driver's data-lanes property
>> usage to follow the usual convention (as part of the DT graph). Then it
>> would be possible to query the number of lanes from the bridge.
> 
> Sorry, can you explain how exactly this works?

This was related to your point about communicating the number of data
lanes. Currently the DP nodes have data-lanes in the device node itself.
This contradicts the typical definition and usage of the property, which
is meant to live in the graph endpoint. Once that is fixed, the
drm_of_get_data_lanes_count() and drm_of_get_data_lanes_count_ep()
helpers can be used to query the data-lanes value.
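
A minimal sketch of such a query, assuming the DP output were described
as an OF-graph endpoint carrying data-lanes (the port number is an
assumption, not the actual MSM binding):

    #include <drm/drm_of.h>
    #include <linux/of_graph.h>

    static int dp_get_lane_count(struct device *dev)
    {
            struct device_node *ep;
            int lanes;

            /* assume the DP output sits on port@1, any endpoint */
            ep = of_graph_get_endpoint_by_regs(dev->of_node, 1, -1);
            if (!ep)
                    return 4;       /* no graph described: assume max */

            lanes = drm_of_get_data_lanes_count(ep, 1, 4);
            of_node_put(ep);

            return lanes < 0 ? 4 : lanes;
    }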

> I suspect that _somehow_ we need to get info from the TCPC to the DP
> controller driver about the maximum link rate. I think anything where
> the TCPC uses mode_valid() to reject modes and tries to make decisions
> based on "pixel clock" is going to be bad. If nothing else, I think
> that during link training the DP controller can try many different
> things to see what works. It may try varying the number of lanes, the
> BPC, the link rate, etc. I don't think mode_valid() is called each
> time through here.

In the worst case this can become a new max-data-rate property, or the
existing link-frequencies property. But this needs to be defined in the
board file (or in the TCPC driver if that's a hardware limitation).

Given the existing dp_panel code, I think the fix can be to check for
the link-frequencies property and to limit link_info->rate based on its
value.
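
A minimal sketch of that, assuming link-frequencies sits in the DP
controller node and is given in Hz (the helper name and the exact node
layout are made up for illustration):

    /* board DT fragment capping the link at HBR2:
     *
     *     &mdss_dp {
     *             link-frequencies = /bits/ 64
     *                     <1620000000 2700000000 5400000000>;
     *     };
     */

    #include <linux/of.h>
    #include <linux/math64.h>

    static u64 dp_panel_dt_max_link_rate_khz(struct device_node *np)
    {
            u64 rate, max_rate = 0;
            int i, cnt;

            cnt = of_property_count_u64_elems(np, "link-frequencies");
            for (i = 0; i < cnt; i++)
                    if (!of_property_read_u64_index(np, "link-frequencies",
                                                    i, &rate) &&
                        rate > max_rate)
                            max_rate = rate;

            return div_u64(max_rate, 1000); /* 0 means no board limit */
    }

Then dp_panel_read_dpcd() could clamp the sink's advertised rate (which
is in kHz, matching drm_dp_bw_code_to_link_rate()):

            max_rate = dp_panel_dt_max_link_rate_khz(dev->of_node);
            if (max_rate && link_info->rate > max_rate)
                    link_info->rate = max_rate;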


>>> ...I guess the other problem is that all existing users aren't
>>> currently modeling their TCPC in this way. What happens to them?
>>
>> There are no existing users. Bryan implemented TCPM support at some
>> point, but we never pushed this upstream.
> 
> I mean existing DP users, like sc7180-trogdor devices. If the TCPC
> isn't modeled, then these need to continue defaulting to HBR2 since at
> least some of the boards have HBR2-only TCPCs.

Ack. I think somebody has to describe the DP links properly on those
platforms, e.g. by adding usb-connector nodes, etc. (I assume the
existing sc7180/7280 platforms route DP over USB-C connectors, mixed
with USB, rather than over plain DP/uDP connectors). Let's see how
Bjorn's proposal goes.


-- 
With best wishes
Dmitry



* Re: [PATCH] drm/msm/dp: remove limitation of link rate at 5.4G to support HBR3
From: Dmitry Baryshkov @ 2022-11-02 18:04 UTC
  To: Doug Anderson
  Cc: Kuogee Hsieh, robdclark, sean, swboyd, vkoul, daniel, airlied,
	agross, quic_abhinavk, quic_sbillaka, freedreno, dri-devel,
	linux-arm-msm, linux-kernel, Bjorn Andersson

On 02/11/2022 20:28, Doug Anderson wrote:
> Hi,
> 
> On Wed, Nov 2, 2022 at 10:23 AM Dmitry Baryshkov
> <dmitry.baryshkov@linaro.org> wrote:
>>
>>> 1. Someone figures out how to model this with the bridge chain and
>>> then we only allow HBR3 if we detect we've got a TCPC that supports
>>> it. This seems like the cleanest / best but feels like a long pole.
>>> Not only have we been trying to get the TCPC-modeled-as-a-bridge stuff
>>> landed for a long time but even when we do it we still don't have a
>>> solution for how to communicate the number of lanes and other stuff
>>> between the TCPC and the DP controller so we have to enrich the bridge
>>> interface.
>>
>> I think we'd need some OOB interface. For example, for DSI interfaces we
>> have the mipi_dsi_device struct to communicate such OOB data.
>>
>> Also take note of the data-lanes point from my previous email.
> 
> Right, if we can somehow communicate the max link rate through the
> bridge chain to the DP controller in an OOB manner, that would work.

I'd note that our dp_panel has some notion of such OOB data, as do AUX
drivers including panel-edp. My suggestion would be to consider both of
them while modelling the OOB data.
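
For the DP case, the OOB data could be as simple as a small struct that
the TCPC bridge fills in and the DP controller consumes (hypothetical,
for illustration only):

    /* hypothetical DP analogue of mipi_dsi_device's OOB link data */
    struct dp_link_oob {
            unsigned int max_lanes;         /* lanes usable through the TCPC */
            unsigned int max_rate_khz;      /* board/TCPC link-rate ceiling  */
    };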

> 
> 
>>> 2. We add in a DT property to the display controller node that says
>>> the max link rate for use on this board. This feels like a hack, but
>>> maybe it's not too bad. Certainly it would be incredibly simple to
>>> implement. Actually... ...one could argue that even if we later model
>>> the TCPC as a bridge that this property would still be valid / useful!
>>> You could certainly imagine that the SoC supports HBR3 and the TCPC
>>> supports HBR3 but that the board routing between the SoC and the TCPC
>>> is bad and only supports HBR2. In this case the only way out is
>>> essentially a "board constraint" AKA a DT property in the DP
>>> controller.
>>
>> We have been discussing similar topics with Abhinav. Krzysztof suggested
>> using link-frequencies property to provide max and min values.
> 
> This sounds good to me and seems worth doing even if we eventually do #1.

And the bonus point is that it can be done easily.

-- 
With best wishes
Dmitry


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] drm/msm/dp: remove limitation of link rate at 5.4G to support HBR3
  2022-11-02 18:04                 ` Dmitry Baryshkov
@ 2022-11-09 23:47                   ` Kuogee Hsieh
  2022-11-10  7:43                     ` Dmitry Baryshkov
  0 siblings, 1 reply; 17+ messages in thread
From: Kuogee Hsieh @ 2022-11-09 23:47 UTC (permalink / raw)
  To: Dmitry Baryshkov, Doug Anderson
  Cc: robdclark, sean, swboyd, vkoul, daniel, airlied, agross,
	quic_abhinavk, quic_sbillaka, freedreno, dri-devel,
	linux-arm-msm, linux-kernel, Bjorn Andersson


On 11/2/2022 11:04 AM, Dmitry Baryshkov wrote:
> On 02/11/2022 20:28, Doug Anderson wrote:
>> Hi,
>>
>> On Wed, Nov 2, 2022 at 10:23 AM Dmitry Baryshkov
>> <dmitry.baryshkov@linaro.org> wrote:
>>>
>>>> 1. Someone figures out how to model this with the bridge chain and
>>>> then we only allow HBR3 if we detect we've got a TCPC that supports
>>>> it. This seems like the cleanest / best but feels like a long pole.
>>>> Not only have we been trying to get the TCPC-modeled-as-a-bridge stuff
>>>> landed for a long time but even when we do it we still don't have a
>>>> solution for how to communicate the number of lanes and other stuff
>>>> between the TCPC and the DP controller so we have to enrich the bridge
>>>> interface.
>>>
>>> I think we'd need some OOB interface. For example for DSI interfaces we
>>> have mipi_dsi_device struct to communicate such OOB data.
>>>
>>> Also take a note regarding data-lanes from my previous email.
>>
>> Right, we can somehow communicate the max link rate through the bridge
>> chain to the DP controller in an OOB manner that would work.
>
> I'd note that our dp_panel has some notion of such OOB data. So do AUX 
> drivers including the panel-edp. My suggestion would be to consider 
> both of them while modelling the OOB data.
>
>>
>>
>>>> 2. We add in a DT property to the display controller node that says
>>>> the max link rate for use on this board. This feels like a hack, but
>>>> maybe it's not too bad. Certainly it would be incredibly simple to
>>>> implement. Actually... ...one could argue that even if we later model
>>>> the TCPC as a bridge that this property would still be valid / useful!
>>>> You could certainly imagine that the SoC supports HBR3 and the TCPC
>>>> supports HBR3 but that the board routing between the SoC and the TCPC
>>>> is bad and only supports HBR2. In this case the only way out is
>>>> essentially a "board constraint" AKA a DT property in the DP
>>>> controller.
>>>
>>> We have been discussing similar topics with Abhinav. Krzysztof 
>>> suggested
>>> using link-frequencies property to provide max and min values.

Questions:

1) Has Krzysztof's suggestion been implemented already?

2) Where is the link property where I can add link-frequencies?


>>
>> This sounds good to me and seems worth doing even if we eventually do 
>> #1.
>
> And the bonus point is that it can be done easily.
>

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] drm/msm/dp: remove limitation of link rate at 5.4G to support HBR3
  2022-11-09 23:47                   ` Kuogee Hsieh
@ 2022-11-10  7:43                     ` Dmitry Baryshkov
  2022-11-15 18:43                       ` Kuogee Hsieh
  0 siblings, 1 reply; 17+ messages in thread
From: Dmitry Baryshkov @ 2022-11-10  7:43 UTC (permalink / raw)
  To: Kuogee Hsieh, Doug Anderson
  Cc: robdclark, sean, swboyd, vkoul, daniel, airlied, agross,
	quic_abhinavk, quic_sbillaka, freedreno, dri-devel,
	linux-arm-msm, linux-kernel, Bjorn Andersson

On 10/11/2022 02:47, Kuogee Hsieh wrote:
> 
> On 11/2/2022 11:04 AM, Dmitry Baryshkov wrote:
>> On 02/11/2022 20:28, Doug Anderson wrote:
>>> Hi,
>>>
>>> On Wed, Nov 2, 2022 at 10:23 AM Dmitry Baryshkov
>>> <dmitry.baryshkov@linaro.org> wrote:
>>>>
>>>>> 1. Someone figures out how to model this with the bridge chain and
>>>>> then we only allow HBR3 if we detect we've got a TCPC that supports
>>>>> it. This seems like the cleanest / best but feels like a long pole.
>>>>> Not only have we been trying to get the TCPC-modeled-as-a-bridge stuff
>>>>> landed for a long time but even when we do it we still don't have a
>>>>> solution for how to communicate the number of lanes and other stuff
>>>>> between the TCPC and the DP controller so we have to enrich the bridge
>>>>> interface.
>>>>
>>>> I think we'd need some OOB interface. For example for DSI interfaces we
>>>> have mipi_dsi_device struct to communicate such OOB data.
>>>>
>>>> Also take a note regarding data-lanes from my previous email.
>>>
>>> Right, we can somehow communicate the max link rate through the bridge
>>> chain to the DP controller in an OOB manner that would work.
>>
>> I'd note that our dp_panel has some notion of such OOB data. So do AUX 
>> drivers including the panel-edp. My suggestion would be to consider 
>> both of them while modelling the OOB data.
>>
>>>
>>>
>>>>> 2. We add in a DT property to the display controller node that says
>>>>> the max link rate for use on this board. This feels like a hack, but
>>>>> maybe it's not too bad. Certainly it would be incredibly simple to
>>>>> implement. Actually... ...one could argue that even if we later model
>>>>> the TCPC as a bridge that this property would still be valid / useful!
>>>>> You could certainly imagine that the SoC supports HBR3 and the TCPC
>>>>> supports HBR3 but that the board routing between the SoC and the TCPC
>>>>> is bad and only supports HBR2. In this case the only way out is
>>>>> essentially a "board constraint" AKA a DT property in the DP
>>>>> controller.
>>>>
>>>> We have been discussing similar topics with Abhinav. Krzysztof 
>>>> suggested
>>>> using link-frequencies property to provide max and min values.
> 
> Questions:
> 
> 1) Has Krzysztof's suggestion been implemented already?

I can not parse this question, please excuse me.

Yes, Krzysztof suggested implementing this as a link property; see 
media/video-interfaces.txt.

Moreover your implementation goes against both the existing definition 
(array with the list of frequencies) and Krzysztof's suggested extension 
(min and max). Listing just a single frequency goes against both these 
suggestions. In case of DP we have a fixed set of frequencies. Thus I'd 
suggest listing all supported frequencies instead.

> 2) Where is the link property where I can add link-frequencies?

The link node. Create an outbound graph node and add link-frequencies 
there. Also, as you are touching this part, please move the data-lanes 
property too.
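
On the driver side the parsing would then look roughly like this
(untested fragment for dp_parser or similar; the port/endpoint indices
are assumptions about how the graph would be laid out, and it needs
linux/of_graph.h):

	struct device_node *ep;
	u64 freq, max_hz = 0;
	int i, cnt, lanes;

	/* assume the DP output is port@1, endpoint@0 of the controller */
	ep = of_graph_get_endpoint_by_regs(dev->of_node, 1, 0);
	if (ep) {
		/* one "data-lanes" entry per lane routed on the board */
		lanes = of_property_count_u32_elems(ep, "data-lanes");

		cnt = of_property_count_u64_elems(ep, "link-frequencies");
		for (i = 0; i < cnt; i++)
			if (!of_property_read_u64_index(ep,
					"link-frequencies", i, &freq))
				max_hz = max(max_hz, freq);
		of_node_put(ep);
		/* ...then apply lanes/max_hz as the board limits */
	}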

> 
> 
>>>
>>> This sounds good to me and seems worth doing even if we eventually do 
>>> #1.
>>
>> And the bonus point is that it can be done easily.
>>

-- 
With best wishes
Dmitry


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] drm/msm/dp: remove limitation of link rate at 5.4G to support HBR3
  2022-11-10  7:43                     ` Dmitry Baryshkov
@ 2022-11-15 18:43                       ` Kuogee Hsieh
  2022-11-16 13:40                         ` Dmitry Baryshkov
  0 siblings, 1 reply; 17+ messages in thread
From: Kuogee Hsieh @ 2022-11-15 18:43 UTC (permalink / raw)
  To: Dmitry Baryshkov, Doug Anderson
  Cc: robdclark, sean, swboyd, vkoul, daniel, airlied, agross,
	quic_abhinavk, quic_sbillaka, freedreno, dri-devel,
	linux-arm-msm, linux-kernel, Bjorn Andersson


On 11/9/2022 11:43 PM, Dmitry Baryshkov wrote:
> On 10/11/2022 02:47, Kuogee Hsieh wrote:
>>
>> On 11/2/2022 11:04 AM, Dmitry Baryshkov wrote:
>>> On 02/11/2022 20:28, Doug Anderson wrote:
>>>> Hi,
>>>>
>>>> On Wed, Nov 2, 2022 at 10:23 AM Dmitry Baryshkov
>>>> <dmitry.baryshkov@linaro.org> wrote:
>>>>>
>>>>>> 1. Someone figures out how to model this with the bridge chain and
>>>>>> then we only allow HBR3 if we detect we've got a TCPC that supports
>>>>>> it. This seems like the cleanest / best but feels like a long pole.
>>>>>> Not only have we been trying to get the TCPC-modeled-as-a-bridge 
>>>>>> stuff
>>>>>> landed for a long time but even when we do it we still don't have a
>>>>>> solution for how to communicate the number of lanes and other stuff
>>>>>> between the TCPC and the DP controller so we have to enrich the 
>>>>>> bridge
>>>>>> interface.
>>>>>
>>>>> I think we'd need some OOB interface. For example for DSI 
>>>>> interfaces we
>>>>> have mipi_dsi_device struct to communicate such OOB data.
>>>>>
>>>>> Also take a note regarding data-lanes from my previous email.
>>>>
>>>> Right, we can somehow communicate the max link rate through the bridge
>>>> chain to the DP controller in an OOB manner that would work.
>>>
>>> I'd note that our dp_panel has some notion of such OOB data. So do 
>>> AUX drivers including the panel-edp. My suggestion would be to 
>>> consider both of them while modelling the OOB data.
>>>
>>>>
>>>>
>>>>>> 2. We add in a DT property to the display controller node that says
>>>>>> the max link rate for use on this board. This feels like a hack, but
>>>>>> maybe it's not too bad. Certainly it would be incredibly simple to
>>>>>> implement. Actually... ...one could argue that even if we later 
>>>>>> model
>>>>>> the TCPC as a bridge that this property would still be valid / 
>>>>>> useful!
>>>>>> You could certainly imagine that the SoC supports HBR3 and the TCPC
>>>>>> supports HBR3 but that the board routing between the SoC and the 
>>>>>> TCPC
>>>>>> is bad and only supports HBR2. In this case the only way out is
>>>>>> essentially a "board constraint" AKA a DT property in the DP
>>>>>> controller.
>>>>>
>>>>> We have been discussing similar topics with Abhinav. Krzysztof 
>>>>> suggested
>>>>> using link-frequencies property to provide max and min values.
>>
>> Questions:
>>
>> 1) Has Krzysztof's suggestion been implemented already?
>
> I can not parse this question, please excuse me.
>
> Yes, Krzysztof suggested implementing this as a link property; see 
> media/video-interfaces.txt.
>
> Moreover your implementation goes against both the existing definition 
> (array with the list of frequencies) and Krzysztof's suggested 
> extension (min and max). Listing just a single frequency goes against 
> both these suggestions. In case of DP we have a fixed set of 
> frequencies. Thus I'd suggest listing all supported frequencies instead.

I think this proposal is kind of strange.

According to the DP spec, if a link supports 5.4G, then it must also 
support 1.6 and 2.7.

If it supports 8.1G, then it must also support 1.6, 2.7 and 5.4.

There is no link that supports only 2.7 and 5.4G without supporting 1.6G.
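
In other words, the complete menu is just four fixed values, and
supporting any entry implies supporting every lower one. As a sketch
(per-lane bit rates in Hz, assuming that is the unit the binding ends
up using):

static const u64 dp_link_rates_hz[] = {
	1620000000ULL,	/* RBR,  1.62G */
	2700000000ULL,	/* HBR,  2.7G  */
	5400000000ULL,	/* HBR2, 5.4G  */
	8100000000ULL,	/* HBR3, 8.1G  */
};

So listing the whole array carries no more information than the single
maximum entry.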

>
>> 2) Where is the link property where I can add link-frequencies?
>
> The link node. Create an outbound graph node and add link-frequencies 
> there. Also, as you are touching this part, please move the data-lanes 
> property too.
>
>>
>>
>>>>
>>>> This sounds good to me and seems worth doing even if we eventually 
>>>> do #1.
>>>
>>> And the bonus point is that it can be done easily.
>>>
>

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] drm/msm/dp: remove limitation of link rate at 5.4G to support HBR3
  2022-11-15 18:43                       ` Kuogee Hsieh
@ 2022-11-16 13:40                         ` Dmitry Baryshkov
  0 siblings, 0 replies; 17+ messages in thread
From: Dmitry Baryshkov @ 2022-11-16 13:40 UTC (permalink / raw)
  To: Kuogee Hsieh, Doug Anderson
  Cc: robdclark, sean, swboyd, vkoul, daniel, airlied, agross,
	quic_abhinavk, quic_sbillaka, freedreno, dri-devel,
	linux-arm-msm, linux-kernel, Bjorn Andersson

On 15/11/2022 21:43, Kuogee Hsieh wrote:
> 
> On 11/9/2022 11:43 PM, Dmitry Baryshkov wrote:
>> On 10/11/2022 02:47, Kuogee Hsieh wrote:
>>>
>>> On 11/2/2022 11:04 AM, Dmitry Baryshkov wrote:
>>>> On 02/11/2022 20:28, Doug Anderson wrote:
>>>>> Hi,
>>>>>
>>>>> On Wed, Nov 2, 2022 at 10:23 AM Dmitry Baryshkov
>>>>> <dmitry.baryshkov@linaro.org> wrote:
>>>>>>
>>>>>>> 1. Someone figures out how to model this with the bridge chain and
>>>>>>> then we only allow HBR3 if we detect we've got a TCPC that supports
>>>>>>> it. This seems like the cleanest / best but feels like a long pole.
>>>>>>> Not only have we been trying to get the TCPC-modeled-as-a-bridge 
>>>>>>> stuff
>>>>>>> landed for a long time but even when we do it we still don't have a
>>>>>>> solution for how to communicate the number of lanes and other stuff
>>>>>>> between the TCPC and the DP controller so we have to enrich the 
>>>>>>> bridge
>>>>>>> interface.
>>>>>>
>>>>>> I think we'd need some OOB interface. For example for DSI 
>>>>>> interfaces we
>>>>>> have mipi_dsi_device struct to communicate such OOB data.
>>>>>>
>>>>>> Also take a note regarding data-lanes from my previous email.
>>>>>
>>>>> Right, we can somehow communicate the max link rate through the bridge
>>>>> chain to the DP controller in an OOB manner that would work.
>>>>
>>>> I'd note that our dp_panel has some notion of such OOB data. So do 
>>>> AUX drivers including the panel-edp. My suggestion would be to 
>>>> consider both of them while modelling the OOB data.
>>>>
>>>>>
>>>>>
>>>>>>> 2. We add in a DT property to the display controller node that says
>>>>>>> the max link rate for use on this board. This feels like a hack, but
>>>>>>> maybe it's not too bad. Certainly it would be incredibly simple to
>>>>>>> implement. Actually... ...one could argue that even if we later 
>>>>>>> model
>>>>>>> the TCPC as a bridge that this property would still be valid / 
>>>>>>> useful!
>>>>>>> You could certainly imagine that the SoC supports HBR3 and the TCPC
>>>>>>> supports HBR3 but that the board routing between the SoC and the 
>>>>>>> TCPC
>>>>>>> is bad and only supports HBR2. In this case the only way out is
>>>>>>> essentially a "board constraint" AKA a DT property in the DP
>>>>>>> controller.
>>>>>>
>>>>>> We have been discussing similar topics with Abhinav. Krzysztof 
>>>>>> suggested
>>>>>> using link-frequencies property to provide max and min values.
>>>
>>> Questions:
>>>
>>> 1) Has Krzysztof's suggestion been implemented already?
>>
>> I can not parse this question, please excuse me.
>>
>> Yes, Krzysztof suggested implementing this as a link property; see 
>> media/video-interfaces.txt.
>>
>> Moreover your implementation goes against both the existing definition 
>> (array with the list of frequencies) and Krzysztof's suggested 
>> extension (min and max). Listing just a single frequency goes against 
>> both these suggestions. In case of DP we have a fixed set of 
>> frequencies. Thus I'd suggest listing all supported frequencies instead.
> 
> I think this proposal is kind of strange.
> 
> According to the DP spec, if a link supports 5.4G, then it must also 
> support 1.6 and 2.7.
> 
> If it supports 8.1G, then it must also support 1.6, 2.7 and 5.4.
> 
> There is no link that supports only 2.7 and 5.4G without supporting 1.6G.

Let me quote the docs.

  link-frequencies:
    $ref: /schemas/types.yaml#/definitions/uint64-array
    description:
      Allowed data bus frequencies. For MIPI CSI-2, for instance, this
      is the actual frequency of the bus, not bits per clock per lane
      value. An array of 64-bit unsigned integers.

Note: 'allowed data bus frequencies'. So by listing only the max 
frequency you'd break this description.

-- 
With best wishes
Dmitry


^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2022-11-16 13:40 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-10-31 17:27 [PATCH] drm/msm/dp: remove limitation of link rate at 5.4G to support HBR3 Kuogee Hsieh
2022-10-31 18:46 ` Dmitry Baryshkov
2022-10-31 21:11   ` Kuogee Hsieh
2022-11-01  0:08     ` Doug Anderson
2022-11-01  0:15       ` Dmitry Baryshkov
2022-11-01 14:37         ` Doug Anderson
2022-11-02 15:47           ` Doug Anderson
2022-11-02 17:23             ` Dmitry Baryshkov
2022-11-02 17:28               ` Doug Anderson
2022-11-02 18:04                 ` Dmitry Baryshkov
2022-11-09 23:47                   ` Kuogee Hsieh
2022-11-10  7:43                     ` Dmitry Baryshkov
2022-11-15 18:43                       ` Kuogee Hsieh
2022-11-16 13:40                         ` Dmitry Baryshkov
2022-11-02 17:15           ` Dmitry Baryshkov
2022-11-02 17:25             ` Doug Anderson
2022-11-02 17:42               ` Dmitry Baryshkov

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).