* Soliciting DRM feedback on latest DC rework
@ 2017-05-03 14:13 Harry Wentland
       [not found] ` <6409067e-8f25-d2f8-37ee-fb3ed3c06ea2-5C7GfCeVMHo@public.gmane.org>
  0 siblings, 1 reply; 10+ messages in thread
From: Harry Wentland @ 2017-05-03 14:13 UTC (permalink / raw)
  To: Deucher, Alexander, Dave Airlie, Daniel Vetter,
	amd-gfx mailing list, dri-devel
  Cc: dc_upstream

Hi all,

Over the last few months we (mostly Andrey and myself) have taken and 
addressed some of the feedback received from December's DC RFC. A lot of 
our work so far centers around atomic. We were able to take a whole 
bunch of the areas where we rolled our own solution and use DRM atomic 
helpers instead.

You can find our most recent drm-next based tree at 
https://cgit.freedesktop.org/~hwentland/linux/log/?h=dc-drm-next-atomic-wip

An outline of the changes, with commit hashes from that tree, is listed 
below. They aren't necessarily in order but are grouped by functionality.

I would like to solicit input on the changes and the current state of 
amd/display in general.

I'm on IRC every weekday from 9-5 eastern time, sometimes at other 
hours. Feel free to ask questions, discuss, leave comments. Let me know 
if there's anything else I can do to facilitate review.

We know there's more work to be done but would much rather prioritize 
that work based on community feedback than merely our own impressions.

We haven't finished plumbing drm types to the dc types yet, and there 
are still a bunch of other things in progress.  We are not looking to 
re-hash the previous discussion, but rather we'd like some feedback on 
our work so far.

The list of changes (trimmed commit tags):

== Use common helpers for pageflip ==
144da239b047 Use pflip prepare and submit parts (v2)
ff7ac264a9a1 Fix page flip after daniel's acquire_ctx change


== implement atomic_get_properties ==
cf4a84df7189 Fallback on legacy properties in atomic_get_properties
01f96706b6ca get_atomic_property missing for drm_connector_funcs


== Use common helpers for gamma ==
3f547d7098de Use atomic helpers for gamma


== Use atomic helpers for commit ==
41831f55bd58 Refactor atomic commit implementation. (v2)
6c67dd3c5cd5 Refactor headless to use atomic commit.
eb22ef1ecb16 Remove page_fleep_needed function.


== Use atomic helpers for S3 ==
5a6ae6f76249 Switch to DRM helpers in s3.


== Simplify mapping between DRM and DC objects ==
84a3ee023b9b Remove get_connector_for_link.
6d8978a98b40 Remove get_connector_for_sink.


== Use DRM EDID read instead of DC ==
566969dacfad Fix i2c write flag.
527c3699ff3c Refactor edid read.
5ac51023d275 Fix missing irq refactor causing potential i2c race


== Save DC validate_ctx in atomic_check and use in commit ==
8c194d8e4ee9 pull commit_surfaces out of atomic_commit into helper function
27eef98b38c8 Return context from validate_context
ca3ee10e915b Add DM global atomic state
8ba1ca856532 Hook dm private state into atomic_check
10160455ac6d Add correct retain/release for objs in dm_atomic_state
0f1b2e2aecbb Commit validation set from state
258e6a91fc61 Add validate_context to atomic_state
64f569b5df20 Use validate_context from atomic_check in commit


== Start multi-plane implementation ==
601ff4e70b7c decouple per-crtc-plane model
a0b859a0b114 Fix cleanup in amdgpu_dm_initialize_drm_device
ee02010d7a82 update plane functionalities
3b7345fd1abb Allow planes on all crtcs
d9cf37462156 initialize YUV plane capabilities
248f06b2613e Universal cursor plane hook-up.


== Minor cleanups ==
d99bf02cb8fc Rename atomic_commit parameter.
f15fb9726502 Use amdgpu mode funcs statically
d3e37fa70643 Remove unused define from amdgpu_dm_types
80a38ad58125 Add lock around updating freesync property
8c7f16853824 Use new drm_dp_find_vcpi_slots to compute slots

Regards,
Harry

* Re: Soliciting DRM feedback on latest DC rework
       [not found] ` <6409067e-8f25-d2f8-37ee-fb3ed3c06ea2-5C7GfCeVMHo@public.gmane.org>
@ 2017-05-03 14:26   ` Christian König
  2017-05-03 15:02     ` Daniel Vetter
  0 siblings, 1 reply; 10+ messages in thread
From: Christian König @ 2017-05-03 14:26 UTC (permalink / raw)
  To: Harry Wentland, Deucher, Alexander, Dave Airlie, Daniel Vetter,
	amd-gfx mailing list, dri-devel
  Cc: dc_upstream

Hi Harry,

while this looks more and more like it could work, something which would 
really help would be to have a set of patches squashed together and 
rebased on drm-next.

The dc-drm-next-atomic-wip looks like a start, but we need something 
more like:
drm/amdgpu: add base DC components
drm/amdgpu: add DCE8 support for DC
drm/amdgpu: add DCE9 support for DC
drm/amdgpu: add DCE10 support for DC
...
drm/amdgpu: enable DC globally
drm/amdgpu: remove old display infrastructure

Otherwise I fear that people will just be lost in the mass amount of 
patches we have in the branch.

Regards,
Christian.

On 2017-05-03 16:13, Harry Wentland wrote:
> Hi all,
>
> Over the last few months we (mostly Andrey and myself) have taken and 
> addressed some of the feedback received from December's DC RFC. A lot 
> of our work so far centers around atomic. We were able to take a whole 
> bunch of the areas where we rolled our own solution and use DRM atomic 
> helpers instead.
>
> You can find our most recent drm-next based tree at 
> https://cgit.freedesktop.org/~hwentland/linux/log/?h=dc-drm-next-atomic-wip
>
> An outline of the changes with commit hashes from that tree are listed 
> below. They aren't necessarily in order but are grouped by functionality.
>
> I would like to solicit input on the changes and the current state of 
> amd/display in general.
>
> I'm on IRC every weekday from 9-5 eastern time, sometimes at other 
> hours. Feel free to ask questions, discuss, leave comments. Let me 
> know if there's anything else I can do to facilitate review.
>
> We know there's more work to be done but would much rather prioritize 
> that work based on community feedback than merely our own impressions.
>
> We haven't finished plumbing drm types to the dc types yet, and there 
> are still a bunch of other things in progress.  We are not looking to 
> re-hash the previous discussion, but rather we'd like some feedback on 
> our work so far.
>
> The list of changes (trimmed commit tags):
>
> == Use common helpers for pageflip ==
> 144da239b047 Use pflip prepare and submit parts (v2)
> ff7ac264a9a1 Fix page flip after daniel's acquire_ctx change
>
>
> == implement atomic_get_properties ==
> cf4a84df7189 Fallback on legacy properties in atomic_get_properties
> 01f96706b6ca get_atomic_property missing for drm_connector_funcs
>
>
> == Use common helpers for gamma ==
> 3f547d7098de Use atomic helpers for gamma
>
>
> == Use atomic helpers for commit ==
> 41831f55bd58 Refactor atomic commit implementation. (v2)
> 6c67dd3c5cd5 Refactor headless to use atomic commit.
> eb22ef1ecb16 Remove page_fleep_needed function.
>
>
> == Use atomic helpers for S3 ==
> 5a6ae6f76249 Switch to DRM helpers in s3.
>
>
> == Simplify mapping between DRM and DC objects ==
> 84a3ee023b9b Remove get_connector_for_link.
> 6d8978a98b40 Remove get_connector_for_sink.
>
>
> == Use DRM EDID read instead of DC ==
> 566969dacfad Fix i2c write flag.
> 527c3699ff3c Refactor edid read.
> 5ac51023d275 Fix missing irq refactor causing potential i2c race
>
>
> == Save DC validate_ctx in atomic_check and use in commit ==
> 8c194d8e4ee9 pull commit_surfaces out of atomic_commit into helper 
> function
> 27eef98b38c8 Return context from validate_context
> ca3ee10e915b Add DM global atomic state
> 8ba1ca856532 Hook dm private state into atomic_check
> 10160455ac6d Add correct retain/release for objs in dm_atomic_state
> 0f1b2e2aecbb Commit validation set from state
> 258e6a91fc61 Add validate_context to atomic_state
> 64f569b5df20 Use validate_context from atomic_check in commit
>
>
> == Start multi-plane implementation ==
> 601ff4e70b7c decouple per-crtc-plane model
> a0b859a0b114 Fix cleanup in amdgpu_dm_initialize_drm_device
> ee02010d7a82 update plane functionalities
> 3b7345fd1abb Allow planes on all crtcs
> d9cf37462156 initialize YUV plane capabilities
> 248f06b2613e Universal cursor plane hook-up.
>
>
> == Minor cleanups ==
> d99bf02cb8fc Rename atomic_commit parameter.
> f15fb9726502 Use amdgpu mode funcs statically
> d3e37fa70643 Remove unused define from amdgpu_dm_types
> 80a38ad58125 Add lock around updating freesync property
> 8c7f16853824 Use new drm_dp_find_vcpi_slots to compute slots
>
> Regards,
> Harry

* Re: Soliciting DRM feedback on latest DC rework
  2017-05-03 14:26   ` Christian König
@ 2017-05-03 15:02     ` Daniel Vetter
       [not found]       ` <20170503150236.pkr3r2o5l2pjx5m5-dv86pmgwkMBes7Z6vYuT8azUEOm+Xw19@public.gmane.org>
  0 siblings, 1 reply; 10+ messages in thread
From: Daniel Vetter @ 2017-05-03 15:02 UTC (permalink / raw)
  To: Christian König
  Cc: dc_upstream, dri-devel, amd-gfx mailing list, Daniel Vetter,
	Deucher, Alexander

On Wed, May 03, 2017 at 04:26:51PM +0200, Christian König wrote:
> Hi Harry,
> 
> while this looks more and more like it could work something which would
> really help would be to have a set of patches squashed together and rebased
> on drm-next.
> 
> The dc-drm-next-atomic-wip looks like a start, but we need more something
> like:
> drm/amdgpu: add base DC components
> drm/amdgpu: add DCE8 support for DC
> drm/amdgpu: add DCE9 support for DC
> drm/amdgpu: add DCE10 support for DC
> ...
> drm/amdgpu: enable DC globally
> drm/amdgpu: remove old display infrastructure
> 
> Otherwise I fear that people will just be lost in the mass amount of patches
> we have in the branch.

I think for a quick first feedback round simply something that's based on
top of recent drm-next is good enough, since I'll probably only look at
the aggregate diff anyway (or resulting tree).
-Daniel

> 
> Regards,
> Christian.
> 
> On 2017-05-03 16:13, Harry Wentland wrote:
> > Hi all,
> > 
> > Over the last few months we (mostly Andrey and myself) have taken and
> > addressed some of the feedback received from December's DC RFC. A lot of
> > our work so far centers around atomic. We were able to take a whole
> > bunch of the areas where we rolled our own solution and use DRM atomic
> > helpers instead.
> > 
> > You can find our most recent drm-next based tree at
> > https://cgit.freedesktop.org/~hwentland/linux/log/?h=dc-drm-next-atomic-wip
> > 
> > An outline of the changes with commit hashes from that tree are listed
> > below. They aren't necessarily in order but are grouped by
> > functionality.
> > 
> > I would like to solicit input on the changes and the current state of
> > amd/display in general.
> > 
> > I'm on IRC every weekday from 9-5 eastern time, sometimes at other
> > hours. Feel free to ask questions, discuss, leave comments. Let me know
> > if there's anything else I can do to facilitate review.
> > 
> > We know there's more work to be done but would much rather prioritize
> > that work based on community feedback than merely our own impressions.
> > 
> > We haven't finished plumbing drm types to the dc types yet, and there
> > are still a bunch of other things in progress.  We are not looking to
> > re-hash the previous discussion, but rather we'd like some feedback on
> > our work so far.
> > 
> > The list of changes (trimmed commit tags):
> > 
> > == Use common helpers for pageflip ==
> > 144da239b047 Use pflip prepare and submit parts (v2)
> > ff7ac264a9a1 Fix page flip after daniel's acquire_ctx change
> > 
> > 
> > == implement atomic_get_properties ==
> > cf4a84df7189 Fallback on legacy properties in atomic_get_properties
> > 01f96706b6ca get_atomic_property missing for drm_connector_funcs
> > 
> > 
> > == Use common helpers for gamma ==
> > 3f547d7098de Use atomic helpers for gamma
> > 
> > 
> > == Use atomic helpers for commit ==
> > 41831f55bd58 Refactor atomic commit implementation. (v2)
> > 6c67dd3c5cd5 Refactor headless to use atomic commit.
> > eb22ef1ecb16 Remove page_fleep_needed function.
> > 
> > 
> > == Use atomic helpers for S3 ==
> > 5a6ae6f76249 Switch to DRM helpers in s3.
> > 
> > 
> > == Simplify mapping between DRM and DC objects ==
> > 84a3ee023b9b Remove get_connector_for_link.
> > 6d8978a98b40 Remove get_connector_for_sink.
> > 
> > 
> > == Use DRM EDID read instead of DC ==
> > 566969dacfad Fix i2c write flag.
> > 527c3699ff3c Refactor edid read.
> > 5ac51023d275 Fix missing irq refactor causing potential i2c race
> > 
> > 
> > == Save DC validate_ctx in atomic_check and use in commit ==
> > 8c194d8e4ee9 pull commit_surfaces out of atomic_commit into helper
> > function
> > 27eef98b38c8 Return context from validate_context
> > ca3ee10e915b Add DM global atomic state
> > 8ba1ca856532 Hook dm private state into atomic_check
> > 10160455ac6d Add correct retain/release for objs in dm_atomic_state
> > 0f1b2e2aecbb Commit validation set from state
> > 258e6a91fc61 Add validate_context to atomic_state
> > 64f569b5df20 Use validate_context from atomic_check in commit
> > 
> > 
> > == Start multi-plane implementation ==
> > 601ff4e70b7c decouple per-crtc-plane model
> > a0b859a0b114 Fix cleanup in amdgpu_dm_initialize_drm_device
> > ee02010d7a82 update plane functionalities
> > 3b7345fd1abb Allow planes on all crtcs
> > d9cf37462156 initialize YUV plane capabilities
> > 248f06b2613e Universal cursor plane hook-up.
> > 
> > 
> > == Minor cleanups ==
> > d99bf02cb8fc Rename atomic_commit parameter.
> > f15fb9726502 Use amdgpu mode funcs statically
> > d3e37fa70643 Remove unused define from amdgpu_dm_types
> > 80a38ad58125 Add lock around updating freesync property
> > 8c7f16853824 Use new drm_dp_find_vcpi_slots to compute slots
> > 
> > Regards,
> > Harry

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

* Re: Soliciting DRM feedback on latest DC rework
       [not found]       ` <20170503150236.pkr3r2o5l2pjx5m5-dv86pmgwkMBes7Z6vYuT8azUEOm+Xw19@public.gmane.org>
@ 2017-05-03 19:12         ` Harry Wentland
       [not found]           ` <244fc211-ec97-cb99-9f2f-b16b02f874b2-5C7GfCeVMHo@public.gmane.org>
  0 siblings, 1 reply; 10+ messages in thread
From: Harry Wentland @ 2017-05-03 19:12 UTC (permalink / raw)
  To: Daniel Vetter, Christian König
  Cc: dc_upstream, Daniel Vetter, dri-devel, amd-gfx mailing list,
	Deucher, Alexander, Dave Airlie



On 2017-05-03 11:02 AM, Daniel Vetter wrote:
> On Wed, May 03, 2017 at 04:26:51PM +0200, Christian König wrote:
>> Hi Harry,
>>
>> while this looks more and more like it could work something which would
>> really help would be to have a set of patches squashed together and rebased
>> on drm-next.
>>
>> The dc-drm-next-atomic-wip looks like a start, but we need more something
>> like:
>> drm/amdgpu: add base DC components
>> drm/amdgpu: add DCE8 support for DC
>> drm/amdgpu: add DCE9 support for DC
>> drm/amdgpu: add DCE10 support for DC
>> ...
>> drm/amdgpu: enable DC globally
>> drm/amdgpu: remove old display infrastructure
>>
>> Otherwise I fear that people will just be lost in the mass amount of patches
>> we have in the branch.
>
> I think for a quick first feedback round simply something that's based on
> top of recent drm-next is good enough, since I'll probably only look at
> the aggregate diff anyway (or resulting tree).

This was basically what I figured, which is why I didn't invest more time 
to squash changes. I agree with Christian that eventually we probably 
want something like his suggested split but for now I'd rather focus on 
tying a dc_surface to a drm_plane (and same for other objects). Starting 
to tie our objects to drm objects has been very eye-opening, to say the 
least.

Daniel, do you think you'll find time to take a look at this this week 
or next?

Harry

> -Daniel
>
>>
>> Regards,
>> Christian.
>>
>>> On 2017-05-03 16:13, Harry Wentland wrote:
>>> Hi all,
>>>
>>> Over the last few months we (mostly Andrey and myself) have taken and
>>> addressed some of the feedback received from December's DC RFC. A lot of
>>> our work so far centers around atomic. We were able to take a whole
>>> bunch of the areas where we rolled our own solution and use DRM atomic
>>> helpers instead.
>>>
>>> You can find our most recent drm-next based tree at
>>> https://cgit.freedesktop.org/~hwentland/linux/log/?h=dc-drm-next-atomic-wip
>>>
>>> An outline of the changes with commit hashes from that tree are listed
>>> below. They aren't necessarily in order but are grouped by
>>> functionality.
>>>
>>> I would like to solicit input on the changes and the current state of
>>> amd/display in general.
>>>
>>> I'm on IRC every weekday from 9-5 eastern time, sometimes at other
>>> hours. Feel free to ask questions, discuss, leave comments. Let me know
>>> if there's anything else I can do to facilitate review.
>>>
>>> We know there's more work to be done but would much rather prioritize
>>> that work based on community feedback than merely our own impressions.
>>>
>>> We haven't finished plumbing drm types to the dc types yet, and there
>>> are still a bunch of other things in progress.  We are not looking to
>>> re-hash the previous discussion, but rather we'd like some feedback on
>>> our work so far.
>>>
>>> The list of changes (trimmed commit tags):
>>>
>>> == Use common helpers for pageflip ==
>>> 144da239b047 Use pflip prepare and submit parts (v2)
>>> ff7ac264a9a1 Fix page flip after daniel's acquire_ctx change
>>>
>>>
>>> == implement atomic_get_properties ==
>>> cf4a84df7189 Fallback on legacy properties in atomic_get_properties
>>> 01f96706b6ca get_atomic_property missing for drm_connector_funcs
>>>
>>>
>>> == Use common helpers for gamma ==
>>> 3f547d7098de Use atomic helpers for gamma
>>>
>>>
>>> == Use atomic helpers for commit ==
>>> 41831f55bd58 Refactor atomic commit implementation. (v2)
>>> 6c67dd3c5cd5 Refactor headless to use atomic commit.
>>> eb22ef1ecb16 Remove page_fleep_needed function.
>>>
>>>
>>> == Use atomic helpers for S3 ==
>>> 5a6ae6f76249 Switch to DRM helpers in s3.
>>>
>>>
>>> == Simplify mapping between DRM and DC objects ==
>>> 84a3ee023b9b Remove get_connector_for_link.
>>> 6d8978a98b40 Remove get_connector_for_sink.
>>>
>>>
>>> == Use DRM EDID read instead of DC ==
>>> 566969dacfad Fix i2c write flag.
>>> 527c3699ff3c Refactor edid read.
>>> 5ac51023d275 Fix missing irq refactor causing potential i2c race
>>>
>>>
>>> == Save DC validate_ctx in atomic_check and use in commit ==
>>> 8c194d8e4ee9 pull commit_surfaces out of atomic_commit into helper
>>> function
>>> 27eef98b38c8 Return context from validate_context
>>> ca3ee10e915b Add DM global atomic state
>>> 8ba1ca856532 Hook dm private state into atomic_check
>>> 10160455ac6d Add correct retain/release for objs in dm_atomic_state
>>> 0f1b2e2aecbb Commit validation set from state
>>> 258e6a91fc61 Add validate_context to atomic_state
>>> 64f569b5df20 Use validate_context from atomic_check in commit
>>>
>>>
>>> == Start multi-plane implementation ==
>>> 601ff4e70b7c decouple per-crtc-plane model
>>> a0b859a0b114 Fix cleanup in amdgpu_dm_initialize_drm_device
>>> ee02010d7a82 update plane functionalities
>>> 3b7345fd1abb Allow planes on all crtcs
>>> d9cf37462156 initialize YUV plane capabilities
>>> 248f06b2613e Universal cursor plane hook-up.
>>>
>>>
>>> == Minor cleanups ==
>>> d99bf02cb8fc Rename atomic_commit parameter.
>>> f15fb9726502 Use amdgpu mode funcs statically
>>> d3e37fa70643 Remove unused define from amdgpu_dm_types
>>> 80a38ad58125 Add lock around updating freesync property
>>> 8c7f16853824 Use new drm_dp_find_vcpi_slots to compute slots
>>>
>>> Regards,
>>> Harry

* Re: Soliciting DRM feedback on latest DC rework
       [not found]           ` <244fc211-ec97-cb99-9f2f-b16b02f874b2-5C7GfCeVMHo@public.gmane.org>
@ 2017-05-08 18:54             ` Harry Wentland
  2017-05-08 19:07               ` Dave Airlie
       [not found]               ` <ad7a0d85-817c-aee7-a596-374db23b8a2d-5C7GfCeVMHo@public.gmane.org>
  0 siblings, 2 replies; 10+ messages in thread
From: Harry Wentland @ 2017-05-08 18:54 UTC (permalink / raw)
  To: Daniel Vetter, Christian König
  Cc: dc_upstream, Daniel Vetter, dri-devel, amd-gfx mailing list,
	Deucher, Alexander, Dave Airlie

Hi Daniel,

Thanks for taking the time to look at DC.

I had a couple more questions/comments in regard to the patch you posted 
on IRC: http://paste.debian.net/plain/930704

My impression is that this item is the most important next step for us:

>     From a quick glance I think what we want instead is:
>     - amdgpu_crtc_state and amdgpu_plane_state as following:
>
>     struct amdgpu_crtc_state {
>             struct drm_crtc_state base;
>             struct dc_stream;
>     };

Unfortunately as sketched here it doesn't quite mesh with what we 
currently have, which is:

struct stream {
	struct core_stream protected;
	...
}

struct core_stream {
	struct dc_stream public;
	...
}

struct dc_stream {
	...
}

Any objections to doing something like this instead:

#ifdef LINUX
#include "drm_crtc.h"
#endif

struct dc_stream {
#ifdef LINUX
	struct drm_crtc_state base;
#endif
	...
}

The ifdefs are then removed on Linux versions of DC, while other OSs 
wouldn't compile with the LINUX flag.
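
Just to illustrate what that embedding would buy us on the Linux side, here 
is a rough sketch (helper and callback names are placeholders, not what's 
in the tree today):

static inline struct dc_stream *
crtc_state_to_dc_stream(struct drm_crtc_state *state)
{
	return container_of(state, struct dc_stream, base);
}

static struct drm_crtc_state *
dm_crtc_duplicate_state(struct drm_crtc *crtc)
{
	struct dc_stream *new_stream = kzalloc(sizeof(*new_stream), GFP_KERNEL);

	if (!new_stream)
		return NULL;

	/* copies the core drm_crtc_state members into the embedded base */
	__drm_atomic_helper_crtc_duplicate_state(crtc, &new_stream->base);

	/* the DC-specific members of the old stream would get copied here */

	return &new_stream->base;
}

The drm_crtc_state handed to us by the helpers could then be upcast with 
crtc_state_to_dc_stream() anywhere in amdgpu_dm.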

>     - validate_context->res_ctx->pool looks extremely fishy. In principle,
>       refcounting does not mesh with the duplicate, check, then
>       free-or-commit semantics of atomic. Are you sure this works
>       correctly when doing TEST_ONLY commits? igt or weston (with Daniel's
>       patches) are your friend (or enemy) here :-)
>
>       The common approach all other drivers implement:
>       - Same duplicate semantics for private objects as for core drm
>         objects. The private obj helpers will help with cutting down the
>         boiler-plate.
>       - No refcounts, instead an allocation bitmask in the object one up
>         in the hierarchy (usually global state, but sometimes also
>         crtc/stream).
>       - If you want to keep the current design, then you
>         must do a deep copy of pool (i.e. not just the struct, but also
>         every struct that might change it points too). The above
>         accomplishes that essentially, except in standard atomic patterns.
>       - You'll probably need a bunch of for_each_foo_in_context macros,
>         built using the private state helpers.
>
>       Trying to review correctness of atomic_check when private resources
>       are refcounted is real hard. The only case we have is vc4, and it's
>       kinda not pretty (it has it's own manual rollback hacks in the
>       release functions). Going entirely with free-standing stuff is much
>       better.

Good points here. I think I'll have to get IGT running against our code 
with TEST_ONLY commits and see what happens. Ultimately we'll probably 
have to deep copy, one way or another. I don't really want any rollback 
stuff, as that would be nearly impossible to maintain.
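
If we do end up deep copying, I'm picturing something roughly like this 
(sketch only; the res_ctx/pool layout is simplified and the names are 
placeholders):

static struct validate_context *
dc_copy_validate_context(const struct validate_context *src)
{
	struct validate_context *dst;

	dst = kmemdup(src, sizeof(*src), GFP_KERNEL);
	if (!dst)
		return NULL;

	/* every pointer that commit might follow needs its own copy,
	 * otherwise check and commit still share (and mutate) state */
	dst->res_ctx.pool = kmemdup(src->res_ctx.pool,
				    sizeof(*src->res_ctx.pool), GFP_KERNEL);
	if (!dst->res_ctx.pool) {
		kfree(dst);
		return NULL;
	}

	return dst;
}

atomic_check would then stash the copy in our dm_atomic_state and commit 
would consume it, with no refcounted sharing between the two.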

>     This shouldn't be a huge conceptual change in the DC design (it
>     already has checks for "is this unchanged" and then ignore that
>     object), just quite a bit more invasive than sprinkling for_each_*
>     macros all over the place. From a glane it could be that you'd get
>     80% there by having a for_each_changed_*_in_context set of macros,
>     with the current code everywhere else, and the "jumps over unchanged
>     obj because they're not in drm_atomic_state/dc_validation_context"
>     logic on drm atomic.

Yeah, this should fit mostly with DC design. Will evaluate this once we 
link DC objects to DRM state objects.
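
For the iterator part I imagine something along these lines would get us 
most of the way (purely a sketch; the context/stream fields are 
placeholders):

#define for_each_changed_stream_in_context(ctx, stream, i)		\
	for ((i) = 0; (i) < (ctx)->stream_count; (i)++)			\
		if (!((stream) = (ctx)->streams[(i)]) ||		\
		    !(stream)->changed) {				\
			/* skip streams that check didn't touch */	\
		} else

Callers would then just do for_each_changed_stream_in_context(ctx, stream, i) 
{ ... } and never see the unchanged objects.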

> @@ -1640,6 +1644,8 @@ static void dm_crtc_helper_disable(struct drm_crtc *crtc)
>  {
>  }
>
> +/* no dummy funcs pls, counts everywhere */
> +

Does the comment apply to the preceding or the next function? What do you 
mean by "counts everywhere"?

>  static int dm_crtc_helper_atomic_check(
>  	struct drm_crtc *crtc,
>  	struct drm_crtc_state *state)


> @@ -3077,6 +3102,15 @@ int amdgpu_dm_atomic_check(struct drm_device *dev,
>
>  		acrtc = to_amdgpu_crtc(crtc);
>
> +		/* This is the wrong way round. If you have a requirement for a
> +		 * 1:1 connector/crtc mapping, then loop over connectors and
> +		 * grab the crtc from there. Plus make sure there's really only
> +		 * 1 connector per crtc (but the possible clones stuff should
> +		 * catch that for you I think, at least with latest atomic
> +		 * helpers.
> +		 *
> +		 * Similar for the same loop in commit.
> +		 */
>  		aconnector = amdgpu_dm_find_first_crct_matching_connector(state, crtc, true);
>
>  		action = get_dm_commit_action(crtc_state);

Good point. This code is still a bit of a mess.

Harry


On 2017-05-03 03:12 PM, Harry Wentland wrote:
>
>
> On 2017-05-03 11:02 AM, Daniel Vetter wrote:
>> On Wed, May 03, 2017 at 04:26:51PM +0200, Christian König wrote:
>>> Hi Harry,
>>>
>>> while this looks more and more like it could work something which would
>>> really help would be to have a set of patches squashed together and
>>> rebased
>>> on drm-next.
>>>
>>> The dc-drm-next-atomic-wip looks like a start, but we need more
>>> something
>>> like:
>>> drm/amdgpu: add base DC components
>>> drm/amdgpu: add DCE8 support for DC
>>> drm/amdgpu: add DCE9 support for DC
>>> drm/amdgpu: add DCE10 support for DC
>>> ...
>>> drm/amdgpu: enable DC globally
>>> drm/amdgpu: remove old display infrastructure
>>>
>>> Otherwise I fear that people will just be lost in the mass amount of
>>> patches
>>> we have in the branch.
>>
>> I think for a quick first feedback round simply something that's based on
>> top of recent drm-next is good enough, since I'll probably only look at
>> the aggregate diff anyway (or resulting tree).
>
> This was basically what I figured which is why I didn't invest more time
> to squash changes. I agree with Christian that eventually we probably
> want something like his suggested split but for now I'd rather focus on
> tying a dc_surface to a drm_plane (and same for other objects). Starting
> to tie our objects to drm objects has been very eye-opening, to say the
> least.
>
> Daniel, do you think you'll find time to take a look at this this week
> or next?
>
> Harry
>
>> -Daniel
>>
>>>
>>> Regards,
>>> Christian.
>>>
>>> On 2017-05-03 16:13, Harry Wentland wrote:
>>>> Hi all,
>>>>
>>>> Over the last few months we (mostly Andrey and myself) have taken and
>>>> addressed some of the feedback received from December's DC RFC. A
>>>> lot of
>>>> our work so far centers around atomic. We were able to take a whole
>>>> bunch of the areas where we rolled our own solution and use DRM atomic
>>>> helpers instead.
>>>>
>>>> You can find our most recent drm-next based tree at
>>>> https://cgit.freedesktop.org/~hwentland/linux/log/?h=dc-drm-next-atomic-wip
>>>>
>>>>
>>>> An outline of the changes with commit hashes from that tree are listed
>>>> below. They aren't necessarily in order but are grouped by
>>>> functionality.
>>>>
>>>> I would like to solicit input on the changes and the current state of
>>>> amd/display in general.
>>>>
>>>> I'm on IRC every weekday from 9-5 eastern time, sometimes at other
>>>> hours. Feel free to ask questions, discuss, leave comments. Let me know
>>>> if there's anything else I can do to facilitate review.
>>>>
>>>> We know there's more work to be done but would much rather prioritize
>>>> that work based on community feedback than merely our own impressions.
>>>>
>>>> We haven't finished plumbing drm types to the dc types yet, and there
>>>> are still a bunch of other things in progress.  We are not looking to
>>>> re-hash the previous discussion, but rather we'd like some feedback on
>>>> our work so far.
>>>>
>>>> The list of changes (trimmed commit tags):
>>>>
>>>> == Use common helpers for pageflip ==
>>>> 144da239b047 Use pflip prepare and submit parts (v2)
>>>> ff7ac264a9a1 Fix page flip after daniel's acquire_ctx change
>>>>
>>>>
>>>> == implement atomic_get_properties ==
>>>> cf4a84df7189 Fallback on legacy properties in atomic_get_properties
>>>> 01f96706b6ca get_atomic_property missing for drm_connector_funcs
>>>>
>>>>
>>>> == Use common helpers for gamma ==
>>>> 3f547d7098de Use atomic helpers for gamma
>>>>
>>>>
>>>> == Use atomic helpers for commit ==
>>>> 41831f55bd58 Refactor atomic commit implementation. (v2)
>>>> 6c67dd3c5cd5 Refactor headless to use atomic commit.
>>>> eb22ef1ecb16 Remove page_fleep_needed function.
>>>>
>>>>
>>>> == Use atomic helpers for S3 ==
>>>> 5a6ae6f76249 Switch to DRM helpers in s3.
>>>>
>>>>
>>>> == Simplify mapping between DRM and DC objects ==
>>>> 84a3ee023b9b Remove get_connector_for_link.
>>>> 6d8978a98b40 Remove get_connector_for_sink.
>>>>
>>>>
>>>> == Use DRM EDID read instead of DC ==
>>>> 566969dacfad Fix i2c write flag.
>>>> 527c3699ff3c Refactor edid read.
>>>> 5ac51023d275 Fix missing irq refactor causing potential i2c race
>>>>
>>>>
>>>> == Save DC validate_ctx in atomic_check and use in commit ==
>>>> 8c194d8e4ee9 pull commit_surfaces out of atomic_commit into helper
>>>> function
>>>> 27eef98b38c8 Return context from validate_context
>>>> ca3ee10e915b Add DM global atomic state
>>>> 8ba1ca856532 Hook dm private state into atomic_check
>>>> 10160455ac6d Add correct retain/release for objs in dm_atomic_state
>>>> 0f1b2e2aecbb Commit validation set from state
>>>> 258e6a91fc61 Add validate_context to atomic_state
>>>> 64f569b5df20 Use validate_context from atomic_check in commit
>>>>
>>>>
>>>> == Start multi-plane implementation ==
>>>> 601ff4e70b7c decouple per-crtc-plane model
>>>> a0b859a0b114 Fix cleanup in amdgpu_dm_initialize_drm_device
>>>> ee02010d7a82 update plane functionalities
>>>> 3b7345fd1abb Allow planes on all crtcs
>>>> d9cf37462156 initialize YUV plane capabilities
>>>> 248f06b2613e Universal cursor plane hook-up.
>>>>
>>>>
>>>> == Minor cleanups ==
>>>> d99bf02cb8fc Rename atomic_commit parameter.
>>>> f15fb9726502 Use amdgpu mode funcs statically
>>>> d3e37fa70643 Remove unused define from amdgpu_dm_types
>>>> 80a38ad58125 Add lock around updating freesync property
>>>> 8c7f16853824 Use new drm_dp_find_vcpi_slots to compute slots
>>>>
>>>> Regards,
>>>> Harry

* Re: Soliciting DRM feedback on latest DC rework
  2017-05-08 18:54             ` Harry Wentland
@ 2017-05-08 19:07               ` Dave Airlie
  2017-05-08 19:50                 ` Harry Wentland
       [not found]               ` <ad7a0d85-817c-aee7-a596-374db23b8a2d-5C7GfCeVMHo@public.gmane.org>
  1 sibling, 1 reply; 10+ messages in thread
From: Dave Airlie @ 2017-05-08 19:07 UTC (permalink / raw)
  To: Harry Wentland
  Cc: dc_upstream, Daniel Vetter, dri-devel, amd-gfx mailing list,
	Deucher, Alexander, Christian König

On 9 May 2017 at 04:54, Harry Wentland <harry.wentland@amd.com> wrote:
> Hi Daniel,
>
> Thanks for taking the time to look at DC.
>
> I had a couple more questions/comments in regard to the patch you posted on
> IRC: http://paste.debian.net/plain/930704
>
> My impression is that this item is the most important next step for us:
>
>>     From a quick glance I think what we want instead is:
>>     - amdgpu_crtc_state and amdgpu_plane_state as following:
>>
>>     struct amdgpu_crtc_state {
>>             struct drm_crtc_state base;
>>             struct dc_stream;
>>     };
>
>
> Unfortunately as sketched here it doesn't quite mesh with what we currently
> have, which is:
>
> struct stream {
>         struct core_stream protected;
>         ...
> }
>
> struct core_stream {
>         struct dc_stream public;
>         ...
> }
>
> struct dc_stream {
>         ...
> }
>

Is there any major reason to keep all those abstractions?

Could you collapse everything into struct dc_stream?

I haven't looked recently, but I didn't get the impression there was a
lot of design around what was public/protected; it seemed more like
whatever needed to be used by someone else ended up in public.

Dave.

* Re: Soliciting DRM feedback on latest DC rework
  2017-05-08 19:07               ` Dave Airlie
@ 2017-05-08 19:50                 ` Harry Wentland
       [not found]                   ` <52f32c81-2a8b-2c8e-4e4a-7362277940df-5C7GfCeVMHo@public.gmane.org>
  0 siblings, 1 reply; 10+ messages in thread
From: Harry Wentland @ 2017-05-08 19:50 UTC (permalink / raw)
  To: Dave Airlie
  Cc: dc_upstream, Daniel Vetter, dri-devel, amd-gfx mailing list,
	Deucher, Alexander, Christian König



On 2017-05-08 03:07 PM, Dave Airlie wrote:
> On 9 May 2017 at 04:54, Harry Wentland <harry.wentland@amd.com> wrote:
>> Hi Daniel,
>>
>> Thanks for taking the time to look at DC.
>>
>> I had a couple more questions/comments in regard to the patch you posted on
>> IRC: http://paste.debian.net/plain/930704
>>
>> My impression is that this item is the most important next step for us:
>>
>>>     From a quick glance I think what we want instead is:
>>>     - amdgpu_crtc_state and amdgpu_plane_state as following:
>>>
>>>     struct amdgpu_crtc_state {
>>>             struct drm_crtc_state base;
>>>             struct dc_stream;
>>>     };
>>
>>
>> Unfortunately as sketched here it doesn't quite mesh with what we currently
>> have, which is:
>>
>> struct stream {
>>         struct core_stream protected;
>>         ...
>> }
>>
>> struct core_stream {
>>         struct dc_stream public;
>>         ...
>> }
>>
>> struct dc_stream {
>>         ...
>> }
>>
>
> Is there any major reason to keep all those abstractions?
>
> Could you collapse everything into struct dc_stream?
>
> I haven't looked recently but I didn't get the impression there was a
> lot of design around what was public/protected, more whatever needed
> to be used by someone else was in public.
>

Yeah, I've been thinking the same recently.

The thought behind public/protected was that DM can modify public but 
would never touch protected. We don't really want to give DM an 
opportunity to touch core_stream directly. If it does, we might lose a 
lot of our confidence in DC correctness, which comes from running the 
same code on different OSes.

That said, this could be caught during code review and collapsing these 
structs would simplify things a bit. For one, it would allow me to take 
Daniel's proposal as-is.

I'll think about it some more and maybe mock up a couple patches and see 
how they're received internally.
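
If that pans out, the end state would presumably look pretty much like 
Daniel's sketch, i.e. something along these lines (names illustrative only):

struct amdgpu_crtc_state {
	struct drm_crtc_state base;
	struct dc_stream stream;	/* collapsed, no core_stream layer */
};

#define to_amdgpu_crtc_state(state) \
	container_of(state, struct amdgpu_crtc_state, base)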

Harry

> Dave.
>

* Re: Soliciting DRM feedback on latest DC rework
       [not found]                   ` <52f32c81-2a8b-2c8e-4e4a-7362277940df-5C7GfCeVMHo@public.gmane.org>
@ 2017-05-09  8:19                     ` Daniel Vetter
  0 siblings, 0 replies; 10+ messages in thread
From: Daniel Vetter @ 2017-05-09  8:19 UTC (permalink / raw)
  To: Harry Wentland
  Cc: dc_upstream, Daniel Vetter, amd-gfx mailing list, dri-devel,
	Daniel Vetter, Deucher, Alexander, Dave Airlie,
	Christian König

On Mon, May 08, 2017 at 03:50:36PM -0400, Harry Wentland wrote:
> 
> 
> On 2017-05-08 03:07 PM, Dave Airlie wrote:
> > On 9 May 2017 at 04:54, Harry Wentland <harry.wentland@amd.com> wrote:
> > > Hi Daniel,
> > > 
> > > Thanks for taking the time to look at DC.
> > > 
> > > I had a couple more questions/comments in regard to the patch you posted on
> > > IRC: http://paste.debian.net/plain/930704
> > > 
> > > My impression is that this item is the most important next step for us:
> > > 
> > > >     From a quick glance I think what we want instead is:
> > > >     - amdgpu_crtc_state and amdgpu_plane_state as following:
> > > > 
> > > >     struct amdgpu_crtc_state {
> > > >             struct drm_crtc_state base;
> > > >             struct dc_stream;
> > > >     };
> > > 
> > > 
> > > Unfortunately as sketched here it doesn't quite mesh with what we currently
> > > have, which is:
> > > 
> > > struct stream {
> > >         struct core_stream protected;
> > >         ...
> > > }
> > > 
> > > struct core_stream {
> > >         struct dc_stream public;
> > >         ...
> > > }
> > > 
> > > struct dc_stream {
> > >         ...
> > > }
> > > 
> > 
> > Is there any major reason to keep all those abstractions?
> > 
> > Could you collapse everything into struct dc_stream?
> > 
> > I haven't looked recently but I didn't get the impression there was a
> > lot of design around what was public/protected, more whatever needed
> > to be used by someone else was in public.
> > 
> 
> Yeah, I've been thinking the same recently.
> 
> The thought behind public/protected was that DM can modify public but would
> never touch protected. We don't really want to give DM an opportunity to
> touch core_stream directly. If it does we might lose a lot of our confidence
> in DC correctness which comes from running the same code on different OSes.
> 
> That said, this could be caught during code review and collapsing these
> structs would simplify things a bit. For one, it would allow me to take
> Daniel's proposal as-is.
> 
> I'll think about it some more and maybe mock up a couple patches and see how
> they're received internally.

Yeah I'm seconding Dave's concern. The entire point of doing this
embedding is to convert DC-the-midlayer into DC-the-helper library. The
distinguishing design point is that the former hides stuff, the latter 
doesn't. So shoveling all the core_stream stuff into dc_stream (or just
embedding that one directly into amdgpu_crtc_state as an interim step) is
much more in line with this.

Public/private interfaces for helper libraries don't make much sense as a
concept; it's more a matter of different levels of detail in how much you
need to override/control the implemented behaviour. So it's a sliding
scale, not a binary barrier.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

* Re: Soliciting DRM feedback on latest DC rework
       [not found]               ` <ad7a0d85-817c-aee7-a596-374db23b8a2d-5C7GfCeVMHo@public.gmane.org>
@ 2017-05-09  8:24                 ` Daniel Vetter
       [not found]                   ` <20170509082416.bx7fflkkmdypvuyy-dv86pmgwkMBes7Z6vYuT8azUEOm+Xw19@public.gmane.org>
  0 siblings, 1 reply; 10+ messages in thread
From: Daniel Vetter @ 2017-05-09  8:24 UTC (permalink / raw)
  To: Harry Wentland
  Cc: dc_upstream, Daniel Vetter, amd-gfx mailing list, dri-devel,
	Daniel Vetter, Deucher, Alexander, Dave Airlie,
	Christian König

On Mon, May 08, 2017 at 02:54:22PM -0400, Harry Wentland wrote:
> Hi Daniel,
> 
> Thanks for taking the time to look at DC.
> 
> I had a couple more questions/comments in regard to the patch you posted on
> IRC: http://paste.debian.net/plain/930704
> 
> My impression is that this item is the most important next step for us:
> 
> >     From a quick glance I think what we want instead is:
> >     - amdgpu_crtc_state and amdgpu_plane_state as following:
> > 
> >     struct amdgpu_crtc_state {
> >             struct drm_crtc_state base;
> >             struct dc_stream;
> >     };
> 
> Unfortunately as sketched here it doesn't quite mesh with what we currently
> have, which is:
> 
> struct stream {
> 	struct core_stream protected;
> 	...
> }
> 
> struct core_stream {
> 	struct dc_stream public;
> 	...
> }
> 
> struct dc_stream {
> 	...
> }
> 
> Any objections to doing something like this instead:
> 
> #ifdef LINUX
> #include "drm_crtc.h"
> #endif
> 
> struct dc_stream {
> #ifdef LINUX
> 	struct drm_crtc_state base;
> #endif
> 	...
> }
> 
> The ifdefs are then removed on Linux versions of DC, while other OSs
> wouldn't compile with the LINUX flag.
> 
> >     - validate_context->res_ctx->pool looks extremely fishy. In principle,
> >       refcounting does not mesh with the duplicate, check, then
> >       free-or-commit semantics of atomic. Are you sure this works
> >       correctly when doing TEST_ONLY commits? igt or weston (with Daniel's
> >       patches) are your friend (or enemy) here :-)
> > 
> >       The common approach all other drivers implement:
> >       - Same duplicate semantics for private objects as for core drm
> >         objects. The private obj helpers will help with cutting down the
> >         boiler-plate.
> >       - No refcounts, instead an allocation bitmask in the object one up
> >         in the hierarchy (usually global state, but sometimes also
> >         crtc/stream).
> >       - If you want to keep the current design, then you
> >         must do a deep copy of pool (i.e. not just the struct, but also
> >         every struct that might change it points too). The above
> >         accomplishes that essentially, except in standard atomic patterns.
> >       - You'll probably need a bunch of for_each_foo_in_context macros,
> >         built using the private state helpers.
> > 
> >       Trying to review correctness of atomic_check when private resources
> >       are refcounted is real hard. The only case we have is vc4, and it's
> >       kinda not pretty (it has it's own manual rollback hacks in the
> >       release functions). Going entirely with free-standing stuff is much
> >       better.
> 
> Good points here. I think I'll ultimately have to get IGT running against
> our code with TEST_ONLY commits and see what happens. Ultimately we probably
> have to deep copy, one way or another. I don't really want any rollback
> stuff as that would be near impossible to maintain.
> 
> >     This shouldn't be a huge conceptual change in the DC design (it
> >     already has checks for "is this unchanged" and then ignore that
> >     object), just quite a bit more invasive than sprinkling for_each_*
> >     macros all over the place. From a glane it could be that you'd get
> >     80% there by having a for_each_changed_*_in_context set of macros,
> >     with the current code everywhere else, and the "jumps over unchanged
> >     obj because they're not in drm_atomic_state/dc_validation_context"
> >     logic on drm atomic.
> 
> Yeah, this should fit mostly with DC design. Will evaluate this once we link
> DC objects to DRM state objects.
> 
> > @@ -1640,6 +1644,8 @@ static void dm_crtc_helper_disable(struct drm_crtc *crtc)
> >  {
> >  }
> > 
> > +/* no dummy funcs pls, counts everywhere */
> > +
> 
> Does the comment apply to the preceding or next function? What do you mean
> with "counts everywhere"?

In general you have a lot of callbacks which are either just {} or {
return 0; }, aka empty/dummy implementations.

We've refactored atomic helpers a lot to make all of these optional, pls
remove them. This was a somewhat recent development, I guess initial
atomic DC still had to have all these dummy callbacks.
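
E.g. with current helpers something like this is really all that should be 
left (taking dm_crtc_helper_atomic_check as the one hook doing real work; 
the struct name is just for illustration):

static const struct drm_crtc_helper_funcs dm_crtc_helper_funcs = {
	.atomic_check = dm_crtc_helper_atomic_check,
	/* .disable, .mode_set_nofb and friends can simply stay NULL now */
};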

> >  static int dm_crtc_helper_atomic_check(
> >  	struct drm_crtc *crtc,
> >  	struct drm_crtc_state *state)
> 
> 
> > @@ -3077,6 +3102,15 @@ int amdgpu_dm_atomic_check(struct drm_device *dev,
> > 
> >  		acrtc = to_amdgpu_crtc(crtc);
> > 
> > +		/* This is the wrong way round. If you have a requirement for a
> > +		 * 1:1 connector/crtc mapping, then loop over connectors and
> > +		 * grab the crtc from there. Plus make sure there's really only
> > +		 * 1 connector per crtc (but the possible clones stuff should
> > +		 * catch that for you I think, at least with latest atomic
> > +		 * helpers.
> > +		 *
> > +		 * Similar for the same loop in commit.
> > +		 */
> >  		aconnector = amdgpu_dm_find_first_crct_matching_connector(state, crtc, true);
> > 
> >  		action = get_dm_commit_action(crtc_state);
> 
> Good point. This code is still a bit of a mess.

I think once you do the embedding and switch the state/obj iterators to
the atomic versions, there should be a pile more places to clean up. The
atomic helper library should be a good place to read up on best practices
(especially since the recent merge of the extended iterator macros from
Maarten).

Looping over connectors and chasing one pointer (often
connector_state->best_encoder) is a fairly common pattern, and if you have
1:1 crtc/connector routing (and it's enforced) then I think in many
places a natural loop pattern will go over connectors and then chase
connector_state->crtc. i915 also has plenty of examples in drm-tip (only
that one has Maarten's patches, which switch all the modeset code over to
the new iterators).
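
In other words, roughly (sketch only, assuming the new-style iterators from 
drm-tip; the function name is just for illustration):

static void example_connector_loop(struct drm_atomic_state *state)
{
	struct drm_connector *connector;
	struct drm_connector_state *new_con_state;
	int i;

	for_each_new_connector_in_state(state, connector, new_con_state, i) {
		struct drm_crtc *crtc = new_con_state->crtc;
		struct drm_crtc_state *crtc_state;

		if (!crtc)
			continue;

		crtc_state = drm_atomic_get_new_crtc_state(state, crtc);
		WARN_ON(!crtc_state);
		/* drive the 1:1 connector/crtc handling from here */
	}
}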
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

* Re: Soliciting DRM feedback on latest DC rework
       [not found]                   ` <20170509082416.bx7fflkkmdypvuyy-dv86pmgwkMBes7Z6vYuT8azUEOm+Xw19@public.gmane.org>
@ 2017-05-09 20:52                     ` Harry Wentland
  0 siblings, 0 replies; 10+ messages in thread
From: Harry Wentland @ 2017-05-09 20:52 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: dc_upstream, Daniel Vetter, dri-devel, amd-gfx mailing list,
	Deucher, Alexander, Dave Airlie, Christian König



On 2017-05-09 04:24 AM, Daniel Vetter wrote:
> On Mon, May 08, 2017 at 02:54:22PM -0400, Harry Wentland wrote:
>> Hi Daniel,
>>
>> Thanks for taking the time to look at DC.
>>
>> I had a couple more questions/comments in regard to the patch you posted on
>> IRC: http://paste.debian.net/plain/930704
>>
>> My impression is that this item is the most important next step for us:
>>
>>>     From a quick glance I think what we want instead is:
>>>     - amdgpu_crtc_state and amdgpu_plane_state as following:
>>>
>>>     struct amdgpu_crtc_state {
>>>             struct drm_crtc_state base;
>>>             struct dc_stream;
>>>     };
>>
>> Unfortunately as sketched here it doesn't quite mesh with what we currently
>> have, which is:
>>
>> struct stream {
>> 	struct core_stream protected;
>> 	...
>> }
>>
>> struct core_stream {
>> 	struct dc_stream public;
>> 	...
>> }
>>
>> struct dc_stream {
>> 	...
>> }
>>
>> Any objections to doing something like this instead:
>>
>> #ifdef LINUX
>> #include "drm_crtc.h"
>> #endif
>>
>> struct dc_stream {
>> #ifdef LINUX
>> 	struct drm_crtc_state base;
>> #endif
>> 	...
>> }
>>
>> The ifdefs are then removed on Linux versions of DC, while other OSs
>> wouldn't compile with the LINUX flag.
>>
>>>     - validate_context->res_ctx->pool looks extremely fishy. In principle,
>>>       refcounting does not mesh with the duplicate, check, then
>>>       free-or-commit semantics of atomic. Are you sure this works
>>>       correctly when doing TEST_ONLY commits? igt or weston (with Daniel's
>>>       patches) are your friend (or enemy) here :-)
>>>
>>>       The common approach all other drivers implement:
>>>       - Same duplicate semantics for private objects as for core drm
>>>         objects. The private obj helpers will help with cutting down the
>>>         boiler-plate.
>>>       - No refcounts, instead an allocation bitmask in the object one up
>>>         in the hierarchy (usually global state, but sometimes also
>>>         crtc/stream).
>>>       - If you want to keep the current design, then you
>>>         must do a deep copy of pool (i.e. not just the struct, but also
>>>         every struct that might change it points too). The above
>>>         accomplishes that essentially, except in standard atomic patterns.
>>>       - You'll probably need a bunch of for_each_foo_in_context macros,
>>>         built using the private state helpers.
>>>
>>>       Trying to review correctness of atomic_check when private resources
>>>       are refcounted is real hard. The only case we have is vc4, and it's
>>>       kinda not pretty (it has it's own manual rollback hacks in the
>>>       release functions). Going entirely with free-standing stuff is much
>>>       better.
>>
>> Good points here. I think I'll ultimately have to get IGT running against
>> our code with TEST_ONLY commits and see what happens. Ultimately we probably
>> have to deep copy, one way or another. I don't really want any rollback
>> stuff as that would be near impossible to maintain.
>>
>>>     This shouldn't be a huge conceptual change in the DC design (it
>>>     already has checks for "is this unchanged" and then ignore that
>>>     object), just quite a bit more invasive than sprinkling for_each_*
>>>     macros all over the place. From a glane it could be that you'd get
>>>     80% there by having a for_each_changed_*_in_context set of macros,
>>>     with the current code everywhere else, and the "jumps over unchanged
>>>     obj because they're not in drm_atomic_state/dc_validation_context"
>>>     logic on drm atomic.
>>
>> Yeah, this should fit mostly with DC design. Will evaluate this once we link
>> DC objects to DRM state objects.
>>
>>> @@ -1640,6 +1644,8 @@ static void dm_crtc_helper_disable(struct drm_crtc *crtc)
>>>  {
>>>  }
>>>
>>> +/* no dummy funcs pls, counts everywhere */
>>> +
>>
>> Does the comment apply to the preceding or next function? What do you mean
>> with "counts everywhere"?
>
> In general you have a lot of callbacks which are either just {} or {
> return 0; }, aka empty/dummy implementations.
>
> We've refactored atomic helpers a lot to make all of these optional, pls
> remove them. This was a somewhat recent development, I guess initial
> atomic DC still had to have all these dummy callbacks.
>

That makes sense. We haven't really revisited these since our initial 
work on atomic more than a year ago.

>>>  static int dm_crtc_helper_atomic_check(
>>>  	struct drm_crtc *crtc,
>>>  	struct drm_crtc_state *state)
>>
>>
>>> @@ -3077,6 +3102,15 @@ int amdgpu_dm_atomic_check(struct drm_device *dev,
>>>
>>>  		acrtc = to_amdgpu_crtc(crtc);
>>>
>>> +		/* This is the wrong way round. If you have a requirement for a
>>> +		 * 1:1 connector/crtc mapping, then loop over connectors and
>>> +		 * grab the crtc from there. Plus make sure there's really only
>>> +		 * 1 connector per crtc (but the possible clones stuff should
>>> +		 * catch that for you I think, at least with latest atomic
>>> +		 * helpers.
>>> +		 *
>>> +		 * Similar for the same loop in commit.
>>> +		 */
>>>  		aconnector = amdgpu_dm_find_first_crct_matching_connector(state, crtc, true);
>>>
>>>  		action = get_dm_commit_action(crtc_state);
>>
>> Good point. This code is still a bit of a mess.
>
> I think once you do the embedding and switch the state/obj iterators to
> the atomic versions there should be a pile more places to clean up. Atomic

That's pretty much what I'm thinking. I've noticed how good cleanups 
tend to snowball, allowing us to clean up more stuff.

> helper library should be a good place to read up on best practices
> (especially since the recent merge of the extended iterator macros from
> Maarten).
>
> Loop over connectors and chase one pointer (often
> connector_state->best_encoder) is a fairly common pattern, and if you have
> a 1:1 crtc/connector routing (and it's enforced) then I think in many
> places a natural loop pattern will go over connectors and then chase
> connector_state->crtc. i915 also has plenty of examples in drm-tip (only
> that one has Maarten's patches which switch all the modeset code over to
> the new iterators).

Thanks. Will take a look there.

Harry

> -Daniel
>
