* drm_gem_get_pages and proper flushing/coherency
From: Oleksandr Andrushchenko @ 2018-11-26 12:15 UTC
  To: dri-devel; +Cc: xen-devel, Daniel Vetter

Hello, all!

My driver (a Xen para-virtualized frontend) in some scenarios uses
drm_gem_get_pages to allocate backing storage for dumb buffers.
Some use-cases (modetest and others) showed artifacts on the screen,
which I worked around by flushing the buffer's pages with
drm_clflush_pages on every page flip. The problem is that
drm_clflush_pages is not available on ARM platforms (it is a no-op
there), and flushing on every page flip seems suboptimal anyway.
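To illustrate, the workaround boils down to something like this (a
simplified sketch, not the actual driver code; the xen_fb_flush()
helper is made up for this example):

#include <drm/drm_cache.h>
#include <drm/drm_gem.h>

/*
 * Sketch only: clean CPU caches for all pages backing the dumb buffer
 * before the backend reads it on page flip. drm_clflush_pages() uses
 * clflush/clflushopt on x86 and is a no-op on ARM.
 */
static void xen_fb_flush(struct drm_gem_object *gem_obj,
			 struct page **pages)
{
	unsigned long num_pages = gem_obj->size >> PAGE_SHIFT;

	drm_clflush_pages(pages, num_pages);
}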

Other drivers that use drm_gem_get_pages seem to DMA map/unmap the
shmem-backed buffer (which is where drm_gem_get_pages allocates the
pages from). That is the obvious approach when the buffer has to be
shared with real hardware for DMA; please correct me if my
understanding here is wrong.

This is the part I missed in my implementation, as I don't really have
a HW device that needs DMA, but rather a backend running in a different
Xen domain. Thus, since the buffer is backed with cacheable pages, the
backend may see artifacts on its side.

I am looking for advice on the best way to avoid flushing dumb buffers
on every page flip while still keeping the memory coherent for the
backend. I have implemented DMA map/unmap of the shmem pages on GEM
object creation/destruction and this does solve the problem, but since
the backend is not really a DMA device, this feels a bit misleading.
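For reference, the map/unmap in my prototype looks roughly like the
sketch below (simplified, error paths trimmed; based on the 4.20-era
drm_prime_pages_to_sg()/dma_map_sg() API, and the xen_obj_* names are
illustrative):

#include <drm/drm_prime.h>
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

/*
 * Sketch only: wrap the shmem pages from drm_gem_get_pages() into an
 * sg_table and DMA map it once at GEM object creation, so CPU caches
 * are cleaned; unmap again at destruction.
 */
static struct sg_table *xen_obj_map(struct drm_gem_object *gem_obj,
				    struct page **pages)
{
	struct device *dev = gem_obj->dev->dev;
	struct sg_table *sgt;

	sgt = drm_prime_pages_to_sg(pages, gem_obj->size >> PAGE_SHIFT);
	if (IS_ERR(sgt))
		return sgt;

	/* dma_map_sg() cleans/invalidates the CPU caches for the pages. */
	if (!dma_map_sg(dev, sgt->sgl, sgt->orig_nents, DMA_BIDIRECTIONAL)) {
		sg_free_table(sgt);
		kfree(sgt);
		return ERR_PTR(-EFAULT);
	}
	return sgt;
}

static void xen_obj_unmap(struct drm_gem_object *gem_obj,
			  struct sg_table *sgt)
{
	dma_unmap_sg(gem_obj->dev->dev, sgt->sgl, sgt->orig_nents,
		     DMA_BIDIRECTIONAL);
	sg_free_table(sgt);
	kfree(sgt);
}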

Is there another, more suitable or preferable way to achieve the same?

Thank you,
Oleksandr


* Re: drm_gem_get_pages and proper flushing/coherency
From: Oleksandr Andrushchenko @ 2018-12-03  9:43 UTC
  To: dri-devel; +Cc: xen-devel, Daniel Vetter

On 11/26/18 2:15 PM, Oleksandr Andrushchenko wrote:
> Hello, all!
>
> My driver (a Xen para-virtualized frontend) in some scenarios uses
> drm_gem_get_pages to allocate backing storage for dumb buffers.
> Some use-cases (modetest and others) showed artifacts on the screen,
> which I worked around by flushing the buffer's pages with
> drm_clflush_pages on every page flip. The problem is that
> drm_clflush_pages is not available on ARM platforms (it is a no-op
> there), and flushing on every page flip seems suboptimal anyway.
>
> Other drivers that use drm_gem_get_pages seem to DMA map/unmap the
> shmem-backed buffer (which is where drm_gem_get_pages allocates the
> pages from). That is the obvious approach when the buffer has to be
> shared with real hardware for DMA; please correct me if my
> understanding here is wrong.

I have created a patch which implements the DMA mapping [1], and it
does solve the artifacts problem for me.

Is this the right way to go?

>
> This is the part I missed in my implementation, as I don't really have
> a HW device that needs DMA, but rather a backend running in a
> different Xen domain. Thus, since the buffer is backed with cacheable
> pages, the backend may see artifacts on its side.
>
> I am looking for advice on the best way to avoid flushing dumb buffers
> on every page flip while still keeping the memory coherent for the
> backend. I have implemented DMA map/unmap of the shmem pages on GEM
> object creation/destruction and this does solve the problem, but since
> the backend is not really a DMA device, this feels a bit misleading.
>
> Is there another, more suitable or preferable way to achieve the same?
>
> Thank you,
> Oleksandr
>
Thank you,
Oleksandr

[1] https://patchwork.freedesktop.org/series/53069/


* Re: drm_gem_get_pages and proper flushing/coherency
From: Oleksandr Andrushchenko @ 2018-12-20 14:02 UTC
  To: linux-arm-kernel; +Cc: xen-devel, Noralf Trønnes, Gerd Hoffmann, dri-devel

+ARM mailing list for help and suggestions

Dear ARM community!

The Xen hypervisor para-virtualized DRM frontend driver [1] uses
shmem-backed pages for display buffers and shares them with the host
driver. Everything works just fine, but in some scenarios I see
artifacts on the screen because the shmem pages of the buffer are not
flushed (please see the mail below for more details from the DRM point
of view). On x86 the pages can be flushed with a DRM helper [2], and
that seems to help, but clflushopt, which is used there, is not defined
for ARM. I also got a suggestion to use set_pages_array_*(), but again
those are defined only for x86, not for ARM [3].

The implementation/workaround that I have [4] is based on the DMA
approach, i.e. I DMA map the shmem pages, and that does seem to help,
but the whole DMA approach seems to be overkill here.
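One lighter variant I am wondering about (only a sketch, assuming it
is fine to keep the sg_table mapped for the lifetime of the GEM object;
the xen_fb_sync_for_backend() name is made up) is to map the pages once
and only do a cache sync before handing the buffer to the backend on
each flip:

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/*
 * Sketch only: with the sg_table kept mapped, a per-flip cache clean
 * could replace the full map/unmap done today.
 */
static void xen_fb_sync_for_backend(struct device *dev,
				    struct sg_table *sgt)
{
	/* Clean CPU caches so the backend sees up-to-date buffer data. */
	dma_sync_sg_for_device(dev, sgt->sgl, sgt->orig_nents,
			       DMA_TO_DEVICE);
}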

Now to the question: could anyone in the ARM community help me
understand the right way to flush those pages so that I don't need to
use DMA for that?

Thank you,
Oleksandr

On 11/26/18 2:15 PM, Oleksandr Andrushchenko wrote:
> Hello, all!
>
> My driver (a Xen para-virtualized frontend) in some scenarios uses
> drm_gem_get_pages to allocate backing storage for dumb buffers.
> Some use-cases (modetest and others) showed artifacts on the screen,
> which I worked around by flushing the buffer's pages with
> drm_clflush_pages on every page flip. The problem is that
> drm_clflush_pages is not available on ARM platforms (it is a no-op
> there), and flushing on every page flip seems suboptimal anyway.
>
> Other drivers that use drm_gem_get_pages seem to DMA map/unmap the
> shmem-backed buffer (which is where drm_gem_get_pages allocates the
> pages from). That is the obvious approach when the buffer has to be
> shared with real hardware for DMA; please correct me if my
> understanding here is wrong.
>
> This is the part I missed in my implementation, as I don't really have
> a HW device that needs DMA, but rather a backend running in a
> different Xen domain. Thus, since the buffer is backed with cacheable
> pages, the backend may see artifacts on its side.
>
> I am looking for advice on the best way to avoid flushing dumb buffers
> on every page flip while still keeping the memory coherent for the
> backend. I have implemented DMA map/unmap of the shmem pages on GEM
> object creation/destruction and this does solve the problem, but since
> the backend is not really a DMA device, this feels a bit misleading.
>
> Is there another, more suitable or preferable way to achieve the same?
>
> Thank you,
> Oleksandr
>
[1] https://elixir.bootlin.com/linux/v4.20-rc7/source/drivers/gpu/drm/xen
[2] https://elixir.bootlin.com/linux/v4.20-rc7/source/drivers/gpu/drm/drm_cache.c#L45
[3] https://elixir.bootlin.com/linux/v4.20-rc7/source/arch/x86/include/asm/set_memory.h
[4] https://patchwork.kernel.org/patch/10700089/
