From: Daniel Vetter <daniel@ffwll.ch>
To: Thomas Zimmermann <tzimmermann@suse.de>
Cc: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
"David Airlie" <airlied@linux.ie>,
"Daniel Vetter" <daniel.vetter@ffwll.ch>,
"Intel Graphics Development" <intel-gfx@lists.freedesktop.org>,
"DRI Development" <dri-devel@lists.freedesktop.org>,
"Daniel Vetter" <daniel.vetter@intel.com>,
"Christian König" <christian.koenig@amd.com>
Subject: Re: [PATCH v4 3/4] drm/shmem-helpers: Allocate wc pages on x86
Date: Fri, 23 Jul 2021 09:36:39 +0200 [thread overview]
Message-ID: <YPpxh0QhILXESykX@phenom.ffwll.local> (raw)
In-Reply-To: <0e4eefe0-9282-672c-7678-8d3162de35e3@suse.de>
On Thu, Jul 22, 2021 at 08:40:56PM +0200, Thomas Zimmermann wrote:
> Hi
>
> Am 13.07.21 um 22:51 schrieb Daniel Vetter:
> > intel-gfx-ci realized that something is not quite coherent anymore on
> > some platforms for our i915+vgem tests, when I tried to switch vgem
> > over to shmem helpers.
> >
> > After lots of head-scratching I realized that I've removed calls to
> > drm_clflush. And we need those. To make this a bit cleaner use the
> > same page allocation tooling as ttm, which does internally clflush
> > (and more, as needed on any platform, instead of just the intel x86
> > cpus that i915 can be combined with).
>
> Vgem would therefore not work correctly on non-X86 platforms?
Anything using shmem helpers doesn't work correctly on non-x86 platforms.
At least if they use wc.
vgem with intel-gfx-ci is simply running some very nasty tests that catch
the bugs.
I'm kinda hoping that someone from the armsoc world would care enough to
fix this there. But it's a tricky issue.
> >
> > Unfortunately this doesn't exist on arm, or as a generic feature. For
> > that I think only the dma-api can get at wc memory reliably, so maybe
> > we'd need some kind of GFP_WC flag to do this properly.
> >
> > Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> > Cc: Christian König <christian.koenig@amd.com>
> > Cc: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
> > Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> > Cc: Maxime Ripard <mripard@kernel.org>
> > Cc: Thomas Zimmermann <tzimmermann@suse.de>
> > Cc: David Airlie <airlied@linux.ie>
> > Cc: Daniel Vetter <daniel@ffwll.ch>
> > ---
> > drivers/gpu/drm/drm_gem_shmem_helper.c | 14 ++++++++++++++
> > 1 file changed, 14 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > index 296ab1b7c07f..657d2490aaa5 100644
> > --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> > +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > @@ -10,6 +10,10 @@
> > #include <linux/slab.h>
> > #include <linux/vmalloc.h>
> > +#ifdef CONFIG_X86
> > +#include <asm/set_memory.h>
> > +#endif
> > +
> > #include <drm/drm.h>
> > #include <drm/drm_device.h>
> > #include <drm/drm_drv.h>
> > @@ -162,6 +166,11 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
> > return PTR_ERR(pages);
> > }
> > +#ifdef CONFIG_X86
> > + if (shmem->map_wc)
> > + set_pages_array_wc(pages, obj->size >> PAGE_SHIFT);
> > +#endif
>
> I cannot comment much on the technical details of the caching of various
> architectures. If this patch goes in, there should be a longer comment that
> reflects the discussion in this thread. It's apparently a workaround.
>
> I think the call itself should be hidden behind a DRM API, which depends on
> CONFIG_X86. Something simple like
>
> #ifdef CONFIG_X86
> static inline void drm_set_pages_array_wc(struct page **pages, int numpages)
> {
> 	set_pages_array_wc(pages, numpages);
> }
> #else
> static inline void drm_set_pages_array_wc(struct page **pages, int numpages)
> {
> }
> #endif
>
> Maybe in drm_cache.h?
We do have a bunch of this in drm_cache.h already, and architecture
maintainers hate us for it.
The real fix is to get at the architecture-specific wc allocator, which is
currently not something that's exposed, but hidden within the dma api. I
think having this stick out like this is better than hiding it behind fake
generic code (like we do with drm_clflush, which de facto also only really
works on x86).
Also note that ttm has the exact same ifdef in its page allocator, but it
does fall back to using dma_alloc_coherent on other platforms.
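For illustration, the contrast between the two paths looks roughly like this.
This is a sketch only: the example_* names are made up, and this is not the
actual ttm or shmem-helper code, just the shape of the approach.

```c
/* Sketch only: example_* names are hypothetical. */
#ifdef CONFIG_X86
/* x86: shmem pages can be flipped to write-combining in place. */
static int example_make_pages_wc(struct page **pages, unsigned int npages)
{
	return set_pages_array_wc(pages, npages);
}
#else
/*
 * Elsewhere only the dma api reliably hands back wc memory, so a
 * fallback has to allocate through it instead of fixing up shmem pages.
 */
static void *example_alloc_wc(struct device *dev, size_t size,
			      dma_addr_t *dma_handle)
{
	return dma_alloc_coherent(dev, size, dma_handle, GFP_KERNEL);
}
#endif
```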
-Daniel
> Best regards
> Thomas
>
> > +
> > shmem->pages = pages;
> > return 0;
> > @@ -203,6 +212,11 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
> > if (--shmem->pages_use_count > 0)
> > return;
> > +#ifdef CONFIG_X86
> > + if (shmem->map_wc)
> > + set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
> > +#endif
> > +
> > drm_gem_put_pages(obj, shmem->pages,
> > shmem->pages_mark_dirty_on_put,
> > shmem->pages_mark_accessed_on_put);
> >
>
> --
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Maxfeldstr. 5, 90409 Nürnberg, Germany
> (HRB 36809, AG Nürnberg)
> Geschäftsführer: Felix Imendörffer
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
Thread overview: 21+ messages
2021-07-13 20:51 [PATCH v4 0/4] shmem helpers for vgem Daniel Vetter
2021-07-13 20:51 ` [PATCH v4 1/4] dma-buf: Require VM_PFNMAP vma for mmap Daniel Vetter
2021-07-23 18:45 ` Thomas Zimmermann
2021-07-13 20:51 ` [PATCH v4 2/4] drm/shmem-helper: Switch to vmf_insert_pfn Daniel Vetter
2021-07-22 18:22 ` Thomas Zimmermann
2021-07-23 7:32 ` Daniel Vetter
2021-08-12 13:05 ` Daniel Vetter
2021-07-13 20:51 ` [PATCH v4 3/4] drm/shmem-helpers: Allocate wc pages on x86 Daniel Vetter
2021-07-14 11:54 ` Christian König
2021-07-14 12:48 ` Daniel Vetter
2021-07-14 12:58 ` Christian König
2021-07-14 16:16 ` Daniel Vetter
2021-07-22 18:40 ` Thomas Zimmermann
2021-07-23 7:36 ` Daniel Vetter [this message]
2021-07-23 8:02 ` Christian König
2021-07-23 8:34 ` Daniel Vetter
2021-08-05 18:40 ` Thomas Zimmermann
2021-07-13 20:51 ` [PATCH v4 4/4] drm/vgem: use shmem helpers Daniel Vetter
2021-07-14 12:45 ` [PATCH] " Daniel Vetter
2021-07-22 18:50 ` [PATCH v4 4/4] " Thomas Zimmermann
2021-07-23 7:38 ` Daniel Vetter