From: Thomas Zimmermann <tzimmermann@suse.de>
To: Daniel Vetter <daniel.vetter@ffwll.ch>, Intel Graphics Development <intel-gfx@lists.freedesktop.org>
Cc: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>, "David Airlie" <airlied@linux.ie>, "DRI Development" <dri-devel@lists.freedesktop.org>, "Daniel Vetter" <daniel.vetter@intel.com>, "Christian König" <christian.koenig@amd.com>
Subject: Re: [PATCH v4 3/4] drm/shmem-helpers: Allocate wc pages on x86
Date: Thu, 22 Jul 2021 20:40:56 +0200
Message-ID: <0e4eefe0-9282-672c-7678-8d3162de35e3@suse.de> (raw)
In-Reply-To: <20210713205153.1896059-4-daniel.vetter@ffwll.ch>

Hi

On 13.07.21 22:51, Daniel Vetter wrote:
> intel-gfx-ci realized that something is not quite coherent anymore on
> some platforms for our i915+vgem tests, when I tried to switch vgem
> over to shmem helpers.
>
> After lots of head-scratching I realized that I've removed calls to
> drm_clflush. And we need those. To make this a bit cleaner use the
> same page allocation tooling as ttm, which does internally clflush
> (and more, as needed on any platform instead of just the intel x86
> cpus i915 can be combined with).

Vgem would therefore not work correctly on non-x86 platforms?

> Unfortunately this doesn't exist on arm, or as a generic feature. For
> that I think only the dma-api can get at wc memory reliably, so maybe
> we'd need some kind of GFP_WC flag to do this properly.
>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> Cc: Christian König <christian.koenig@amd.com>
> Cc: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Cc: Maxime Ripard <mripard@kernel.org>
> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> Cc: David Airlie <airlied@linux.ie>
> Cc: Daniel Vetter <daniel@ffwll.ch>
> ---
>  drivers/gpu/drm/drm_gem_shmem_helper.c | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
>
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index 296ab1b7c07f..657d2490aaa5 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -10,6 +10,10 @@
>  #include <linux/slab.h>
>  #include <linux/vmalloc.h>
>
> +#ifdef CONFIG_X86
> +#include <asm/set_memory.h>
> +#endif
> +
>  #include <drm/drm.h>
>  #include <drm/drm_device.h>
>  #include <drm/drm_drv.h>
> @@ -162,6 +166,11 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
>  		return PTR_ERR(pages);
>  	}
>
> +#ifdef CONFIG_X86
> +	if (shmem->map_wc)
> +		set_pages_array_wc(pages, obj->size >> PAGE_SHIFT);
> +#endif

I cannot comment much on the technical details of caching on the various
architectures. If this patch goes in, there should be a longer comment
that reflects the discussion in this thread. It's apparently a workaround.

I think the call itself should be hidden behind a DRM API, which depends
on CONFIG_X86. Something simple like

  #ifdef CONFIG_X86
  drm_set_pages_array_wc()
  {
  	set_pages_array_wc();
  }
  #else
  drm_set_pages_array_wc()
  {
  }
  #endif

Maybe in drm_cache.h?
Best regards
Thomas

> +
>  	shmem->pages = pages;
>
>  	return 0;
> @@ -203,6 +212,11 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
>  	if (--shmem->pages_use_count > 0)
>  		return;
>
> +#ifdef CONFIG_X86
> +	if (shmem->map_wc)
> +		set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
> +#endif
> +
>  	drm_gem_put_pages(obj, shmem->pages,
>  			  shmem->pages_mark_dirty_on_put,
>  			  shmem->pages_mark_accessed_on_put);

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer