intel-gfx.lists.freedesktop.org archive mirror
* [Intel-gfx] [PATCH v4 0/4] shmem helpers for vgem
@ 2021-07-13 20:51 Daniel Vetter
  2021-07-13 20:51 ` [Intel-gfx] [PATCH v4 1/4] dma-buf: Require VM_PFNMAP vma for mmap Daniel Vetter
                   ` (8 more replies)
  0 siblings, 9 replies; 26+ messages in thread
From: Daniel Vetter @ 2021-07-13 20:51 UTC (permalink / raw)
  To: Intel Graphics Development; +Cc: Daniel Vetter, DRI Development

Hi all

I've found another potential issue, so let's try this again and see what
intel-gfx-ci says. Also, Thomas tried to unify vgem more, which motivated
me to dig this all out again.

Test-with: 20210527140732.5762-1-daniel.vetter@ffwll.ch

Review very much welcome, as always!

Cheers, Daniel

Daniel Vetter (4):
  dma-buf: Require VM_PFNMAP vma for mmap
  drm/shmem-helper: Switch to vmf_insert_pfn
  drm/shmem-helpers: Allocate wc pages on x86
  drm/vgem: use shmem helpers

 drivers/dma-buf/dma-buf.c              |  15 +-
 drivers/gpu/drm/Kconfig                |   7 +-
 drivers/gpu/drm/drm_gem_shmem_helper.c |  18 +-
 drivers/gpu/drm/gud/Kconfig            |   2 +-
 drivers/gpu/drm/tiny/Kconfig           |   4 +-
 drivers/gpu/drm/udl/Kconfig            |   1 +
 drivers/gpu/drm/vgem/vgem_drv.c        | 315 +------------------------
 7 files changed, 49 insertions(+), 313 deletions(-)

-- 
2.32.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


* [Intel-gfx] [PATCH v4 1/4] dma-buf: Require VM_PFNMAP vma for mmap
  2021-07-13 20:51 [Intel-gfx] [PATCH v4 0/4] shmem helpers for vgem Daniel Vetter
@ 2021-07-13 20:51 ` Daniel Vetter
  2021-07-23 18:45   ` Thomas Zimmermann
  2021-07-13 20:51 ` [Intel-gfx] [PATCH v4 2/4] drm/shmem-helper: Switch to vmf_insert_pfn Daniel Vetter
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 26+ messages in thread
From: Daniel Vetter @ 2021-07-13 20:51 UTC (permalink / raw)
  To: Intel Graphics Development
  Cc: Daniel Vetter, Matthew Wilcox, Sumit Semwal, linaro-mm-sig,
	Jason Gunthorpe, John Stultz, DRI Development, Daniel Vetter,
	Suren Baghdasaryan, Christian König, linux-media

tl;dr: DMA buffers aren't normal memory; it cannot be guaranteed that
you can use them like normal memory (e.g. that get_user_pages works,
or that they're accounted like any other memory).

Since some userspace only runs on integrated devices, where all
buffers are actually resident system memory, there's a huge temptation
to assume that a struct page is always present and usable, like for
any other pagecache-backed mmap. This has the potential to result in a
uapi nightmare.

To close this gap, require that DMA buffer mmaps are VM_PFNMAP, which
blocks get_user_pages and all the other struct-page based
infrastructure for everyone. In spirit this is the uapi counterpart to
the kernel-internal CONFIG_DMABUF_DEBUG.

Motivated by a recent patch which wanted to switch the system dma-buf
heap over to vm_insert_page instead of vm_insert_pfn.

v2:

Jason brought up that we also want to guarantee that all ptes have the
pte_special flag set, to catch fast get_user_pages (on architectures
that support this). Allowing VM_MIXEDMAP (like VM_SPECIAL does) would
still allow vm_insert_page, but limiting to VM_PFNMAP will catch that.

From auditing the various functions that insert pfn pte entries
(vm_insert_pfn_prot, remap_pfn_range and all its callers like
dma_mmap_wc), it looks like VM_PFNMAP is already required anyway, so
this should be the correct flag to check for.

References: https://lore.kernel.org/lkml/CAKMK7uHi+mG0z0HUmNt13QCCvutuRVjpcR0NjRL12k-WbWzkRg@mail.gmail.com/
Acked-by: Christian König <christian.koenig@amd.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: John Stultz <john.stultz@linaro.org>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: "Christian König" <christian.koenig@amd.com>
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
---
Resending this so I can test the next two patches for vgem/shmem in
intel-gfx-ci. Last round failed somehow, but I can't repro that at all
locally here.

No immediate plans to merge this patch here since ttm isn't addressed
yet (and there we have the hugepte issue, for which I don't think we
have a clear consensus yet).
-Daniel
---
 drivers/dma-buf/dma-buf.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 510b42771974..65cbd7f0f16a 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -130,6 +130,7 @@ static struct file_system_type dma_buf_fs_type = {
 static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
 {
 	struct dma_buf *dmabuf;
+	int ret;
 
 	if (!is_dma_buf_file(file))
 		return -EINVAL;
@@ -145,7 +146,11 @@ static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
 	    dmabuf->size >> PAGE_SHIFT)
 		return -EINVAL;
 
-	return dmabuf->ops->mmap(dmabuf, vma);
+	ret = dmabuf->ops->mmap(dmabuf, vma);
+
+	WARN_ON(!(vma->vm_flags & VM_PFNMAP));
+
+	return ret;
 }
 
 static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence)
@@ -1276,6 +1281,8 @@ EXPORT_SYMBOL_GPL(dma_buf_end_cpu_access);
 int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
 		 unsigned long pgoff)
 {
+	int ret;
+
 	if (WARN_ON(!dmabuf || !vma))
 		return -EINVAL;
 
@@ -1296,7 +1303,11 @@ int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
 	vma_set_file(vma, dmabuf->file);
 	vma->vm_pgoff = pgoff;
 
-	return dmabuf->ops->mmap(dmabuf, vma);
+	ret = dmabuf->ops->mmap(dmabuf, vma);
+
+	WARN_ON(!(vma->vm_flags & VM_PFNMAP));
+
+	return ret;
 }
 EXPORT_SYMBOL_GPL(dma_buf_mmap);
 
-- 
2.32.0


* [Intel-gfx] [PATCH v4 2/4] drm/shmem-helper: Switch to vmf_insert_pfn
  2021-07-13 20:51 [Intel-gfx] [PATCH v4 0/4] shmem helpers for vgem Daniel Vetter
  2021-07-13 20:51 ` [Intel-gfx] [PATCH v4 1/4] dma-buf: Require VM_PFNMAP vma for mmap Daniel Vetter
@ 2021-07-13 20:51 ` Daniel Vetter
  2021-07-22 18:22   ` Thomas Zimmermann
  2021-07-13 20:51 ` [Intel-gfx] [PATCH v4 3/4] drm/shmem-helpers: Allocate wc pages on x86 Daniel Vetter
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 26+ messages in thread
From: Daniel Vetter @ 2021-07-13 20:51 UTC (permalink / raw)
  To: Intel Graphics Development
  Cc: David Airlie, Daniel Vetter, Maxime Ripard, DRI Development,
	Thomas Zimmermann, Daniel Vetter

We want to stop gup, which doesn't happen if we use vmf_insert_page
and VM_MIXEDMAP, because that combination does not set pte_special.

v2: With this shmem gem helpers now definitely need CONFIG_MMU (0day)

v3: add more depends on MMU. For usb drivers this is a bit awkward,
but really it's correct: to be able to provide a contiguous mapping of
buffers to userspace on !MMU platforms we'd need to use the cma
helpers for these drivers on those platforms. As-is this won't work.

Also, I'm not exactly sure why vm_insert_page doesn't go boom, since
it definitely won't fly in practice: the pages are non-contiguous to
begin with.

Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <mripard@kernel.org>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
---
 drivers/gpu/drm/Kconfig                | 2 +-
 drivers/gpu/drm/drm_gem_shmem_helper.c | 4 ++--
 drivers/gpu/drm/gud/Kconfig            | 2 +-
 drivers/gpu/drm/tiny/Kconfig           | 4 ++--
 drivers/gpu/drm/udl/Kconfig            | 1 +
 5 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index 0d372354c2d0..314eefa39892 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -211,7 +211,7 @@ config DRM_KMS_CMA_HELPER
 
 config DRM_GEM_SHMEM_HELPER
 	bool
-	depends on DRM
+	depends on DRM && MMU
 	help
 	  Choose this if you need the GEM shmem helper functions
 
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index d5e6d4568f99..296ab1b7c07f 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -542,7 +542,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	} else {
 		page = shmem->pages[page_offset];
 
-		ret = vmf_insert_page(vma, vmf->address, page);
+		ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
 	}
 
 	mutex_unlock(&shmem->pages_lock);
@@ -612,7 +612,7 @@ int drm_gem_shmem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 		return ret;
 	}
 
-	vma->vm_flags |= VM_MIXEDMAP | VM_DONTEXPAND;
+	vma->vm_flags |= VM_PFNMAP | VM_DONTEXPAND;
 	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
 	if (shmem->map_wc)
 		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
diff --git a/drivers/gpu/drm/gud/Kconfig b/drivers/gpu/drm/gud/Kconfig
index 1c8601bf4d91..9c1e61f9eec3 100644
--- a/drivers/gpu/drm/gud/Kconfig
+++ b/drivers/gpu/drm/gud/Kconfig
@@ -2,7 +2,7 @@
 
 config DRM_GUD
 	tristate "GUD USB Display"
-	depends on DRM && USB
+	depends on DRM && USB && MMU
 	select LZ4_COMPRESS
 	select DRM_KMS_HELPER
 	select DRM_GEM_SHMEM_HELPER
diff --git a/drivers/gpu/drm/tiny/Kconfig b/drivers/gpu/drm/tiny/Kconfig
index 5593128eeff9..c11fb5be7d09 100644
--- a/drivers/gpu/drm/tiny/Kconfig
+++ b/drivers/gpu/drm/tiny/Kconfig
@@ -44,7 +44,7 @@ config DRM_CIRRUS_QEMU
 
 config DRM_GM12U320
 	tristate "GM12U320 driver for USB projectors"
-	depends on DRM && USB
+	depends on DRM && USB && MMU
 	select DRM_KMS_HELPER
 	select DRM_GEM_SHMEM_HELPER
 	help
@@ -53,7 +53,7 @@ config DRM_GM12U320
 
 config DRM_SIMPLEDRM
 	tristate "Simple framebuffer driver"
-	depends on DRM
+	depends on DRM && MMU
 	select DRM_GEM_SHMEM_HELPER
 	select DRM_KMS_HELPER
 	help
diff --git a/drivers/gpu/drm/udl/Kconfig b/drivers/gpu/drm/udl/Kconfig
index 1f497d8f1ae5..c744175c6992 100644
--- a/drivers/gpu/drm/udl/Kconfig
+++ b/drivers/gpu/drm/udl/Kconfig
@@ -4,6 +4,7 @@ config DRM_UDL
 	depends on DRM
 	depends on USB
 	depends on USB_ARCH_HAS_HCD
+	depends on MMU
 	select DRM_GEM_SHMEM_HELPER
 	select DRM_KMS_HELPER
 	help
-- 
2.32.0


* [Intel-gfx] [PATCH v4 3/4] drm/shmem-helpers: Allocate wc pages on x86
  2021-07-13 20:51 [Intel-gfx] [PATCH v4 0/4] shmem helpers for vgem Daniel Vetter
  2021-07-13 20:51 ` [Intel-gfx] [PATCH v4 1/4] dma-buf: Require VM_PFNMAP vma for mmap Daniel Vetter
  2021-07-13 20:51 ` [Intel-gfx] [PATCH v4 2/4] drm/shmem-helper: Switch to vmf_insert_pfn Daniel Vetter
@ 2021-07-13 20:51 ` Daniel Vetter
  2021-07-14 11:54   ` Christian König
  2021-07-22 18:40   ` Thomas Zimmermann
  2021-07-13 20:51 ` [Intel-gfx] [PATCH v4 4/4] drm/vgem: use shmem helpers Daniel Vetter
                   ` (5 subsequent siblings)
  8 siblings, 2 replies; 26+ messages in thread
From: Daniel Vetter @ 2021-07-13 20:51 UTC (permalink / raw)
  To: Intel Graphics Development
  Cc: Thomas Hellström, David Airlie, Daniel Vetter,
	DRI Development, Maxime Ripard, Thomas Zimmermann, Daniel Vetter,
	Christian König

intel-gfx-ci realized that something is not quite coherent anymore on
some platforms for our i915+vgem tests, when I tried to switch vgem
over to shmem helpers.

After lots of head-scratching I realized that I'd removed the calls to
drm_clflush, and we need those. To make this a bit cleaner, use the
same page allocation tooling as ttm, which does the clflush internally
(and more, as needed on any platform, instead of just on the Intel x86
CPUs that i915 can be combined with).

Unfortunately this doesn't exist on arm, or as a generic feature. For
that I think only the dma-api can get at wc memory reliably, so maybe
we'd need some kind of GFP_WC flag to do this properly.

Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <mripard@kernel.org>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 296ab1b7c07f..657d2490aaa5 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -10,6 +10,10 @@
 #include <linux/slab.h>
 #include <linux/vmalloc.h>
 
+#ifdef CONFIG_X86
+#include <asm/set_memory.h>
+#endif
+
 #include <drm/drm.h>
 #include <drm/drm_device.h>
 #include <drm/drm_drv.h>
@@ -162,6 +166,11 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
 		return PTR_ERR(pages);
 	}
 
+#ifdef CONFIG_X86
+	if (shmem->map_wc)
+		set_pages_array_wc(pages, obj->size >> PAGE_SHIFT);
+#endif
+
 	shmem->pages = pages;
 
 	return 0;
@@ -203,6 +212,11 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
 	if (--shmem->pages_use_count > 0)
 		return;
 
+#ifdef CONFIG_X86
+	if (shmem->map_wc)
+		set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
+#endif
+
 	drm_gem_put_pages(obj, shmem->pages,
 			  shmem->pages_mark_dirty_on_put,
 			  shmem->pages_mark_accessed_on_put);
-- 
2.32.0


* [Intel-gfx] [PATCH v4 4/4] drm/vgem: use shmem helpers
  2021-07-13 20:51 [Intel-gfx] [PATCH v4 0/4] shmem helpers for vgem Daniel Vetter
                   ` (2 preceding siblings ...)
  2021-07-13 20:51 ` [Intel-gfx] [PATCH v4 3/4] drm/shmem-helpers: Allocate wc pages on x86 Daniel Vetter
@ 2021-07-13 20:51 ` Daniel Vetter
  2021-07-14 12:45   ` [Intel-gfx] [PATCH] " Daniel Vetter
  2021-07-22 18:50   ` [Intel-gfx] [PATCH v4 4/4] " Thomas Zimmermann
  2021-07-13 23:43 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for shmem helpers for vgem (rev6) Patchwork
                   ` (4 subsequent siblings)
  8 siblings, 2 replies; 26+ messages in thread
From: Daniel Vetter @ 2021-07-13 20:51 UTC (permalink / raw)
  To: Intel Graphics Development
  Cc: Daniel Vetter, DRI Development, Sumit Semwal, Melissa Wen,
	John Stultz, Thomas Zimmermann, Daniel Vetter, Chris Wilson,
	Christian König

Aside from deleting lots of code, the real motivation here is to
switch the mmap over to VM_PFNMAP, to be more consistent with what
real gpu drivers do. They're all VM_PFNMAP, which means
get_user_pages doesn't work, and even if you try and there is a
struct page behind the mapping, touching it and mucking around with
its refcount can upset drivers badly.

v2: Review from Thomas:
- sort #include
- drop more dead code that I didn't spot somehow

v3: select DRM_GEM_SHMEM_HELPER to make it build (intel-gfx-ci)

v4: I got tricked by commit 0cf2ef46c6c0 ("drm/shmem-helper: Use
cached mappings by default"); we need WC in vgem because vgem doesn't
have explicit begin/end cpu access ioctls.

Also add a comment on why exactly vgem has to use wc.

v5: Don't set obj->base.funcs, it will default to drm_gem_shmem_funcs
(Thomas)

v6: vgem also needs an MMU for remapping

Cc: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Thomas Zimmermann <tzimmermann@suse.de>
Cc: John Stultz <john.stultz@linaro.org>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: "Christian König" <christian.koenig@amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Melissa Wen <melissa.srw@gmail.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/Kconfig         |   5 +-
 drivers/gpu/drm/vgem/vgem_drv.c | 315 ++------------------------------
 2 files changed, 15 insertions(+), 305 deletions(-)

diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index 314eefa39892..28f7d2006e8b 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -272,7 +272,8 @@ source "drivers/gpu/drm/kmb/Kconfig"
 
 config DRM_VGEM
 	tristate "Virtual GEM provider"
-	depends on DRM
+	depends on DRM && MMU
+	select DRM_GEM_SHMEM_HELPER
 	help
 	  Choose this option to get a virtual graphics memory manager,
 	  as used by Mesa's software renderer for enhanced performance.
@@ -280,7 +281,7 @@ config DRM_VGEM
 
 config DRM_VKMS
 	tristate "Virtual KMS (EXPERIMENTAL)"
-	depends on DRM
+	depends on DRM && MMU
 	select DRM_KMS_HELPER
 	select DRM_GEM_SHMEM_HELPER
 	select CRC32
diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
index bf38a7e319d1..ba410ba6b7f7 100644
--- a/drivers/gpu/drm/vgem/vgem_drv.c
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -38,6 +38,7 @@
 
 #include <drm/drm_drv.h>
 #include <drm/drm_file.h>
+#include <drm/drm_gem_shmem_helper.h>
 #include <drm/drm_ioctl.h>
 #include <drm/drm_managed.h>
 #include <drm/drm_prime.h>
@@ -50,87 +51,11 @@
 #define DRIVER_MAJOR	1
 #define DRIVER_MINOR	0
 
-static const struct drm_gem_object_funcs vgem_gem_object_funcs;
-
 static struct vgem_device {
 	struct drm_device drm;
 	struct platform_device *platform;
 } *vgem_device;
 
-static void vgem_gem_free_object(struct drm_gem_object *obj)
-{
-	struct drm_vgem_gem_object *vgem_obj = to_vgem_bo(obj);
-
-	kvfree(vgem_obj->pages);
-	mutex_destroy(&vgem_obj->pages_lock);
-
-	if (obj->import_attach)
-		drm_prime_gem_destroy(obj, vgem_obj->table);
-
-	drm_gem_object_release(obj);
-	kfree(vgem_obj);
-}
-
-static vm_fault_t vgem_gem_fault(struct vm_fault *vmf)
-{
-	struct vm_area_struct *vma = vmf->vma;
-	struct drm_vgem_gem_object *obj = vma->vm_private_data;
-	/* We don't use vmf->pgoff since that has the fake offset */
-	unsigned long vaddr = vmf->address;
-	vm_fault_t ret = VM_FAULT_SIGBUS;
-	loff_t num_pages;
-	pgoff_t page_offset;
-	page_offset = (vaddr - vma->vm_start) >> PAGE_SHIFT;
-
-	num_pages = DIV_ROUND_UP(obj->base.size, PAGE_SIZE);
-
-	if (page_offset >= num_pages)
-		return VM_FAULT_SIGBUS;
-
-	mutex_lock(&obj->pages_lock);
-	if (obj->pages) {
-		get_page(obj->pages[page_offset]);
-		vmf->page = obj->pages[page_offset];
-		ret = 0;
-	}
-	mutex_unlock(&obj->pages_lock);
-	if (ret) {
-		struct page *page;
-
-		page = shmem_read_mapping_page(
-					file_inode(obj->base.filp)->i_mapping,
-					page_offset);
-		if (!IS_ERR(page)) {
-			vmf->page = page;
-			ret = 0;
-		} else switch (PTR_ERR(page)) {
-			case -ENOSPC:
-			case -ENOMEM:
-				ret = VM_FAULT_OOM;
-				break;
-			case -EBUSY:
-				ret = VM_FAULT_RETRY;
-				break;
-			case -EFAULT:
-			case -EINVAL:
-				ret = VM_FAULT_SIGBUS;
-				break;
-			default:
-				WARN_ON(PTR_ERR(page));
-				ret = VM_FAULT_SIGBUS;
-				break;
-		}
-
-	}
-	return ret;
-}
-
-static const struct vm_operations_struct vgem_gem_vm_ops = {
-	.fault = vgem_gem_fault,
-	.open = drm_gem_vm_open,
-	.close = drm_gem_vm_close,
-};
-
 static int vgem_open(struct drm_device *dev, struct drm_file *file)
 {
 	struct vgem_file *vfile;
@@ -159,81 +84,6 @@ static void vgem_postclose(struct drm_device *dev, struct drm_file *file)
 	kfree(vfile);
 }
 
-static struct drm_vgem_gem_object *__vgem_gem_create(struct drm_device *dev,
-						unsigned long size)
-{
-	struct drm_vgem_gem_object *obj;
-	int ret;
-
-	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
-	if (!obj)
-		return ERR_PTR(-ENOMEM);
-
-	obj->base.funcs = &vgem_gem_object_funcs;
-
-	ret = drm_gem_object_init(dev, &obj->base, roundup(size, PAGE_SIZE));
-	if (ret) {
-		kfree(obj);
-		return ERR_PTR(ret);
-	}
-
-	mutex_init(&obj->pages_lock);
-
-	return obj;
-}
-
-static void __vgem_gem_destroy(struct drm_vgem_gem_object *obj)
-{
-	drm_gem_object_release(&obj->base);
-	kfree(obj);
-}
-
-static struct drm_gem_object *vgem_gem_create(struct drm_device *dev,
-					      struct drm_file *file,
-					      unsigned int *handle,
-					      unsigned long size)
-{
-	struct drm_vgem_gem_object *obj;
-	int ret;
-
-	obj = __vgem_gem_create(dev, size);
-	if (IS_ERR(obj))
-		return ERR_CAST(obj);
-
-	ret = drm_gem_handle_create(file, &obj->base, handle);
-	if (ret) {
-		drm_gem_object_put(&obj->base);
-		return ERR_PTR(ret);
-	}
-
-	return &obj->base;
-}
-
-static int vgem_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
-				struct drm_mode_create_dumb *args)
-{
-	struct drm_gem_object *gem_object;
-	u64 pitch, size;
-
-	pitch = args->width * DIV_ROUND_UP(args->bpp, 8);
-	size = args->height * pitch;
-	if (size == 0)
-		return -EINVAL;
-
-	gem_object = vgem_gem_create(dev, file, &args->handle, size);
-	if (IS_ERR(gem_object))
-		return PTR_ERR(gem_object);
-
-	args->size = gem_object->size;
-	args->pitch = pitch;
-
-	drm_gem_object_put(gem_object);
-
-	DRM_DEBUG("Created object of size %llu\n", args->size);
-
-	return 0;
-}
-
 static struct drm_ioctl_desc vgem_ioctls[] = {
 	DRM_IOCTL_DEF_DRV(VGEM_FENCE_ATTACH, vgem_fence_attach_ioctl, DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(VGEM_FENCE_SIGNAL, vgem_fence_signal_ioctl, DRM_RENDER_ALLOW),
@@ -266,159 +116,23 @@ static const struct file_operations vgem_driver_fops = {
 	.release	= drm_release,
 };
 
-static struct page **vgem_pin_pages(struct drm_vgem_gem_object *bo)
-{
-	mutex_lock(&bo->pages_lock);
-	if (bo->pages_pin_count++ == 0) {
-		struct page **pages;
-
-		pages = drm_gem_get_pages(&bo->base);
-		if (IS_ERR(pages)) {
-			bo->pages_pin_count--;
-			mutex_unlock(&bo->pages_lock);
-			return pages;
-		}
-
-		bo->pages = pages;
-	}
-	mutex_unlock(&bo->pages_lock);
-
-	return bo->pages;
-}
-
-static void vgem_unpin_pages(struct drm_vgem_gem_object *bo)
+static struct drm_gem_object *vgem_gem_create_object(struct drm_device *dev, size_t size)
 {
-	mutex_lock(&bo->pages_lock);
-	if (--bo->pages_pin_count == 0) {
-		drm_gem_put_pages(&bo->base, bo->pages, true, true);
-		bo->pages = NULL;
-	}
-	mutex_unlock(&bo->pages_lock);
-}
+	struct drm_gem_shmem_object *obj;
 
-static int vgem_prime_pin(struct drm_gem_object *obj)
-{
-	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
-	long n_pages = obj->size >> PAGE_SHIFT;
-	struct page **pages;
-
-	pages = vgem_pin_pages(bo);
-	if (IS_ERR(pages))
-		return PTR_ERR(pages);
+	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
+	if (!obj)
+		return NULL;
 
-	/* Flush the object from the CPU cache so that importers can rely
-	 * on coherent indirect access via the exported dma-address.
+	/*
+	 * vgem doesn't have any begin/end cpu access ioctls, therefore must use
+	 * coherent memory or dma-buf sharing just wont work.
 	 */
-	drm_clflush_pages(pages, n_pages);
-
-	return 0;
-}
-
-static void vgem_prime_unpin(struct drm_gem_object *obj)
-{
-	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
-
-	vgem_unpin_pages(bo);
-}
-
-static struct sg_table *vgem_prime_get_sg_table(struct drm_gem_object *obj)
-{
-	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
-
-	return drm_prime_pages_to_sg(obj->dev, bo->pages, bo->base.size >> PAGE_SHIFT);
-}
-
-static struct drm_gem_object* vgem_prime_import(struct drm_device *dev,
-						struct dma_buf *dma_buf)
-{
-	struct vgem_device *vgem = container_of(dev, typeof(*vgem), drm);
-
-	return drm_gem_prime_import_dev(dev, dma_buf, &vgem->platform->dev);
-}
-
-static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
-			struct dma_buf_attachment *attach, struct sg_table *sg)
-{
-	struct drm_vgem_gem_object *obj;
-	int npages;
-
-	obj = __vgem_gem_create(dev, attach->dmabuf->size);
-	if (IS_ERR(obj))
-		return ERR_CAST(obj);
-
-	npages = PAGE_ALIGN(attach->dmabuf->size) / PAGE_SIZE;
-
-	obj->table = sg;
-	obj->pages = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
-	if (!obj->pages) {
-		__vgem_gem_destroy(obj);
-		return ERR_PTR(-ENOMEM);
-	}
+	obj->map_wc = true;
 
-	obj->pages_pin_count++; /* perma-pinned */
-	drm_prime_sg_to_page_array(obj->table, obj->pages, npages);
 	return &obj->base;
 }
 
-static int vgem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
-{
-	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
-	long n_pages = obj->size >> PAGE_SHIFT;
-	struct page **pages;
-	void *vaddr;
-
-	pages = vgem_pin_pages(bo);
-	if (IS_ERR(pages))
-		return PTR_ERR(pages);
-
-	vaddr = vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
-	if (!vaddr)
-		return -ENOMEM;
-	dma_buf_map_set_vaddr(map, vaddr);
-
-	return 0;
-}
-
-static void vgem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
-{
-	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
-
-	vunmap(map->vaddr);
-	vgem_unpin_pages(bo);
-}
-
-static int vgem_prime_mmap(struct drm_gem_object *obj,
-			   struct vm_area_struct *vma)
-{
-	int ret;
-
-	if (obj->size < vma->vm_end - vma->vm_start)
-		return -EINVAL;
-
-	if (!obj->filp)
-		return -ENODEV;
-
-	ret = call_mmap(obj->filp, vma);
-	if (ret)
-		return ret;
-
-	vma_set_file(vma, obj->filp);
-	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
-	vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
-
-	return 0;
-}
-
-static const struct drm_gem_object_funcs vgem_gem_object_funcs = {
-	.free = vgem_gem_free_object,
-	.pin = vgem_prime_pin,
-	.unpin = vgem_prime_unpin,
-	.get_sg_table = vgem_prime_get_sg_table,
-	.vmap = vgem_prime_vmap,
-	.vunmap = vgem_prime_vunmap,
-	.vm_ops = &vgem_gem_vm_ops,
-};
-
 static const struct drm_driver vgem_driver = {
 	.driver_features		= DRIVER_GEM | DRIVER_RENDER,
 	.open				= vgem_open,
@@ -427,13 +141,8 @@ static const struct drm_driver vgem_driver = {
 	.num_ioctls 			= ARRAY_SIZE(vgem_ioctls),
 	.fops				= &vgem_driver_fops,
 
-	.dumb_create			= vgem_gem_dumb_create,
-
-	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
-	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
-	.gem_prime_import = vgem_prime_import,
-	.gem_prime_import_sg_table = vgem_prime_import_sg_table,
-	.gem_prime_mmap = vgem_prime_mmap,
+	DRM_GEM_SHMEM_DRIVER_OPS,
+	.gem_create_object		= vgem_gem_create_object,
 
 	.name	= DRIVER_NAME,
 	.desc	= DRIVER_DESC,
-- 
2.32.0


* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for shmem helpers for vgem (rev6)
  2021-07-13 20:51 [Intel-gfx] [PATCH v4 0/4] shmem helpers for vgem Daniel Vetter
                   ` (3 preceding siblings ...)
  2021-07-13 20:51 ` [Intel-gfx] [PATCH v4 4/4] drm/vgem: use shmem helpers Daniel Vetter
@ 2021-07-13 23:43 ` Patchwork
  2021-07-14  0:11 ` [Intel-gfx] ✗ Fi.CI.BAT: failure " Patchwork
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 26+ messages in thread
From: Patchwork @ 2021-07-13 23:43 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: intel-gfx

== Series Details ==

Series: shmem helpers for vgem (rev6)
URL   : https://patchwork.freedesktop.org/series/90670/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
00c2c63016d4 dma-buf: Require VM_PFNMAP vma for mmap
-:34: WARNING:TYPO_SPELLING: 'entires' may be misspelled - perhaps 'entries'?
#34: 
From auditing the various functions to insert pfn pte entires
                                                      ^^^^^^^

-:39: WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit description (prefer a maximum 75 chars per line)
#39: 
References: https://lore.kernel.org/lkml/CAKMK7uHi+mG0z0HUmNt13QCCvutuRVjpcR0NjRL12k-WbWzkRg@mail.gmail.com/

-:97: WARNING:FROM_SIGN_OFF_MISMATCH: From:/Signed-off-by: email address mismatch: 'From: Daniel Vetter <daniel.vetter@ffwll.ch>' != 'Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>'

total: 0 errors, 3 warnings, 0 checks, 39 lines checked
28388a6256a1 drm/shmem-helper: Switch to vmf_insert_pfn
-:14: WARNING:TYPO_SPELLING: 'wont' may be misspelled - perhaps 'won't'?
#14: 
helpers for these drivers on those platforms. As-is this wont work.
                                                         ^^^^

-:17: WARNING:TYPO_SPELLING: 'wont' may be misspelled - perhaps 'won't'?
#17: 
definitely wont fly in practice since the pages are non-contig to
           ^^^^

-:108: WARNING:FROM_SIGN_OFF_MISMATCH: From:/Signed-off-by: email address mismatch: 'From: Daniel Vetter <daniel.vetter@ffwll.ch>' != 'Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>'

total: 0 errors, 3 warnings, 0 checks, 55 lines checked
7c9f29d37c9c drm/shmem-helpers: Allocate wc pages on x86
-:70: WARNING:FROM_SIGN_OFF_MISMATCH: From:/Signed-off-by: email address mismatch: 'From: Daniel Vetter <daniel.vetter@ffwll.ch>' != 'Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>'

total: 0 errors, 1 warnings, 0 checks, 32 lines checked
2fb90a33618a drm/vgem: use shmem helpers
-:22: ERROR:GIT_COMMIT_ID: Please use git commit description style 'commit <12+ chars of sha1> ("<title line>")' - ie: 'commit 0cf2ef46c6c0 ("drm/shmem-helper: Use cached mappings by default")'
#22: 
v4: I got tricked by 0cf2ef46c6c0 ("drm/shmem-helper: Use cached

-:300: WARNING:TYPO_SPELLING: 'wont' may be misspelled - perhaps 'won't'?
#300: FILE: drivers/gpu/drm/vgem/vgem_drv.c:129:
+	 * coherent memory or dma-buf sharing just wont work.
 	                                           ^^^^

-:431: WARNING:FROM_SIGN_OFF_MISMATCH: From:/Signed-off-by: email address mismatch: 'From: Daniel Vetter <daniel.vetter@ffwll.ch>' != 'Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>'

total: 1 errors, 2 warnings, 0 checks, 375 lines checked



* [Intel-gfx] ✗ Fi.CI.BAT: failure for shmem helpers for vgem (rev6)
  2021-07-13 20:51 [Intel-gfx] [PATCH v4 0/4] shmem helpers for vgem Daniel Vetter
                   ` (4 preceding siblings ...)
  2021-07-13 23:43 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for shmem helpers for vgem (rev6) Patchwork
@ 2021-07-14  0:11 ` Patchwork
  2021-07-16 13:29 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for shmem helpers for vgem (rev8) Patchwork
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 26+ messages in thread
From: Patchwork @ 2021-07-14  0:11 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: intel-gfx



== Series Details ==

Series: shmem helpers for vgem (rev6)
URL   : https://patchwork.freedesktop.org/series/90670/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_10343 -> Patchwork_20593
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with Patchwork_20593 absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_20593, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/index.html

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_20593:

### IGT changes ###

#### Possible regressions ####

  * igt@prime_vgem@basic-fence-mmap:
    - fi-skl-6700k2:      [PASS][1] -> [INCOMPLETE][2]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10343/fi-skl-6700k2/igt@prime_vgem@basic-fence-mmap.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-skl-6700k2/igt@prime_vgem@basic-fence-mmap.html
    - fi-hsw-4770:        [PASS][3] -> [INCOMPLETE][4]
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10343/fi-hsw-4770/igt@prime_vgem@basic-fence-mmap.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-hsw-4770/igt@prime_vgem@basic-fence-mmap.html
    - fi-cfl-8700k:       [PASS][5] -> [INCOMPLETE][6]
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10343/fi-cfl-8700k/igt@prime_vgem@basic-fence-mmap.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-cfl-8700k/igt@prime_vgem@basic-fence-mmap.html
    - fi-bxt-dsi:         [PASS][7] -> [INCOMPLETE][8]
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10343/fi-bxt-dsi/igt@prime_vgem@basic-fence-mmap.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-bxt-dsi/igt@prime_vgem@basic-fence-mmap.html
    - fi-cml-u2:          [PASS][9] -> [INCOMPLETE][10]
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10343/fi-cml-u2/igt@prime_vgem@basic-fence-mmap.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-cml-u2/igt@prime_vgem@basic-fence-mmap.html
    - fi-elk-e7500:       [PASS][11] -> [INCOMPLETE][12]
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10343/fi-elk-e7500/igt@prime_vgem@basic-fence-mmap.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-elk-e7500/igt@prime_vgem@basic-fence-mmap.html
    - fi-ilk-650:         [PASS][13] -> [INCOMPLETE][14]
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10343/fi-ilk-650/igt@prime_vgem@basic-fence-mmap.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-ilk-650/igt@prime_vgem@basic-fence-mmap.html
    - fi-ivb-3770:        [PASS][15] -> [INCOMPLETE][16]
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10343/fi-ivb-3770/igt@prime_vgem@basic-fence-mmap.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-ivb-3770/igt@prime_vgem@basic-fence-mmap.html
    - fi-cfl-guc:         [PASS][17] -> [INCOMPLETE][18]
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10343/fi-cfl-guc/igt@prime_vgem@basic-fence-mmap.html
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-cfl-guc/igt@prime_vgem@basic-fence-mmap.html
    - fi-kbl-soraka:      [PASS][19] -> [INCOMPLETE][20]
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10343/fi-kbl-soraka/igt@prime_vgem@basic-fence-mmap.html
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-kbl-soraka/igt@prime_vgem@basic-fence-mmap.html
    - fi-tgl-y:           [PASS][21] -> [INCOMPLETE][22]
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10343/fi-tgl-y/igt@prime_vgem@basic-fence-mmap.html
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-tgl-y/igt@prime_vgem@basic-fence-mmap.html
    - fi-bsw-kefka:       [PASS][23] -> [INCOMPLETE][24]
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10343/fi-bsw-kefka/igt@prime_vgem@basic-fence-mmap.html
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-bsw-kefka/igt@prime_vgem@basic-fence-mmap.html
    - fi-kbl-x1275:       [PASS][25] -> [INCOMPLETE][26]
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10343/fi-kbl-x1275/igt@prime_vgem@basic-fence-mmap.html
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-kbl-x1275/igt@prime_vgem@basic-fence-mmap.html
    - fi-glk-dsi:         [PASS][27] -> [INCOMPLETE][28]
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10343/fi-glk-dsi/igt@prime_vgem@basic-fence-mmap.html
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-glk-dsi/igt@prime_vgem@basic-fence-mmap.html
    - fi-kbl-8809g:       [PASS][29] -> [INCOMPLETE][30]
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10343/fi-kbl-8809g/igt@prime_vgem@basic-fence-mmap.html
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-kbl-8809g/igt@prime_vgem@basic-fence-mmap.html
    - fi-icl-y:           [PASS][31] -> [INCOMPLETE][32]
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10343/fi-icl-y/igt@prime_vgem@basic-fence-mmap.html
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-icl-y/igt@prime_vgem@basic-fence-mmap.html
    - fi-kbl-guc:         [PASS][33] -> [INCOMPLETE][34]
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10343/fi-kbl-guc/igt@prime_vgem@basic-fence-mmap.html
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-kbl-guc/igt@prime_vgem@basic-fence-mmap.html
    - fi-bsw-nick:        [PASS][35] -> [INCOMPLETE][36]
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10343/fi-bsw-nick/igt@prime_vgem@basic-fence-mmap.html
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-bsw-nick/igt@prime_vgem@basic-fence-mmap.html
    - fi-kbl-7500u:       [PASS][37] -> [INCOMPLETE][38]
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10343/fi-kbl-7500u/igt@prime_vgem@basic-fence-mmap.html
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-kbl-7500u/igt@prime_vgem@basic-fence-mmap.html
    - fi-cfl-8109u:       [PASS][39] -> [INCOMPLETE][40]
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10343/fi-cfl-8109u/igt@prime_vgem@basic-fence-mmap.html
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-cfl-8109u/igt@prime_vgem@basic-fence-mmap.html
    - fi-bwr-2160:        [PASS][41] -> [INCOMPLETE][42]
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10343/fi-bwr-2160/igt@prime_vgem@basic-fence-mmap.html
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-bwr-2160/igt@prime_vgem@basic-fence-mmap.html
    - fi-bdw-5557u:       [PASS][43] -> [INCOMPLETE][44]
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10343/fi-bdw-5557u/igt@prime_vgem@basic-fence-mmap.html
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-bdw-5557u/igt@prime_vgem@basic-fence-mmap.html
    - fi-skl-guc:         [PASS][45] -> [INCOMPLETE][46]
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10343/fi-skl-guc/igt@prime_vgem@basic-fence-mmap.html
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-skl-guc/igt@prime_vgem@basic-fence-mmap.html
    - fi-kbl-7567u:       [PASS][47] -> [INCOMPLETE][48]
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10343/fi-kbl-7567u/igt@prime_vgem@basic-fence-mmap.html
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-kbl-7567u/igt@prime_vgem@basic-fence-mmap.html
    - fi-snb-2520m:       [PASS][49] -> [INCOMPLETE][50]
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10343/fi-snb-2520m/igt@prime_vgem@basic-fence-mmap.html
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-snb-2520m/igt@prime_vgem@basic-fence-mmap.html

  * igt@runner@aborted:
    - fi-ilk-650:         NOTRUN -> [FAIL][51]
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-ilk-650/igt@runner@aborted.html
    - fi-snb-2520m:       NOTRUN -> [FAIL][52]
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-snb-2520m/igt@runner@aborted.html
    - fi-ivb-3770:        NOTRUN -> [FAIL][53]
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-ivb-3770/igt@runner@aborted.html
    - fi-elk-e7500:       NOTRUN -> [FAIL][54]
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-elk-e7500/igt@runner@aborted.html

  
#### Suppressed ####

  The following results come from untrusted machines, tests, or statuses.
  They do not affect the overall result.

  * igt@prime_vgem@basic-fence-mmap:
    - {fi-tgl-1115g4}:    [PASS][55] -> [INCOMPLETE][56]
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10343/fi-tgl-1115g4/igt@prime_vgem@basic-fence-mmap.html
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-tgl-1115g4/igt@prime_vgem@basic-fence-mmap.html
    - {fi-jsl-1}:         [PASS][57] -> [INCOMPLETE][58]
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10343/fi-jsl-1/igt@prime_vgem@basic-fence-mmap.html
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-jsl-1/igt@prime_vgem@basic-fence-mmap.html
    - {fi-ehl-2}:         [PASS][59] -> [INCOMPLETE][60]
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10343/fi-ehl-2/igt@prime_vgem@basic-fence-mmap.html
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-ehl-2/igt@prime_vgem@basic-fence-mmap.html
    - {fi-hsw-gt1}:       [PASS][61] -> [INCOMPLETE][62]
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10343/fi-hsw-gt1/igt@prime_vgem@basic-fence-mmap.html
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-hsw-gt1/igt@prime_vgem@basic-fence-mmap.html
    - {fi-tgl-dsi}:       [PASS][63] -> [INCOMPLETE][64]
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10343/fi-tgl-dsi/igt@prime_vgem@basic-fence-mmap.html
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-tgl-dsi/igt@prime_vgem@basic-fence-mmap.html

  * igt@runner@aborted:
    - {fi-ehl-2}:         NOTRUN -> [FAIL][65]
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-ehl-2/igt@runner@aborted.html

  
Known issues
------------

  Here are the changes found in Patchwork_20593 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@prime_vgem@basic-fence-mmap:
    - fi-pnv-d510:        [PASS][66] -> [INCOMPLETE][67] ([i915#299])
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10343/fi-pnv-d510/igt@prime_vgem@basic-fence-mmap.html
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-pnv-d510/igt@prime_vgem@basic-fence-mmap.html

  * igt@runner@aborted:
    - fi-pnv-d510:        NOTRUN -> [FAIL][68] ([i915#2403] / [i915#2505] / [i915#2722])
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-pnv-d510/igt@runner@aborted.html
    - fi-kbl-x1275:       NOTRUN -> [FAIL][69] ([i915#2722] / [i915#3363] / [i915#409])
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-kbl-x1275/igt@runner@aborted.html
    - fi-bsw-kefka:       NOTRUN -> [FAIL][70] ([i915#2722])
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-bsw-kefka/igt@runner@aborted.html
    - fi-cfl-8700k:       NOTRUN -> [FAIL][71] ([i915#2722] / [i915#3363] / [i915#409])
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-cfl-8700k/igt@runner@aborted.html
    - fi-tgl-y:           NOTRUN -> [FAIL][72] ([i915#2722] / [i915#409])
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-tgl-y/igt@runner@aborted.html
    - fi-cfl-8109u:       NOTRUN -> [FAIL][73] ([i915#2722] / [i915#3363] / [i915#409])
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-cfl-8109u/igt@runner@aborted.html
    - fi-glk-dsi:         NOTRUN -> [FAIL][74] ([i915#2722] / [i915#3363] / [i915#409] / [k.org#202321])
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-glk-dsi/igt@runner@aborted.html
    - fi-bsw-nick:        NOTRUN -> [FAIL][75] ([i915#2722])
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-bsw-nick/igt@runner@aborted.html
    - fi-kbl-8809g:       NOTRUN -> [FAIL][76] ([i915#2722] / [i915#3363] / [i915#409])
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-kbl-8809g/igt@runner@aborted.html
    - fi-bdw-5557u:       NOTRUN -> [FAIL][77] ([i915#2722])
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-bdw-5557u/igt@runner@aborted.html
    - fi-bwr-2160:        NOTRUN -> [FAIL][78] ([i915#2505] / [i915#2722])
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-bwr-2160/igt@runner@aborted.html
    - fi-kbl-soraka:      NOTRUN -> [FAIL][79] ([i915#2722] / [i915#3363] / [i915#409])
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-kbl-soraka/igt@runner@aborted.html
    - fi-hsw-4770:        NOTRUN -> [FAIL][80] ([i915#2505] / [i915#2722])
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-hsw-4770/igt@runner@aborted.html
    - fi-kbl-7500u:       NOTRUN -> [FAIL][81] ([i915#2722] / [i915#3363] / [i915#409])
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-kbl-7500u/igt@runner@aborted.html
    - fi-kbl-guc:         NOTRUN -> [FAIL][82] ([i915#2722] / [i915#3363] / [i915#409])
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-kbl-guc/igt@runner@aborted.html
    - fi-cml-u2:          NOTRUN -> [FAIL][83] ([i915#2722] / [i915#3363] / [i915#409])
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-cml-u2/igt@runner@aborted.html
    - fi-bxt-dsi:         NOTRUN -> [FAIL][84] ([i915#2722] / [i915#3363] / [i915#409])
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-bxt-dsi/igt@runner@aborted.html
    - fi-cfl-guc:         NOTRUN -> [FAIL][85] ([i915#2722] / [i915#3363] / [i915#409])
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-cfl-guc/igt@runner@aborted.html
    - fi-icl-y:           NOTRUN -> [FAIL][86] ([i915#409])
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-icl-y/igt@runner@aborted.html
    - fi-kbl-7567u:       NOTRUN -> [FAIL][87] ([i915#2722] / [i915#3363] / [i915#409])
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-kbl-7567u/igt@runner@aborted.html
    - fi-skl-guc:         NOTRUN -> [FAIL][88] ([i915#2722] / [i915#3363] / [i915#409])
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-skl-guc/igt@runner@aborted.html
    - fi-skl-6700k2:      NOTRUN -> [FAIL][89] ([i915#2722] / [i915#3363] / [i915#409])
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/fi-skl-6700k2/igt@runner@aborted.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [i915#2403]: https://gitlab.freedesktop.org/drm/intel/issues/2403
  [i915#2505]: https://gitlab.freedesktop.org/drm/intel/issues/2505
  [i915#2722]: https://gitlab.freedesktop.org/drm/intel/issues/2722
  [i915#299]: https://gitlab.freedesktop.org/drm/intel/issues/299
  [i915#3363]: https://gitlab.freedesktop.org/drm/intel/issues/3363
  [i915#3708]: https://gitlab.freedesktop.org/drm/intel/issues/3708
  [i915#3717]: https://gitlab.freedesktop.org/drm/intel/issues/3717
  [i915#409]: https://gitlab.freedesktop.org/drm/intel/issues/409
  [k.org#202321]: https://bugzilla.kernel.org/show_bug.cgi?id=202321


Participating hosts (39 -> 36)
------------------------------

  Missing    (3): fi-ilk-m540 fi-bdw-samus fi-hsw-4200u 


Build changes
-------------

  * IGT: IGT_6137 -> IGTPW_6018
  * Linux: CI_DRM_10343 -> Patchwork_20593

  CI-20190529: 20190529
  CI_DRM_10343: 5b5a6e26ea2a5dc93aba918c28159c46a1cb3b02 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGTPW_6018: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6018/index.html
  IGT_6137: 2fee489255f7a8cd6a584373c30e3d44a07a78ea @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  Patchwork_20593: 2fb90a33618a73f3ba8024460210bd520f60a6e7 @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

2fb90a33618a drm/vgem: use shmem helpers
7c9f29d37c9c drm/shmem-helpers: Allocate wc pages on x86
28388a6256a1 drm/shmem-helper: Switch to vmf_insert_pfn
00c2c63016d4 dma-buf: Require VM_PFNMAP vma for mmap

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20593/index.html


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Intel-gfx] [PATCH v4 3/4] drm/shmem-helpers: Allocate wc pages on x86
  2021-07-13 20:51 ` [Intel-gfx] [PATCH v4 3/4] drm/shmem-helpers: Allocate wc pages on x86 Daniel Vetter
@ 2021-07-14 11:54   ` Christian König
  2021-07-14 12:48     ` Daniel Vetter
  2021-07-22 18:40   ` Thomas Zimmermann
  1 sibling, 1 reply; 26+ messages in thread
From: Christian König @ 2021-07-14 11:54 UTC (permalink / raw)
  To: Daniel Vetter, Intel Graphics Development
  Cc: Thomas Hellström, David Airlie, Maxime Ripard,
	DRI Development, Thomas Zimmermann, Daniel Vetter

Am 13.07.21 um 22:51 schrieb Daniel Vetter:
> intel-gfx-ci realized that something is not quite coherent anymore on
> some platforms for our i915+vgem tests, when I tried to switch vgem
> over to shmem helpers.
>
> After lots of head-scratching I realized that I've removed calls to
> drm_clflush. And we need those. To make this a bit cleaner use the
> same page allocation tooling as ttm, which does internally clflush
> (and more, as needed on any platform instead of just the intel x86
> cpus i915 can be combined with).
>
> Unfortunately this doesn't exist on arm, or as a generic feature. For
> that I think only the dma-api can get at wc memory reliably, so maybe
> we'd need some kind of GFP_WC flag to do this properly.

The problem is that this stuff is extremely architecture specific. So 
GFP_WC and GFP_UNCACHED are really what we should aim for in the long term.

And as far as I know we have at least the following possibilities for how 
it is implemented:

* A fixed amount of registers which tells the CPU the caching behavior 
for a memory region, e.g. MTRR.
* Some bits of the memory pointers used, e.g. you see the same memory at 
different locations with different caching attributes.
* Some bits in the CPUs page table.
* Some bits in a separate page table.

On top of that there is the PCIe specification which defines non-cache 
snooping access as an extension.

Mixing that with the CPU caching behavior gets you some really nice ways 
to break a driver. In general x86 seems to be rather graceful, but arm 
and PowerPC are easily pissed if you mess that up.
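
For the x86 page-table case specifically, here is a minimal sketch of the CONFIG_X86-only interface the patch under discussion builds on (the wrapper name `sketch_mark_wc` is made up for illustration; `set_pages_array_wc()`/`set_pages_array_wb()` are the real helpers):

```c
#ifdef CONFIG_X86
#include <asm/set_memory.h>

/* Rewrite the caching-attribute bits in the kernel page tables for
 * each page and flush caches.  Must be paired with
 * set_pages_array_wb() before the pages go back to the page
 * allocator, otherwise later users inherit the WC attribute. */
static int sketch_mark_wc(struct page **pages, int npages)
{
	return set_pages_array_wc(pages, npages);
}
#endif
```

Other architectures expose no generic equivalent of this interface, which is why the calls in the patch stay behind #ifdef CONFIG_X86.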

> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> Cc: Christian König <christian.koenig@amd.com>
> Cc: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Cc: Maxime Ripard <mripard@kernel.org>
> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> Cc: David Airlie <airlied@linux.ie>
> Cc: Daniel Vetter <daniel@ffwll.ch>

Acked-by: Christian König <christian.koenig@amd.com>

Regards,
Christian.

> ---
>   drivers/gpu/drm/drm_gem_shmem_helper.c | 14 ++++++++++++++
>   1 file changed, 14 insertions(+)
>
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index 296ab1b7c07f..657d2490aaa5 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -10,6 +10,10 @@
>   #include <linux/slab.h>
>   #include <linux/vmalloc.h>
>   
> +#ifdef CONFIG_X86
> +#include <asm/set_memory.h>
> +#endif
> +
>   #include <drm/drm.h>
>   #include <drm/drm_device.h>
>   #include <drm/drm_drv.h>
> @@ -162,6 +166,11 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
>   		return PTR_ERR(pages);
>   	}
>   
> +#ifdef CONFIG_X86
> +	if (shmem->map_wc)
> +		set_pages_array_wc(pages, obj->size >> PAGE_SHIFT);
> +#endif
> +
>   	shmem->pages = pages;
>   
>   	return 0;
> @@ -203,6 +212,11 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
>   	if (--shmem->pages_use_count > 0)
>   		return;
>   
> +#ifdef CONFIG_X86
> +	if (shmem->map_wc)
> +		set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
> +#endif
> +
>   	drm_gem_put_pages(obj, shmem->pages,
>   			  shmem->pages_mark_dirty_on_put,
>   			  shmem->pages_mark_accessed_on_put);


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [Intel-gfx] [PATCH] drm/vgem: use shmem helpers
  2021-07-13 20:51 ` [Intel-gfx] [PATCH v4 4/4] drm/vgem: use shmem helpers Daniel Vetter
@ 2021-07-14 12:45   ` Daniel Vetter
  2021-07-22 18:50   ` [Intel-gfx] [PATCH v4 4/4] " Thomas Zimmermann
  1 sibling, 0 replies; 26+ messages in thread
From: Daniel Vetter @ 2021-07-14 12:45 UTC (permalink / raw)
  To: Intel Graphics Development
  Cc: Daniel Vetter, DRI Development, Sumit Semwal, Melissa Wen,
	John Stultz, Thomas Zimmermann, Daniel Vetter, Chris Wilson,
	Christian König

Aside from deleting lots of code the real motivation here is to switch
the mmap over to VM_PFNMAP, to be more consistent with what real gpu
drivers do. They're all VM_PFNMAP, which means get_user_pages doesn't
work, and even if there is a struct page behind the mapping, touching
it and mucking around with its refcount can upset drivers badly.
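
A minimal kernel-side sketch (hedged: `struct my_bo` and its pinned page array are hypothetical) of what a VM_PFNMAP fault handler looks like, and why get_user_pages cannot pin through it:

```c
static vm_fault_t sketch_fault(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	struct my_bo *bo = vma->vm_private_data;  /* hypothetical object */
	/* vmf->pgoff carries the fake GEM offset, so index by address */
	pgoff_t off = (vmf->address - vma->vm_start) >> PAGE_SHIFT;

	/* vmf_insert_pfn() maps a bare pfn and takes no struct-page
	 * refcount (unlike vm_insert_page()), so get_user_pages() on
	 * this vma fails instead of pinning pages behind the driver's
	 * back. */
	return vmf_insert_pfn(vma, vmf->address, page_to_pfn(bo->pages[off]));
}
```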

v2: Review from Thomas:
- sort #include
- drop more dead code that I didn't spot somehow

v3: select DRM_GEM_SHMEM_HELPER to make it build (intel-gfx-ci)

v4: I got tricked by 0cf2ef46c6c0 ("drm/shmem-helper: Use cached
mappings by default"), and we need WC in vgem because vgem doesn't
have explicit begin/end cpu access ioctls.

Also add a comment why exactly vgem has to use wc.

v5: Don't set obj->base.funcs, it will default to drm_gem_shmem_funcs
(Thomas)

v6: vgem also needs an MMU for remapping

v7: I absolutely butchered the rebases over the vgem mmap change and
revert and broke the patch. Actually go back to v6 from before the
vgem mmap changes.

Cc: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Thomas Zimmermann <tzimmermann@suse.de>
Cc: John Stultz <john.stultz@linaro.org>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: "Christian König" <christian.koenig@amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Melissa Wen <melissa.srw@gmail.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/Kconfig         |   5 +-
 drivers/gpu/drm/vgem/vgem_drv.c | 342 ++------------------------------
 2 files changed, 16 insertions(+), 331 deletions(-)

diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index 314eefa39892..28f7d2006e8b 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -272,7 +272,8 @@ source "drivers/gpu/drm/kmb/Kconfig"
 
 config DRM_VGEM
 	tristate "Virtual GEM provider"
-	depends on DRM
+	depends on DRM && MMU
+	select DRM_GEM_SHMEM_HELPER
 	help
 	  Choose this option to get a virtual graphics memory manager,
 	  as used by Mesa's software renderer for enhanced performance.
@@ -280,7 +281,7 @@ config DRM_VGEM
 
 config DRM_VKMS
 	tristate "Virtual KMS (EXPERIMENTAL)"
-	depends on DRM
+	depends on DRM && MMU
 	select DRM_KMS_HELPER
 	select DRM_GEM_SHMEM_HELPER
 	select CRC32
diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
index bf38a7e319d1..a87eafa89e9f 100644
--- a/drivers/gpu/drm/vgem/vgem_drv.c
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -38,6 +38,7 @@
 
 #include <drm/drm_drv.h>
 #include <drm/drm_file.h>
+#include <drm/drm_gem_shmem_helper.h>
 #include <drm/drm_ioctl.h>
 #include <drm/drm_managed.h>
 #include <drm/drm_prime.h>
@@ -50,87 +51,11 @@
 #define DRIVER_MAJOR	1
 #define DRIVER_MINOR	0
 
-static const struct drm_gem_object_funcs vgem_gem_object_funcs;
-
 static struct vgem_device {
 	struct drm_device drm;
 	struct platform_device *platform;
 } *vgem_device;
 
-static void vgem_gem_free_object(struct drm_gem_object *obj)
-{
-	struct drm_vgem_gem_object *vgem_obj = to_vgem_bo(obj);
-
-	kvfree(vgem_obj->pages);
-	mutex_destroy(&vgem_obj->pages_lock);
-
-	if (obj->import_attach)
-		drm_prime_gem_destroy(obj, vgem_obj->table);
-
-	drm_gem_object_release(obj);
-	kfree(vgem_obj);
-}
-
-static vm_fault_t vgem_gem_fault(struct vm_fault *vmf)
-{
-	struct vm_area_struct *vma = vmf->vma;
-	struct drm_vgem_gem_object *obj = vma->vm_private_data;
-	/* We don't use vmf->pgoff since that has the fake offset */
-	unsigned long vaddr = vmf->address;
-	vm_fault_t ret = VM_FAULT_SIGBUS;
-	loff_t num_pages;
-	pgoff_t page_offset;
-	page_offset = (vaddr - vma->vm_start) >> PAGE_SHIFT;
-
-	num_pages = DIV_ROUND_UP(obj->base.size, PAGE_SIZE);
-
-	if (page_offset >= num_pages)
-		return VM_FAULT_SIGBUS;
-
-	mutex_lock(&obj->pages_lock);
-	if (obj->pages) {
-		get_page(obj->pages[page_offset]);
-		vmf->page = obj->pages[page_offset];
-		ret = 0;
-	}
-	mutex_unlock(&obj->pages_lock);
-	if (ret) {
-		struct page *page;
-
-		page = shmem_read_mapping_page(
-					file_inode(obj->base.filp)->i_mapping,
-					page_offset);
-		if (!IS_ERR(page)) {
-			vmf->page = page;
-			ret = 0;
-		} else switch (PTR_ERR(page)) {
-			case -ENOSPC:
-			case -ENOMEM:
-				ret = VM_FAULT_OOM;
-				break;
-			case -EBUSY:
-				ret = VM_FAULT_RETRY;
-				break;
-			case -EFAULT:
-			case -EINVAL:
-				ret = VM_FAULT_SIGBUS;
-				break;
-			default:
-				WARN_ON(PTR_ERR(page));
-				ret = VM_FAULT_SIGBUS;
-				break;
-		}
-
-	}
-	return ret;
-}
-
-static const struct vm_operations_struct vgem_gem_vm_ops = {
-	.fault = vgem_gem_fault,
-	.open = drm_gem_vm_open,
-	.close = drm_gem_vm_close,
-};
-
 static int vgem_open(struct drm_device *dev, struct drm_file *file)
 {
 	struct vgem_file *vfile;
@@ -159,266 +84,30 @@ static void vgem_postclose(struct drm_device *dev, struct drm_file *file)
 	kfree(vfile);
 }
 
-static struct drm_vgem_gem_object *__vgem_gem_create(struct drm_device *dev,
-						unsigned long size)
-{
-	struct drm_vgem_gem_object *obj;
-	int ret;
-
-	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
-	if (!obj)
-		return ERR_PTR(-ENOMEM);
-
-	obj->base.funcs = &vgem_gem_object_funcs;
-
-	ret = drm_gem_object_init(dev, &obj->base, roundup(size, PAGE_SIZE));
-	if (ret) {
-		kfree(obj);
-		return ERR_PTR(ret);
-	}
-
-	mutex_init(&obj->pages_lock);
-
-	return obj;
-}
-
-static void __vgem_gem_destroy(struct drm_vgem_gem_object *obj)
-{
-	drm_gem_object_release(&obj->base);
-	kfree(obj);
-}
-
-static struct drm_gem_object *vgem_gem_create(struct drm_device *dev,
-					      struct drm_file *file,
-					      unsigned int *handle,
-					      unsigned long size)
-{
-	struct drm_vgem_gem_object *obj;
-	int ret;
-
-	obj = __vgem_gem_create(dev, size);
-	if (IS_ERR(obj))
-		return ERR_CAST(obj);
-
-	ret = drm_gem_handle_create(file, &obj->base, handle);
-	if (ret) {
-		drm_gem_object_put(&obj->base);
-		return ERR_PTR(ret);
-	}
-
-	return &obj->base;
-}
-
-static int vgem_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
-				struct drm_mode_create_dumb *args)
-{
-	struct drm_gem_object *gem_object;
-	u64 pitch, size;
-
-	pitch = args->width * DIV_ROUND_UP(args->bpp, 8);
-	size = args->height * pitch;
-	if (size == 0)
-		return -EINVAL;
-
-	gem_object = vgem_gem_create(dev, file, &args->handle, size);
-	if (IS_ERR(gem_object))
-		return PTR_ERR(gem_object);
-
-	args->size = gem_object->size;
-	args->pitch = pitch;
-
-	drm_gem_object_put(gem_object);
-
-	DRM_DEBUG("Created object of size %llu\n", args->size);
-
-	return 0;
-}
-
 static struct drm_ioctl_desc vgem_ioctls[] = {
 	DRM_IOCTL_DEF_DRV(VGEM_FENCE_ATTACH, vgem_fence_attach_ioctl, DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(VGEM_FENCE_SIGNAL, vgem_fence_signal_ioctl, DRM_RENDER_ALLOW),
 };
 
-static int vgem_mmap(struct file *filp, struct vm_area_struct *vma)
-{
-	unsigned long flags = vma->vm_flags;
-	int ret;
-
-	ret = drm_gem_mmap(filp, vma);
-	if (ret)
-		return ret;
-
-	/* Keep the WC mmaping set by drm_gem_mmap() but our pages
-	 * are ordinary and not special.
-	 */
-	vma->vm_flags = flags | VM_DONTEXPAND | VM_DONTDUMP;
-	return 0;
-}
+DEFINE_DRM_GEM_FOPS(vgem_driver_fops);
 
-static const struct file_operations vgem_driver_fops = {
-	.owner		= THIS_MODULE,
-	.open		= drm_open,
-	.mmap		= vgem_mmap,
-	.poll		= drm_poll,
-	.read		= drm_read,
-	.unlocked_ioctl = drm_ioctl,
-	.compat_ioctl	= drm_compat_ioctl,
-	.release	= drm_release,
-};
-
-static struct page **vgem_pin_pages(struct drm_vgem_gem_object *bo)
-{
-	mutex_lock(&bo->pages_lock);
-	if (bo->pages_pin_count++ == 0) {
-		struct page **pages;
-
-		pages = drm_gem_get_pages(&bo->base);
-		if (IS_ERR(pages)) {
-			bo->pages_pin_count--;
-			mutex_unlock(&bo->pages_lock);
-			return pages;
-		}
-
-		bo->pages = pages;
-	}
-	mutex_unlock(&bo->pages_lock);
-
-	return bo->pages;
-}
-
-static void vgem_unpin_pages(struct drm_vgem_gem_object *bo)
-{
-	mutex_lock(&bo->pages_lock);
-	if (--bo->pages_pin_count == 0) {
-		drm_gem_put_pages(&bo->base, bo->pages, true, true);
-		bo->pages = NULL;
-	}
-	mutex_unlock(&bo->pages_lock);
-}
-
-static int vgem_prime_pin(struct drm_gem_object *obj)
+static struct drm_gem_object *vgem_gem_create_object(struct drm_device *dev, size_t size)
 {
-	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
-	long n_pages = obj->size >> PAGE_SHIFT;
-	struct page **pages;
+	struct drm_gem_shmem_object *obj;
 
-	pages = vgem_pin_pages(bo);
-	if (IS_ERR(pages))
-		return PTR_ERR(pages);
+	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
+	if (!obj)
+		return NULL;
 
-	/* Flush the object from the CPU cache so that importers can rely
-	 * on coherent indirect access via the exported dma-address.
+	/*
+	 * vgem doesn't have any begin/end cpu access ioctls, therefore it must
+	 * use coherent memory, or dma-buf sharing just won't work.
 	 */
-	drm_clflush_pages(pages, n_pages);
-
-	return 0;
-}
-
-static void vgem_prime_unpin(struct drm_gem_object *obj)
-{
-	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
-
-	vgem_unpin_pages(bo);
-}
-
-static struct sg_table *vgem_prime_get_sg_table(struct drm_gem_object *obj)
-{
-	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
-
-	return drm_prime_pages_to_sg(obj->dev, bo->pages, bo->base.size >> PAGE_SHIFT);
-}
-
-static struct drm_gem_object* vgem_prime_import(struct drm_device *dev,
-						struct dma_buf *dma_buf)
-{
-	struct vgem_device *vgem = container_of(dev, typeof(*vgem), drm);
-
-	return drm_gem_prime_import_dev(dev, dma_buf, &vgem->platform->dev);
-}
-
-static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
-			struct dma_buf_attachment *attach, struct sg_table *sg)
-{
-	struct drm_vgem_gem_object *obj;
-	int npages;
-
-	obj = __vgem_gem_create(dev, attach->dmabuf->size);
-	if (IS_ERR(obj))
-		return ERR_CAST(obj);
+	obj->map_wc = true;
 
-	npages = PAGE_ALIGN(attach->dmabuf->size) / PAGE_SIZE;
-
-	obj->table = sg;
-	obj->pages = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
-	if (!obj->pages) {
-		__vgem_gem_destroy(obj);
-		return ERR_PTR(-ENOMEM);
-	}
-
-	obj->pages_pin_count++; /* perma-pinned */
-	drm_prime_sg_to_page_array(obj->table, obj->pages, npages);
 	return &obj->base;
 }
 
-static int vgem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
-{
-	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
-	long n_pages = obj->size >> PAGE_SHIFT;
-	struct page **pages;
-	void *vaddr;
-
-	pages = vgem_pin_pages(bo);
-	if (IS_ERR(pages))
-		return PTR_ERR(pages);
-
-	vaddr = vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
-	if (!vaddr)
-		return -ENOMEM;
-	dma_buf_map_set_vaddr(map, vaddr);
-
-	return 0;
-}
-
-static void vgem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
-{
-	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
-
-	vunmap(map->vaddr);
-	vgem_unpin_pages(bo);
-}
-
-static int vgem_prime_mmap(struct drm_gem_object *obj,
-			   struct vm_area_struct *vma)
-{
-	int ret;
-
-	if (obj->size < vma->vm_end - vma->vm_start)
-		return -EINVAL;
-
-	if (!obj->filp)
-		return -ENODEV;
-
-	ret = call_mmap(obj->filp, vma);
-	if (ret)
-		return ret;
-
-	vma_set_file(vma, obj->filp);
-	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
-	vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
-
-	return 0;
-}
-
-static const struct drm_gem_object_funcs vgem_gem_object_funcs = {
-	.free = vgem_gem_free_object,
-	.pin = vgem_prime_pin,
-	.unpin = vgem_prime_unpin,
-	.get_sg_table = vgem_prime_get_sg_table,
-	.vmap = vgem_prime_vmap,
-	.vunmap = vgem_prime_vunmap,
-	.vm_ops = &vgem_gem_vm_ops,
-};
-
 static const struct drm_driver vgem_driver = {
 	.driver_features		= DRIVER_GEM | DRIVER_RENDER,
 	.open				= vgem_open,
@@ -427,13 +116,8 @@ static const struct drm_driver vgem_driver = {
 	.num_ioctls 			= ARRAY_SIZE(vgem_ioctls),
 	.fops				= &vgem_driver_fops,
 
-	.dumb_create			= vgem_gem_dumb_create,
-
-	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
-	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
-	.gem_prime_import = vgem_prime_import,
-	.gem_prime_import_sg_table = vgem_prime_import_sg_table,
-	.gem_prime_mmap = vgem_prime_mmap,
+	DRM_GEM_SHMEM_DRIVER_OPS,
+	.gem_create_object		= vgem_gem_create_object,
 
 	.name	= DRIVER_NAME,
 	.desc	= DRIVER_DESC,
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [Intel-gfx] [PATCH v4 3/4] drm/shmem-helpers: Allocate wc pages on x86
  2021-07-14 11:54   ` Christian König
@ 2021-07-14 12:48     ` Daniel Vetter
  2021-07-14 12:58       ` Christian König
  0 siblings, 1 reply; 26+ messages in thread
From: Daniel Vetter @ 2021-07-14 12:48 UTC (permalink / raw)
  To: Christian König
  Cc: Thomas Hellström, David Airlie, Daniel Vetter,
	Intel Graphics Development, DRI Development, Maxime Ripard,
	Thomas Zimmermann, Daniel Vetter

On Wed, Jul 14, 2021 at 01:54:50PM +0200, Christian König wrote:
> On 13.07.21 at 22:51, Daniel Vetter wrote:
> > intel-gfx-ci realized that something is not quite coherent anymore on
> > some platforms for our i915+vgem tests, when I tried to switch vgem
> > over to shmem helpers.
> > 
> > After lots of head-scratching I realized that I've removed calls to
> > drm_clflush. And we need those. To make this a bit cleaner use the
> > same page allocation tooling as ttm, which does internally clflush
> > (and more, as needed on any platform instead of just the intel x86
> > cpus i915 can be combined with).
> > 
> > Unfortunately this doesn't exist on arm, or as a generic feature. For
> > that I think only the dma-api can get at wc memory reliably, so maybe
> > we'd need some kind of GFP_WC flag to do this properly.
> 
> The problem is that this stuff is extremely architecture specific. So GFP_WC
> and GFP_UNCACHED are really what we should aim for in the long term.
> 
> And as far as I know we have at least the following possibilities for how
> it is implemented:
> 
> * A fixed set of registers that tells the CPU the caching behavior for a
> memory region, e.g. MTRR.
> * Some bits of the memory pointers used, e.g. you see the same memory at
> different locations with different caching attributes.
> * Some bits in the CPUs page table.
> * Some bits in a separate page table.
> 
> On top of that there is the PCIe specification which defines non-cache
> snooping access as an extension.

Yeah dma-buf is extremely ill-defined even on x86 if you combine these
all. We just play a game of whack-a-mole with the cacheline dirt until
it's gone.

That's the other piece here, how do you even make sure that the page is
properly flushed and ready for wc access:
- easy case is x86 with clflush available pretty much everywhere (since
  10+ years at least)
- next are cpus which have some cache flush instructions, but it's highly
  cpu model specific
- next up is the same, but you absolutely have to make sure there's no
  other mapping around anymore or the coherency fabric just dies
- and I'm pretty sure there's worse stuff where you de facto can only
  allocate wc memory that's set aside at boot-up and that's all you ever
  get.

Cheers, Daniel

> Mixing that with the CPU caching behavior gets you some really nice ways to
> break a driver. In general x86 seems to be rather graceful, but arm and
> PowerPC are easily pissed if you mess that up.
> 
> > Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> > Cc: Christian König <christian.koenig@amd.com>
> > Cc: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
> > Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> > Cc: Maxime Ripard <mripard@kernel.org>
> > Cc: Thomas Zimmermann <tzimmermann@suse.de>
> > Cc: David Airlie <airlied@linux.ie>
> > Cc: Daniel Vetter <daniel@ffwll.ch>
> 
> Acked-by: Christian König <christian.koenig@amd.com>
> 
> Regards,
> Christian.
> 
> > ---
> >   drivers/gpu/drm/drm_gem_shmem_helper.c | 14 ++++++++++++++
> >   1 file changed, 14 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > index 296ab1b7c07f..657d2490aaa5 100644
> > --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> > +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > @@ -10,6 +10,10 @@
> >   #include <linux/slab.h>
> >   #include <linux/vmalloc.h>
> > +#ifdef CONFIG_X86
> > +#include <asm/set_memory.h>
> > +#endif
> > +
> >   #include <drm/drm.h>
> >   #include <drm/drm_device.h>
> >   #include <drm/drm_drv.h>
> > @@ -162,6 +166,11 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
> >   		return PTR_ERR(pages);
> >   	}
> > +#ifdef CONFIG_X86
> > +	if (shmem->map_wc)
> > +		set_pages_array_wc(pages, obj->size >> PAGE_SHIFT);
> > +#endif
> > +
> >   	shmem->pages = pages;
> >   	return 0;
> > @@ -203,6 +212,11 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
> >   	if (--shmem->pages_use_count > 0)
> >   		return;
> > +#ifdef CONFIG_X86
> > +	if (shmem->map_wc)
> > +		set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
> > +#endif
> > +
> >   	drm_gem_put_pages(obj, shmem->pages,
> >   			  shmem->pages_mark_dirty_on_put,
> >   			  shmem->pages_mark_accessed_on_put);
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Intel-gfx] [PATCH v4 3/4] drm/shmem-helpers: Allocate wc pages on x86
  2021-07-14 12:48     ` Daniel Vetter
@ 2021-07-14 12:58       ` Christian König
  2021-07-14 16:16         ` Daniel Vetter
  0 siblings, 1 reply; 26+ messages in thread
From: Christian König @ 2021-07-14 12:58 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Thomas Hellström, David Airlie, Daniel Vetter,
	Intel Graphics Development, DRI Development, Maxime Ripard,
	Thomas Zimmermann, Daniel Vetter

On 14.07.21 at 14:48, Daniel Vetter wrote:
> On Wed, Jul 14, 2021 at 01:54:50PM +0200, Christian König wrote:
>> On 13.07.21 at 22:51, Daniel Vetter wrote:
>>> intel-gfx-ci realized that something is not quite coherent anymore on
>>> some platforms for our i915+vgem tests, when I tried to switch vgem
>>> over to shmem helpers.
>>>
>>> After lots of head-scratching I realized that I've removed calls to
>>> drm_clflush. And we need those. To make this a bit cleaner use the
>>> same page allocation tooling as ttm, which does internally clflush
>>> (and more, as needed on any platform instead of just the intel x86
>>> cpus i915 can be combined with).
>>>
>>> Unfortunately this doesn't exist on arm, or as a generic feature. For
>>> that I think only the dma-api can get at wc memory reliably, so maybe
>>> we'd need some kind of GFP_WC flag to do this properly.
>> The problem is that this stuff is extremely architecture specific. So GFP_WC
>> and GFP_UNCACHED are really what we should aim for in the long term.
>>
>> And as far as I know we have at least the following possibilities for how
>> it is implemented:
>>
>> * A fixed set of registers that tells the CPU the caching behavior for a
>> memory region, e.g. MTRR.
>> * Some bits of the memory pointers used, e.g. you see the same memory at
>> different locations with different caching attributes.
>> * Some bits in the CPUs page table.
>> * Some bits in a separate page table.
>>
>> On top of that there is the PCIe specification which defines non-cache
>> snooping access as an extension.
> Yeah dma-buf is extremely ill-defined even on x86 if you combine these
> all. We just play a game of whack-a-mole with the cacheline dirt until
> it's gone.
>
> That's the other piece here, how do you even make sure that the page is
> properly flushed and ready for wc access:
> - easy case is x86 with clflush available pretty much everywhere (since
>    10+ years at least)
> - next are cpus which have some cache flush instructions, but it's highly
>    cpu model specific
> - next up is the same, but you absolutely have to make sure there's no
>    other mapping around anymore or the coherency fabric just dies
> - and I'm pretty sure there's worse stuff where you de facto can only
>    allocate wc memory that's set aside at boot-up and that's all you ever
>    get.

Well long story short you don't make sure that the page is flushed at all.

What you do is allocate the page as WC in the first place; if you fail
to do this you can't use it.

The whole idea TTM tried to sell until a while ago, that you can actually
change the caching attributes on the fly, only works on x86, and even
there only in very limited cases.

Cheers,
Christian.

>
> Cheers, Daniel
>
>> Mixing that with the CPU caching behavior gets you some really nice ways to
>> break a driver. In general x86 seems to be rather graceful, but arm and
>> PowerPC are easily pissed if you mess that up.
>>
>>> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
>>> Cc: Christian König <christian.koenig@amd.com>
>>> Cc: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
>>> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
>>> Cc: Maxime Ripard <mripard@kernel.org>
>>> Cc: Thomas Zimmermann <tzimmermann@suse.de>
>>> Cc: David Airlie <airlied@linux.ie>
>>> Cc: Daniel Vetter <daniel@ffwll.ch>
>> Acked-by: Christian König <christian.koenig@amd.com>
>>
>> Regards,
>> Christian.
>>
>>> ---
>>>    drivers/gpu/drm/drm_gem_shmem_helper.c | 14 ++++++++++++++
>>>    1 file changed, 14 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
>>> index 296ab1b7c07f..657d2490aaa5 100644
>>> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
>>> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
>>> @@ -10,6 +10,10 @@
>>>    #include <linux/slab.h>
>>>    #include <linux/vmalloc.h>
>>> +#ifdef CONFIG_X86
>>> +#include <asm/set_memory.h>
>>> +#endif
>>> +
>>>    #include <drm/drm.h>
>>>    #include <drm/drm_device.h>
>>>    #include <drm/drm_drv.h>
>>> @@ -162,6 +166,11 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
>>>    		return PTR_ERR(pages);
>>>    	}
>>> +#ifdef CONFIG_X86
>>> +	if (shmem->map_wc)
>>> +		set_pages_array_wc(pages, obj->size >> PAGE_SHIFT);
>>> +#endif
>>> +
>>>    	shmem->pages = pages;
>>>    	return 0;
>>> @@ -203,6 +212,11 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
>>>    	if (--shmem->pages_use_count > 0)
>>>    		return;
>>> +#ifdef CONFIG_X86
>>> +	if (shmem->map_wc)
>>> +		set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
>>> +#endif
>>> +
>>>    	drm_gem_put_pages(obj, shmem->pages,
>>>    			  shmem->pages_mark_dirty_on_put,
>>>    			  shmem->pages_mark_accessed_on_put);


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Intel-gfx] [PATCH v4 3/4] drm/shmem-helpers: Allocate wc pages on x86
  2021-07-14 12:58       ` Christian König
@ 2021-07-14 16:16         ` Daniel Vetter
  0 siblings, 0 replies; 26+ messages in thread
From: Daniel Vetter @ 2021-07-14 16:16 UTC (permalink / raw)
  To: Christian König
  Cc: Thomas Hellström, Thomas Zimmermann, David Airlie,
	Daniel Vetter, Intel Graphics Development, DRI Development,
	Maxime Ripard, Daniel Vetter

On Wed, Jul 14, 2021 at 02:58:26PM +0200, Christian König wrote:
> On 14.07.21 at 14:48, Daniel Vetter wrote:
> > On Wed, Jul 14, 2021 at 01:54:50PM +0200, Christian König wrote:
> > On 13.07.21 at 22:51, Daniel Vetter wrote:
> > > > intel-gfx-ci realized that something is not quite coherent anymore on
> > > > some platforms for our i915+vgem tests, when I tried to switch vgem
> > > > over to shmem helpers.
> > > > 
> > > > After lots of head-scratching I realized that I've removed calls to
> > > > drm_clflush. And we need those. To make this a bit cleaner use the
> > > > same page allocation tooling as ttm, which does internally clflush
> > > > (and more, as needed on any platform instead of just the intel x86
> > > > cpus i915 can be combined with).
> > > > 
> > > > Unfortunately this doesn't exist on arm, or as a generic feature. For
> > > > that I think only the dma-api can get at wc memory reliably, so maybe
> > > > we'd need some kind of GFP_WC flag to do this properly.
> > > The problem is that this stuff is extremely architecture specific. So GFP_WC
> > > and GFP_UNCACHED are really what we should aim for in the long term.
> > > 
> > > And as far as I know we have at least the following possibilities for how
> > > it is implemented:
> > > 
> > > * A fixed set of registers that tells the CPU the caching behavior for a
> > > memory region, e.g. MTRR.
> > > * Some bits of the memory pointers used, e.g. you see the same memory at
> > > different locations with different caching attributes.
> > > * Some bits in the CPUs page table.
> > > * Some bits in a separate page table.
> > > 
> > > On top of that there is the PCIe specification which defines non-cache
> > > snooping access as an extension.
> > Yeah dma-buf is extremely ill-defined even on x86 if you combine these
> > all. We just play a game of whack-a-mole with the cacheline dirt until
> > it's gone.
> > 
> > That's the other piece here, how do you even make sure that the page is
> > properly flushed and ready for wc access:
> > - easy case is x86 with clflush available pretty much everywhere (since
> >    10+ years at least)
> > - next are cpus which have some cache flush instructions, but it's highly
> >    cpu model specific
> > - next up is the same, but you absolutely have to make sure there's no
> >    other mapping around anymore or the coherency fabric just dies
> > - and I'm pretty sure there's worse stuff where you de facto can only
> >    allocate wc memory that's set aside at boot-up and that's all you ever
> >    get.
> 
> Well long story short you don't make sure that the page is flushed at all.
> 
> What you do is allocate the page as WC in the first place; if you fail to
> do this you can't use it.

Oh sure, but even when you allocate as wc you need to make sure the page
you have is actually wc coherent from the start. I'm chasing some fun
trying to convert vgem over to shmem helpers right now (i.e. this patch
series), and if you don't start out with flushed pages some of the vgem +
i915 igts just fail on the less coherent igpu platforms we have.

And if you look into what set_pages_wc actually does, then you spot the
clflush somewhere deep down (aside from all the other things it does).

On some ARM platforms that's just not possible, and you have to do a
carveout that you never even map as wb (so needs to be excluded from the
kernel map too and treated as highmem). There's some really bonkers stuff
here.

> The whole idea TTM tried to sell until a while ago, that you can actually
> change the caching attributes on the fly, only works on x86, and even
> there only in very limited cases.

Yeah that's clear, this is why we're locking down the i915 gem uapi a lot
for dgpu. All the tricks are out the window.
-Daniel


> 
> Cheers,
> Christian.
> 
> > 
> > Cheers, Daniel
> > 
> > > Mixing that with the CPU caching behavior gets you some really nice ways to
> > > break a driver. In general x86 seems to be rather graceful, but arm and
> > > PowerPC are easily pissed if you mess that up.
> > > 
> > > > Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> > > > Cc: Christian König <christian.koenig@amd.com>
> > > > Cc: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
> > > > Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> > > > Cc: Maxime Ripard <mripard@kernel.org>
> > > > Cc: Thomas Zimmermann <tzimmermann@suse.de>
> > > > Cc: David Airlie <airlied@linux.ie>
> > > > Cc: Daniel Vetter <daniel@ffwll.ch>
> > > Acked-by: Christian König <christian.koenig@amd.com>
> > > 
> > > Regards,
> > > Christian.
> > > 
> > > > ---
> > > >    drivers/gpu/drm/drm_gem_shmem_helper.c | 14 ++++++++++++++
> > > >    1 file changed, 14 insertions(+)
> > > > 
> > > > diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > > > index 296ab1b7c07f..657d2490aaa5 100644
> > > > --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> > > > +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > > > @@ -10,6 +10,10 @@
> > > >    #include <linux/slab.h>
> > > >    #include <linux/vmalloc.h>
> > > > +#ifdef CONFIG_X86
> > > > +#include <asm/set_memory.h>
> > > > +#endif
> > > > +
> > > >    #include <drm/drm.h>
> > > >    #include <drm/drm_device.h>
> > > >    #include <drm/drm_drv.h>
> > > > @@ -162,6 +166,11 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
> > > >    		return PTR_ERR(pages);
> > > >    	}
> > > > +#ifdef CONFIG_X86
> > > > +	if (shmem->map_wc)
> > > > +		set_pages_array_wc(pages, obj->size >> PAGE_SHIFT);
> > > > +#endif
> > > > +
> > > >    	shmem->pages = pages;
> > > >    	return 0;
> > > > @@ -203,6 +212,11 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
> > > >    	if (--shmem->pages_use_count > 0)
> > > >    		return;
> > > > +#ifdef CONFIG_X86
> > > > +	if (shmem->map_wc)
> > > > +		set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
> > > > +#endif
> > > > +
> > > >    	drm_gem_put_pages(obj, shmem->pages,
> > > >    			  shmem->pages_mark_dirty_on_put,
> > > >    			  shmem->pages_mark_accessed_on_put);
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for shmem helpers for vgem (rev8)
  2021-07-13 20:51 [Intel-gfx] [PATCH v4 0/4] shmem helpers for vgem Daniel Vetter
                   ` (5 preceding siblings ...)
  2021-07-14  0:11 ` [Intel-gfx] ✗ Fi.CI.BAT: failure " Patchwork
@ 2021-07-16 13:29 ` Patchwork
  2021-07-16 13:58 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
  2021-07-16 16:43 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
  8 siblings, 0 replies; 26+ messages in thread
From: Patchwork @ 2021-07-16 13:29 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: intel-gfx

== Series Details ==

Series: shmem helpers for vgem (rev8)
URL   : https://patchwork.freedesktop.org/series/90670/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
00aa5f72115a dma-buf: Require VM_PFNMAP vma for mmap
-:34: WARNING:TYPO_SPELLING: 'entires' may be misspelled - perhaps 'entries'?
#34: 
From auditing the various functions to insert pfn pte entires
                                                      ^^^^^^^

-:39: WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit description (prefer a maximum 75 chars per line)
#39: 
References: https://lore.kernel.org/lkml/CAKMK7uHi+mG0z0HUmNt13QCCvutuRVjpcR0NjRL12k-WbWzkRg@mail.gmail.com/

-:97: WARNING:FROM_SIGN_OFF_MISMATCH: From:/Signed-off-by: email address mismatch: 'From: Daniel Vetter <daniel.vetter@ffwll.ch>' != 'Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>'

total: 0 errors, 3 warnings, 0 checks, 39 lines checked
2b0ccd0f00e9 drm/shmem-helper: Switch to vmf_insert_pfn
-:14: WARNING:TYPO_SPELLING: 'wont' may be misspelled - perhaps 'won't'?
#14: 
helpers for these drivers on those platforms. As-is this wont work.
                                                         ^^^^

-:17: WARNING:TYPO_SPELLING: 'wont' may be misspelled - perhaps 'won't'?
#17: 
definitely wont fly in practice since the pages are non-contig to
           ^^^^

-:108: WARNING:FROM_SIGN_OFF_MISMATCH: From:/Signed-off-by: email address mismatch: 'From: Daniel Vetter <daniel.vetter@ffwll.ch>' != 'Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>'

total: 0 errors, 3 warnings, 0 checks, 55 lines checked
5fe5ed0ff28d drm/shmem-helpers: Allocate wc pages on x86
-:71: WARNING:FROM_SIGN_OFF_MISMATCH: From:/Signed-off-by: email address mismatch: 'From: Daniel Vetter <daniel.vetter@ffwll.ch>' != 'Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>'

total: 0 errors, 1 warnings, 0 checks, 32 lines checked
89fea3e9a7a7 drm/vgem: use shmem helpers
-:22: ERROR:GIT_COMMIT_ID: Please use git commit description style 'commit <12+ chars of sha1> ("<title line>")' - ie: 'commit 0cf2ef46c6c0 ("drm/shmem-helper: Use cached mappings by default")'
#22: 
v4: I got tricked by 0cf2ef46c6c0 ("drm/shmem-helper: Use cached

-:330: WARNING:TYPO_SPELLING: 'wont' may be misspelled - perhaps 'won't'?
#330: FILE: drivers/gpu/drm/vgem/vgem_drv.c:104:
+	 * coherent memory or dma-buf sharing just wont work.
 	                                           ^^^^

-:461: WARNING:FROM_SIGN_OFF_MISMATCH: From:/Signed-off-by: email address mismatch: 'From: Daniel Vetter <daniel.vetter@ffwll.ch>' != 'Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>'

total: 1 errors, 2 warnings, 0 checks, 402 lines checked



^ permalink raw reply	[flat|nested] 26+ messages in thread

* [Intel-gfx] ✓ Fi.CI.BAT: success for shmem helpers for vgem (rev8)
  2021-07-13 20:51 [Intel-gfx] [PATCH v4 0/4] shmem helpers for vgem Daniel Vetter
                   ` (6 preceding siblings ...)
  2021-07-16 13:29 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for shmem helpers for vgem (rev8) Patchwork
@ 2021-07-16 13:58 ` Patchwork
  2021-07-16 16:43 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
  8 siblings, 0 replies; 26+ messages in thread
From: Patchwork @ 2021-07-16 13:58 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: intel-gfx



== Series Details ==

Series: shmem helpers for vgem (rev8)
URL   : https://patchwork.freedesktop.org/series/90670/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_10345 -> Patchwork_20618
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/index.html

Known issues
------------

  Here are the changes found in Patchwork_20618 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@amdgpu/amd_basic@semaphore:
    - fi-bdw-5557u:       NOTRUN -> [SKIP][1] ([fdo#109271]) +27 similar issues
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/fi-bdw-5557u/igt@amdgpu/amd_basic@semaphore.html

  * igt@core_hotunplug@unbind-rebind:
    - fi-bdw-5557u:       NOTRUN -> [WARN][2] ([i915#3718])
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/fi-bdw-5557u/igt@core_hotunplug@unbind-rebind.html

  * igt@kms_chamelium@dp-crc-fast:
    - fi-bdw-5557u:       NOTRUN -> [SKIP][3] ([fdo#109271] / [fdo#111827]) +8 similar issues
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/fi-bdw-5557u/igt@kms_chamelium@dp-crc-fast.html

  
#### Possible fixes ####

  * igt@gem_exec_suspend@basic-s3:
    - {fi-tgl-1115g4}:    [FAIL][4] ([i915#1888]) -> [PASS][5]
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10345/fi-tgl-1115g4/igt@gem_exec_suspend@basic-s3.html
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/fi-tgl-1115g4/igt@gem_exec_suspend@basic-s3.html

  * igt@kms_chamelium@common-hpd-after-suspend:
    - fi-kbl-7500u:       [DMESG-FAIL][6] ([i915#165]) -> [PASS][7]
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10345/fi-kbl-7500u/igt@kms_chamelium@common-hpd-after-suspend.html
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/fi-kbl-7500u/igt@kms_chamelium@common-hpd-after-suspend.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
  [i915#165]: https://gitlab.freedesktop.org/drm/intel/issues/165
  [i915#1888]: https://gitlab.freedesktop.org/drm/intel/issues/1888
  [i915#3303]: https://gitlab.freedesktop.org/drm/intel/issues/3303
  [i915#3718]: https://gitlab.freedesktop.org/drm/intel/issues/3718
  [i915#541]: https://gitlab.freedesktop.org/drm/intel/issues/541


Participating hosts (41 -> 36)
------------------------------

  Missing    (5): fi-ilk-m540 fi-hsw-4200u fi-bsw-cyan bat-jsl-1 fi-bdw-samus 


Build changes
-------------

  * IGT: IGT_6142 -> IGTPW_6018
  * Linux: CI_DRM_10345 -> Patchwork_20618

  CI-20190529: 20190529
  CI_DRM_10345: 8c6a974b932fbaa798102b4713ceedf3b04227d9 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGTPW_6018: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6018/index.html
  IGT_6142: 16e753fc5e1e51395e1df40865c569984a74c5ed @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  Patchwork_20618: 89fea3e9a7a7c96084ed62d0c3b71d6ed923db06 @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

89fea3e9a7a7 drm/vgem: use shmem helpers
5fe5ed0ff28d drm/shmem-helpers: Allocate wc pages on x86
2b0ccd0f00e9 drm/shmem-helper: Switch to vmf_insert_pfn
00aa5f72115a dma-buf: Require VM_PFNMAP vma for mmap

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/index.html



^ permalink raw reply	[flat|nested] 26+ messages in thread

* [Intel-gfx] ✗ Fi.CI.IGT: failure for shmem helpers for vgem (rev8)
  2021-07-13 20:51 [Intel-gfx] [PATCH v4 0/4] shmem helpers for vgem Daniel Vetter
                   ` (7 preceding siblings ...)
  2021-07-16 13:58 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
@ 2021-07-16 16:43 ` Patchwork
  8 siblings, 0 replies; 26+ messages in thread
From: Patchwork @ 2021-07-16 16:43 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: intel-gfx



== Series Details ==

Series: shmem helpers for vgem (rev8)
URL   : https://patchwork.freedesktop.org/series/90670/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_10345_full -> Patchwork_20618_full
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with Patchwork_20618_full absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_20618_full, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_20618_full:

### IGT changes ###

#### Possible regressions ####

  * igt@kms_frontbuffer_tracking@fbc-1p-primscrn-spr-indfb-draw-pwrite:
    - shard-apl:          [PASS][1] -> [FAIL][2]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10345/shard-apl8/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-spr-indfb-draw-pwrite.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-apl2/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-spr-indfb-draw-pwrite.html

  * igt@kms_mmap_write_crc@main:
    - shard-iclb:         NOTRUN -> [INCOMPLETE][3]
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb5/igt@kms_mmap_write_crc@main.html
    - shard-tglb:         NOTRUN -> [INCOMPLETE][4]
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-tglb5/igt@kms_mmap_write_crc@main.html

  * igt@prime_mmap@test_aperture_limit:
    - shard-apl:          NOTRUN -> [DMESG-WARN][5] +3 similar issues
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-apl1/igt@prime_mmap@test_aperture_limit.html

  * igt@prime_mmap@test_dup:
    - shard-apl:          [PASS][6] -> [DMESG-WARN][7]
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10345/shard-apl7/igt@prime_mmap@test_dup.html
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-apl8/igt@prime_mmap@test_dup.html

  * igt@prime_mmap@test_forked:
    - shard-iclb:         [PASS][8] -> [DMESG-WARN][9] +6 similar issues
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10345/shard-iclb2/igt@prime_mmap@test_forked.html
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb5/igt@prime_mmap@test_forked.html
    - shard-snb:          [PASS][10] -> [DMESG-WARN][11] +3 similar issues
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10345/shard-snb2/igt@prime_mmap@test_forked.html
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-snb2/igt@prime_mmap@test_forked.html

  * igt@prime_mmap@test_refcounting:
    - shard-snb:          NOTRUN -> [DMESG-WARN][12] +3 similar issues
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-snb7/igt@prime_mmap@test_refcounting.html
    - shard-kbl:          [PASS][13] -> [DMESG-WARN][14] +6 similar issues
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10345/shard-kbl6/igt@prime_mmap@test_refcounting.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-kbl1/igt@prime_mmap@test_refcounting.html
    - shard-tglb:         [PASS][15] -> [DMESG-WARN][16] +7 similar issues
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10345/shard-tglb1/igt@prime_mmap@test_refcounting.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-tglb7/igt@prime_mmap@test_refcounting.html
    - shard-skl:          [PASS][17] -> [DMESG-WARN][18] +6 similar issues
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10345/shard-skl4/igt@prime_mmap@test_refcounting.html
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-skl5/igt@prime_mmap@test_refcounting.html

  * igt@prime_mmap@test_reprime:
    - shard-glk:          [PASS][19] -> [DMESG-WARN][20] +7 similar issues
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10345/shard-glk4/igt@prime_mmap@test_reprime.html
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-glk9/igt@prime_mmap@test_reprime.html

  * igt@prime_mmap_coherency@write:
    - shard-kbl:          NOTRUN -> [DMESG-WARN][21] +2 similar issues
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-kbl4/igt@prime_mmap_coherency@write.html
    - shard-glk:          NOTRUN -> [DMESG-WARN][22] +1 similar issue
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-glk7/igt@prime_mmap_coherency@write.html

  * igt@prime_vgem@basic-blt:
    - shard-iclb:         [PASS][23] -> [INCOMPLETE][24]
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10345/shard-iclb1/igt@prime_vgem@basic-blt.html
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb4/igt@prime_vgem@basic-blt.html

  
#### Warnings ####

  * igt@runner@aborted:
    - shard-iclb:         ([FAIL][25], [FAIL][26], [FAIL][27], [FAIL][28]) ([i915#1814] / [i915#3002] / [i915#3702]) -> ([FAIL][29], [FAIL][30], [FAIL][31], [FAIL][32], [FAIL][33], [FAIL][34], [FAIL][35], [FAIL][36], [FAIL][37], [FAIL][38], [FAIL][39]) ([i915#1814] / [i915#3002])
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10345/shard-iclb5/igt@runner@aborted.html
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10345/shard-iclb5/igt@runner@aborted.html
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10345/shard-iclb2/igt@runner@aborted.html
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10345/shard-iclb7/igt@runner@aborted.html
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb2/igt@runner@aborted.html
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb1/igt@runner@aborted.html
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb1/igt@runner@aborted.html
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb1/igt@runner@aborted.html
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb2/igt@runner@aborted.html
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb6/igt@runner@aborted.html
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb5/igt@runner@aborted.html
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb7/igt@runner@aborted.html
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb5/igt@runner@aborted.html
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb3/igt@runner@aborted.html
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb5/igt@runner@aborted.html

  
#### Suppressed ####

  The following results come from untrusted machines, tests, or statuses.
  They do not affect the overall result.

  * igt@prime_mmap@test_map_unmap:
    - {shard-rkl}:        [PASS][40] -> [DMESG-WARN][41] +8 similar issues
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10345/shard-rkl-2/igt@prime_mmap@test_map_unmap.html
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-rkl-6/igt@prime_mmap@test_map_unmap.html

  * igt@vgem_basic@unload:
    - {shard-rkl}:        [PASS][42] -> [FAIL][43]
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10345/shard-rkl-2/igt@vgem_basic@unload.html
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-rkl-2/igt@vgem_basic@unload.html

  
Known issues
------------

  Here are the changes found in Patchwork_20618_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@feature_discovery@display-4x:
    - shard-iclb:         NOTRUN -> [SKIP][44] ([i915#1839]) +2 similar issues
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb1/igt@feature_discovery@display-4x.html

  * igt@gem_ctx_persistence@legacy-engines-queued:
    - shard-snb:          NOTRUN -> [SKIP][45] ([fdo#109271] / [i915#1099]) +3 similar issues
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-snb2/igt@gem_ctx_persistence@legacy-engines-queued.html

  * igt@gem_exec_fair@basic-none-vip@rcs0:
    - shard-kbl:          [PASS][46] -> [FAIL][47] ([i915#2842]) +2 similar issues
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10345/shard-kbl2/igt@gem_exec_fair@basic-none-vip@rcs0.html
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-kbl7/igt@gem_exec_fair@basic-none-vip@rcs0.html

  * igt@gem_exec_fair@basic-pace@vcs1:
    - shard-iclb:         NOTRUN -> [FAIL][48] ([i915#2842])
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb2/igt@gem_exec_fair@basic-pace@vcs1.html

  * igt@gem_exec_params@no-blt:
    - shard-iclb:         NOTRUN -> [SKIP][49] ([fdo#109283])
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb6/igt@gem_exec_params@no-blt.html

  * igt@gem_exec_schedule@independent@vcs0:
    - shard-tglb:         [PASS][50] -> [FAIL][51] ([i915#3795]) +1 similar issue
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10345/shard-tglb2/igt@gem_exec_schedule@independent@vcs0.html
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-tglb2/igt@gem_exec_schedule@independent@vcs0.html

  * igt@gem_exec_schedule@u-independent@rcs0:
    - shard-iclb:         NOTRUN -> [FAIL][52] ([i915#3795])
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb8/igt@gem_exec_schedule@u-independent@rcs0.html

  * igt@gem_pread@exhaustion:
    - shard-apl:          NOTRUN -> [WARN][53] ([i915#2658])
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-apl1/igt@gem_pread@exhaustion.html

  * igt@gem_render_copy@x-tiled-to-vebox-yf-tiled:
    - shard-iclb:         NOTRUN -> [SKIP][54] ([i915#768]) +2 similar issues
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb6/igt@gem_render_copy@x-tiled-to-vebox-yf-tiled.html

  * igt@gem_render_copy@y-tiled-to-vebox-x-tiled:
    - shard-glk:          NOTRUN -> [SKIP][55] ([fdo#109271]) +66 similar issues
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-glk7/igt@gem_render_copy@y-tiled-to-vebox-x-tiled.html

  * igt@gem_userptr_blits@dmabuf-sync:
    - shard-kbl:          NOTRUN -> [SKIP][56] ([fdo#109271] / [i915#3323])
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-kbl4/igt@gem_userptr_blits@dmabuf-sync.html
    - shard-apl:          NOTRUN -> [SKIP][57] ([fdo#109271] / [i915#3323])
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-apl7/igt@gem_userptr_blits@dmabuf-sync.html

  * igt@gem_userptr_blits@input-checking:
    - shard-apl:          NOTRUN -> [DMESG-WARN][58] ([i915#3002]) +1 similar issue
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-apl7/igt@gem_userptr_blits@input-checking.html

  * igt@gem_userptr_blits@unsync-overlap:
    - shard-tglb:         NOTRUN -> [SKIP][59] ([i915#3297])
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-tglb5/igt@gem_userptr_blits@unsync-overlap.html
    - shard-iclb:         NOTRUN -> [SKIP][60] ([i915#3297])
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb5/igt@gem_userptr_blits@unsync-overlap.html

  * igt@gem_userptr_blits@vma-merge:
    - shard-iclb:         NOTRUN -> [FAIL][61] ([i915#3318])
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb6/igt@gem_userptr_blits@vma-merge.html
    - shard-tglb:         NOTRUN -> [FAIL][62] ([i915#3318])
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-tglb6/igt@gem_userptr_blits@vma-merge.html
    - shard-skl:          NOTRUN -> [FAIL][63] ([i915#3318])
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-skl10/igt@gem_userptr_blits@vma-merge.html

  * igt@gem_workarounds@suspend-resume-fd:
    - shard-kbl:          [PASS][64] -> [DMESG-WARN][65] ([i915#180]) +3 similar issues
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10345/shard-kbl7/igt@gem_workarounds@suspend-resume-fd.html
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-kbl3/igt@gem_workarounds@suspend-resume-fd.html

  * igt@gen7_exec_parse@oacontrol-tracking:
    - shard-iclb:         NOTRUN -> [SKIP][66] ([fdo#109289]) +1 similar issue
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb2/igt@gen7_exec_parse@oacontrol-tracking.html

  * igt@gen9_exec_parse@allowed-all:
    - shard-iclb:         NOTRUN -> [SKIP][67] ([fdo#112306]) +2 similar issues
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb8/igt@gen9_exec_parse@allowed-all.html
    - shard-tglb:         NOTRUN -> [SKIP][68] ([fdo#112306]) +2 similar issues
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-tglb6/igt@gen9_exec_parse@allowed-all.html

  * igt@gen9_exec_parse@bb-large:
    - shard-kbl:          NOTRUN -> [FAIL][69] ([i915#3296])
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-kbl7/igt@gen9_exec_parse@bb-large.html

  * igt@i915_pm_dc@dc6-psr:
    - shard-iclb:         NOTRUN -> [DMESG-WARN][70] ([i915#3698])
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb2/igt@i915_pm_dc@dc6-psr.html
    - shard-tglb:         NOTRUN -> [FAIL][71] ([i915#454])
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-tglb2/igt@i915_pm_dc@dc6-psr.html
    - shard-skl:          NOTRUN -> [FAIL][72] ([i915#454])
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-skl9/igt@i915_pm_dc@dc6-psr.html

  * igt@i915_pm_lpsp@kms-lpsp@kms-lpsp-dp:
    - shard-apl:          NOTRUN -> [SKIP][73] ([fdo#109271] / [i915#1937])
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-apl1/igt@i915_pm_lpsp@kms-lpsp@kms-lpsp-dp.html

  * igt@kms_atomic_transition@plane-all-modeset-transition:
    - shard-iclb:         NOTRUN -> [SKIP][74] ([i915#1769])
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb8/igt@kms_atomic_transition@plane-all-modeset-transition.html

  * igt@kms_big_fb@linear-32bpp-rotate-180:
    - shard-glk:          [PASS][75] -> [DMESG-WARN][76] ([i915#118] / [i915#95])
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10345/shard-glk5/igt@kms_big_fb@linear-32bpp-rotate-180.html
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-glk3/igt@kms_big_fb@linear-32bpp-rotate-180.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-hflip:
    - shard-kbl:          NOTRUN -> [SKIP][77] ([fdo#109271] / [i915#3777]) +1 similar issue
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-kbl6/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-hflip.html
    - shard-glk:          NOTRUN -> [SKIP][78] ([fdo#109271] / [i915#3777])
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-glk2/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-hflip.html
    - shard-skl:          NOTRUN -> [SKIP][79] ([fdo#109271] / [i915#3777])
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-skl7/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-hflip.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-async-flip:
    - shard-skl:          NOTRUN -> [FAIL][80] ([i915#3763])
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-skl2/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-async-flip.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-hflip:
    - shard-apl:          NOTRUN -> [SKIP][81] ([fdo#109271] / [i915#3777]) +1 similar issue
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-apl6/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-hflip.html

  * igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-180:
    - shard-iclb:         NOTRUN -> [SKIP][82] ([fdo#110723])
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb2/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-180.html
    - shard-tglb:         NOTRUN -> [SKIP][83] ([fdo#111615]) +1 similar issue
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-tglb3/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-180.html

  * igt@kms_ccs@pipe-a-random-ccs-data-yf_tiled_ccs:
    - shard-tglb:         NOTRUN -> [SKIP][84] ([i915#3689]) +5 similar issues
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-tglb6/igt@kms_ccs@pipe-a-random-ccs-data-yf_tiled_ccs.html

  * igt@kms_cdclk@plane-scaling:
    - shard-iclb:         NOTRUN -> [SKIP][85] ([i915#3742])
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb5/igt@kms_cdclk@plane-scaling.html
    - shard-tglb:         NOTRUN -> [SKIP][86] ([i915#3742])
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-tglb5/igt@kms_cdclk@plane-scaling.html

  * igt@kms_chamelium@dp-crc-multiple:
    - shard-apl:          NOTRUN -> [SKIP][87] ([fdo#109271] / [fdo#111827]) +14 similar issues
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-apl2/igt@kms_chamelium@dp-crc-multiple.html

  * igt@kms_chamelium@dp-hpd-fast:
    - shard-kbl:          NOTRUN -> [SKIP][88] ([fdo#109271] / [fdo#111827]) +7 similar issues
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-kbl4/igt@kms_chamelium@dp-hpd-fast.html

  * igt@kms_chamelium@hdmi-hpd-enable-disable-mode:
    - shard-iclb:         NOTRUN -> [SKIP][89] ([fdo#109284] / [fdo#111827]) +11 similar issues
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb2/igt@kms_chamelium@hdmi-hpd-enable-disable-mode.html

  * igt@kms_color@pipe-d-ctm-max:
    - shard-iclb:         NOTRUN -> [SKIP][90] ([fdo#109278] / [i915#1149])
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb8/igt@kms_color@pipe-d-ctm-max.html

  * igt@kms_color_chamelium@pipe-b-ctm-max:
    - shard-skl:          NOTRUN -> [SKIP][91] ([fdo#109271] / [fdo#111827]) +10 similar issues
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-skl7/igt@kms_color_chamelium@pipe-b-ctm-max.html
    - shard-tglb:         NOTRUN -> [SKIP][92] ([fdo#109284] / [fdo#111827]) +10 similar issues
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-tglb6/igt@kms_color_chamelium@pipe-b-ctm-max.html

  * igt@kms_color_chamelium@pipe-b-ctm-negative:
    - shard-snb:          NOTRUN -> [SKIP][93] ([fdo#109271] / [fdo#111827]) +12 similar issues
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-snb6/igt@kms_color_chamelium@pipe-b-ctm-negative.html

  * igt@kms_color_chamelium@pipe-d-ctm-0-25:
    - shard-glk:          NOTRUN -> [SKIP][94] ([fdo#109271] / [fdo#111827]) +7 similar issues
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-glk1/igt@kms_color_chamelium@pipe-d-ctm-0-25.html
    - shard-iclb:         NOTRUN -> [SKIP][95] ([fdo#109278] / [fdo#109284] / [fdo#111827])
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb5/igt@kms_color_chamelium@pipe-d-ctm-0-25.html

  * igt@kms_content_protection@dp-mst-lic-type-1:
    - shard-iclb:         NOTRUN -> [SKIP][96] ([i915#3116])
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb6/igt@kms_content_protection@dp-mst-lic-type-1.html

  * igt@kms_content_protection@legacy:
    - shard-iclb:         NOTRUN -> [SKIP][97] ([fdo#109300] / [fdo#111066])
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb1/igt@kms_content_protection@legacy.html

  * igt@kms_content_protection@lic:
    - shard-apl:          NOTRUN -> [TIMEOUT][98] ([i915#1319])
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-apl6/igt@kms_content_protection@lic.html

  * igt@kms_cursor_crc@pipe-a-cursor-512x170-onscreen:
    - shard-tglb:         NOTRUN -> [SKIP][99] ([fdo#109279] / [i915#3359])
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-tglb1/igt@kms_cursor_crc@pipe-a-cursor-512x170-onscreen.html

  * igt@kms_cursor_crc@pipe-a-cursor-512x170-sliding:
    - shard-kbl:          NOTRUN -> [SKIP][100] ([fdo#109271]) +111 similar issues
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-kbl7/igt@kms_cursor_crc@pipe-a-cursor-512x170-sliding.html

  * igt@kms_cursor_crc@pipe-c-cursor-max-size-offscreen:
    - shard-tglb:         NOTRUN -> [SKIP][101] ([i915#3359]) +5 similar issues
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-tglb6/igt@kms_cursor_crc@pipe-c-cursor-max-size-offscreen.html

  * igt@kms_cursor_edge_walk@pipe-d-128x128-right-edge:
    - shard-snb:          NOTRUN -> [SKIP][102] ([fdo#109271]) +301 similar issues
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-snb2/igt@kms_cursor_edge_walk@pipe-d-128x128-right-edge.html

  * igt@kms_cursor_edge_walk@pipe-d-128x128-top-edge:
    - shard-iclb:         NOTRUN -> [SKIP][103] ([fdo#109278]) +32 similar issues
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb3/igt@kms_cursor_edge_walk@pipe-d-128x128-top-edge.html

  * igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions-varying-size:
    - shard-iclb:         NOTRUN -> [SKIP][104] ([fdo#109274] / [fdo#109278]) +1 similar issue
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb1/igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions-varying-size.html

  * igt@kms_cursor_legacy@pipe-d-torture-move:
    - shard-skl:          NOTRUN -> [SKIP][105] ([fdo#109271]) +103 similar issues
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-skl3/igt@kms_cursor_legacy@pipe-d-torture-move.html

  * igt@kms_flip@2x-absolute-wf_vblank-interruptible:
    - shard-iclb:         NOTRUN -> [SKIP][106] ([fdo#109274]) +4 similar issues
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb5/igt@kms_flip@2x-absolute-wf_vblank-interruptible.html

  * igt@kms_flip@flip-vs-suspend-interruptible@c-dp1:
    - shard-apl:          [PASS][107] -> [DMESG-WARN][108] ([i915#180]) +2 similar issues
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10345/shard-apl7/igt@kms_flip@flip-vs-suspend-interruptible@c-dp1.html
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-apl2/igt@kms_flip@flip-vs-suspend-interruptible@c-dp1.html

  * igt@kms_flip@plain-flip-fb-recreate-interruptible@c-edp1:
    - shard-skl:          [PASS][109] -> [FAIL][110] ([i915#2122])
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10345/shard-skl4/igt@kms_flip@plain-flip-fb-recreate-interruptible@c-edp1.html
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-skl7/igt@kms_flip@plain-flip-fb-recreate-interruptible@c-edp1.html

  * igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs:
    - shard-apl:          NOTRUN -> [SKIP][111] ([fdo#109271] / [i915#2672])
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-apl8/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs.html

  * igt@kms_frontbuffer_tracking@fbc-1p-primscrn-spr-indfb-draw-pwrite:
    - shard-glk:          [PASS][112] -> [FAIL][113] ([i915#2546])
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10345/shard-glk5/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-spr-indfb-draw-pwrite.html
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-glk5/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-spr-indfb-draw-pwrite.html
    - shard-kbl:          [PASS][114] -> [FAIL][115] ([i915#2546])
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10345/shard-kbl6/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-spr-indfb-draw-pwrite.html
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-kbl6/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-spr-indfb-draw-pwrite.html

  * igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-draw-mmap-wc:
    - shard-apl:          NOTRUN -> [SKIP][116] ([fdo#109271]) +211 similar issues
   [116]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-apl3/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-draw-mmap-wc.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-pri-indfb-draw-blt:
    - shard-tglb:         NOTRUN -> [SKIP][117] ([fdo#111825]) +21 similar issues
   [117]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-tglb5/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-pri-indfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-spr-indfb-draw-mmap-cpu:
    - shard-iclb:         NOTRUN -> [SKIP][118] ([fdo#109280]) +20 similar issues
   [118]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb2/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-spr-indfb-draw-mmap-cpu.html

  * igt@kms_hdr@bpc-switch-dpms:
    - shard-skl:          [PASS][119] -> [FAIL][120] ([i915#1188])
   [119]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10345/shard-skl7/igt@kms_hdr@bpc-switch-dpms.html
   [120]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-skl4/igt@kms_hdr@bpc-switch-dpms.html

  * igt@kms_mmap_write_crc@main:
    - shard-skl:          NOTRUN -> [INCOMPLETE][121] ([i915#1982])
   [121]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-skl1/igt@kms_mmap_write_crc@main.html

  * igt@kms_multipipe_modeset@basic-max-pipe-crc-check:
    - shard-tglb:         NOTRUN -> [SKIP][122] ([i915#1839]) +1 similar issue
   [122]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-tglb3/igt@kms_multipipe_modeset@basic-max-pipe-crc-check.html

  * igt@kms_plane_alpha_blend@pipe-a-alpha-basic:
    - shard-apl:          NOTRUN -> [FAIL][123] ([fdo#108145] / [i915#265]) +2 similar issues
   [123]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-apl2/igt@kms_plane_alpha_blend@pipe-a-alpha-basic.html
    - shard-kbl:          NOTRUN -> [FAIL][124] ([fdo#108145] / [i915#265])
   [124]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-kbl6/igt@kms_plane_alpha_blend@pipe-a-alpha-basic.html

  * igt@kms_plane_alpha_blend@pipe-c-alpha-transparent-fb:
    - shard-glk:          NOTRUN -> [FAIL][125] ([i915#265])
   [125]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-glk7/igt@kms_plane_alpha_blend@pipe-c-alpha-transparent-fb.html
    - shard-kbl:          NOTRUN -> [FAIL][126] ([i915#265])
   [126]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-kbl1/igt@kms_plane_alpha_blend@pipe-c-alpha-transparent-fb.html
    - shard-skl:          NOTRUN -> [FAIL][127] ([i915#265])
   [127]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-skl4/igt@kms_plane_alpha_blend@pipe-c-alpha-transparent-fb.html

  * igt@kms_plane_lowres@pipe-b-tiling-yf:
    - shard-tglb:         NOTRUN -> [SKIP][128] ([fdo#112054])
   [128]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-tglb5/igt@kms_plane_lowres@pipe-b-tiling-yf.html

  * igt@kms_plane_lowres@pipe-c-tiling-none:
    - shard-iclb:         NOTRUN -> [SKIP][129] ([i915#3536])
   [129]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb1/igt@kms_plane_lowres@pipe-c-tiling-none.html
    - shard-tglb:         NOTRUN -> [SKIP][130] ([i915#3536]) +2 similar issues
   [130]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-tglb3/igt@kms_plane_lowres@pipe-c-tiling-none.html

  * igt@kms_plane_scaling@scaler-with-clipping-clamping@pipe-c-scaler-with-clipping-clamping:
    - shard-skl:          NOTRUN -> [SKIP][131] ([fdo#109271] / [i915#2733])
   [131]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-skl6/igt@kms_plane_scaling@scaler-with-clipping-clamping@pipe-c-scaler-with-clipping-clamping.html

  * igt@kms_psr2_sf@plane-move-sf-dmg-area-1:
    - shard-skl:          NOTRUN -> [SKIP][132] ([fdo#109271] / [i915#658])
   [132]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-skl4/igt@kms_psr2_sf@plane-move-sf-dmg-area-1.html
    - shard-tglb:         NOTRUN -> [SKIP][133] ([i915#2920]) +1 similar issue
   [133]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-tglb7/igt@kms_psr2_sf@plane-move-sf-dmg-area-1.html
    - shard-glk:          NOTRUN -> [SKIP][134] ([fdo#109271] / [i915#658])
   [134]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-glk7/igt@kms_psr2_sf@plane-move-sf-dmg-area-1.html
    - shard-iclb:         NOTRUN -> [SKIP][135] ([i915#658])
   [135]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb4/igt@kms_psr2_sf@plane-move-sf-dmg-area-1.html

  * igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-5:
    - shard-kbl:          NOTRUN -> [SKIP][136] ([fdo#109271] / [i915#658]) +4 similar issues
   [136]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-kbl4/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-5.html
    - shard-apl:          NOTRUN -> [SKIP][137] ([fdo#109271] / [i915#658]) +3 similar issues
   [137]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-apl6/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-5.html

  * igt@kms_psr2_su@page_flip:
    - shard-iclb:         [PASS][138] -> [SKIP][139] ([fdo#109642] / [fdo#111068] / [i915#658])
   [138]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10345/shard-iclb2/igt@kms_psr2_su@page_flip.html
   [139]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb8/igt@kms_psr2_su@page_flip.html

  * igt@kms_psr@psr2_cursor_render:
    - shard-iclb:         NOTRUN -> [SKIP][140] ([fdo#109441])
   [140]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb5/igt@kms_psr@psr2_cursor_render.html

  * igt@kms_psr@psr2_dpms:
    - shard-iclb:         [PASS][141] -> [SKIP][142] ([fdo#109441])
   [141]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10345/shard-iclb2/igt@kms_psr@psr2_dpms.html
   [142]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/shard-iclb3/igt@kms_psr@psr2_dpms.html

  * igt@kms_psr@psr2_sprite_plane_onoff:

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20618/index.html


_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


* Re: [Intel-gfx] [PATCH v4 2/4] drm/shmem-helper: Switch to vmf_insert_pfn
  2021-07-13 20:51 ` [Intel-gfx] [PATCH v4 2/4] drm/shmem-helper: Switch to vmf_insert_pfn Daniel Vetter
@ 2021-07-22 18:22   ` Thomas Zimmermann
  2021-07-23  7:32     ` Daniel Vetter
  2021-08-12 13:05     ` Daniel Vetter
  0 siblings, 2 replies; 26+ messages in thread
From: Thomas Zimmermann @ 2021-07-22 18:22 UTC (permalink / raw)
  To: Daniel Vetter, Intel Graphics Development
  Cc: David Airlie, Daniel Vetter, DRI Development



Hi,

I'm not knowledgeable enough to give this a full review. If you can 
just answer my questions, feel free to add an

Acked-by: Thomas Zimmermann <tzimmermann@suse.de>

to the patch. :)

Am 13.07.21 um 22:51 schrieb Daniel Vetter:
> We want to stop gup, which isn't the case if we use vmf_insert_page

What is gup?

> and VM_MIXEDMAP, because that does not set pte_special.
> 
> v2: With this shmem gem helpers now definitely need CONFIG_MMU (0day)
> 
> v3: add more depends on MMU. For USB drivers this is a bit awkward,
> but really it's correct: To be able to provide a contig mapping of
> buffers to userspace on !MMU platforms we'd need to use the cma
> helpers for these drivers on those platforms. As-is this won't work.
> 
> Also not exactly sure why vm_insert_page doesn't go boom, because that
> definitely won't fly in practice since the pages are non-contig to
> begin with.
> 
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Cc: Maxime Ripard <mripard@kernel.org>
> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> Cc: David Airlie <airlied@linux.ie>
> Cc: Daniel Vetter <daniel@ffwll.ch>
> ---
>   drivers/gpu/drm/Kconfig                | 2 +-
>   drivers/gpu/drm/drm_gem_shmem_helper.c | 4 ++--
>   drivers/gpu/drm/gud/Kconfig            | 2 +-
>   drivers/gpu/drm/tiny/Kconfig           | 4 ++--
>   drivers/gpu/drm/udl/Kconfig            | 1 +
>   5 files changed, 7 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
> index 0d372354c2d0..314eefa39892 100644
> --- a/drivers/gpu/drm/Kconfig
> +++ b/drivers/gpu/drm/Kconfig
> @@ -211,7 +211,7 @@ config DRM_KMS_CMA_HELPER
>   
>   config DRM_GEM_SHMEM_HELPER
>   	bool
> -	depends on DRM
> +	depends on DRM && MMU
>   	help
>   	  Choose this if you need the GEM shmem helper functions
>   
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index d5e6d4568f99..296ab1b7c07f 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -542,7 +542,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
>   	} else {
>   		page = shmem->pages[page_offset];
>   
> -		ret = vmf_insert_page(vma, vmf->address, page);
> +		ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
>   	}
>   
>   	mutex_unlock(&shmem->pages_lock);
> @@ -612,7 +612,7 @@ int drm_gem_shmem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
>   		return ret;
>   	}
>   
> -	vma->vm_flags |= VM_MIXEDMAP | VM_DONTEXPAND;
> +	vma->vm_flags |= VM_PFNMAP | VM_DONTEXPAND;
>   	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
>   	if (shmem->map_wc)
>   		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
> diff --git a/drivers/gpu/drm/gud/Kconfig b/drivers/gpu/drm/gud/Kconfig
> index 1c8601bf4d91..9c1e61f9eec3 100644
> --- a/drivers/gpu/drm/gud/Kconfig
> +++ b/drivers/gpu/drm/gud/Kconfig
> @@ -2,7 +2,7 @@
>   
>   config DRM_GUD
>   	tristate "GUD USB Display"
> -	depends on DRM && USB
> +	depends on DRM && USB && MMU
>   	select LZ4_COMPRESS
>   	select DRM_KMS_HELPER
>   	select DRM_GEM_SHMEM_HELPER

I'm a kconfig noob, so this is a question rather than a review comment:

If DRM_GEM_SHMEM_HELPER already depends on MMU, will this select fail on 
non-MMU platforms? Why does the driver also depend on MMU? Simply to 
make the item disappear in menuconfig?
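
For reference, Documentation/kbuild/kconfig-language.rst notes that 
`select` forces a symbol on without evaluating that symbol's own 
`depends on` line, so each selecting driver has to repeat the MMU 
dependency itself. A minimal sketch of the resulting pattern, reusing 
the symbols from the patch above:

```kconfig
# 'select' turns DRM_GEM_SHMEM_HELPER on without checking its own
# 'depends on DRM && MMU' line, so the selecting driver must repeat
# the MMU dependency or risk an unmet-dependency configuration.

config DRM_GEM_SHMEM_HELPER
	bool
	depends on DRM && MMU

config DRM_GUD
	tristate "GUD USB Display"
	depends on DRM && USB && MMU   # MMU repeated here on purpose
	select DRM_GEM_SHMEM_HELPER    # select ignores the helper's depends
```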

Best regards
Thomas

> diff --git a/drivers/gpu/drm/tiny/Kconfig b/drivers/gpu/drm/tiny/Kconfig
> index 5593128eeff9..c11fb5be7d09 100644
> --- a/drivers/gpu/drm/tiny/Kconfig
> +++ b/drivers/gpu/drm/tiny/Kconfig
> @@ -44,7 +44,7 @@ config DRM_CIRRUS_QEMU
>   
>   config DRM_GM12U320
>   	tristate "GM12U320 driver for USB projectors"
> -	depends on DRM && USB
> +	depends on DRM && USB && MMU
>   	select DRM_KMS_HELPER
>   	select DRM_GEM_SHMEM_HELPER
>   	help
> @@ -53,7 +53,7 @@ config DRM_GM12U320
>   
>   config DRM_SIMPLEDRM
>   	tristate "Simple framebuffer driver"
> -	depends on DRM
> +	depends on DRM && MMU
>   	select DRM_GEM_SHMEM_HELPER
>   	select DRM_KMS_HELPER
>   	help
> diff --git a/drivers/gpu/drm/udl/Kconfig b/drivers/gpu/drm/udl/Kconfig
> index 1f497d8f1ae5..c744175c6992 100644
> --- a/drivers/gpu/drm/udl/Kconfig
> +++ b/drivers/gpu/drm/udl/Kconfig
> @@ -4,6 +4,7 @@ config DRM_UDL
>   	depends on DRM
>   	depends on USB
>   	depends on USB_ARCH_HAS_HCD
> +	depends on MMU
>   	select DRM_GEM_SHMEM_HELPER
>   	select DRM_KMS_HELPER
>   	help
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer




* Re: [Intel-gfx] [PATCH v4 3/4] drm/shmem-helpers: Allocate wc pages on x86
  2021-07-13 20:51 ` [Intel-gfx] [PATCH v4 3/4] drm/shmem-helpers: Allocate wc pages on x86 Daniel Vetter
  2021-07-14 11:54   ` Christian König
@ 2021-07-22 18:40   ` Thomas Zimmermann
  2021-07-23  7:36     ` Daniel Vetter
  1 sibling, 1 reply; 26+ messages in thread
From: Thomas Zimmermann @ 2021-07-22 18:40 UTC (permalink / raw)
  To: Daniel Vetter, Intel Graphics Development
  Cc: Thomas Hellström, David Airlie, Maxime Ripard,
	DRI Development, Daniel Vetter, Christian König



Hi

Am 13.07.21 um 22:51 schrieb Daniel Vetter:
> intel-gfx-ci realized that something is not quite coherent anymore on
> some platforms for our i915+vgem tests, when I tried to switch vgem
> over to shmem helpers.
> 
> After lots of head-scratching I realized that I've removed calls to
> drm_clflush. And we need those. To make this a bit cleaner use the
> same page allocation tooling as ttm, which does internally clflush
> (and more, as needed on any platform instead of just the intel x86
> cpus i915 can be combined with).

Vgem would therefore not work correctly on non-X86 platforms?

> 
> Unfortunately this doesn't exist on arm, or as a generic feature. For
> that I think only the dma-api can get at wc memory reliably, so maybe
> we'd need some kind of GFP_WC flag to do this properly.
> 
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> Cc: Christian König <christian.koenig@amd.com>
> Cc: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Cc: Maxime Ripard <mripard@kernel.org>
> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> Cc: David Airlie <airlied@linux.ie>
> Cc: Daniel Vetter <daniel@ffwll.ch>
> ---
>   drivers/gpu/drm/drm_gem_shmem_helper.c | 14 ++++++++++++++
>   1 file changed, 14 insertions(+)
> 
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index 296ab1b7c07f..657d2490aaa5 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -10,6 +10,10 @@
>   #include <linux/slab.h>
>   #include <linux/vmalloc.h>
>   
> +#ifdef CONFIG_X86
> +#include <asm/set_memory.h>
> +#endif
> +
>   #include <drm/drm.h>
>   #include <drm/drm_device.h>
>   #include <drm/drm_drv.h>
> @@ -162,6 +166,11 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
>   		return PTR_ERR(pages);
>   	}
>   
> +#ifdef CONFIG_X86
> +	if (shmem->map_wc)
> +		set_pages_array_wc(pages, obj->size >> PAGE_SHIFT);
> +#endif

I cannot comment much on the technical details of the caching of various 
architectures. If this patch goes in, there should be a longer comment 
that reflects the discussion in this thread. It's apparently a workaround.

I think the call itself should be hidden behind a DRM API, which depends 
on CONFIG_X86. Something simple like

#ifdef CONFIG_X86
static inline void drm_set_pages_array_wc(struct page **pages, int numpages)
{
	set_pages_array_wc(pages, numpages);
}
#else
static inline void drm_set_pages_array_wc(struct page **pages, int numpages)
{
}
#endif

Maybe in drm_cache.h?

Best regards
Thomas

> +
>   	shmem->pages = pages;
>   
>   	return 0;
> @@ -203,6 +212,11 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
>   	if (--shmem->pages_use_count > 0)
>   		return;
>   
> +#ifdef CONFIG_X86
> +	if (shmem->map_wc)
> +		set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
> +#endif
> +
>   	drm_gem_put_pages(obj, shmem->pages,
>   			  shmem->pages_mark_dirty_on_put,
>   			  shmem->pages_mark_accessed_on_put);
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Intel-gfx] [PATCH v4 4/4] drm/vgem: use shmem helpers
  2021-07-13 20:51 ` [Intel-gfx] [PATCH v4 4/4] drm/vgem: use shmem helpers Daniel Vetter
  2021-07-14 12:45   ` [Intel-gfx] [PATCH] " Daniel Vetter
@ 2021-07-22 18:50   ` Thomas Zimmermann
  2021-07-23  7:38     ` Daniel Vetter
  1 sibling, 1 reply; 26+ messages in thread
From: Thomas Zimmermann @ 2021-07-22 18:50 UTC (permalink / raw)
  To: Daniel Vetter, Intel Graphics Development
  Cc: DRI Development, Christian König, Melissa Wen, John Stultz,
	Daniel Vetter, Chris Wilson, Sumit Semwal



Hi

Am 13.07.21 um 22:51 schrieb Daniel Vetter:
> Aside from deleting lots of code the real motivation here is to switch
> the mmap over to VM_PFNMAP, to be more consistent with what real gpu
> drivers do. They're all VM_PFNMAP, which means get_user_pages doesn't
> work, and even if you try and there's a struct page behind that,
> touching it and mucking around with its refcount can upset drivers
> real bad.
> 
> v2: Review from Thomas:
> - sort #include
> - drop more dead code that I didn't spot somehow
> 
> v3: select DRM_GEM_SHMEM_HELPER to make it build (intel-gfx-ci)
> 
> v4: I got tricked by 0cf2ef46c6c0 ("drm/shmem-helper: Use cached
> mappings by default"), and we need WC in vgem because vgem doesn't
> have explicit begin/end cpu access ioctls.
> 
> Also add a comment why exactly vgem has to use wc.
> 
> v5: Don't set obj->base.funcs, it will default to drm_gem_shmem_funcs
> (Thomas)
> 
> v6: vgem also needs an MMU for remapping
> 
> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> Acked-by: Thomas Zimmermann <tzimmermann@suse.de>
> Cc: John Stultz <john.stultz@linaro.org>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: "Christian König" <christian.koenig@amd.com>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> Cc: Melissa Wen <melissa.srw@gmail.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>   drivers/gpu/drm/Kconfig         |   5 +-
>   drivers/gpu/drm/vgem/vgem_drv.c | 315 ++------------------------------
>   2 files changed, 15 insertions(+), 305 deletions(-)
> 
> diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
> index 314eefa39892..28f7d2006e8b 100644
> --- a/drivers/gpu/drm/Kconfig
> +++ b/drivers/gpu/drm/Kconfig
> @@ -272,7 +272,8 @@ source "drivers/gpu/drm/kmb/Kconfig"
>   
>   config DRM_VGEM
>   	tristate "Virtual GEM provider"
> -	depends on DRM
> +	depends on DRM && MMU
> +	select DRM_GEM_SHMEM_HELPER
>   	help
>   	  Choose this option to get a virtual graphics memory manager,
>   	  as used by Mesa's software renderer for enhanced performance.
> @@ -280,7 +281,7 @@ config DRM_VGEM
>   
>   config DRM_VKMS
>   	tristate "Virtual KMS (EXPERIMENTAL)"
> -	depends on DRM
> +	depends on DRM && MMU
>   	select DRM_KMS_HELPER
>   	select DRM_GEM_SHMEM_HELPER
>   	select CRC32
> diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
> index bf38a7e319d1..ba410ba6b7f7 100644
> --- a/drivers/gpu/drm/vgem/vgem_drv.c
> +++ b/drivers/gpu/drm/vgem/vgem_drv.c
> @@ -38,6 +38,7 @@
>   
>   #include <drm/drm_drv.h>
>   #include <drm/drm_file.h>
> +#include <drm/drm_gem_shmem_helper.h>
>   #include <drm/drm_ioctl.h>
>   #include <drm/drm_managed.h>
>   #include <drm/drm_prime.h>
> @@ -50,87 +51,11 @@
>   #define DRIVER_MAJOR	1
>   #define DRIVER_MINOR	0
>   
> -static const struct drm_gem_object_funcs vgem_gem_object_funcs;
> -
>   static struct vgem_device {
>   	struct drm_device drm;
>   	struct platform_device *platform;
>   } *vgem_device;
>   
> -static void vgem_gem_free_object(struct drm_gem_object *obj)
> -{
> -	struct drm_vgem_gem_object *vgem_obj = to_vgem_bo(obj);
> -
> -	kvfree(vgem_obj->pages);
> -	mutex_destroy(&vgem_obj->pages_lock);
> -
> -	if (obj->import_attach)
> -		drm_prime_gem_destroy(obj, vgem_obj->table);
> -
> -	drm_gem_object_release(obj);
> -	kfree(vgem_obj);
> -}
> -
> -static vm_fault_t vgem_gem_fault(struct vm_fault *vmf)
> -{
> -	struct vm_area_struct *vma = vmf->vma;
> -	struct drm_vgem_gem_object *obj = vma->vm_private_data;
> -	/* We don't use vmf->pgoff since that has the fake offset */
> -	unsigned long vaddr = vmf->address;
> -	vm_fault_t ret = VM_FAULT_SIGBUS;
> -	loff_t num_pages;
> -	pgoff_t page_offset;
> -	page_offset = (vaddr - vma->vm_start) >> PAGE_SHIFT;
> -
> -	num_pages = DIV_ROUND_UP(obj->base.size, PAGE_SIZE);
> -
> -	if (page_offset >= num_pages)
> -		return VM_FAULT_SIGBUS;
> -
> -	mutex_lock(&obj->pages_lock);
> -	if (obj->pages) {
> -		get_page(obj->pages[page_offset]);
> -		vmf->page = obj->pages[page_offset];
> -		ret = 0;
> -	}
> -	mutex_unlock(&obj->pages_lock);
> -	if (ret) {
> -		struct page *page;
> -
> -		page = shmem_read_mapping_page(
> -					file_inode(obj->base.filp)->i_mapping,
> -					page_offset);
> -		if (!IS_ERR(page)) {
> -			vmf->page = page;
> -			ret = 0;
> -		} else switch (PTR_ERR(page)) {
> -			case -ENOSPC:
> -			case -ENOMEM:
> -				ret = VM_FAULT_OOM;
> -				break;
> -			case -EBUSY:
> -				ret = VM_FAULT_RETRY;
> -				break;
> -			case -EFAULT:
> -			case -EINVAL:
> -				ret = VM_FAULT_SIGBUS;
> -				break;
> -			default:
> -				WARN_ON(PTR_ERR(page));
> -				ret = VM_FAULT_SIGBUS;
> -				break;
> -		}
> -
> -	}
> -	return ret;
> -}
> -
> -static const struct vm_operations_struct vgem_gem_vm_ops = {
> -	.fault = vgem_gem_fault,
> -	.open = drm_gem_vm_open,
> -	.close = drm_gem_vm_close,
> -};
> -
>   static int vgem_open(struct drm_device *dev, struct drm_file *file)
>   {
>   	struct vgem_file *vfile;
> @@ -159,81 +84,6 @@ static void vgem_postclose(struct drm_device *dev, struct drm_file *file)
>   	kfree(vfile);
>   }
>   
> -static struct drm_vgem_gem_object *__vgem_gem_create(struct drm_device *dev,
> -						unsigned long size)
> -{
> -	struct drm_vgem_gem_object *obj;
> -	int ret;
> -
> -	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
> -	if (!obj)
> -		return ERR_PTR(-ENOMEM);
> -
> -	obj->base.funcs = &vgem_gem_object_funcs;
> -
> -	ret = drm_gem_object_init(dev, &obj->base, roundup(size, PAGE_SIZE));
> -	if (ret) {
> -		kfree(obj);
> -		return ERR_PTR(ret);
> -	}
> -
> -	mutex_init(&obj->pages_lock);
> -
> -	return obj;
> -}
> -
> -static void __vgem_gem_destroy(struct drm_vgem_gem_object *obj)
> -{
> -	drm_gem_object_release(&obj->base);
> -	kfree(obj);
> -}
> -
> -static struct drm_gem_object *vgem_gem_create(struct drm_device *dev,
> -					      struct drm_file *file,
> -					      unsigned int *handle,
> -					      unsigned long size)
> -{
> -	struct drm_vgem_gem_object *obj;
> -	int ret;
> -
> -	obj = __vgem_gem_create(dev, size);
> -	if (IS_ERR(obj))
> -		return ERR_CAST(obj);
> -
> -	ret = drm_gem_handle_create(file, &obj->base, handle);
> -	if (ret) {
> -		drm_gem_object_put(&obj->base);
> -		return ERR_PTR(ret);
> -	}
> -
> -	return &obj->base;
> -}
> -
> -static int vgem_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
> -				struct drm_mode_create_dumb *args)
> -{
> -	struct drm_gem_object *gem_object;
> -	u64 pitch, size;
> -
> -	pitch = args->width * DIV_ROUND_UP(args->bpp, 8);
> -	size = args->height * pitch;
> -	if (size == 0)
> -		return -EINVAL;
> -
> -	gem_object = vgem_gem_create(dev, file, &args->handle, size);
> -	if (IS_ERR(gem_object))
> -		return PTR_ERR(gem_object);
> -
> -	args->size = gem_object->size;
> -	args->pitch = pitch;
> -
> -	drm_gem_object_put(gem_object);
> -
> -	DRM_DEBUG("Created object of size %llu\n", args->size);
> -
> -	return 0;
> -}
> -
>   static struct drm_ioctl_desc vgem_ioctls[] = {
>   	DRM_IOCTL_DEF_DRV(VGEM_FENCE_ATTACH, vgem_fence_attach_ioctl, DRM_RENDER_ALLOW),
>   	DRM_IOCTL_DEF_DRV(VGEM_FENCE_SIGNAL, vgem_fence_signal_ioctl, DRM_RENDER_ALLOW),
> @@ -266,159 +116,23 @@ static const struct file_operations vgem_driver_fops = {
>   	.release	= drm_release,
>   };
>   
> -static struct page **vgem_pin_pages(struct drm_vgem_gem_object *bo)
> -{
> -	mutex_lock(&bo->pages_lock);
> -	if (bo->pages_pin_count++ == 0) {
> -		struct page **pages;
> -
> -		pages = drm_gem_get_pages(&bo->base);
> -		if (IS_ERR(pages)) {
> -			bo->pages_pin_count--;
> -			mutex_unlock(&bo->pages_lock);
> -			return pages;
> -		}
> -
> -		bo->pages = pages;
> -	}
> -	mutex_unlock(&bo->pages_lock);
> -
> -	return bo->pages;
> -}
> -
> -static void vgem_unpin_pages(struct drm_vgem_gem_object *bo)
> +static struct drm_gem_object *vgem_gem_create_object(struct drm_device *dev, size_t size)
>   {
> -	mutex_lock(&bo->pages_lock);
> -	if (--bo->pages_pin_count == 0) {
> -		drm_gem_put_pages(&bo->base, bo->pages, true, true);
> -		bo->pages = NULL;
> -	}
> -	mutex_unlock(&bo->pages_lock);
> -}
> +	struct drm_gem_shmem_object *obj;
>   
> -static int vgem_prime_pin(struct drm_gem_object *obj)
> -{
> -	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
> -	long n_pages = obj->size >> PAGE_SHIFT;
> -	struct page **pages;
> -
> -	pages = vgem_pin_pages(bo);
> -	if (IS_ERR(pages))
> -		return PTR_ERR(pages);
> +	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
> +	if (!obj)
> +		return NULL;
>   
> -	/* Flush the object from the CPU cache so that importers can rely
> -	 * on coherent indirect access via the exported dma-address.
> +	/*
> +	 * vgem doesn't have any begin/end cpu access ioctls, therefore must use
> +	 * coherent memory or dma-buf sharing just wont work.
>   	 */
> -	drm_clflush_pages(pages, n_pages);

Instead of shoehorning GEM SHMEM to get caching right (patch 2), have you 
considered setting your own GEM funcs object for vgem? All function 
pointers would point to SHMEM functions, except for pin, which would be
drm_gem_shmem_pin() + drm_clflush_pages(). If this works, I think it 
would be much preferable to the current patch 2. You can override the 
default GEM functions from within vgem_gem_create_object().
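
For illustration, a rough sketch of that idea (hypothetical and
untested; vgem_gem_pin is a made-up name, and the exact shmem helper
signatures may differ between kernel versions):

```c
#include <drm/drm_cache.h>
#include <drm/drm_gem_shmem_helper.h>

/* Hypothetical sketch, untested: pin via the shmem helper, then flush
 * the CPU caches so dma-buf importers see coherent data. */
static int vgem_gem_pin(struct drm_gem_object *obj)
{
	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
	int ret;

	ret = drm_gem_shmem_pin(obj);
	if (ret)
		return ret;

	drm_clflush_pages(shmem->pages, obj->size >> PAGE_SHIFT);

	return 0;
}
```

vgem_gem_create_object() would then point obj->funcs at a copy of the
default shmem funcs table with only .pin replaced by vgem_gem_pin().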

Best regards
Thomas


> -
> -	return 0;
> -}
> -
> -static void vgem_prime_unpin(struct drm_gem_object *obj)
> -{
> -	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
> -
> -	vgem_unpin_pages(bo);
> -}
> -
> -static struct sg_table *vgem_prime_get_sg_table(struct drm_gem_object *obj)
> -{
> -	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
> -
> -	return drm_prime_pages_to_sg(obj->dev, bo->pages, bo->base.size >> PAGE_SHIFT);
> -}
> -
> -static struct drm_gem_object* vgem_prime_import(struct drm_device *dev,
> -						struct dma_buf *dma_buf)
> -{
> -	struct vgem_device *vgem = container_of(dev, typeof(*vgem), drm);
> -
> -	return drm_gem_prime_import_dev(dev, dma_buf, &vgem->platform->dev);
> -}
> -
> -static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
> -			struct dma_buf_attachment *attach, struct sg_table *sg)
> -{
> -	struct drm_vgem_gem_object *obj;
> -	int npages;
> -
> -	obj = __vgem_gem_create(dev, attach->dmabuf->size);
> -	if (IS_ERR(obj))
> -		return ERR_CAST(obj);
> -
> -	npages = PAGE_ALIGN(attach->dmabuf->size) / PAGE_SIZE;
> -
> -	obj->table = sg;
> -	obj->pages = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
> -	if (!obj->pages) {
> -		__vgem_gem_destroy(obj);
> -		return ERR_PTR(-ENOMEM);
> -	}
> +	obj->map_wc = true;
>   
> -	obj->pages_pin_count++; /* perma-pinned */
> -	drm_prime_sg_to_page_array(obj->table, obj->pages, npages);
>   	return &obj->base;
>   }
>   
> -static int vgem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> -{
> -	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
> -	long n_pages = obj->size >> PAGE_SHIFT;
> -	struct page **pages;
> -	void *vaddr;
> -
> -	pages = vgem_pin_pages(bo);
> -	if (IS_ERR(pages))
> -		return PTR_ERR(pages);
> -
> -	vaddr = vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
> -	if (!vaddr)
> -		return -ENOMEM;
> -	dma_buf_map_set_vaddr(map, vaddr);
> -
> -	return 0;
> -}
> -
> -static void vgem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> -{
> -	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
> -
> -	vunmap(map->vaddr);
> -	vgem_unpin_pages(bo);
> -}
> -
> -static int vgem_prime_mmap(struct drm_gem_object *obj,
> -			   struct vm_area_struct *vma)
> -{
> -	int ret;
> -
> -	if (obj->size < vma->vm_end - vma->vm_start)
> -		return -EINVAL;
> -
> -	if (!obj->filp)
> -		return -ENODEV;
> -
> -	ret = call_mmap(obj->filp, vma);
> -	if (ret)
> -		return ret;
> -
> -	vma_set_file(vma, obj->filp);
> -	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
> -	vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
> -
> -	return 0;
> -}
> -
> -static const struct drm_gem_object_funcs vgem_gem_object_funcs = {
> -	.free = vgem_gem_free_object,
> -	.pin = vgem_prime_pin,
> -	.unpin = vgem_prime_unpin,
> -	.get_sg_table = vgem_prime_get_sg_table,
> -	.vmap = vgem_prime_vmap,
> -	.vunmap = vgem_prime_vunmap,
> -	.vm_ops = &vgem_gem_vm_ops,
> -};
> -
>   static const struct drm_driver vgem_driver = {
>   	.driver_features		= DRIVER_GEM | DRIVER_RENDER,
>   	.open				= vgem_open,
> @@ -427,13 +141,8 @@ static const struct drm_driver vgem_driver = {
>   	.num_ioctls 			= ARRAY_SIZE(vgem_ioctls),
>   	.fops				= &vgem_driver_fops,
>   
> -	.dumb_create			= vgem_gem_dumb_create,
> -
> -	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
> -	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
> -	.gem_prime_import = vgem_prime_import,
> -	.gem_prime_import_sg_table = vgem_prime_import_sg_table,
> -	.gem_prime_mmap = vgem_prime_mmap,
> +	DRM_GEM_SHMEM_DRIVER_OPS,
> +	.gem_create_object		= vgem_gem_create_object,
>   
>   	.name	= DRIVER_NAME,
>   	.desc	= DRIVER_DESC,
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Intel-gfx] [PATCH v4 2/4] drm/shmem-helper: Switch to vmf_insert_pfn
  2021-07-22 18:22   ` Thomas Zimmermann
@ 2021-07-23  7:32     ` Daniel Vetter
  2021-08-12 13:05     ` Daniel Vetter
  1 sibling, 0 replies; 26+ messages in thread
From: Daniel Vetter @ 2021-07-23  7:32 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: David Airlie, Daniel Vetter, Intel Graphics Development,
	DRI Development, Daniel Vetter

On Thu, Jul 22, 2021 at 08:22:43PM +0200, Thomas Zimmermann wrote:
> Hi,
> 
> I'm not knowledgeable enough to give this a full review. If you can just
> answer my questions, feel free to add an
> 
> Acked-by: Thomas Zimmermann <tzimmermann@suse.de>
> 
> to the patch. :)
> 
> Am 13.07.21 um 22:51 schrieb Daniel Vetter:
> > We want to stop gup, which isn't the case if we use vmf_insert_page
> 
> What is gup?

get_user_pages. It pins memory wherever it is, which badly breaks at least
ttm and could also cause trouble with cma allocations. In both cases
because we can't move/reuse these pages anymore.

Now get_user_pages fails when the memory isn't considered "normal", like
with VM_PFNMAP and using vm_insert_pfn. For consistency across all dma-buf
I'm trying (together with Christian König) to roll this out everywhere,
for fewer surprises.

E.g. for 5.14 iirc we merged a patch to do the same for ttm, where it
closes an actual bug (ttm gets really badly confused when there are suddenly
pinned pages where it thought it could move them).

cma allocations already use VM_PFNMAP (because that's what dma_mmap is
using underneath), as is anything that's using remap_pfn_range. Worst case
we have to revert this patch for shmem helpers if it breaks something, but
I hope that's not the case. On the ttm side we've also had some fallout
that we needed to paper over with clever tricks.

I'll add the above explanation to the commit message.
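
For reference, a minimal sketch (hypothetical, not from the patch; the
function names are made up for illustration) of the two fault-handler
styles this distinction is about:

```c
#include <linux/mm.h>

/* VM_MIXEDMAP style: the pte references the struct page directly, so
 * get_user_pages() can look it up and pin it behind the driver's back. */
static vm_fault_t fault_mixedmap(struct vm_fault *vmf, struct page *page)
{
	return vmf_insert_page(vmf->vma, vmf->address, page);
}

/* VM_PFNMAP style: vmf_insert_pfn() creates a special pte, so
 * get_user_pages() refuses it and the pages stay movable. */
static vm_fault_t fault_pfnmap(struct vm_fault *vmf, struct page *page)
{
	return vmf_insert_pfn(vmf->vma, vmf->address, page_to_pfn(page));
}
```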

> 
> > and VM_MIXEDMAP, because that does not set pte_special.
> > 
> > v2: With this shmem gem helpers now definitely need CONFIG_MMU (0day)
> > 
> > v3: add more depends on MMU. For usb drivers this is a bit awkward,
> > but really it's correct: To be able to provide a contig mapping of
> > buffers to userspace on !MMU platforms we'd need to use the cma
> > helpers for these drivers on those platforms. As-is this wont work.
> > 
> > Also not exactly sure why vm_insert_page doesn't go boom, because that
> > definitely won't fly in practice since the pages are non-contig to
> > begin with.
> > 
> > Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> > Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> > Cc: Maxime Ripard <mripard@kernel.org>
> > Cc: Thomas Zimmermann <tzimmermann@suse.de>
> > Cc: David Airlie <airlied@linux.ie>
> > Cc: Daniel Vetter <daniel@ffwll.ch>
> > ---
> >   drivers/gpu/drm/Kconfig                | 2 +-
> >   drivers/gpu/drm/drm_gem_shmem_helper.c | 4 ++--
> >   drivers/gpu/drm/gud/Kconfig            | 2 +-
> >   drivers/gpu/drm/tiny/Kconfig           | 4 ++--
> >   drivers/gpu/drm/udl/Kconfig            | 1 +
> >   5 files changed, 7 insertions(+), 6 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
> > index 0d372354c2d0..314eefa39892 100644
> > --- a/drivers/gpu/drm/Kconfig
> > +++ b/drivers/gpu/drm/Kconfig
> > @@ -211,7 +211,7 @@ config DRM_KMS_CMA_HELPER
> >   config DRM_GEM_SHMEM_HELPER
> >   	bool
> > -	depends on DRM
> > +	depends on DRM && MMU
> >   	help
> >   	  Choose this if you need the GEM shmem helper functions
> > diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > index d5e6d4568f99..296ab1b7c07f 100644
> > --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> > +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > @@ -542,7 +542,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
> >   	} else {
> >   		page = shmem->pages[page_offset];
> > -		ret = vmf_insert_page(vma, vmf->address, page);
> > +		ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
> >   	}
> >   	mutex_unlock(&shmem->pages_lock);
> > @@ -612,7 +612,7 @@ int drm_gem_shmem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> >   		return ret;
> >   	}
> > -	vma->vm_flags |= VM_MIXEDMAP | VM_DONTEXPAND;
> > +	vma->vm_flags |= VM_PFNMAP | VM_DONTEXPAND;
> >   	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
> >   	if (shmem->map_wc)
> >   		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
> > diff --git a/drivers/gpu/drm/gud/Kconfig b/drivers/gpu/drm/gud/Kconfig
> > index 1c8601bf4d91..9c1e61f9eec3 100644
> > --- a/drivers/gpu/drm/gud/Kconfig
> > +++ b/drivers/gpu/drm/gud/Kconfig
> > @@ -2,7 +2,7 @@
> >   config DRM_GUD
> >   	tristate "GUD USB Display"
> > -	depends on DRM && USB
> > +	depends on DRM && USB && MMU
> >   	select LZ4_COMPRESS
> >   	select DRM_KMS_HELPER
> >   	select DRM_GEM_SHMEM_HELPER
> 
> I'm a kconfig noob, so this is rather a question than a review comment:
> 
> 
> 
> If DRM_GEM_SHMEM_HELPER already depends on MMU, this select will fail on
> non-MMU platforms? Why does the driver also depend on MMU? Simply to make
> the item disappear in menuconfig?
> 
> Best regards
> Thomas
> 
> > diff --git a/drivers/gpu/drm/tiny/Kconfig b/drivers/gpu/drm/tiny/Kconfig
> > index 5593128eeff9..c11fb5be7d09 100644
> > --- a/drivers/gpu/drm/tiny/Kconfig
> > +++ b/drivers/gpu/drm/tiny/Kconfig
> > @@ -44,7 +44,7 @@ config DRM_CIRRUS_QEMU
> >   config DRM_GM12U320
> >   	tristate "GM12U320 driver for USB projectors"
> > -	depends on DRM && USB
> > +	depends on DRM && USB && MMU
> >   	select DRM_KMS_HELPER
> >   	select DRM_GEM_SHMEM_HELPER
> >   	help
> > @@ -53,7 +53,7 @@ config DRM_GM12U320
> >   config DRM_SIMPLEDRM
> >   	tristate "Simple framebuffer driver"
> > -	depends on DRM
> > +	depends on DRM && MMU
> >   	select DRM_GEM_SHMEM_HELPER
> >   	select DRM_KMS_HELPER
> >   	help
> > diff --git a/drivers/gpu/drm/udl/Kconfig b/drivers/gpu/drm/udl/Kconfig
> > index 1f497d8f1ae5..c744175c6992 100644
> > --- a/drivers/gpu/drm/udl/Kconfig
> > +++ b/drivers/gpu/drm/udl/Kconfig
> > @@ -4,6 +4,7 @@ config DRM_UDL
> >   	depends on DRM
> >   	depends on USB
> >   	depends on USB_ARCH_HAS_HCD
> > +	depends on MMU
> >   	select DRM_GEM_SHMEM_HELPER
> >   	select DRM_KMS_HELPER
> >   	help
> > 
> 
> -- 
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Maxfeldstr. 5, 90409 Nürnberg, Germany
> (HRB 36809, AG Nürnberg)
> Geschäftsführer: Felix Imendörffer
> 




-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Intel-gfx] [PATCH v4 3/4] drm/shmem-helpers: Allocate wc pages on x86
  2021-07-22 18:40   ` Thomas Zimmermann
@ 2021-07-23  7:36     ` Daniel Vetter
  2021-07-23  8:02       ` Christian König
  2021-08-05 18:40       ` Thomas Zimmermann
  0 siblings, 2 replies; 26+ messages in thread
From: Daniel Vetter @ 2021-07-23  7:36 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: Thomas Hellström, David Airlie, Daniel Vetter,
	Intel Graphics Development, DRI Development, Maxime Ripard,
	Daniel Vetter, Christian König

On Thu, Jul 22, 2021 at 08:40:56PM +0200, Thomas Zimmermann wrote:
> Hi
> 
> Am 13.07.21 um 22:51 schrieb Daniel Vetter:
> > intel-gfx-ci realized that something is not quite coherent anymore on
> > some platforms for our i915+vgem tests, when I tried to switch vgem
> > over to shmem helpers.
> > 
> > After lots of head-scratching I realized that I've removed calls to
> > drm_clflush. And we need those. To make this a bit cleaner use the
> > same page allocation tooling as ttm, which does internally clflush
> > (and more, as needed on any platform instead of just the intel x86
> > cpus i915 can be combined with).
> 
> Vgem would therefore not work correctly on non-X86 platforms?

Anything using shmem helpers doesn't work correctly on non-x86 platforms.
At least if they use wc.

vgem with intel-gfx-ci is simply running some very nasty tests that catch
the bugs.

I'm kinda hoping that someone from the armsoc world would care enough to
fix this there. But it's a tricky issue.

> > 
> > Unfortunately this doesn't exist on arm, or as a generic feature. For
> > that I think only the dma-api can get at wc memory reliably, so maybe
> > we'd need some kind of GFP_WC flag to do this properly.
> > 
> > Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> > Cc: Christian König <christian.koenig@amd.com>
> > Cc: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
> > Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> > Cc: Maxime Ripard <mripard@kernel.org>
> > Cc: Thomas Zimmermann <tzimmermann@suse.de>
> > Cc: David Airlie <airlied@linux.ie>
> > Cc: Daniel Vetter <daniel@ffwll.ch>
> > ---
> >   drivers/gpu/drm/drm_gem_shmem_helper.c | 14 ++++++++++++++
> >   1 file changed, 14 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > index 296ab1b7c07f..657d2490aaa5 100644
> > --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> > +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > @@ -10,6 +10,10 @@
> >   #include <linux/slab.h>
> >   #include <linux/vmalloc.h>
> > +#ifdef CONFIG_X86
> > +#include <asm/set_memory.h>
> > +#endif
> > +
> >   #include <drm/drm.h>
> >   #include <drm/drm_device.h>
> >   #include <drm/drm_drv.h>
> > @@ -162,6 +166,11 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
> >   		return PTR_ERR(pages);
> >   	}
> > +#ifdef CONFIG_X86
> > +	if (shmem->map_wc)
> > +		set_pages_array_wc(pages, obj->size >> PAGE_SHIFT);
> > +#endif
> 
> I cannot comment much on the technical details of the caching of various
> architectures. If this patch goes in, there should be a longer comment that
> reflects the discussion in this thread. It's apparently a workaround.
> 
> I think the call itself should be hidden behind a DRM API, which depends on
> CONFIG_X86. Something simple like
> 
> #ifdef CONFIG_X86
> static inline void drm_set_pages_array_wc(struct page **pages, int numpages)
> {
> 	set_pages_array_wc(pages, numpages);
> }
> #else
> static inline void drm_set_pages_array_wc(struct page **pages, int numpages)
> {
> }
> #endif
> 
> Maybe in drm_cache.h?

We do have a bunch of this in drm_cache.h already, and architecture
maintainers hate us for it.

The real fix is to get at the architecture-specific wc allocator, which is
currently not something that's exposed, but hidden within the dma api. I
think having this stick out like this is better than hiding it behind fake
generic code (like we do with drm_clflush, which de facto also only really
works on x86).

Also note that ttm has the exact same ifdef in its page allocator, but it
does fall back to using dma_alloc_coherent on other platforms.
-Daniel

> Best regards
> Thomas
> 
> > +
> >   	shmem->pages = pages;
> >   	return 0;
> > @@ -203,6 +212,11 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
> >   	if (--shmem->pages_use_count > 0)
> >   		return;
> > +#ifdef CONFIG_X86
> > +	if (shmem->map_wc)
> > +		set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
> > +#endif
> > +
> >   	drm_gem_put_pages(obj, shmem->pages,
> >   			  shmem->pages_mark_dirty_on_put,
> >   			  shmem->pages_mark_accessed_on_put);
> > 
> 
> -- 
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Maxfeldstr. 5, 90409 Nürnberg, Germany
> (HRB 36809, AG Nürnberg)
> Geschäftsführer: Felix Imendörffer
> 




-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Intel-gfx] [PATCH v4 4/4] drm/vgem: use shmem helpers
  2021-07-22 18:50   ` [Intel-gfx] [PATCH v4 4/4] " Thomas Zimmermann
@ 2021-07-23  7:38     ` Daniel Vetter
  0 siblings, 0 replies; 26+ messages in thread
From: Daniel Vetter @ 2021-07-23  7:38 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: Daniel Vetter, Intel Graphics Development, DRI Development,
	Christian König, Melissa Wen, John Stultz, Daniel Vetter,
	Chris Wilson, Sumit Semwal

On Thu, Jul 22, 2021 at 08:50:48PM +0200, Thomas Zimmermann wrote:
> Hi
> 
> Am 13.07.21 um 22:51 schrieb Daniel Vetter:
> > Aside from deleting lots of code the real motivation here is to switch
> > the mmap over to VM_PFNMAP, to be more consistent with what real gpu
> > drivers do. They're all VM_PFNMAP, which means get_user_pages doesn't
> > work, and even if you try and there's a struct page behind that,
> > touching it and mucking around with its refcount can upset drivers
> > real bad.
> > 
> > v2: Review from Thomas:
> > - sort #include
> > - drop more dead code that I didn't spot somehow
> > 
> > v3: select DRM_GEM_SHMEM_HELPER to make it build (intel-gfx-ci)
> > 
> > v4: I got tricked by 0cf2ef46c6c0 ("drm/shmem-helper: Use cached
> > mappings by default"), and we need WC in vgem because vgem doesn't
> > have explicit begin/end cpu access ioctls.
> > 
> > Also add a comment why exactly vgem has to use wc.
> > 
> > v5: Don't set obj->base.funcs, it will default to drm_gem_shmem_funcs
> > (Thomas)
> > 
> > v6: vgem also needs an MMU for remapping
> > 
> > Cc: Thomas Zimmermann <tzimmermann@suse.de>
> > Acked-by: Thomas Zimmermann <tzimmermann@suse.de>
> > Cc: John Stultz <john.stultz@linaro.org>
> > Cc: Sumit Semwal <sumit.semwal@linaro.org>
> > Cc: "Christian König" <christian.koenig@amd.com>
> > Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> > Cc: Melissa Wen <melissa.srw@gmail.com>
> > Cc: Chris Wilson <chris@chris-wilson.co.uk>
> > ---
> >   drivers/gpu/drm/Kconfig         |   5 +-
> >   drivers/gpu/drm/vgem/vgem_drv.c | 315 ++------------------------------
> >   2 files changed, 15 insertions(+), 305 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
> > index 314eefa39892..28f7d2006e8b 100644
> > --- a/drivers/gpu/drm/Kconfig
> > +++ b/drivers/gpu/drm/Kconfig
> > @@ -272,7 +272,8 @@ source "drivers/gpu/drm/kmb/Kconfig"
> >   config DRM_VGEM
> >   	tristate "Virtual GEM provider"
> > -	depends on DRM
> > +	depends on DRM && MMU
> > +	select DRM_GEM_SHMEM_HELPER
> >   	help
> >   	  Choose this option to get a virtual graphics memory manager,
> >   	  as used by Mesa's software renderer for enhanced performance.
> > @@ -280,7 +281,7 @@ config DRM_VGEM
> >   config DRM_VKMS
> >   	tristate "Virtual KMS (EXPERIMENTAL)"
> > -	depends on DRM
> > +	depends on DRM && MMU
> >   	select DRM_KMS_HELPER
> >   	select DRM_GEM_SHMEM_HELPER
> >   	select CRC32
> > diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
> > index bf38a7e319d1..ba410ba6b7f7 100644
> > --- a/drivers/gpu/drm/vgem/vgem_drv.c
> > +++ b/drivers/gpu/drm/vgem/vgem_drv.c
> > @@ -38,6 +38,7 @@
> >   #include <drm/drm_drv.h>
> >   #include <drm/drm_file.h>
> > +#include <drm/drm_gem_shmem_helper.h>
> >   #include <drm/drm_ioctl.h>
> >   #include <drm/drm_managed.h>
> >   #include <drm/drm_prime.h>
> > @@ -50,87 +51,11 @@
> >   #define DRIVER_MAJOR	1
> >   #define DRIVER_MINOR	0
> > -static const struct drm_gem_object_funcs vgem_gem_object_funcs;
> > -
> >   static struct vgem_device {
> >   	struct drm_device drm;
> >   	struct platform_device *platform;
> >   } *vgem_device;
> > -static void vgem_gem_free_object(struct drm_gem_object *obj)
> > -{
> > -	struct drm_vgem_gem_object *vgem_obj = to_vgem_bo(obj);
> > -
> > -	kvfree(vgem_obj->pages);
> > -	mutex_destroy(&vgem_obj->pages_lock);
> > -
> > -	if (obj->import_attach)
> > -		drm_prime_gem_destroy(obj, vgem_obj->table);
> > -
> > -	drm_gem_object_release(obj);
> > -	kfree(vgem_obj);
> > -}
> > -
> > -static vm_fault_t vgem_gem_fault(struct vm_fault *vmf)
> > -{
> > -	struct vm_area_struct *vma = vmf->vma;
> > -	struct drm_vgem_gem_object *obj = vma->vm_private_data;
> > -	/* We don't use vmf->pgoff since that has the fake offset */
> > -	unsigned long vaddr = vmf->address;
> > -	vm_fault_t ret = VM_FAULT_SIGBUS;
> > -	loff_t num_pages;
> > -	pgoff_t page_offset;
> > -	page_offset = (vaddr - vma->vm_start) >> PAGE_SHIFT;
> > -
> > -	num_pages = DIV_ROUND_UP(obj->base.size, PAGE_SIZE);
> > -
> > -	if (page_offset >= num_pages)
> > -		return VM_FAULT_SIGBUS;
> > -
> > -	mutex_lock(&obj->pages_lock);
> > -	if (obj->pages) {
> > -		get_page(obj->pages[page_offset]);
> > -		vmf->page = obj->pages[page_offset];
> > -		ret = 0;
> > -	}
> > -	mutex_unlock(&obj->pages_lock);
> > -	if (ret) {
> > -		struct page *page;
> > -
> > -		page = shmem_read_mapping_page(
> > -					file_inode(obj->base.filp)->i_mapping,
> > -					page_offset);
> > -		if (!IS_ERR(page)) {
> > -			vmf->page = page;
> > -			ret = 0;
> > -		} else switch (PTR_ERR(page)) {
> > -			case -ENOSPC:
> > -			case -ENOMEM:
> > -				ret = VM_FAULT_OOM;
> > -				break;
> > -			case -EBUSY:
> > -				ret = VM_FAULT_RETRY;
> > -				break;
> > -			case -EFAULT:
> > -			case -EINVAL:
> > -				ret = VM_FAULT_SIGBUS;
> > -				break;
> > -			default:
> > -				WARN_ON(PTR_ERR(page));
> > -				ret = VM_FAULT_SIGBUS;
> > -				break;
> > -		}
> > -
> > -	}
> > -	return ret;
> > -}
> > -
> > -static const struct vm_operations_struct vgem_gem_vm_ops = {
> > -	.fault = vgem_gem_fault,
> > -	.open = drm_gem_vm_open,
> > -	.close = drm_gem_vm_close,
> > -};
> > -
> >   static int vgem_open(struct drm_device *dev, struct drm_file *file)
> >   {
> >   	struct vgem_file *vfile;
> > @@ -159,81 +84,6 @@ static void vgem_postclose(struct drm_device *dev, struct drm_file *file)
> >   	kfree(vfile);
> >   }
> > -static struct drm_vgem_gem_object *__vgem_gem_create(struct drm_device *dev,
> > -						unsigned long size)
> > -{
> > -	struct drm_vgem_gem_object *obj;
> > -	int ret;
> > -
> > -	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
> > -	if (!obj)
> > -		return ERR_PTR(-ENOMEM);
> > -
> > -	obj->base.funcs = &vgem_gem_object_funcs;
> > -
> > -	ret = drm_gem_object_init(dev, &obj->base, roundup(size, PAGE_SIZE));
> > -	if (ret) {
> > -		kfree(obj);
> > -		return ERR_PTR(ret);
> > -	}
> > -
> > -	mutex_init(&obj->pages_lock);
> > -
> > -	return obj;
> > -}
> > -
> > -static void __vgem_gem_destroy(struct drm_vgem_gem_object *obj)
> > -{
> > -	drm_gem_object_release(&obj->base);
> > -	kfree(obj);
> > -}
> > -
> > -static struct drm_gem_object *vgem_gem_create(struct drm_device *dev,
> > -					      struct drm_file *file,
> > -					      unsigned int *handle,
> > -					      unsigned long size)
> > -{
> > -	struct drm_vgem_gem_object *obj;
> > -	int ret;
> > -
> > -	obj = __vgem_gem_create(dev, size);
> > -	if (IS_ERR(obj))
> > -		return ERR_CAST(obj);
> > -
> > -	ret = drm_gem_handle_create(file, &obj->base, handle);
> > -	if (ret) {
> > -		drm_gem_object_put(&obj->base);
> > -		return ERR_PTR(ret);
> > -	}
> > -
> > -	return &obj->base;
> > -}
> > -
> > -static int vgem_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
> > -				struct drm_mode_create_dumb *args)
> > -{
> > -	struct drm_gem_object *gem_object;
> > -	u64 pitch, size;
> > -
> > -	pitch = args->width * DIV_ROUND_UP(args->bpp, 8);
> > -	size = args->height * pitch;
> > -	if (size == 0)
> > -		return -EINVAL;
> > -
> > -	gem_object = vgem_gem_create(dev, file, &args->handle, size);
> > -	if (IS_ERR(gem_object))
> > -		return PTR_ERR(gem_object);
> > -
> > -	args->size = gem_object->size;
> > -	args->pitch = pitch;
> > -
> > -	drm_gem_object_put(gem_object);
> > -
> > -	DRM_DEBUG("Created object of size %llu\n", args->size);
> > -
> > -	return 0;
> > -}
> > -
> >   static struct drm_ioctl_desc vgem_ioctls[] = {
> >   	DRM_IOCTL_DEF_DRV(VGEM_FENCE_ATTACH, vgem_fence_attach_ioctl, DRM_RENDER_ALLOW),
> >   	DRM_IOCTL_DEF_DRV(VGEM_FENCE_SIGNAL, vgem_fence_signal_ioctl, DRM_RENDER_ALLOW),
> > @@ -266,159 +116,23 @@ static const struct file_operations vgem_driver_fops = {
> >   	.release	= drm_release,
> >   };
> > -static struct page **vgem_pin_pages(struct drm_vgem_gem_object *bo)
> > -{
> > -	mutex_lock(&bo->pages_lock);
> > -	if (bo->pages_pin_count++ == 0) {
> > -		struct page **pages;
> > -
> > -		pages = drm_gem_get_pages(&bo->base);
> > -		if (IS_ERR(pages)) {
> > -			bo->pages_pin_count--;
> > -			mutex_unlock(&bo->pages_lock);
> > -			return pages;
> > -		}
> > -
> > -		bo->pages = pages;
> > -	}
> > -	mutex_unlock(&bo->pages_lock);
> > -
> > -	return bo->pages;
> > -}
> > -
> > -static void vgem_unpin_pages(struct drm_vgem_gem_object *bo)
> > +static struct drm_gem_object *vgem_gem_create_object(struct drm_device *dev, size_t size)
> >   {
> > -	mutex_lock(&bo->pages_lock);
> > -	if (--bo->pages_pin_count == 0) {
> > -		drm_gem_put_pages(&bo->base, bo->pages, true, true);
> > -		bo->pages = NULL;
> > -	}
> > -	mutex_unlock(&bo->pages_lock);
> > -}
> > +	struct drm_gem_shmem_object *obj;
> > -static int vgem_prime_pin(struct drm_gem_object *obj)
> > -{
> > -	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
> > -	long n_pages = obj->size >> PAGE_SHIFT;
> > -	struct page **pages;
> > -
> > -	pages = vgem_pin_pages(bo);
> > -	if (IS_ERR(pages))
> > -		return PTR_ERR(pages);
> > +	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
> > +	if (!obj)
> > +		return NULL;
> > -	/* Flush the object from the CPU cache so that importers can rely
> > -	 * on coherent indirect access via the exported dma-address.
> > +	/*
> > +	 * vgem doesn't have any begin/end cpu access ioctls, therefore must use
> > +	 * coherent memory or dma-buf sharing just wont work.
> >   	 */
> > -	drm_clflush_pages(pages, n_pages);
> 
> Instead of shoehorning GEM SHMEM to get caching right (patch 2) have you
> considered to set your own GEM funcs object for vgem. All function pointers
> would point to SHMEM functions, except for pin, which would be
> drm_gem_shmem_pin() + drm_clflush_pages(). If this works, I think it would
> be much preferable to the current patch 2. You can override the default GEM
> functions from within vgem_gem_create_object().

The thing is: shmem helpers currently get the caching wrong for wc. vgem
is just the messenger.

Also, get_pages + drm_clflush is not actually guaranteed to be enough
across platforms. It is enough on intel x86 cpus (and I think all modern
amd x86 cpus, but not some earlier ones from way back), but not in general
across the board.
-Daniel
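For concreteness, the alternative Thomas proposes above would look roughly like this (hypothetical function name, sketched against the shmem helpers as of this series; per the reply, the drm_clflush_pages() call is only guaranteed to be sufficient on x86):

```c
/* Hypothetical vgem-local pin callback: shmem pin plus an explicit
 * cache flush, with all other drm_gem_object_funcs left at the shmem
 * defaults. Signatures as of this patch series. */
static int vgem_pin_and_flush(struct drm_gem_object *obj)
{
	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
	int ret;

	ret = drm_gem_shmem_pin(obj);
	if (ret)
		return ret;

	/* only a guaranteed flush on (modern) x86 CPUs */
	drm_clflush_pages(shmem->pages, obj->size >> PAGE_SHIFT);

	return 0;
}
```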

> 
> Best regards
> Thomas
> 
> 
> > -
> > -	return 0;
> > -}
> > -
> > -static void vgem_prime_unpin(struct drm_gem_object *obj)
> > -{
> > -	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
> > -
> > -	vgem_unpin_pages(bo);
> > -}
> > -
> > -static struct sg_table *vgem_prime_get_sg_table(struct drm_gem_object *obj)
> > -{
> > -	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
> > -
> > -	return drm_prime_pages_to_sg(obj->dev, bo->pages, bo->base.size >> PAGE_SHIFT);
> > -}
> > -
> > -static struct drm_gem_object* vgem_prime_import(struct drm_device *dev,
> > -						struct dma_buf *dma_buf)
> > -{
> > -	struct vgem_device *vgem = container_of(dev, typeof(*vgem), drm);
> > -
> > -	return drm_gem_prime_import_dev(dev, dma_buf, &vgem->platform->dev);
> > -}
> > -
> > -static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
> > -			struct dma_buf_attachment *attach, struct sg_table *sg)
> > -{
> > -	struct drm_vgem_gem_object *obj;
> > -	int npages;
> > -
> > -	obj = __vgem_gem_create(dev, attach->dmabuf->size);
> > -	if (IS_ERR(obj))
> > -		return ERR_CAST(obj);
> > -
> > -	npages = PAGE_ALIGN(attach->dmabuf->size) / PAGE_SIZE;
> > -
> > -	obj->table = sg;
> > -	obj->pages = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
> > -	if (!obj->pages) {
> > -		__vgem_gem_destroy(obj);
> > -		return ERR_PTR(-ENOMEM);
> > -	}
> > +	obj->map_wc = true;
> > -	obj->pages_pin_count++; /* perma-pinned */
> > -	drm_prime_sg_to_page_array(obj->table, obj->pages, npages);
> >   	return &obj->base;
> >   }
> > -static int vgem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> > -{
> > -	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
> > -	long n_pages = obj->size >> PAGE_SHIFT;
> > -	struct page **pages;
> > -	void *vaddr;
> > -
> > -	pages = vgem_pin_pages(bo);
> > -	if (IS_ERR(pages))
> > -		return PTR_ERR(pages);
> > -
> > -	vaddr = vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
> > -	if (!vaddr)
> > -		return -ENOMEM;
> > -	dma_buf_map_set_vaddr(map, vaddr);
> > -
> > -	return 0;
> > -}
> > -
> > -static void vgem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> > -{
> > -	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
> > -
> > -	vunmap(map->vaddr);
> > -	vgem_unpin_pages(bo);
> > -}
> > -
> > -static int vgem_prime_mmap(struct drm_gem_object *obj,
> > -			   struct vm_area_struct *vma)
> > -{
> > -	int ret;
> > -
> > -	if (obj->size < vma->vm_end - vma->vm_start)
> > -		return -EINVAL;
> > -
> > -	if (!obj->filp)
> > -		return -ENODEV;
> > -
> > -	ret = call_mmap(obj->filp, vma);
> > -	if (ret)
> > -		return ret;
> > -
> > -	vma_set_file(vma, obj->filp);
> > -	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
> > -	vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
> > -
> > -	return 0;
> > -}
> > -
> > -static const struct drm_gem_object_funcs vgem_gem_object_funcs = {
> > -	.free = vgem_gem_free_object,
> > -	.pin = vgem_prime_pin,
> > -	.unpin = vgem_prime_unpin,
> > -	.get_sg_table = vgem_prime_get_sg_table,
> > -	.vmap = vgem_prime_vmap,
> > -	.vunmap = vgem_prime_vunmap,
> > -	.vm_ops = &vgem_gem_vm_ops,
> > -};
> > -
> >   static const struct drm_driver vgem_driver = {
> >   	.driver_features		= DRIVER_GEM | DRIVER_RENDER,
> >   	.open				= vgem_open,
> > @@ -427,13 +141,8 @@ static const struct drm_driver vgem_driver = {
> >   	.num_ioctls 			= ARRAY_SIZE(vgem_ioctls),
> >   	.fops				= &vgem_driver_fops,
> > -	.dumb_create			= vgem_gem_dumb_create,
> > -
> > -	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
> > -	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
> > -	.gem_prime_import = vgem_prime_import,
> > -	.gem_prime_import_sg_table = vgem_prime_import_sg_table,
> > -	.gem_prime_mmap = vgem_prime_mmap,
> > +	DRM_GEM_SHMEM_DRIVER_OPS,
> > +	.gem_create_object		= vgem_gem_create_object,
> >   	.name	= DRIVER_NAME,
> >   	.desc	= DRIVER_DESC,
> > 
> 
> -- 
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Maxfeldstr. 5, 90409 Nürnberg, Germany
> (HRB 36809, AG Nürnberg)
> Geschäftsführer: Felix Imendörffer
> 




-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Intel-gfx] [PATCH v4 3/4] drm/shmem-helpers: Allocate wc pages on x86
  2021-07-23  7:36     ` Daniel Vetter
@ 2021-07-23  8:02       ` Christian König
  2021-07-23  8:34         ` Daniel Vetter
  2021-08-05 18:40       ` Thomas Zimmermann
  1 sibling, 1 reply; 26+ messages in thread
From: Christian König @ 2021-07-23  8:02 UTC (permalink / raw)
  To: Daniel Vetter, Thomas Zimmermann
  Cc: Thomas Hellström, David Airlie, Daniel Vetter,
	Intel Graphics Development, DRI Development, Maxime Ripard,
	Daniel Vetter

On 23.07.21 at 09:36, Daniel Vetter wrote:
> On Thu, Jul 22, 2021 at 08:40:56PM +0200, Thomas Zimmermann wrote:
>> Hi
>>
>> On 13.07.21 at 22:51, Daniel Vetter wrote:
>> [SNIP]
>>> +#ifdef CONFIG_X86
>>> +	if (shmem->map_wc)
>>> +		set_pages_array_wc(pages, obj->size >> PAGE_SHIFT);
>>> +#endif
>> I cannot comment much on the technical details of the caching of various
>> architectures. If this patch goes in, there should be a longer comment that
>> reflects the discussion in this thread. It's apparently a workaround.
>>
>> I think the call itself should be hidden behind a DRM API, which depends on
>> CONFIG_X86. Something simple like
>>
>> ifdef CONFIG_X86
>> drm_set_pages_array_wc()
>> {
>> 	set_pages_array_wc();
>> }
>> else
>> drm_set_pages_array_wc()
>>   {
>>   }
>> #endif
>>
>> Maybe in drm_cache.h?
> We do have a bunch of this in drm_cache.h already, and architecture
> maintainers hate us for it.

Yeah, for good reasons :)

> The real fix is to get at the architecture-specific wc allocator, which is
> currently not something that's exposed, but hidden within the dma api. I
> think having this stick out like this is better than hiding it behind fake
> generic code (like we do with drm_clflush, which defacto also only really
> works on x86).

The DMA API also doesn't really touch that stuff as far as I know.

What we do instead on other architectures is set the appropriate 
caching flags on the CPU mappings, see function ttm_prot_from_caching().

> Also note that ttm has the exact same ifdef in its page allocator, but it
> does fall back to using dma_alloc_coherent on other platforms.

This works surprisingly well on non-x86 architectures as well. We just 
don't necessarily update the kernel mappings everywhere, which limits the 
kmap usage.

In other words radeon and nouveau still work on PowerPC AGP systems as 
far as I know for example.

Christian.

> -Daniel
>
>> Best regard
>> Thomas
>>
>>> +
>>>    	shmem->pages = pages;
>>>    	return 0;
>>> @@ -203,6 +212,11 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
>>>    	if (--shmem->pages_use_count > 0)
>>>    		return;
>>> +#ifdef CONFIG_X86
>>> +	if (shmem->map_wc)
>>> +		set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
>>> +#endif
>>> +
>>>    	drm_gem_put_pages(obj, shmem->pages,
>>>    			  shmem->pages_mark_dirty_on_put,
>>>    			  shmem->pages_mark_accessed_on_put);
>>>
>> -- 
>> Thomas Zimmermann
>> Graphics Driver Developer
>> SUSE Software Solutions Germany GmbH
>> Maxfeldstr. 5, 90409 Nürnberg, Germany
>> (HRB 36809, AG Nürnberg)
>> Geschäftsführer: Felix Imendörffer
>>
>
>
>


* Re: [Intel-gfx] [PATCH v4 3/4] drm/shmem-helpers: Allocate wc pages on x86
  2021-07-23  8:02       ` Christian König
@ 2021-07-23  8:34         ` Daniel Vetter
  0 siblings, 0 replies; 26+ messages in thread
From: Daniel Vetter @ 2021-07-23  8:34 UTC (permalink / raw)
  To: Christian König
  Cc: Thomas Hellström, David Airlie, Daniel Vetter,
	Intel Graphics Development, DRI Development, Maxime Ripard,
	Thomas Zimmermann, Daniel Vetter

On Fri, Jul 23, 2021 at 10:02:39AM +0200, Christian König wrote:
> On 23.07.21 at 09:36, Daniel Vetter wrote:
> > On Thu, Jul 22, 2021 at 08:40:56PM +0200, Thomas Zimmermann wrote:
> > > Hi
> > > 
> > > On 13.07.21 at 22:51, Daniel Vetter wrote:
> > > [SNIP]
> > > > +#ifdef CONFIG_X86
> > > > +	if (shmem->map_wc)
> > > > +		set_pages_array_wc(pages, obj->size >> PAGE_SHIFT);
> > > > +#endif
> > > I cannot comment much on the technical details of the caching of various
> > > architectures. If this patch goes in, there should be a longer comment that
> > > reflects the discussion in this thread. It's apparently a workaround.
> > > 
> > > I think the call itself should be hidden behind a DRM API, which depends on
> > > CONFIG_X86. Something simple like
> > > 
> > > ifdef CONFIG_X86
> > > drm_set_pages_array_wc()
> > > {
> > > 	set_pages_array_wc();
> > > }
> > > else
> > > drm_set_pages_array_wc()
> > >   {
> > >   }
> > > #endif
> > > 
> > > Maybe in drm_cache.h?
> > We do have a bunch of this in drm_cache.h already, and architecture
> > maintainers hate us for it.
> 
> Yeah, for good reasons :)
> 
> > The real fix is to get at the architecture-specific wc allocator, which is
> > currently not something that's exposed, but hidden within the dma api. I
> > think having this stick out like this is better than hiding it behind fake
> > generic code (like we do with drm_clflush, which defacto also only really
> > works on x86).
> 
> The DMA API also doesn't really touch that stuff as far as I know.
> 
> What we rather do on other architectures is to set the appropriate caching
> flags on the CPU mappings, see function ttm_prot_from_caching().

This alone doesn't do cache flushes. And at least on some arm cpus having
inconsistent mappings can lead to interconnect hangs, so you have to at
least punch out the kernel linear map. Which on some arms isn't possible
(because the kernel map is a special linear map and not done with
pagetables). Which means you need to carve this out at boot and treat them
as GFP_HIGHMEM.

Afaik the dma-api has that allocator somewhere, which does the right thing
for dma_alloc_coherent.

Also shmem helpers already set the caching pgprot.
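Concretely, "setting the caching pgprot" means adjusting only the userspace vma's page protection at mmap time rather than touching the kernel's linear map; the shmem helpers do roughly the following (compare the drm_gem_shmem_mmap() hunk in patch 2):

```c
/* In the shmem helpers' mmap path: only the userspace mapping is made
 * write-combining; the kernel linear map is left alone (which is the
 * gap the CONFIG_X86 set_pages_array_wc() call in patch 3 papers over). */
vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
if (shmem->map_wc)
	vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
```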

> > Also note that ttm has the exact same ifdef in its page allocator, but it
> > does fall back to using dma_alloc_coherent on other platforms.
> 
> This works surprisingly well on non x86 architectures as well. We just don't
> necessary update the kernel mappings everywhere which limits the kmap usage.
> 
> In other words radeon and nouveau still work on PowerPC AGP systems as far
> as I know for example.

The thing is, on most cpus you get away with just pgprot set to wc, and on
many others it's only an issue while there's still some cpu dirt hanging
around because they don't prefetch badly enough. It's very few where it's a
persistent problem.

Really the only reason I've even caught this was because some of the
i915+vgem buffer sharing tests we have are very nasty and intentionally
try to provoke the worst case :-)

Anyway, since you're looking, can you pls review this and the previous
patch for shmem helpers?

The first one to make VM_PFNMAP standard for all dma-buf isn't ready yet,
because I need to audit all the drivers still. And at least i915 dma-buf
mmap is still using gup-able memory too. So more work to do here.
-Daniel

> 
> Christian.
> 
> > -Daniel
> > 
> > > Best regard
> > > Thomas
> > > 
> > > > +
> > > >    	shmem->pages = pages;
> > > >    	return 0;
> > > > @@ -203,6 +212,11 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
> > > >    	if (--shmem->pages_use_count > 0)
> > > >    		return;
> > > > +#ifdef CONFIG_X86
> > > > +	if (shmem->map_wc)
> > > > +		set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
> > > > +#endif
> > > > +
> > > >    	drm_gem_put_pages(obj, shmem->pages,
> > > >    			  shmem->pages_mark_dirty_on_put,
> > > >    			  shmem->pages_mark_accessed_on_put);
> > > > 
> > > -- 
> > > Thomas Zimmermann
> > > Graphics Driver Developer
> > > SUSE Software Solutions Germany GmbH
> > > Maxfeldstr. 5, 90409 Nürnberg, Germany
> > > (HRB 36809, AG Nürnberg)
> > > Geschäftsführer: Felix Imendörffer
> > > 
> > 
> > 
> > 
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

* Re: [Intel-gfx] [PATCH v4 1/4] dma-buf: Require VM_PFNMAP vma for mmap
  2021-07-13 20:51 ` [Intel-gfx] [PATCH v4 1/4] dma-buf: Require VM_PFNMAP vma for mmap Daniel Vetter
@ 2021-07-23 18:45   ` Thomas Zimmermann
  0 siblings, 0 replies; 26+ messages in thread
From: Thomas Zimmermann @ 2021-07-23 18:45 UTC (permalink / raw)
  To: Daniel Vetter, Intel Graphics Development
  Cc: DRI Development, linaro-mm-sig, Jason Gunthorpe, Matthew Wilcox,
	Daniel Vetter, Suren Baghdasaryan, Christian König,
	linux-media



Hi

On 13.07.21 at 22:51, Daniel Vetter wrote:
> tldr; DMA buffers aren't normal memory, expecting that you can use
> them like that (like calling get_user_pages works, or that they're
> accounting like any other normal memory) cannot be guaranteed.
> 
> Since some userspace only runs on integrated devices, where all
> buffers are actually all resident system memory, there's a huge
> temptation to assume that a struct page is always present and useable
> like for any more pagecache backed mmap. This has the potential to
> result in a uapi nightmare.
> 
> To stop this gap require that DMA buffer mmaps are VM_PFNMAP, which
> blocks get_user_pages and all the other struct page based
> infrastructure for everyone. In spirit this is the uapi counterpart to
> the kernel-internal CONFIG_DMABUF_DEBUG.
> 
> Motivated by a recent patch which wanted to swich the system dma-buf
> heap to vm_insert_page instead of vm_insert_pfn.
> 
> v2:
> 
> Jason brought up that we also want to guarantee that all ptes have the
> pte_special flag set, to catch fast get_user_pages (on architectures
> that support this). Allowing VM_MIXEDMAP (like VM_SPECIAL does) would
> still allow vm_insert_page, but limiting to VM_PFNMAP will catch that.
> 
>  From auditing the various functions to insert pfn pte entires
> (vm_insert_pfn_prot, remap_pfn_range and all it's callers like
> dma_mmap_wc) it looks like VM_PFNMAP is already required anyway, so
> this should be the correct flag to check for.
> 
> References: https://lore.kernel.org/lkml/CAKMK7uHi+mG0z0HUmNt13QCCvutuRVjpcR0NjRL12k-WbWzkRg@mail.gmail.com/
> Acked-by: Christian König <christian.koenig@amd.com>
> Cc: Jason Gunthorpe <jgg@ziepe.ca>
> Cc: Suren Baghdasaryan <surenb@google.com>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: John Stultz <john.stultz@linaro.org>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: "Christian König" <christian.koenig@amd.com>
> Cc: linux-media@vger.kernel.org
> Cc: linaro-mm-sig@lists.linaro.org
> --
> Resending this so I can test the next two patches for vgem/shmem in
> intel-gfx-ci. Last round failed somehow, but I can't repro that at all
> locally here.
> 
> No immediate plans to merge this patch here since ttm isn't addressed
> yet (and there we have the hugepte issue, for which I don't think we
> have a clear consensus yet).
> -Daniel
> ---
>   drivers/dma-buf/dma-buf.c | 15 +++++++++++++--
>   1 file changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index 510b42771974..65cbd7f0f16a 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -130,6 +130,7 @@ static struct file_system_type dma_buf_fs_type = {
>   static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
>   {
>   	struct dma_buf *dmabuf;
> +	int ret;
>   
>   	if (!is_dma_buf_file(file))
>   		return -EINVAL;
> @@ -145,7 +146,11 @@ static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
>   	    dmabuf->size >> PAGE_SHIFT)
>   		return -EINVAL;
>   
> -	return dmabuf->ops->mmap(dmabuf, vma);
> +	ret = dmabuf->ops->mmap(dmabuf, vma);
> +
> +	WARN_ON(!(vma->vm_flags & VM_PFNMAP));

Maybe change this to WARN_ON_ONCE(), so it doesn't fill up the kernel 
log. Same comment below.

For either version

Acked-by: Thomas Zimmermann <tzimmermann@suse.de>

Best regards
Thomas

> +
> +	return ret;
>   }
>   
>   static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence)
> @@ -1276,6 +1281,8 @@ EXPORT_SYMBOL_GPL(dma_buf_end_cpu_access);
>   int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
>   		 unsigned long pgoff)
>   {
> +	int ret;
> +
>   	if (WARN_ON(!dmabuf || !vma))
>   		return -EINVAL;
>   
> @@ -1296,7 +1303,11 @@ int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
>   	vma_set_file(vma, dmabuf->file);
>   	vma->vm_pgoff = pgoff;
>   
> -	return dmabuf->ops->mmap(dmabuf, vma);
> +	ret = dmabuf->ops->mmap(dmabuf, vma);
> +
> +	WARN_ON(!(vma->vm_flags & VM_PFNMAP));
> +
> +	return ret;
>   }
>   EXPORT_SYMBOL_GPL(dma_buf_mmap);
>   
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer



* Re: [Intel-gfx] [PATCH v4 3/4] drm/shmem-helpers: Allocate wc pages on x86
  2021-07-23  7:36     ` Daniel Vetter
  2021-07-23  8:02       ` Christian König
@ 2021-08-05 18:40       ` Thomas Zimmermann
  1 sibling, 0 replies; 26+ messages in thread
From: Thomas Zimmermann @ 2021-08-05 18:40 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Thomas Hellström, David Airlie, Daniel Vetter,
	Intel Graphics Development, DRI Development, Daniel Vetter,
	Christian König



Hi

On 23.07.21 at 09:36, Daniel Vetter wrote:
> 
> The real fix is to get at the architecture-specific wc allocator, which is
> currently not something that's exposed, but hidden within the dma api. I
> think having this stick out like this is better than hiding it behind fake
> generic code (like we do with drm_clflush, which defacto also only really
> works on x86).
> 
> Also note that ttm has the exact same ifdef in its page allocator, but it
> does fall back to using dma_alloc_coherent on other platforms.

If this fixes a real problem and there's no full solution yet, let's 
take what we have. So if you can extract the essence of this comment 
into a TODO comment that tells how to fix the issue, feel free to add my

Acked-by: Thomas Zimmermann <tzimmermann@suse.de>

Best regards
Thomas

> -Daniel
> 
>> Best regard
>> Thomas
>>
>>> +
>>>    	shmem->pages = pages;
>>>    	return 0;
>>> @@ -203,6 +212,11 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
>>>    	if (--shmem->pages_use_count > 0)
>>>    		return;
>>> +#ifdef CONFIG_X86
>>> +	if (shmem->map_wc)
>>> +		set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
>>> +#endif
>>> +
>>>    	drm_gem_put_pages(obj, shmem->pages,
>>>    			  shmem->pages_mark_dirty_on_put,
>>>    			  shmem->pages_mark_accessed_on_put);
>>>
>>
>> -- 
>> Thomas Zimmermann
>> Graphics Driver Developer
>> SUSE Software Solutions Germany GmbH
>> Maxfeldstr. 5, 90409 Nürnberg, Germany
>> (HRB 36809, AG Nürnberg)
>> Geschäftsführer: Felix Imendörffer
>>
> 
> 
> 
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer



* Re: [Intel-gfx] [PATCH v4 2/4] drm/shmem-helper: Switch to vmf_insert_pfn
  2021-07-22 18:22   ` Thomas Zimmermann
  2021-07-23  7:32     ` Daniel Vetter
@ 2021-08-12 13:05     ` Daniel Vetter
  1 sibling, 0 replies; 26+ messages in thread
From: Daniel Vetter @ 2021-08-12 13:05 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: Daniel Vetter, Intel Graphics Development, David Airlie,
	DRI Development, Daniel Vetter

On Thu, Jul 22, 2021 at 08:22:43PM +0200, Thomas Zimmermann wrote:
> Hi,
> 
> I'm not knowledgeable enough to give this a full review. If you can just
> answer my questions, feel free to add an
> 
> Acked-by: Thomas Zimmermann <tzimmermann@suse.de>
> 
> to the patch. :)
> 
> On 13.07.21 at 22:51, Daniel Vetter wrote:
> > We want to stop gup, which isn't the case if we use vmf_insert_page
> 
> What is gup?
> 
> > and VM_MIXEDMAP, because that does not set pte_special.
> > 
> > v2: With this shmem gem helpers now definitely need CONFIG_MMU (0day)
> > 
> > v3: add more depends on MMU. For usb drivers this is a bit awkward,
> > but really it's correct: To be able to provide a contig mapping of
> > buffers to userspace on !MMU platforms we'd need to use the cma
> > helpers for these drivers on those platforms. As-is this wont work.
> > 
> > Also not exactly sure why vm_insert_page doesn't go boom, because that
> > definitely wont fly in practice since the pages are non-contig to
> > begin with.
> > 
> > Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> > Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> > Cc: Maxime Ripard <mripard@kernel.org>
> > Cc: Thomas Zimmermann <tzimmermann@suse.de>
> > Cc: David Airlie <airlied@linux.ie>
> > Cc: Daniel Vetter <daniel@ffwll.ch>
> > ---
> >   drivers/gpu/drm/Kconfig                | 2 +-
> >   drivers/gpu/drm/drm_gem_shmem_helper.c | 4 ++--
> >   drivers/gpu/drm/gud/Kconfig            | 2 +-
> >   drivers/gpu/drm/tiny/Kconfig           | 4 ++--
> >   drivers/gpu/drm/udl/Kconfig            | 1 +
> >   5 files changed, 7 insertions(+), 6 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
> > index 0d372354c2d0..314eefa39892 100644
> > --- a/drivers/gpu/drm/Kconfig
> > +++ b/drivers/gpu/drm/Kconfig
> > @@ -211,7 +211,7 @@ config DRM_KMS_CMA_HELPER
> >   config DRM_GEM_SHMEM_HELPER
> >   	bool
> > -	depends on DRM
> > +	depends on DRM && MMU
> >   	help
> >   	  Choose this if you need the GEM shmem helper functions
> > diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > index d5e6d4568f99..296ab1b7c07f 100644
> > --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> > +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > @@ -542,7 +542,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
> >   	} else {
> >   		page = shmem->pages[page_offset];
> > -		ret = vmf_insert_page(vma, vmf->address, page);
> > +		ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
> >   	}
> >   	mutex_unlock(&shmem->pages_lock);
> > @@ -612,7 +612,7 @@ int drm_gem_shmem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> >   		return ret;
> >   	}
> > -	vma->vm_flags |= VM_MIXEDMAP | VM_DONTEXPAND;
> > +	vma->vm_flags |= VM_PFNMAP | VM_DONTEXPAND;
> >   	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
> >   	if (shmem->map_wc)
> >   		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
> > diff --git a/drivers/gpu/drm/gud/Kconfig b/drivers/gpu/drm/gud/Kconfig
> > index 1c8601bf4d91..9c1e61f9eec3 100644
> > --- a/drivers/gpu/drm/gud/Kconfig
> > +++ b/drivers/gpu/drm/gud/Kconfig
> > @@ -2,7 +2,7 @@
> >   config DRM_GUD
> >   	tristate "GUD USB Display"
> > -	depends on DRM && USB
> > +	depends on DRM && USB && MMU
> >   	select LZ4_COMPRESS
> >   	select DRM_KMS_HELPER
> >   	select DRM_GEM_SHMEM_HELPER
> 
> I'm a kconfig noob, so this is a question rather than a review comment:
> 
> If DRM_GEM_SHMEM_HELPER already depends on MMU, this select will fail on
> non-MMU platforms? Why does the driver also depend on MMU? Simply to make
> the item disappear in menuconfig?

I totally missed this somehow. The vmf_insert_pfn() function only exists
on MMU-based systems, so we can't compile vgem without MMU. And yes, the
duplicated dependency just makes the item disappear in menuconfig.
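
For what it's worth, the underlying Kconfig rule is that "select" forces
the selected symbol on without evaluating that symbol's own "depends on"
line, which is why every driver selecting DRM_GEM_SHMEM_HELPER has to
repeat the MMU dependency itself. A minimal sketch with made-up symbol
names (not the actual patch):

```kconfig
# Hedged illustration, hypothetical symbols: "select" ignores the
# selected symbol's dependencies, so MY_HELPER could end up enabled on
# !MMU configs unless the selecting driver repeats "depends on MMU".

config MY_HELPER
	bool
	depends on MMU

config MY_DRIVER
	tristate "Example driver"
	# Without the explicit MMU dependency here, MY_DRIVER would stay
	# visible on !MMU configs and select MY_HELPER anyway.
	depends on DRM && MMU
	select MY_HELPER
```

With the repeated dependency the whole item simply vanishes from
menuconfig on !MMU configs, which is what the hunks above do.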

tbh I'm not sure it even worked with the old code, because on !MMU
platforms it's the mmap implementation's job to make sure the pages are
physically contiguous. There's another mmap-related callback
(get_unmapped_area) which is supposed to return the physical address
where the memory starts.
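
To illustrate that !MMU path, here is a rough sketch (not part of this
patch; my_dev and its fields are hypothetical) of the get_unmapped_area
file operation such a driver would implement:

```c
/*
 * Hedged sketch of the !MMU side.  Without an MMU, mmap() cannot build
 * page tables, so the driver's get_unmapped_area file operation hands
 * userspace the start of an already physically contiguous buffer.
 */
static unsigned long my_dev_get_unmapped_area(struct file *file,
					      unsigned long addr,
					      unsigned long len,
					      unsigned long pgoff,
					      unsigned long flags)
{
	struct my_dev *mydev = file->private_data;	/* hypothetical */

	if (pgoff || len > mydev->buf_size)
		return -EINVAL;

	/* Userspace gets the buffer's address directly; no mapping is set up. */
	return (unsigned long)mydev->buf_cpu_addr;
}
```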

The cma helpers otoh should work on !MMU platforms, because they hand us
a physically contiguous memory region.
-Daniel

> 
> Best regards
> Thomas
> 
> > diff --git a/drivers/gpu/drm/tiny/Kconfig b/drivers/gpu/drm/tiny/Kconfig
> > index 5593128eeff9..c11fb5be7d09 100644
> > --- a/drivers/gpu/drm/tiny/Kconfig
> > +++ b/drivers/gpu/drm/tiny/Kconfig
> > @@ -44,7 +44,7 @@ config DRM_CIRRUS_QEMU
> >   config DRM_GM12U320
> >   	tristate "GM12U320 driver for USB projectors"
> > -	depends on DRM && USB
> > +	depends on DRM && USB && MMU
> >   	select DRM_KMS_HELPER
> >   	select DRM_GEM_SHMEM_HELPER
> >   	help
> > @@ -53,7 +53,7 @@ config DRM_GM12U320
> >   config DRM_SIMPLEDRM
> >   	tristate "Simple framebuffer driver"
> > -	depends on DRM
> > +	depends on DRM && MMU
> >   	select DRM_GEM_SHMEM_HELPER
> >   	select DRM_KMS_HELPER
> >   	help
> > diff --git a/drivers/gpu/drm/udl/Kconfig b/drivers/gpu/drm/udl/Kconfig
> > index 1f497d8f1ae5..c744175c6992 100644
> > --- a/drivers/gpu/drm/udl/Kconfig
> > +++ b/drivers/gpu/drm/udl/Kconfig
> > @@ -4,6 +4,7 @@ config DRM_UDL
> >   	depends on DRM
> >   	depends on USB
> >   	depends on USB_ARCH_HAS_HCD
> > +	depends on MMU
> >   	select DRM_GEM_SHMEM_HELPER
> >   	select DRM_KMS_HELPER
> >   	help
> > 
> 
> -- 
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Maxfeldstr. 5, 90409 Nürnberg, Germany
> (HRB 36809, AG Nürnberg)
> Geschäftsführer: Felix Imendörffer
> 




-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Thread overview: 26+ messages
2021-07-13 20:51 [Intel-gfx] [PATCH v4 0/4] shmem helpers for vgem Daniel Vetter
2021-07-13 20:51 ` [Intel-gfx] [PATCH v4 1/4] dma-buf: Require VM_PFNMAP vma for mmap Daniel Vetter
2021-07-23 18:45   ` Thomas Zimmermann
2021-07-13 20:51 ` [Intel-gfx] [PATCH v4 2/4] drm/shmem-helper: Switch to vmf_insert_pfn Daniel Vetter
2021-07-22 18:22   ` Thomas Zimmermann
2021-07-23  7:32     ` Daniel Vetter
2021-08-12 13:05     ` Daniel Vetter
2021-07-13 20:51 ` [Intel-gfx] [PATCH v4 3/4] drm/shmem-helpers: Allocate wc pages on x86 Daniel Vetter
2021-07-14 11:54   ` Christian König
2021-07-14 12:48     ` Daniel Vetter
2021-07-14 12:58       ` Christian König
2021-07-14 16:16         ` Daniel Vetter
2021-07-22 18:40   ` Thomas Zimmermann
2021-07-23  7:36     ` Daniel Vetter
2021-07-23  8:02       ` Christian König
2021-07-23  8:34         ` Daniel Vetter
2021-08-05 18:40       ` Thomas Zimmermann
2021-07-13 20:51 ` [Intel-gfx] [PATCH v4 4/4] drm/vgem: use shmem helpers Daniel Vetter
2021-07-14 12:45   ` [Intel-gfx] [PATCH] " Daniel Vetter
2021-07-22 18:50   ` [Intel-gfx] [PATCH v4 4/4] " Thomas Zimmermann
2021-07-23  7:38     ` Daniel Vetter
2021-07-13 23:43 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for shmem helpers for vgem (rev6) Patchwork
2021-07-14  0:11 ` [Intel-gfx] ✗ Fi.CI.BAT: failure " Patchwork
2021-07-16 13:29 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for shmem helpers for vgem (rev8) Patchwork
2021-07-16 13:58 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2021-07-16 16:43 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
