* [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object
@ 2020-07-07 16:00 ` Chris Wilson
  0 siblings, 0 replies; 56+ messages in thread
From: Chris Wilson @ 2020-07-07 16:00 UTC (permalink / raw)
  To: dri-devel
  Cc: intel-gfx, Chris Wilson, Lepton Wu, Daniel Vetter,
	Christian König, Thomas Hellström, stable

If we assign obj->filp, we believe that the created vgem bo is native
and allow direct operations like mmap(), assuming it behaves as if
backed by a shmemfs inode. When imported from a dmabuf, the obj->pages
are not always meaningful and the shmemfs backing store is misleading.

Note that regular mmap access to a vgem bo is via the dumb buffer API,
and that rejects attempts to mmap an imported dmabuf:

drm_gem_dumb_map_offset():
        if (obj->import_attach) return -EINVAL;

So the only route by which we might accidentally allow mmapping of an
imported buffer is via vgem_prime_mmap(), which checked for
obj->filp assuming that it would be NULL.

Well, it would have, had it been updated to use the common
drm_gem_dumb_map_offset() helper; instead it has

vgem_gem_dumb_map():
	if (!obj->filp) return -EINVAL;

falling foul of the same trap as above.

Reported-by: Lepton Wu <ytht.net@gmail.com>
Fixes: af33a9190d02 ("drm/vgem: Enable dmabuf import interfaces")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Lepton Wu <ytht.net@gmail.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Christian König <christian.koenig@amd.com>
Cc: Thomas Hellström (Intel) <thomas_os@shipmail.org>
Cc: <stable@vger.kernel.org> # v4.13+
---
 drivers/gpu/drm/vgem/vgem_drv.c | 27 +++++++++++++++++----------
 1 file changed, 17 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
index 909eba43664a..eb3b7cdac941 100644
--- a/drivers/gpu/drm/vgem/vgem_drv.c
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -91,7 +91,7 @@ static vm_fault_t vgem_gem_fault(struct vm_fault *vmf)
 		ret = 0;
 	}
 	mutex_unlock(&obj->pages_lock);
-	if (ret) {
+	if (ret && obj->base.filp) {
 		struct page *page;
 
 		page = shmem_read_mapping_page(
@@ -157,7 +157,8 @@ static void vgem_postclose(struct drm_device *dev, struct drm_file *file)
 }
 
 static struct drm_vgem_gem_object *__vgem_gem_create(struct drm_device *dev,
-						unsigned long size)
+						     struct file *shmem,
+						     unsigned long size)
 {
 	struct drm_vgem_gem_object *obj;
 	int ret;
@@ -166,11 +167,8 @@ static struct drm_vgem_gem_object *__vgem_gem_create(struct drm_device *dev,
 	if (!obj)
 		return ERR_PTR(-ENOMEM);
 
-	ret = drm_gem_object_init(dev, &obj->base, roundup(size, PAGE_SIZE));
-	if (ret) {
-		kfree(obj);
-		return ERR_PTR(ret);
-	}
+	drm_gem_private_object_init(dev, &obj->base, size);
+	obj->base.filp = shmem;
 
 	mutex_init(&obj->pages_lock);
 
@@ -189,11 +187,20 @@ static struct drm_gem_object *vgem_gem_create(struct drm_device *dev,
 					      unsigned long size)
 {
 	struct drm_vgem_gem_object *obj;
+	struct file *shmem;
 	int ret;
 
-	obj = __vgem_gem_create(dev, size);
-	if (IS_ERR(obj))
+	size = roundup(size, PAGE_SIZE);
+
+	shmem = shmem_file_setup(DRIVER_NAME, size, VM_NORESERVE);
+	if (IS_ERR(shmem))
+		return ERR_CAST(shmem);
+
+	obj = __vgem_gem_create(dev, shmem, size);
+	if (IS_ERR(obj)) {
+		fput(shmem);
 		return ERR_CAST(obj);
+	}
 
 	ret = drm_gem_handle_create(file, &obj->base, handle);
 	if (ret) {
@@ -363,7 +370,7 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
 	struct drm_vgem_gem_object *obj;
 	int npages;
 
-	obj = __vgem_gem_create(dev, attach->dmabuf->size);
+	obj = __vgem_gem_create(dev, NULL, attach->dmabuf->size);
 	if (IS_ERR(obj))
 		return ERR_CAST(obj);
 
-- 
2.27.0
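
A minimal userspace sketch of the route this patch closes off: wrap an
exported dma-buf in a vgem handle, then attempt the dumb-buffer map,
which is expected to fail with -EINVAL once the imported object has no
shmemfs backing. The fd names, the 4 KiB size and the include path are
illustrative assumptions, not taken from the patch or from igt.

#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <drm/drm.h>	/* or <libdrm/drm.h>, depending on the install */

static int try_dumb_map_of_import(int vgem_fd, int dmabuf_fd)
{
	struct drm_prime_handle prime = { .fd = dmabuf_fd };
	struct drm_mode_map_dumb map = { 0 };
	void *ptr;

	/* Wrap the foreign dma-buf in a vgem handle. */
	if (ioctl(vgem_fd, DRM_IOCTL_PRIME_FD_TO_HANDLE, &prime) < 0)
		return -1;

	/*
	 * The imported bo has no shmemfs backing store, so the dumb-map
	 * route should now reject it with -EINVAL.
	 */
	map.handle = prime.handle;
	if (ioctl(vgem_fd, DRM_IOCTL_MODE_MAP_DUMB, &map) < 0) {
		perror("MODE_MAP_DUMB on imported dma-buf (expected)");
		return 0;
	}

	/* Had it succeeded, the mapping would go through the DRM fd. */
	ptr = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED,
		   vgem_fd, map.offset);
	return ptr == MAP_FAILED ? -1 : 0;
}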


* [PATCH 2/2] drm/vgem: Replace opencoded version of drm_gem_dumb_map_offset()
  2020-07-07 16:00 ` Chris Wilson
@ 2020-07-07 16:00   ` Chris Wilson
  -1 siblings, 0 replies; 56+ messages in thread
From: Chris Wilson @ 2020-07-07 16:00 UTC (permalink / raw)
  To: dri-devel; +Cc: intel-gfx, Chris Wilson

drm_gem_dumb_map_offset() now exists and does everything
vgem_gem_dumb_map does and *ought* to do.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/vgem/vgem_drv.c | 28 +---------------------------
 1 file changed, 1 insertion(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
index eb3b7cdac941..866cff537f28 100644
--- a/drivers/gpu/drm/vgem/vgem_drv.c
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -236,32 +236,6 @@ static int vgem_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
 	return 0;
 }
 
-static int vgem_gem_dumb_map(struct drm_file *file, struct drm_device *dev,
-			     uint32_t handle, uint64_t *offset)
-{
-	struct drm_gem_object *obj;
-	int ret;
-
-	obj = drm_gem_object_lookup(file, handle);
-	if (!obj)
-		return -ENOENT;
-
-	if (!obj->filp) {
-		ret = -EINVAL;
-		goto unref;
-	}
-
-	ret = drm_gem_create_mmap_offset(obj);
-	if (ret)
-		goto unref;
-
-	*offset = drm_vma_node_offset_addr(&obj->vma_node);
-unref:
-	drm_gem_object_put_unlocked(obj);
-
-	return ret;
-}
-
 static struct drm_ioctl_desc vgem_ioctls[] = {
 	DRM_IOCTL_DEF_DRV(VGEM_FENCE_ATTACH, vgem_fence_attach_ioctl, DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(VGEM_FENCE_SIGNAL, vgem_fence_signal_ioctl, DRM_RENDER_ALLOW),
@@ -455,7 +429,7 @@ static struct drm_driver vgem_driver = {
 	.fops				= &vgem_driver_fops,
 
 	.dumb_create			= vgem_gem_dumb_create,
-	.dumb_map_offset		= vgem_gem_dumb_map,
+	.dumb_map_offset		= drm_gem_dumb_map_offset,
 
 	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
 	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
-- 
2.27.0


* [Intel-gfx] ✗ Fi.CI.BUILD: failure for series starting with [1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object
  2020-07-07 16:00 ` Chris Wilson
                   ` (2 preceding siblings ...)
  (?)
@ 2020-07-07 16:33 ` Patchwork
  -1 siblings, 0 replies; 56+ messages in thread
From: Patchwork @ 2020-07-07 16:33 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object
URL   : https://patchwork.freedesktop.org/series/79203/
State : failure

== Summary ==

Applying: drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object
Applying: drm/vgem: Replace opencoded version of drm_gem_dumb_map_offset()
error: sha1 information is lacking or useless (drivers/gpu/drm/vgem/vgem_drv.c).
error: could not build fake ancestor
hint: Use 'git am --show-current-patch=diff' to see the failed patch
Patch failed at 0002 drm/vgem: Replace opencoded version of drm_gem_dumb_map_offset()
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".


* Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object
  2020-07-07 16:00 ` Chris Wilson
  (?)
@ 2020-07-07 17:05   ` lepton
  -1 siblings, 0 replies; 56+ messages in thread
From: lepton @ 2020-07-07 17:05 UTC (permalink / raw)
  To: chris
  Cc: dri-devel, intel-gfx, Daniel Vetter, Christian König,
	Thomas Hellström, # v4 . 10+

On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson <chris@chris-wilson.co.uk> wrote:
>
> If we assign obj->filp, we believe that the create vgem bo is native and
> allow direct operations like mmap() assuming it behaves as backed by a
> shmemfs inode. When imported from a dmabuf, the obj->pages are
> not always meaningful and the shmemfs backing store misleading.
>
> Note, that regular mmap access to a vgem bo is via the dumb buffer API,
> and that rejects attempts to mmap an imported dmabuf,
What do you mean by "regular mmap access" here?  It looks like vgem is
using vgem_gem_dumb_map as its .dumb_map_offset callback, so it doesn't
call drm_gem_dumb_map_offset.
>
> drm_gem_dumb_map_offset():
>         if (obj->import_attach) return -EINVAL;
>
> So the only route by which we might accidentally allow mmapping of an
> imported buffer is via vgem_prime_mmap(), which checked for
> obj->filp assuming that it would be NULL.
>
> Well it would had it been updated to use the common
> drm_gem_dum_map_offset() helper, instead it has
>
> vgem_gem_dumb_map():
>         if (!obj->filp) return -EINVAL;
>
> falling foul of the same trap as above.
>
> Reported-by: Lepton Wu <ytht.net@gmail.com>
> Fixes: af33a9190d02 ("drm/vgem: Enable dmabuf import interfaces")
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Lepton Wu <ytht.net@gmail.com>
> Cc: Daniel Vetter <daniel@ffwll.ch>
> Cc: Christian König <christian.koenig@amd.com>
> Cc: Thomas Hellström (Intel) <thomas_os@shipmail.org>
> Cc: <stable@vger.kernel.org> # v4.13+
> ---
>  drivers/gpu/drm/vgem/vgem_drv.c | 27 +++++++++++++++++----------
>  1 file changed, 17 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
> index 909eba43664a..eb3b7cdac941 100644
> --- a/drivers/gpu/drm/vgem/vgem_drv.c
> +++ b/drivers/gpu/drm/vgem/vgem_drv.c
> @@ -91,7 +91,7 @@ static vm_fault_t vgem_gem_fault(struct vm_fault *vmf)
>                 ret = 0;
>         }
>         mutex_unlock(&obj->pages_lock);
> -       if (ret) {
> +       if (ret && obj->base.filp) {
>                 struct page *page;
>
>                 page = shmem_read_mapping_page(
> @@ -157,7 +157,8 @@ static void vgem_postclose(struct drm_device *dev, struct drm_file *file)
>  }
>
>  static struct drm_vgem_gem_object *__vgem_gem_create(struct drm_device *dev,
> -                                               unsigned long size)
> +                                                    struct file *shmem,
> +                                                    unsigned long size)
>  {
>         struct drm_vgem_gem_object *obj;
>         int ret;
> @@ -166,11 +167,8 @@ static struct drm_vgem_gem_object *__vgem_gem_create(struct drm_device *dev,
>         if (!obj)
>                 return ERR_PTR(-ENOMEM);
>
> -       ret = drm_gem_object_init(dev, &obj->base, roundup(size, PAGE_SIZE));
> -       if (ret) {
> -               kfree(obj);
> -               return ERR_PTR(ret);
> -       }
> +       drm_gem_private_object_init(dev, &obj->base, size);
> +       obj->base.filp = shmem;
>
>         mutex_init(&obj->pages_lock);
>
> @@ -189,11 +187,20 @@ static struct drm_gem_object *vgem_gem_create(struct drm_device *dev,
>                                               unsigned long size)
>  {
>         struct drm_vgem_gem_object *obj;
> +       struct file *shmem;
>         int ret;
>
> -       obj = __vgem_gem_create(dev, size);
> -       if (IS_ERR(obj))
> +       size = roundup(size, PAGE_SIZE);
> +
> +       shmem = shmem_file_setup(DRIVER_NAME, size, VM_NORESERVE);
> +       if (IS_ERR(shmem))
> +               return ERR_CAST(shmem);
> +
> +       obj = __vgem_gem_create(dev, shmem, size);
> +       if (IS_ERR(obj)) {
> +               fput(shmem);
>                 return ERR_CAST(obj);
> +       }
>
>         ret = drm_gem_handle_create(file, &obj->base, handle);
>         if (ret) {
> @@ -363,7 +370,7 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
>         struct drm_vgem_gem_object *obj;
>         int npages;
>
> -       obj = __vgem_gem_create(dev, attach->dmabuf->size);
> +       obj = __vgem_gem_create(dev, NULL, attach->dmabuf->size);
>         if (IS_ERR(obj))
>                 return ERR_CAST(obj);
>
> --
> 2.27.0
>


* Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object
  2020-07-07 17:05   ` lepton
@ 2020-07-07 17:20     ` Chris Wilson
  -1 siblings, 0 replies; 56+ messages in thread
From: Chris Wilson @ 2020-07-07 17:20 UTC (permalink / raw)
  To: lepton
  Cc: Thomas Hellström, # v4 . 10+,
	Christian König, dri-devel, intel-gfx

Quoting lepton (2020-07-07 18:05:21)
> On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson <chris@chris-wilson.co.uk> wrote:
> >
> > If we assign obj->filp, we believe that the create vgem bo is native and
> > allow direct operations like mmap() assuming it behaves as backed by a
> > shmemfs inode. When imported from a dmabuf, the obj->pages are
> > not always meaningful and the shmemfs backing store misleading.
> >
> > Note, that regular mmap access to a vgem bo is via the dumb buffer API,
> > and that rejects attempts to mmap an imported dmabuf,
> What do you mean by "regular mmap access" here?  It looks like vgem is
> using vgem_gem_dumb_map as .dumb_map_offset callback then it doesn't call
> drm_gem_dumb_map_offset

As I too found out, and so had to correct my storytelling.

By regular mmap() access I mean mmap on the vgem bo [via the dumb buffer
API] as opposed to mmap() via an exported dma-buf fd. I had to look at
igt to see how it was being used.
-Chris
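
A rough sketch of those two routes, assuming `drm_fd` is a vgem device
fd and `dmabuf_fd` an exported dma-buf fd; the function names and the
dumb-buffer dimensions are placeholders, not taken from igt:

#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <drm/drm.h>

/* Route 1: native dumb bo, mapped via the dumb-buffer API on the DRM fd. */
static void *map_native_dumb_bo(int drm_fd, uint64_t *out_size)
{
	struct drm_mode_create_dumb create = {
		.width = 1024, .height = 1024, .bpp = 32,	/* arbitrary */
	};
	struct drm_mode_map_dumb map = { 0 };

	if (ioctl(drm_fd, DRM_IOCTL_MODE_CREATE_DUMB, &create) < 0)
		return MAP_FAILED;
	map.handle = create.handle;
	if (ioctl(drm_fd, DRM_IOCTL_MODE_MAP_DUMB, &map) < 0)
		return MAP_FAILED;
	*out_size = create.size;
	return mmap(NULL, create.size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    drm_fd, map.offset);
}

/* Route 2: an exported dma-buf, mapped via its own fd, no handle involved. */
static void *map_exported_dmabuf(int dmabuf_fd, size_t size)
{
	return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    dmabuf_fd, 0);
}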

* Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object
  2020-07-07 17:20     ` [Intel-gfx] " Chris Wilson
  (?)
@ 2020-07-07 18:17       ` lepton
  -1 siblings, 0 replies; 56+ messages in thread
From: lepton @ 2020-07-07 18:17 UTC (permalink / raw)
  To: Chris Wilson
  Cc: dri-devel, intel-gfx, Daniel Vetter, Christian König,
	Thomas Hellström, # v4 . 10+

On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson <chris@chris-wilson.co.uk> wrote:
>
> Quoting lepton (2020-07-07 18:05:21)
> > On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson <chris@chris-wilson.co.uk> wrote:
> > >
> > > If we assign obj->filp, we believe that the create vgem bo is native and
> > > allow direct operations like mmap() assuming it behaves as backed by a
> > > shmemfs inode. When imported from a dmabuf, the obj->pages are
> > > not always meaningful and the shmemfs backing store misleading.
> > >
> > > Note, that regular mmap access to a vgem bo is via the dumb buffer API,
> > > and that rejects attempts to mmap an imported dmabuf,
> > What do you mean by "regular mmap access" here?  It looks like vgem is
> > using vgem_gem_dumb_map as .dumb_map_offset callback then it doesn't call
> > drm_gem_dumb_map_offset
>
> As I too found out, and so had to correct my story telling.
>
> By regular mmap() access I mean mmap on the vgem bo [via the dumb buffer
> API] as opposed to mmap() via an exported dma-buf fd. I had to look at
> igt to see how it was being used.
Now it seems your fix is to disable "regular mmap" on an imported
dma-buf for vgem. I am not really a graphics guy, but then the API
looks like this: for a GEM handle, user space has to guess which way to
mmap it, and if it guesses wrong, the mmap will fail. Is this the
expected way for people to handle GPU buffers?
> -Chris


* Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object
  2020-07-07 18:17       ` lepton
@ 2020-07-07 18:35         ` Chris Wilson
  -1 siblings, 0 replies; 56+ messages in thread
From: Chris Wilson @ 2020-07-07 18:35 UTC (permalink / raw)
  To: lepton
  Cc: Thomas Hellström, # v4 . 10+,
	Christian König, dri-devel, intel-gfx

Quoting lepton (2020-07-07 19:17:51)
> On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson <chris@chris-wilson.co.uk> wrote:
> >
> > Quoting lepton (2020-07-07 18:05:21)
> > > On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson <chris@chris-wilson.co.uk> wrote:
> > > >
> > > > If we assign obj->filp, we believe that the create vgem bo is native and
> > > > allow direct operations like mmap() assuming it behaves as backed by a
> > > > shmemfs inode. When imported from a dmabuf, the obj->pages are
> > > > not always meaningful and the shmemfs backing store misleading.
> > > >
> > > > Note, that regular mmap access to a vgem bo is via the dumb buffer API,
> > > > and that rejects attempts to mmap an imported dmabuf,
> > > What do you mean by "regular mmap access" here?  It looks like vgem is
> > > using vgem_gem_dumb_map as .dumb_map_offset callback then it doesn't call
> > > drm_gem_dumb_map_offset
> >
> > As I too found out, and so had to correct my story telling.
> >
> > By regular mmap() access I mean mmap on the vgem bo [via the dumb buffer
> > API] as opposed to mmap() via an exported dma-buf fd. I had to look at
> > igt to see how it was being used.
> Now it seems your fix is to disable "regular mmap" on imported dma buf
> for vgem. I am not really a graphic guy, but then the api looks like:
> for a gem handle, user space has to guess to find out the way to mmap
> it. If user space guess wrong, then it will fail to mmap. Is this the
> expected way
> for people to handle gpu buffer?

You either have a dumb buffer handle, or a dma-buf fd. If you have the
handle, you have to use the dumb buffer API; there's no other way to
mmap it. If you have the dma-buf fd, you should mmap it directly. Those
two are clear.

It's when you import the dma-buf into vgem and create a handle out of
it that the handle is no longer first class, and certain uAPI [the dumb
buffer API in particular] fail.

It's not brilliant; as you say, it requires the user to remember the
difference between the handles. But at the same time it does prevent
them from falling into coherency traps, by forcing them to use the
right driver to handle the object and to consider the additional ioctls
that go along with that access.
-Chris
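
One way user space can cope with remembering the difference is a little
bookkeeping of its own; a minimal sketch with made-up names, assuming
the buffer's size and origin were recorded at creation or import time:

#include <stddef.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <drm/drm.h>

struct bo {
	uint32_t handle;
	int dmabuf_fd;	/* -1 for a native dumb buffer */
	size_t size;
};

static void *bo_mmap(int drm_fd, const struct bo *bo)
{
	/* Imported buffer: map the dma-buf fd directly. */
	if (bo->dmabuf_fd >= 0)
		return mmap(NULL, bo->size, PROT_READ | PROT_WRITE,
			    MAP_SHARED, bo->dmabuf_fd, 0);

	/* Native dumb buffer: go through the dumb-buffer API. */
	struct drm_mode_map_dumb map = { .handle = bo->handle };
	if (ioctl(drm_fd, DRM_IOCTL_MODE_MAP_DUMB, &map) < 0)
		return MAP_FAILED;
	return mmap(NULL, bo->size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    drm_fd, map.offset);
}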

* Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object
  2020-07-07 16:00 ` Chris Wilson
  (?)
@ 2020-07-08  5:44   ` lepton
  -1 siblings, 0 replies; 56+ messages in thread
From: lepton @ 2020-07-08  5:44 UTC (permalink / raw)
  To: Chris Wilson
  Cc: dri-devel, intel-gfx, Daniel Vetter, Christian König,
	Thomas Hellström, # v4 . 10+

On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson <chris@chris-wilson.co.uk> wrote:
>
> If we assign obj->filp, we believe that the create vgem bo is native and
> allow direct operations like mmap() assuming it behaves as backed by a
> shmemfs inode. When imported from a dmabuf, the obj->pages are
> not always meaningful and the shmemfs backing store misleading.
>
> Note, that regular mmap access to a vgem bo is via the dumb buffer API,
> and that rejects attempts to mmap an imported dmabuf,
>
> drm_gem_dumb_map_offset():
>         if (obj->import_attach) return -EINVAL;
>
> So the only route by which we might accidentally allow mmapping of an
> imported buffer is via vgem_prime_mmap(), which checked for
> obj->filp assuming that it would be NULL.
>
> Well it would had it been updated to use the common
> drm_gem_dum_map_offset() helper, instead it has
>
> vgem_gem_dumb_map():
>         if (!obj->filp) return -EINVAL;
>
> falling foul of the same trap as above.
>
> Reported-by: Lepton Wu <ytht.net@gmail.com>
> Fixes: af33a9190d02 ("drm/vgem: Enable dmabuf import interfaces")
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Lepton Wu <ytht.net@gmail.com>
> Cc: Daniel Vetter <daniel@ffwll.ch>
> Cc: Christian König <christian.koenig@amd.com>
> Cc: Thomas Hellström (Intel) <thomas_os@shipmail.org>
> Cc: <stable@vger.kernel.org> # v4.13+
> ---
>  drivers/gpu/drm/vgem/vgem_drv.c | 27 +++++++++++++++++----------
>  1 file changed, 17 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
> index 909eba43664a..eb3b7cdac941 100644
> --- a/drivers/gpu/drm/vgem/vgem_drv.c
> +++ b/drivers/gpu/drm/vgem/vgem_drv.c
> @@ -91,7 +91,7 @@ static vm_fault_t vgem_gem_fault(struct vm_fault *vmf)
>                 ret = 0;
>         }
>         mutex_unlock(&obj->pages_lock);
> -       if (ret) {
> +       if (ret && obj->base.filp) {
>                 struct page *page;
>
>                 page = shmem_read_mapping_page(
> @@ -157,7 +157,8 @@ static void vgem_postclose(struct drm_device *dev, struct drm_file *file)
>  }
>
>  static struct drm_vgem_gem_object *__vgem_gem_create(struct drm_device *dev,
> -                                               unsigned long size)
> +                                                    struct file *shmem,
> +                                                    unsigned long size)
>  {
>         struct drm_vgem_gem_object *obj;
>         int ret;
Remove this, it's not used any more.
> @@ -166,11 +167,8 @@ static struct drm_vgem_gem_object *__vgem_gem_create(struct drm_device *dev,
>         if (!obj)
>                 return ERR_PTR(-ENOMEM);
>
> -       ret = drm_gem_object_init(dev, &obj->base, roundup(size, PAGE_SIZE));
> -       if (ret) {
> -               kfree(obj);
> -               return ERR_PTR(ret);
> -       }
> +       drm_gem_private_object_init(dev, &obj->base, size);
> +       obj->base.filp = shmem;
>
>         mutex_init(&obj->pages_lock);
>
> @@ -189,11 +187,20 @@ static struct drm_gem_object *vgem_gem_create(struct drm_device *dev,
>                                               unsigned long size)
>  {
>         struct drm_vgem_gem_object *obj;
> +       struct file *shmem;
>         int ret;
>
> -       obj = __vgem_gem_create(dev, size);
> -       if (IS_ERR(obj))
> +       size = roundup(size, PAGE_SIZE);
> +
> +       shmem = shmem_file_setup(DRIVER_NAME, size, VM_NORESERVE);
> +       if (IS_ERR(shmem))
> +               return ERR_CAST(shmem);
> +
> +       obj = __vgem_gem_create(dev, shmem, size);
> +       if (IS_ERR(obj)) {
> +               fput(shmem);
>                 return ERR_CAST(obj);
> +       }
>
>         ret = drm_gem_handle_create(file, &obj->base, handle);
>         if (ret) {
> @@ -363,7 +370,7 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
>         struct drm_vgem_gem_object *obj;
>         int npages;
>
> -       obj = __vgem_gem_create(dev, attach->dmabuf->size);
> +       obj = __vgem_gem_create(dev, NULL, attach->dmabuf->size);
>         if (IS_ERR(obj))
>                 return ERR_CAST(obj);
>
> --
> 2.27.0
>


* Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object
  2020-07-07 18:35         ` [Intel-gfx] " Chris Wilson
  (?)
@ 2020-07-08  9:22           ` Christian König
  -1 siblings, 0 replies; 56+ messages in thread
From: Christian König @ 2020-07-08  9:22 UTC (permalink / raw)
  To: Chris Wilson, lepton
  Cc: dri-devel, intel-gfx, Daniel Vetter, Thomas Hellström, # v4.10+

Am 07.07.20 um 20:35 schrieb Chris Wilson:
> Quoting lepton (2020-07-07 19:17:51)
>> On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson <chris@chris-wilson.co.uk> wrote:
>>> Quoting lepton (2020-07-07 18:05:21)
>>>> On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson <chris@chris-wilson.co.uk> wrote:
>>>>> If we assign obj->filp, we believe that the create vgem bo is native and
>>>>> allow direct operations like mmap() assuming it behaves as backed by a
>>>>> shmemfs inode. When imported from a dmabuf, the obj->pages are
>>>>> not always meaningful and the shmemfs backing store misleading.
>>>>>
>>>>> Note, that regular mmap access to a vgem bo is via the dumb buffer API,
>>>>> and that rejects attempts to mmap an imported dmabuf,
>>>> What do you mean by "regular mmap access" here?  It looks like vgem is
>>>> using vgem_gem_dumb_map as .dumb_map_offset callback then it doesn't call
>>>> drm_gem_dumb_map_offset
>>> As I too found out, and so had to correct my story telling.
>>>
>>> By regular mmap() access I mean mmap on the vgem bo [via the dumb buffer
>>> API] as opposed to mmap() via an exported dma-buf fd. I had to look at
>>> igt to see how it was being used.
>> Now it seems your fix is to disable "regular mmap" on imported dma buf
>> for vgem. I am not really a graphic guy, but then the api looks like:
>> for a gem handle, user space has to guess to find out the way to mmap
>> it. If user space guess wrong, then it will fail to mmap. Is this the
>> expected way
>> for people to handle gpu buffer?
> You either have a dumb buffer handle, or a dma-buf fd. If you have the
> handle, you have to use the dumb buffer API, there's no other way to
> mmap it. If you have the dma-buf fd, you should mmap it directly. Those
> two are clear.
>
> It's when you import the dma-buf into vgem and create a handle out of
> it, that's when the handle is no longer first class and certain uAPI
> [the dumb buffer API in particular] fail.
>
> It's not brilliant, as you say, it requires the user to remember the
> difference between the handles, but at the same time it does prevent
> them falling into coherency traps by forcing them to use the right
> driver to handle the object, and have to consider the additional ioctls
> that go along with that access.

Yes, Chris is right. Mapping DMA-buf through the mmap() APIs of an 
importer is illegal.

What we could maybe try to do is to redirect this mmap() API call on the 
importer to the exporter, but I'm pretty sure that the fs layer wouldn't 
like that without changes.

Regards,
Christian.


> -Chris
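
To make the two paths described above concrete, a minimal userspace
sketch (error handling trimmed; map_dumb_bo()/map_dmabuf() and the
drm_fd/dmabuf_fd/handle/size parameters are illustrative names, and the
header paths depend on how the libdrm/kernel uapi headers are
installed):

#include <stddef.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <drm/drm.h>        /* DRM_IOCTL_MODE_MAP_DUMB */
#include <drm/drm_mode.h>   /* struct drm_mode_map_dumb */

/* Native dumb bo: ask the driver for its fake mmap offset, then mmap the DRM fd. */
static void *map_dumb_bo(int drm_fd, uint32_t handle, size_t size)
{
	struct drm_mode_map_dumb map = { .handle = handle };

	if (ioctl(drm_fd, DRM_IOCTL_MODE_MAP_DUMB, &map))
		return MAP_FAILED;

	return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    drm_fd, map.offset);
}

/* dma-buf fd: mmap the exporter's fd directly, always from offset 0. */
static void *map_dmabuf(int dmabuf_fd, size_t size)
{
	return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    dmabuf_fd, 0);
}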


* Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object
  2020-07-08  9:22           ` Christian König
  (?)
@ 2020-07-08  9:54             ` Daniel Vetter
  -1 siblings, 0 replies; 56+ messages in thread
From: Daniel Vetter @ 2020-07-08  9:54 UTC (permalink / raw)
  To: Christian König
  Cc: Chris Wilson, lepton, dri-devel, intel-gfx, Daniel Vetter,
	Thomas Hellström, # v4.10+

On Wed, Jul 08, 2020 at 11:22:00AM +0200, Christian König wrote:
> Am 07.07.20 um 20:35 schrieb Chris Wilson:
> > Quoting lepton (2020-07-07 19:17:51)
> > > On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson <chris@chris-wilson.co.uk> wrote:
> > > > Quoting lepton (2020-07-07 18:05:21)
> > > > > On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson <chris@chris-wilson.co.uk> wrote:
> > > > > > If we assign obj->filp, we believe that the create vgem bo is native and
> > > > > > allow direct operations like mmap() assuming it behaves as backed by a
> > > > > > shmemfs inode. When imported from a dmabuf, the obj->pages are
> > > > > > not always meaningful and the shmemfs backing store misleading.
> > > > > > 
> > > > > > Note, that regular mmap access to a vgem bo is via the dumb buffer API,
> > > > > > and that rejects attempts to mmap an imported dmabuf,
> > > > > What do you mean by "regular mmap access" here?  It looks like vgem is
> > > > > using vgem_gem_dumb_map as .dumb_map_offset callback then it doesn't call
> > > > > drm_gem_dumb_map_offset
> > > > As I too found out, and so had to correct my story telling.
> > > > 
> > > > By regular mmap() access I mean mmap on the vgem bo [via the dumb buffer
> > > > API] as opposed to mmap() via an exported dma-buf fd. I had to look at
> > > > igt to see how it was being used.
> > > Now it seems your fix is to disable "regular mmap" on imported dma buf
> > > for vgem. I am not really a graphic guy, but then the api looks like:
> > > for a gem handle, user space has to guess to find out the way to mmap
> > > it. If user space guess wrong, then it will fail to mmap. Is this the
> > > expected way
> > > for people to handle gpu buffer?
> > You either have a dumb buffer handle, or a dma-buf fd. If you have the
> > handle, you have to use the dumb buffer API, there's no other way to
> > mmap it. If you have the dma-buf fd, you should mmap it directly. Those
> > two are clear.
> > 
> > It's when you import the dma-buf into vgem and create a handle out of
> > it, that's when the handle is no longer first class and certain uAPI
> > [the dumb buffer API in particular] fail.
> > 
> > It's not brilliant, as you say, it requires the user to remember the
> > difference between the handles, but at the same time it does prevent
> > them falling into coherency traps by forcing them to use the right
> > driver to handle the object, and have to consider the additional ioctls
> > that go along with that access.
> 
> Yes, Chris is right. Mapping DMA-buf through the mmap() APIs of an importer
> is illegal.
> 
> What we could maybe try to do is to redirect this mmap() API call on the
> importer to the exporter, but I'm pretty sure that the fs layer wouldn't
> like that without changes.

We already do that, there's a full helper-ified path from I think shmem
helpers through prime helpers to forward this all. Including handling
buffer offsets and all the other lolz back&forth.

Of course there's still the problem that many drivers don't forward the
cache coherency calls for begin/end cpu access, so in a bunch of cases
you'll cache cacheline dirt soup. But that's kinda standard procedure for
dma-buf :-P

But yeah trying to handle the mmap as an importer, bypassing the export:
nope. The one exception is if you have some kind of fancy gart with
cpu-visible pci bar (like at least integrated intel gpus have). But in
that case the mmap very much looks&acts like device access in every way.

Cheers, Daniel

> Regards,
> Christian.
> 
> 
> > -Chris
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
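
A rough sketch of the importer-side forwarding mentioned above, written
as a hypothetical driver mmap hook; example_gem_mmap() and
example_native_mmap() are made-up names for illustration, while
dma_buf_mmap() is the importer-facing helper that redirects the VMA to
the exporter:

#include <drm/drm_gem.h>
#include <linux/dma-buf.h>

static int example_native_mmap(struct drm_gem_object *obj,
			       struct vm_area_struct *vma);

/* If the object was imported, hand the whole mapping to the exporter
 * instead of touching our own (possibly meaningless) backing store.
 */
static int example_gem_mmap(struct drm_gem_object *obj,
			    struct vm_area_struct *vma)
{
	if (obj->import_attach)
		return dma_buf_mmap(obj->import_attach->dmabuf, vma, 0);

	return example_native_mmap(obj, vma);
}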

* Re: [PATCH 2/2] drm/vgem: Replace opencoded version of drm_gem_dumb_map_offset()
  2020-07-07 16:00   ` [Intel-gfx] " Chris Wilson
@ 2020-07-08  9:56     ` Daniel Vetter
  -1 siblings, 0 replies; 56+ messages in thread
From: Daniel Vetter @ 2020-07-08  9:56 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx, dri-devel

On Tue, Jul 07, 2020 at 05:00:12PM +0100, Chris Wilson wrote:
> drm_gem_dumb_map_offset() now exists and does everything
> vgem_gem_dumb_map does and *ought* to do.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  drivers/gpu/drm/vgem/vgem_drv.c | 28 +---------------------------
>  1 file changed, 1 insertion(+), 27 deletions(-)
> 
> diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
> index eb3b7cdac941..866cff537f28 100644
> --- a/drivers/gpu/drm/vgem/vgem_drv.c
> +++ b/drivers/gpu/drm/vgem/vgem_drv.c
> @@ -236,32 +236,6 @@ static int vgem_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
>  	return 0;
>  }
>  
> -static int vgem_gem_dumb_map(struct drm_file *file, struct drm_device *dev,
> -			     uint32_t handle, uint64_t *offset)
> -{
> -	struct drm_gem_object *obj;
> -	int ret;
> -
> -	obj = drm_gem_object_lookup(file, handle);
> -	if (!obj)
> -		return -ENOENT;
> -
> -	if (!obj->filp) {
> -		ret = -EINVAL;
> -		goto unref;
> -	}
> -
> -	ret = drm_gem_create_mmap_offset(obj);
> -	if (ret)
> -		goto unref;
> -
> -	*offset = drm_vma_node_offset_addr(&obj->vma_node);
> -unref:
> -	drm_gem_object_put_unlocked(obj);
> -
> -	return ret;
> -}
> -
>  static struct drm_ioctl_desc vgem_ioctls[] = {
>  	DRM_IOCTL_DEF_DRV(VGEM_FENCE_ATTACH, vgem_fence_attach_ioctl, DRM_RENDER_ALLOW),
>  	DRM_IOCTL_DEF_DRV(VGEM_FENCE_SIGNAL, vgem_fence_signal_ioctl, DRM_RENDER_ALLOW),
> @@ -455,7 +429,7 @@ static struct drm_driver vgem_driver = {
>  	.fops				= &vgem_driver_fops,
>  
>  	.dumb_create			= vgem_gem_dumb_create,
> -	.dumb_map_offset		= vgem_gem_dumb_map,
> +	.dumb_map_offset		= drm_gem_dumb_map_offset,

Even better: Just delete it, it's the default. With that:

Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>

Also maybe cc: stable, since this should stop the mmap attempts on
imported dma-buf? Or will this break stuff ...
-Daniel

>  
>  	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
>  	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
> -- 
> 2.27.0
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
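
What makes the core helper the cleaner fix is that it rejects imported
objects via obj->import_attach instead of peeking at obj->filp. A
paraphrased sketch of its shape (not verbatim kernel code; details may
differ between versions):

int drm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev,
			    u32 handle, u64 *offset)
{
	struct drm_gem_object *obj;
	int ret;

	obj = drm_gem_object_lookup(file, handle);
	if (!obj)
		return -ENOENT;

	/* The dumb buffer API is only for native objects, never imports. */
	if (obj->import_attach) {
		ret = -EINVAL;
		goto out;
	}

	ret = drm_gem_create_mmap_offset(obj);
	if (ret)
		goto out;

	*offset = drm_vma_node_offset_addr(&obj->vma_node);
out:
	drm_gem_object_put_unlocked(obj);
	return ret;
}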

* Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object
  2020-07-08  9:54             ` Daniel Vetter
  (?)
@ 2020-07-08 14:37               ` Christian König
  -1 siblings, 0 replies; 56+ messages in thread
From: Christian König @ 2020-07-08 14:37 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Chris Wilson, lepton, dri-devel, intel-gfx,
	Thomas Hellström, # v4.10+

Am 08.07.20 um 11:54 schrieb Daniel Vetter:
> On Wed, Jul 08, 2020 at 11:22:00AM +0200, Christian König wrote:
>> Am 07.07.20 um 20:35 schrieb Chris Wilson:
>>> Quoting lepton (2020-07-07 19:17:51)
>>>> On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson <chris@chris-wilson.co.uk> wrote:
>>>>> Quoting lepton (2020-07-07 18:05:21)
>>>>>> On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson <chris@chris-wilson.co.uk> wrote:
>>>>>>> If we assign obj->filp, we believe that the create vgem bo is native and
>>>>>>> allow direct operations like mmap() assuming it behaves as backed by a
>>>>>>> shmemfs inode. When imported from a dmabuf, the obj->pages are
>>>>>>> not always meaningful and the shmemfs backing store misleading.
>>>>>>>
>>>>>>> Note, that regular mmap access to a vgem bo is via the dumb buffer API,
>>>>>>> and that rejects attempts to mmap an imported dmabuf,
>>>>>> What do you mean by "regular mmap access" here?  It looks like vgem is
>>>>>> using vgem_gem_dumb_map as .dumb_map_offset callback then it doesn't call
>>>>>> drm_gem_dumb_map_offset
>>>>> As I too found out, and so had to correct my story telling.
>>>>>
>>>>> By regular mmap() access I mean mmap on the vgem bo [via the dumb buffer
>>>>> API] as opposed to mmap() via an exported dma-buf fd. I had to look at
>>>>> igt to see how it was being used.
>>>> Now it seems your fix is to disable "regular mmap" on imported dma buf
>>>> for vgem. I am not really a graphic guy, but then the api looks like:
>>>> for a gem handle, user space has to guess to find out the way to mmap
>>>> it. If user space guess wrong, then it will fail to mmap. Is this the
>>>> expected way
>>>> for people to handle gpu buffer?
>>> You either have a dumb buffer handle, or a dma-buf fd. If you have the
>>> handle, you have to use the dumb buffer API, there's no other way to
>>> mmap it. If you have the dma-buf fd, you should mmap it directly. Those
>>> two are clear.
>>>
>>> It's when you import the dma-buf into vgem and create a handle out of
>>> it, that's when the handle is no longer first class and certain uAPI
>>> [the dumb buffer API in particular] fail.
>>>
>>> It's not brilliant, as you say, it requires the user to remember the
>>> difference between the handles, but at the same time it does prevent
>>> them falling into coherency traps by forcing them to use the right
>>> driver to handle the object, and have to consider the additional ioctls
>>> that go along with that access.
>> Yes, Chris is right. Mapping DMA-buf through the mmap() APIs of an importer
>> is illegal.
>>
>> What we could maybe try to do is to redirect this mmap() API call on the
>> importer to the exporter, but I'm pretty sure that the fs layer wouldn't
>> like that without changes.
> We already do that, there's a full helper-ified path from I think shmem
> helpers through prime helpers to forward this all. Including handling
> buffer offsets and all the other lolz back&forth.

Oh, that most likely won't work correctly with unpinned DMA-bufs and 
needs to be avoided.

Each file descriptor is associated with a struct address_space. And 
when you mmap() through the importer by redirecting the system call to 
the exporter, you end up with the wrong struct address_space in your VMA.

That in turn can easily go up in flames when the exporter tries to 
invalidate the CPU mappings for its DMA-buf while moving it.

Where are we doing this? My last status was that this is forbidden.

Christian.

> Of course there's still the problem that many drivers don't forward the
> cache coherency calls for begin/end cpu access, so in a bunch of cases
> you'll cache cacheline dirt soup. But that's kinda standard procedure for
> dma-buf :-P
>
> But yeah trying to handle the mmap as an importer, bypassing the export:
> nope. The one exception is if you have some kind of fancy gart with
> cpu-visible pci bar (like at least integrated intel gpus have). But in
> that case the mmap very much looks&acts like device access in every way.
>
> Cheers, Daniel
>
>> Regards,
>> Christian.
>>
>>
>>> -Chris
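
To illustrate why the address_space matters: an exporter that moves a
buffer typically zaps the existing CPU mappings through the
address_space it installed when the buffer was mapped, so a VMA that
still carries the importer's file (and therefore the importer's
address_space) is never visited. A hypothetical exporter-side sketch;
struct example_bo and its mapping field are made up for illustration:

#include <linux/mm.h>
#include <drm/drm_vma_manager.h>

struct example_bo {
	struct address_space *mapping;		/* address_space installed into the VMAs */
	struct drm_vma_offset_node vma_node;
	unsigned long size;
};

/* Drop every CPU mapping of @bo before migrating it. Only VMAs tracked in
 * @bo->mapping are reached; a VMA whose vm_file still points at the
 * importer's file lives in a different address_space and keeps stale PTEs.
 */
static void example_evict_cpu_mappings(struct example_bo *bo)
{
	unmap_mapping_range(bo->mapping,
			    drm_vma_node_offset_addr(&bo->vma_node),
			    bo->size, 1);
}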


* Re: [PATCH 2/2] drm/vgem: Replace opencoded version of drm_gem_dumb_map_offset()
  2020-07-08  9:56     ` [Intel-gfx] " Daniel Vetter
@ 2020-07-08 14:53       ` Chris Wilson
  -1 siblings, 0 replies; 56+ messages in thread
From: Chris Wilson @ 2020-07-08 14:53 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: intel-gfx, dri-devel

Quoting Daniel Vetter (2020-07-08 10:56:19)
> On Tue, Jul 07, 2020 at 05:00:12PM +0100, Chris Wilson wrote:
> > drm_gem_dumb_map_offset() now exists and does everything
> > vgem_gem_dumb_map does and *ought* to do.
> > 
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > ---
> >  drivers/gpu/drm/vgem/vgem_drv.c | 28 +---------------------------
> >  1 file changed, 1 insertion(+), 27 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
> > index eb3b7cdac941..866cff537f28 100644
> > --- a/drivers/gpu/drm/vgem/vgem_drv.c
> > +++ b/drivers/gpu/drm/vgem/vgem_drv.c
> > @@ -236,32 +236,6 @@ static int vgem_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
> >       return 0;
> >  }
> >  
> > -static int vgem_gem_dumb_map(struct drm_file *file, struct drm_device *dev,
> > -                          uint32_t handle, uint64_t *offset)
> > -{
> > -     struct drm_gem_object *obj;
> > -     int ret;
> > -
> > -     obj = drm_gem_object_lookup(file, handle);
> > -     if (!obj)
> > -             return -ENOENT;
> > -
> > -     if (!obj->filp) {
> > -             ret = -EINVAL;
> > -             goto unref;
> > -     }
> > -
> > -     ret = drm_gem_create_mmap_offset(obj);
> > -     if (ret)
> > -             goto unref;
> > -
> > -     *offset = drm_vma_node_offset_addr(&obj->vma_node);
> > -unref:
> > -     drm_gem_object_put_unlocked(obj);
> > -
> > -     return ret;
> > -}
> > -
> >  static struct drm_ioctl_desc vgem_ioctls[] = {
> >       DRM_IOCTL_DEF_DRV(VGEM_FENCE_ATTACH, vgem_fence_attach_ioctl, DRM_RENDER_ALLOW),
> >       DRM_IOCTL_DEF_DRV(VGEM_FENCE_SIGNAL, vgem_fence_signal_ioctl, DRM_RENDER_ALLOW),
> > @@ -455,7 +429,7 @@ static struct drm_driver vgem_driver = {
> >       .fops                           = &vgem_driver_fops,
> >  
> >       .dumb_create                    = vgem_gem_dumb_create,
> > -     .dumb_map_offset                = vgem_gem_dumb_map,
> > +     .dumb_map_offset                = drm_gem_dumb_map_offset,
> 
> Even better: Just delete it, it's the default. With that:
> 
> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> 
> Also maybe cc: stable, since this should stop the mmap attempts on
> imported dma-buf? Or will this break stuff ...

commit 90378e58919285637aa0f063c04ba0c6449d98b1 [v4.15]
Author: Noralf Trønnes <noralf@tronnes.org>
Date:   Thu Aug 17 18:21:30 2017 +0200

    drm/gem: drm_gem_dumb_map_offset(): reject dma-buf

and

commit db61152703c64133e42b8c720a83ff36e1824bb1 [v4.14]
Author: Noralf Trønnes <noralf@tronnes.org>
Date:   Sun Jul 23 21:16:17 2017 +0200

    drm/gem: Add drm_gem_dumb_map_offset()

It became default at the same time:

commit 0be8d63a840ab7cacb08f1af1f2395be0fe3b698 [v4.14]
Author: Noralf Trønnes <noralf@tronnes.org>
Date:   Sun Jul 23 21:16:18 2017 +0200

    drm/dumb-buffers: Add defaults for .dumb_map_offset and .dumb_destroy

The bug was in

commit af33a9190d0226251e9cbc137c88a707b0bbe356 [v4.13]
Author: Laura Abbott <labbott@redhat.com>
Date:   Thu May 4 11:45:48 2017 -0700

    drm/vgem: Enable dmabuf import interfaces

drm_gem_dumb_map_offset() is much older than I thought it was, and
while I suspected you might have suggested making it an automatic default,
I looked in the wrong place for the caller.

It's the cleaner fix, so this deserves the cc: stable more than the
first patch, and it covers the same LTS kernels.
-Chris

* Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object
  2020-07-08 14:37               ` Christian König
  (?)
@ 2020-07-08 15:01                 ` Daniel Vetter
  -1 siblings, 0 replies; 56+ messages in thread
From: Daniel Vetter @ 2020-07-08 15:01 UTC (permalink / raw)
  To: Christian König
  Cc: Chris Wilson, lepton, dri-devel, intel-gfx,
	Thomas Hellström, # v4.10+

On Wed, Jul 8, 2020 at 4:37 PM Christian König <christian.koenig@amd.com> wrote:
>
> Am 08.07.20 um 11:54 schrieb Daniel Vetter:
> > On Wed, Jul 08, 2020 at 11:22:00AM +0200, Christian König wrote:
> >> Am 07.07.20 um 20:35 schrieb Chris Wilson:
> >>> Quoting lepton (2020-07-07 19:17:51)
> >>>> On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson <chris@chris-wilson.co.uk> wrote:
> >>>>> Quoting lepton (2020-07-07 18:05:21)
> >>>>>> On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson <chris@chris-wilson.co.uk> wrote:
> >>>>>>> If we assign obj->filp, we believe that the create vgem bo is native and
> >>>>>>> allow direct operations like mmap() assuming it behaves as backed by a
> >>>>>>> shmemfs inode. When imported from a dmabuf, the obj->pages are
> >>>>>>> not always meaningful and the shmemfs backing store misleading.
> >>>>>>>
> >>>>>>> Note, that regular mmap access to a vgem bo is via the dumb buffer API,
> >>>>>>> and that rejects attempts to mmap an imported dmabuf,
> >>>>>> What do you mean by "regular mmap access" here?  It looks like vgem is
> >>>>>> using vgem_gem_dumb_map as .dumb_map_offset callback then it doesn't call
> >>>>>> drm_gem_dumb_map_offset
> >>>>> As I too found out, and so had to correct my story telling.
> >>>>>
> >>>>> By regular mmap() access I mean mmap on the vgem bo [via the dumb buffer
> >>>>> API] as opposed to mmap() via an exported dma-buf fd. I had to look at
> >>>>> igt to see how it was being used.
> >>>> Now it seems your fix is to disable "regular mmap" on imported dma buf
> >>>> for vgem. I am not really a graphic guy, but then the api looks like:
> >>>> for a gem handle, user space has to guess to find out the way to mmap
> >>>> it. If user space guess wrong, then it will fail to mmap. Is this the
> >>>> expected way
> >>>> for people to handle gpu buffer?
> >>> You either have a dumb buffer handle, or a dma-buf fd. If you have the
> >>> handle, you have to use the dumb buffer API, there's no other way to
> >>> mmap it. If you have the dma-buf fd, you should mmap it directly. Those
> >>> two are clear.
> >>>
> >>> It's when you import the dma-buf into vgem and create a handle out of
> >>> it, that's when the handle is no longer first class and certain uAPI
> >>> [the dumb buffer API in particular] fail.
> >>>
> >>> It's not brilliant, as you say, it requires the user to remember the
> >>> difference between the handles, but at the same time it does prevent
> >>> them falling into coherency traps by forcing them to use the right
> >>> driver to handle the object, and have to consider the additional ioctls
> >>> that go along with that access.
> >> Yes, Chris is right. Mapping DMA-buf through the mmap() APIs of an importer
> >> is illegal.
> >>
> >> What we could maybe try to do is to redirect this mmap() API call on the
> >> importer to the exporter, but I'm pretty sure that the fs layer wouldn't
> >> like that without changes.
> > We already do that, there's a full helper-ified path from I think shmem
> > helpers through prime helpers to forward this all. Including handling
> > buffer offsets and all the other lolz back&forth.
>
> Oh, that most likely won't work correctly with unpinned DMA-bufs and
> needs to be avoided.
>
> Each file descriptor is associated with a struct address_space. And
> when you mmap() through the importer by redirecting the system call to
> the exporter, you end up with the wrong struct address_space in your VMA.
>
> That in turn can easily go up in flames when the exporter tries to
> invalidate the CPU mappings for its DMA-buf while moving it.
>
> Where are we doing this? My last status was that this is forbidden.

Hm I thought we're doing all that already, but looking at the code
again we're only doing this when opening a new drm fd or dma-buf fd.
So the right file->f_mapping is always set at file creation time.

And we indeed don't frob this more when going another indirection ...
Maybe we screwed up something somewhere :-/

Also I thought the mapping is only taken after the vma is instantiated,
otherwise the tricks we're playing with dma-buf already wouldn't work:
dma-buf has the buffer always at offset 0, whereas gem drm_fd mmap has
it somewhere else. We already adjust vma->vm_pgoff, so I'm wondering
whether we could adjust vm_file too. Or is that the thing that's
forbidden?
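
For reference, the kind of rebasing I mean, modelled loosely on the
dma_buf_mmap() helper, would be something like this (sketch only, checks
and error paths trimmed, not the verbatim kernel code):

        /* importer's mmap handler forwarding to the exporter */
        get_file(dmabuf->file);
        oldfile = vma->vm_file;
        vma->vm_file = dmabuf->file;    /* exporter's file and address_space */
        vma->vm_pgoff = 0;              /* dma-buf is always mapped from 0 */

        ret = dmabuf->ops->mmap(dmabuf, vma);
        if (ret) {
                /* restore the old file on failure */
                vma->vm_file = oldfile;
                fput(dmabuf->file);
        } else if (oldfile) {
                fput(oldfile);
        }
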
-Daniel

> Christian.
>
> > Of course there's still the problem that many drivers don't forward the
> > cache coherency calls for begin/end cpu access, so in a bunch of cases
> > you'll cache cacheline dirt soup. But that's kinda standard procedure for
> > dma-buf :-P
> >
> > But yeah trying to handle the mmap as an importer, bypassing the export:
> > nope. The one exception is if you have some kind of fancy gart with
> > cpu-visible pci bar (like at least integrated intel gpus have). But in
> > that case the mmap very much looks&acts like device access in every way.
> >
> > Cheers, Daniel
> >
> >> Regards,
> >> Christian.
> >>
> >>
> >>> -Chris
>


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object
  2020-07-08 15:01                 ` Daniel Vetter
@ 2020-07-08 15:05                   ` Christian König
  -1 siblings, 0 replies; 56+ messages in thread
From: Christian König @ 2020-07-08 15:05 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Chris Wilson, lepton, dri-devel, intel-gfx,
	Thomas Hellström, # v4.10+

Am 08.07.20 um 17:01 schrieb Daniel Vetter:
> On Wed, Jul 8, 2020 at 4:37 PM Christian König <christian.koenig@amd.com> wrote:
>> Am 08.07.20 um 11:54 schrieb Daniel Vetter:
>>> On Wed, Jul 08, 2020 at 11:22:00AM +0200, Christian König wrote:
>>>> Am 07.07.20 um 20:35 schrieb Chris Wilson:
>>>>> Quoting lepton (2020-07-07 19:17:51)
>>>>>> On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson <chris@chris-wilson.co.uk> wrote:
>>>>>>> Quoting lepton (2020-07-07 18:05:21)
>>>>>>>> On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson <chris@chris-wilson.co.uk> wrote:
>>>>>>>>> If we assign obj->filp, we believe that the create vgem bo is native and
>>>>>>>>> allow direct operations like mmap() assuming it behaves as backed by a
>>>>>>>>> shmemfs inode. When imported from a dmabuf, the obj->pages are
>>>>>>>>> not always meaningful and the shmemfs backing store misleading.
>>>>>>>>>
>>>>>>>>> Note, that regular mmap access to a vgem bo is via the dumb buffer API,
>>>>>>>>> and that rejects attempts to mmap an imported dmabuf,
>>>>>>>> What do you mean by "regular mmap access" here?  It looks like vgem is
>>>>>>>> using vgem_gem_dumb_map as .dumb_map_offset callback then it doesn't call
>>>>>>>> drm_gem_dumb_map_offset
>>>>>>> As I too found out, and so had to correct my story telling.
>>>>>>>
>>>>>>> By regular mmap() access I mean mmap on the vgem bo [via the dumb buffer
>>>>>>> API] as opposed to mmap() via an exported dma-buf fd. I had to look at
>>>>>>> igt to see how it was being used.
>>>>>> Now it seems your fix is to disable "regular mmap" on imported dma buf
>>>>>> for vgem. I am not really a graphic guy, but then the api looks like:
>>>>>> for a gem handle, user space has to guess to find out the way to mmap
>>>>>> it. If user space guess wrong, then it will fail to mmap. Is this the
>>>>>> expected way
>>>>>> for people to handle gpu buffer?
>>>>> You either have a dumb buffer handle, or a dma-buf fd. If you have the
>>>>> handle, you have to use the dumb buffer API, there's no other way to
>>>>> mmap it. If you have the dma-buf fd, you should mmap it directly. Those
>>>>> two are clear.
>>>>>
>>>>> It's when you import the dma-buf into vgem and create a handle out of
>>>>> it, that's when the handle is no longer first class and certain uAPI
>>>>> [the dumb buffer API in particular] fail.
>>>>>
>>>>> It's not brilliant, as you say, it requires the user to remember the
>>>>> difference between the handles, but at the same time it does prevent
>>>>> them falling into coherency traps by forcing them to use the right
>>>>> driver to handle the object, and have to consider the additional ioctls
>>>>> that go along with that access.
>>>> Yes, Chris is right. Mapping DMA-buf through the mmap() APIs of an importer
>>>> is illegal.
>>>>
>>>> What we could maybe try to do is to redirect this mmap() API call on the
>>>> importer to the exporter, but I'm pretty sure that the fs layer wouldn't
>>>> like that without changes.
>>> We already do that, there's a full helper-ified path from I think shmem
>>> helpers through prime helpers to forward this all. Including handling
>>> buffer offsets and all the other lolz back&forth.
>> Oh, that most likely won't work correctly with unpinned DMA-bufs and
>> needs to be avoided.
>>
>> Each file descriptor is associated with an struct address_space. And
>> when you mmap() through the importer by redirecting the system call to
>> the exporter you end up with the wrong struct address_space in your VMA.
>>
>> That in turn can go up easily in flames when the exporter tries to
>> invalidate the CPU mappings for its DMA-buf while moving it.
>>
>> Where are we doing this? My last status was that this is forbidden.
> Hm I thought we're doing all that already, but looking at the code
> again we're only doing this when opening a new drm fd or dma-buf fd.
> So the right file->f_mapping is always set at file creation time.
>
> And we indeed don't frob this more when going another indirection ...
> Maybe we screwed up something somewhere :-/
>
> Also I thought the mapping is only taken after the vma is instatiated,
> otherwise the tricks we're playing with dma-buf already wouldn't work:
> dma-buf has the buffer always at offset 0, whereas gem drm_fd mmap has
> it somewhere else. We already adjust vma->vm_pgoff, so I'm wondering
> whether we could adjust vm_file too. Or is that the thing that's
> forbidden?

Yes, exactly. Modifying vm_pgoff is harmless, tons of code does that.

But changing vma->vm_file, that's something I haven't seen before and 
most likely could blow up badly.
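
To illustrate the concern, the exporter's invalidation boils down to
something like this (sketch only; exporter_filp, start and size are just
placeholders, not any specific driver's code):

        /* Exporter moving its buffer: zap the CPU mappings of its backing
         * file. unmap_mapping_range() only walks the VMAs linked into this
         * address_space, so a VMA that was set up against some other
         * file/mapping is simply not found and keeps pointing at stale
         * pages.
         */
        unmap_mapping_range(exporter_filp->f_mapping, start, size, 1);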

Christian.

> -Daniel
>
>> Christian.
>>
>>> Of course there's still the problem that many drivers don't forward the
>>> cache coherency calls for begin/end cpu access, so in a bunch of cases
>>> you'll cache cacheline dirt soup. But that's kinda standard procedure for
>>> dma-buf :-P
>>>
>>> But yeah trying to handle the mmap as an importer, bypassing the export:
>>> nope. The one exception is if you have some kind of fancy gart with
>>> cpu-visible pci bar (like at least integrated intel gpus have). But in
>>> that case the mmap very much looks&acts like device access in every way.
>>>
>>> Cheers, Daniel
>>>
>>>> Regards,
>>>> Christian.
>>>>
>>>>
>>>>> -Chris
>


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object
  2020-07-08 15:05                   ` Christian König
@ 2020-07-08 16:11                     ` Daniel Vetter
  -1 siblings, 0 replies; 56+ messages in thread
From: Daniel Vetter @ 2020-07-08 16:11 UTC (permalink / raw)
  To: Christian König
  Cc: Chris Wilson, lepton, dri-devel, intel-gfx,
	Thomas Hellström, # v4.10+

On Wed, Jul 8, 2020 at 5:05 PM Christian König <christian.koenig@amd.com> wrote:
>
> Am 08.07.20 um 17:01 schrieb Daniel Vetter:
> > On Wed, Jul 8, 2020 at 4:37 PM Christian König <christian.koenig@amd.com> wrote:
> >> Am 08.07.20 um 11:54 schrieb Daniel Vetter:
> >>> On Wed, Jul 08, 2020 at 11:22:00AM +0200, Christian König wrote:
> >>>> Am 07.07.20 um 20:35 schrieb Chris Wilson:
> >>>>> Quoting lepton (2020-07-07 19:17:51)
> >>>>>> On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson <chris@chris-wilson.co.uk> wrote:
> >>>>>>> Quoting lepton (2020-07-07 18:05:21)
> >>>>>>>> On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson <chris@chris-wilson.co.uk> wrote:
> >>>>>>>>> If we assign obj->filp, we believe that the create vgem bo is native and
> >>>>>>>>> allow direct operations like mmap() assuming it behaves as backed by a
> >>>>>>>>> shmemfs inode. When imported from a dmabuf, the obj->pages are
> >>>>>>>>> not always meaningful and the shmemfs backing store misleading.
> >>>>>>>>>
> >>>>>>>>> Note, that regular mmap access to a vgem bo is via the dumb buffer API,
> >>>>>>>>> and that rejects attempts to mmap an imported dmabuf,
> >>>>>>>> What do you mean by "regular mmap access" here?  It looks like vgem is
> >>>>>>>> using vgem_gem_dumb_map as .dumb_map_offset callback then it doesn't call
> >>>>>>>> drm_gem_dumb_map_offset
> >>>>>>> As I too found out, and so had to correct my story telling.
> >>>>>>>
> >>>>>>> By regular mmap() access I mean mmap on the vgem bo [via the dumb buffer
> >>>>>>> API] as opposed to mmap() via an exported dma-buf fd. I had to look at
> >>>>>>> igt to see how it was being used.
> >>>>>> Now it seems your fix is to disable "regular mmap" on imported dma buf
> >>>>>> for vgem. I am not really a graphic guy, but then the api looks like:
> >>>>>> for a gem handle, user space has to guess to find out the way to mmap
> >>>>>> it. If user space guess wrong, then it will fail to mmap. Is this the
> >>>>>> expected way
> >>>>>> for people to handle gpu buffer?
> >>>>> You either have a dumb buffer handle, or a dma-buf fd. If you have the
> >>>>> handle, you have to use the dumb buffer API, there's no other way to
> >>>>> mmap it. If you have the dma-buf fd, you should mmap it directly. Those
> >>>>> two are clear.
> >>>>>
> >>>>> It's when you import the dma-buf into vgem and create a handle out of
> >>>>> it, that's when the handle is no longer first class and certain uAPI
> >>>>> [the dumb buffer API in particular] fail.
> >>>>>
> >>>>> It's not brilliant, as you say, it requires the user to remember the
> >>>>> difference between the handles, but at the same time it does prevent
> >>>>> them falling into coherency traps by forcing them to use the right
> >>>>> driver to handle the object, and have to consider the additional ioctls
> >>>>> that go along with that access.
> >>>> Yes, Chris is right. Mapping DMA-buf through the mmap() APIs of an importer
> >>>> is illegal.
> >>>>
> >>>> What we could maybe try to do is to redirect this mmap() API call on the
> >>>> importer to the exporter, but I'm pretty sure that the fs layer wouldn't
> >>>> like that without changes.
> >>> We already do that, there's a full helper-ified path from I think shmem
> >>> helpers through prime helpers to forward this all. Including handling
> >>> buffer offsets and all the other lolz back&forth.
> >> Oh, that most likely won't work correctly with unpinned DMA-bufs and
> >> needs to be avoided.
> >>
> >> Each file descriptor is associated with an struct address_space. And
> >> when you mmap() through the importer by redirecting the system call to
> >> the exporter you end up with the wrong struct address_space in your VMA.
> >>
> >> That in turn can go up easily in flames when the exporter tries to
> >> invalidate the CPU mappings for its DMA-buf while moving it.
> >>
> >> Where are we doing this? My last status was that this is forbidden.
> > Hm I thought we're doing all that already, but looking at the code
> > again we're only doing this when opening a new drm fd or dma-buf fd.
> > So the right file->f_mapping is always set at file creation time.
> >
> > And we indeed don't frob this more when going another indirection ...
> > Maybe we screwed up something somewhere :-/
> >
> > Also I thought the mapping is only taken after the vma is instatiated,
> > otherwise the tricks we're playing with dma-buf already wouldn't work:
> > dma-buf has the buffer always at offset 0, whereas gem drm_fd mmap has
> > it somewhere else. We already adjust vma->vm_pgoff, so I'm wondering
> > whether we could adjust vm_file too. Or is that the thing that's
> > forbidden?
>
> Yes, exactly. Modifying vm_pgoff is harmless, tons of code does that.
>
> But changing vma->vm_file, that's something I haven't seen before and
> most likely could blow up badly.

Ok, I read the shmem helpers again, I think those are the only ones
which do the importer mmap -> dma_buf_mmap() forwarding, and hence
break stuff all around here.
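
The forwarding in question is roughly this (condensed from the shmem
helper as I remember it, not verbatim):

        /* drm_gem_shmem_mmap(), roughly: drop the fake GEM mmap offset
         * and, for imported objects, hand the whole mmap over to the
         * exporter, which maps the buffer from offset 0.
         */
        vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);

        if (obj->import_attach)
                return dma_buf_mmap(obj->dma_buf, vma, 0);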

They also remove the vma->vm_pgoff offset, which means
unmap_mapping_range won't work anyway. I guess for drivers which use the
shmem helpers the hard assumptions are that they a) can't use p2p dma and
b) pin everything into system memory.

So not a problem. But something to keep in mind. I'll try to do a
kerneldoc patch to note this somewhere. btw on that, did the
timeline/syncobj documentation patch land by now? Just trying to make
sure that doesn't get lost for another few months or so :-/

Cheers, Daniel

>
> Christian.
>
> > -Daniel
> >
> >> Christian.
> >>
> >>> Of course there's still the problem that many drivers don't forward the
> >>> cache coherency calls for begin/end cpu access, so in a bunch of cases
> >>> you'll cache cacheline dirt soup. But that's kinda standard procedure for
> >>> dma-buf :-P
> >>>
> >>> But yeah trying to handle the mmap as an importer, bypassing the export:
> >>> nope. The one exception is if you have some kind of fancy gart with
> >>> cpu-visible pci bar (like at least integrated intel gpus have). But in
> >>> that case the mmap very much looks&acts like device access in every way.
> >>>
> >>> Cheers, Daniel
> >>>
> >>>> Regards,
> >>>> Christian.
> >>>>
> >>>>
> >>>>> -Chris
> >
>


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object
  2020-07-08 16:11                     ` Daniel Vetter
@ 2020-07-08 16:19                       ` Daniel Vetter
  -1 siblings, 0 replies; 56+ messages in thread
From: Daniel Vetter @ 2020-07-08 16:19 UTC (permalink / raw)
  To: Christian König, Rob Herring, Tomeu Vizoso,
	Alyssa Rosenzweig, Steven Price
  Cc: Chris Wilson, lepton, dri-devel, intel-gfx,
	Thomas Hellström, # v4.10+

On Wed, Jul 8, 2020 at 6:11 PM Daniel Vetter <daniel@ffwll.ch> wrote:
>
> On Wed, Jul 8, 2020 at 5:05 PM Christian König <christian.koenig@amd.com> wrote:
> >
> > Am 08.07.20 um 17:01 schrieb Daniel Vetter:
> > > On Wed, Jul 8, 2020 at 4:37 PM Christian König <christian.koenig@amd.com> wrote:
> > >> Am 08.07.20 um 11:54 schrieb Daniel Vetter:
> > >>> On Wed, Jul 08, 2020 at 11:22:00AM +0200, Christian König wrote:
> > >>>> Am 07.07.20 um 20:35 schrieb Chris Wilson:
> > >>>>> Quoting lepton (2020-07-07 19:17:51)
> > >>>>>> On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson <chris@chris-wilson.co.uk> wrote:
> > >>>>>>> Quoting lepton (2020-07-07 18:05:21)
> > >>>>>>>> On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson <chris@chris-wilson.co.uk> wrote:
> > >>>>>>>>> If we assign obj->filp, we believe that the create vgem bo is native and
> > >>>>>>>>> allow direct operations like mmap() assuming it behaves as backed by a
> > >>>>>>>>> shmemfs inode. When imported from a dmabuf, the obj->pages are
> > >>>>>>>>> not always meaningful and the shmemfs backing store misleading.
> > >>>>>>>>>
> > >>>>>>>>> Note, that regular mmap access to a vgem bo is via the dumb buffer API,
> > >>>>>>>>> and that rejects attempts to mmap an imported dmabuf,
> > >>>>>>>> What do you mean by "regular mmap access" here?  It looks like vgem is
> > >>>>>>>> using vgem_gem_dumb_map as .dumb_map_offset callback then it doesn't call
> > >>>>>>>> drm_gem_dumb_map_offset
> > >>>>>>> As I too found out, and so had to correct my story telling.
> > >>>>>>>
> > >>>>>>> By regular mmap() access I mean mmap on the vgem bo [via the dumb buffer
> > >>>>>>> API] as opposed to mmap() via an exported dma-buf fd. I had to look at
> > >>>>>>> igt to see how it was being used.
> > >>>>>> Now it seems your fix is to disable "regular mmap" on imported dma buf
> > >>>>>> for vgem. I am not really a graphic guy, but then the api looks like:
> > >>>>>> for a gem handle, user space has to guess to find out the way to mmap
> > >>>>>> it. If user space guess wrong, then it will fail to mmap. Is this the
> > >>>>>> expected way
> > >>>>>> for people to handle gpu buffer?
> > >>>>> You either have a dumb buffer handle, or a dma-buf fd. If you have the
> > >>>>> handle, you have to use the dumb buffer API, there's no other way to
> > >>>>> mmap it. If you have the dma-buf fd, you should mmap it directly. Those
> > >>>>> two are clear.
> > >>>>>
> > >>>>> It's when you import the dma-buf into vgem and create a handle out of
> > >>>>> it, that's when the handle is no longer first class and certain uAPI
> > >>>>> [the dumb buffer API in particular] fail.
> > >>>>>
> > >>>>> It's not brilliant, as you say, it requires the user to remember the
> > >>>>> difference between the handles, but at the same time it does prevent
> > >>>>> them falling into coherency traps by forcing them to use the right
> > >>>>> driver to handle the object, and have to consider the additional ioctls
> > >>>>> that go along with that access.
> > >>>> Yes, Chris is right. Mapping DMA-buf through the mmap() APIs of an importer
> > >>>> is illegal.
> > >>>>
> > >>>> What we could maybe try to do is to redirect this mmap() API call on the
> > >>>> importer to the exporter, but I'm pretty sure that the fs layer wouldn't
> > >>>> like that without changes.
> > >>> We already do that, there's a full helper-ified path from I think shmem
> > >>> helpers through prime helpers to forward this all. Including handling
> > >>> buffer offsets and all the other lolz back&forth.
> > >> Oh, that most likely won't work correctly with unpinned DMA-bufs and
> > >> needs to be avoided.
> > >>
> > >> Each file descriptor is associated with an struct address_space. And
> > >> when you mmap() through the importer by redirecting the system call to
> > >> the exporter you end up with the wrong struct address_space in your VMA.
> > >>
> > >> That in turn can go up easily in flames when the exporter tries to
> > >> invalidate the CPU mappings for its DMA-buf while moving it.
> > >>
> > >> Where are we doing this? My last status was that this is forbidden.
> > > Hm I thought we're doing all that already, but looking at the code
> > > again we're only doing this when opening a new drm fd or dma-buf fd.
> > > So the right file->f_mapping is always set at file creation time.
> > >
> > > And we indeed don't frob this more when going another indirection ...
> > > Maybe we screwed up something somewhere :-/
> > >
> > > Also I thought the mapping is only taken after the vma is instatiated,
> > > otherwise the tricks we're playing with dma-buf already wouldn't work:
> > > dma-buf has the buffer always at offset 0, whereas gem drm_fd mmap has
> > > it somewhere else. We already adjust vma->vm_pgoff, so I'm wondering
> > > whether we could adjust vm_file too. Or is that the thing that's
> > > forbidden?
> >
> > Yes, exactly. Modifying vm_pgoff is harmless, tons of code does that.
> >
> > But changing vma->vm_file, that's something I haven't seen before and
> > most likely could blow up badly.
>
> Ok, I read the shmem helpers again, I think those are the only ones
> which do the importer mmap -> dma_buf_mmap() forwarding, and hence
> break stuff all around here.
>
> They also remove the vma->vm_pgoff offset, which means
> unmap_mapping_range wont work anyway. I guess for drivers which use
> shmem helpers the hard assumption is that a) can't use p2p dma and b)
> pin everything into system memory.
>
> So not a problem. But something to keep in mind. I'll try to do a
> kerneldoc patch to note this somewhere. btw on that, did the
> timeline/syncobj documentation patch land by now? Just trying to make
> sure that doesn't get lost for another few months or so :-/

Ok, so maybe it is a problem, because there is a drm_gem_shmem_purge()
which uses unmap_mapping_range underneath, and that's used by
panfrost. And panfrost also uses the mmap helper. Kinda wondering
whether we broke some stuff here, or whether the reverse map is
installed before we touch vma->vm_pgoff.
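
For reference, the purge side boils down to roughly this (condensed,
based on drm_vma_node_unmap(); may not match the helper exactly):

        /* drm_gem_shmem_purge(), in essence: zap all userspace mappings
         * of the object through the drm device's address_space at the
         * object's fake mmap offset ...
         */
        drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);

        /* ... which expands to roughly: */
        unmap_mapping_range(dev->anon_inode->i_mapping,
                            drm_vma_node_offset_addr(&obj->vma_node),
                            drm_vma_node_size(&obj->vma_node) << PAGE_SHIFT,
                            1);

        /* i.e. it only reaches VMAs still linked into that mapping at the
         * fake offset; a VMA whose vm_pgoff was rebased to 0, or whose
         * vm_file points at a different mapping, is missed.
         */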

panfrost folks, does panfrost's handling of mmap on purged buffers still
work correctly? Do you have some kind of igt or similar test for this?

Cheers, Daniel

>
> Cheers, Daniel
>
> >
> > Christian.
> >
> > > -Daniel
> > >
> > >> Christian.
> > >>
> > >>> Of course there's still the problem that many drivers don't forward the
> > >>> cache coherency calls for begin/end cpu access, so in a bunch of cases
> > >>> you'll cache cacheline dirt soup. But that's kinda standard procedure for
> > >>> dma-buf :-P
> > >>>
> > >>> But yeah trying to handle the mmap as an importer, bypassing the export:
> > >>> nope. The one exception is if you have some kind of fancy gart with
> > >>> cpu-visible pci bar (like at least integrated intel gpus have). But in
> > >>> that case the mmap very much looks&acts like device access in every way.
> > >>>
> > >>> Cheers, Daniel
> > >>>
> > >>>> Regards,
> > >>>> Christian.
> > >>>>
> > >>>>
> > >>>>> -Chris
> > >
> >
>
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch



-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object
  2020-07-08 16:19                       ` Daniel Vetter
  (?)
@ 2020-07-09  8:48                         ` Christian König
  -1 siblings, 0 replies; 56+ messages in thread
From: Christian König @ 2020-07-09  8:48 UTC (permalink / raw)
  To: Daniel Vetter, Rob Herring, Tomeu Vizoso, Alyssa Rosenzweig,
	Steven Price
  Cc: Chris Wilson, lepton, dri-devel, intel-gfx,
	Thomas Hellström, # v4.10+

Am 08.07.20 um 18:19 schrieb Daniel Vetter:
> On Wed, Jul 8, 2020 at 6:11 PM Daniel Vetter <daniel@ffwll.ch> wrote:
>> On Wed, Jul 8, 2020 at 5:05 PM Christian König <christian.koenig@amd.com> wrote:
>>> Am 08.07.20 um 17:01 schrieb Daniel Vetter:
>>>> On Wed, Jul 8, 2020 at 4:37 PM Christian König <christian.koenig@amd.com> wrote:
>>>>> Am 08.07.20 um 11:54 schrieb Daniel Vetter:
>>>>>> On Wed, Jul 08, 2020 at 11:22:00AM +0200, Christian König wrote:
>>>>>>> Am 07.07.20 um 20:35 schrieb Chris Wilson:
>>>>>>>> Quoting lepton (2020-07-07 19:17:51)
>>>>>>>>> On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson <chris@chris-wilson.co.uk> wrote:
>>>>>>>>>> Quoting lepton (2020-07-07 18:05:21)
>>>>>>>>>>> On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson <chris@chris-wilson.co.uk> wrote:
>>>>>>>>>>>> If we assign obj->filp, we believe that the create vgem bo is native and
>>>>>>>>>>>> allow direct operations like mmap() assuming it behaves as backed by a
>>>>>>>>>>>> shmemfs inode. When imported from a dmabuf, the obj->pages are
>>>>>>>>>>>> not always meaningful and the shmemfs backing store misleading.
>>>>>>>>>>>>
>>>>>>>>>>>> Note, that regular mmap access to a vgem bo is via the dumb buffer API,
>>>>>>>>>>>> and that rejects attempts to mmap an imported dmabuf,
>>>>>>>>>>> What do you mean by "regular mmap access" here?  It looks like vgem is
>>>>>>>>>>> using vgem_gem_dumb_map as .dumb_map_offset callback then it doesn't call
>>>>>>>>>>> drm_gem_dumb_map_offset
>>>>>>>>>> As I too found out, and so had to correct my story telling.
>>>>>>>>>>
>>>>>>>>>> By regular mmap() access I mean mmap on the vgem bo [via the dumb buffer
>>>>>>>>>> API] as opposed to mmap() via an exported dma-buf fd. I had to look at
>>>>>>>>>> igt to see how it was being used.
>>>>>>>>> Now it seems your fix is to disable "regular mmap" on imported dma buf
>>>>>>>>> for vgem. I am not really a graphic guy, but then the api looks like:
>>>>>>>>> for a gem handle, user space has to guess to find out the way to mmap
>>>>>>>>> it. If user space guess wrong, then it will fail to mmap. Is this the
>>>>>>>>> expected way
>>>>>>>>> for people to handle gpu buffer?
>>>>>>>> You either have a dumb buffer handle, or a dma-buf fd. If you have the
>>>>>>>> handle, you have to use the dumb buffer API, there's no other way to
>>>>>>>> mmap it. If you have the dma-buf fd, you should mmap it directly. Those
>>>>>>>> two are clear.
>>>>>>>>
>>>>>>>> It's when you import the dma-buf into vgem and create a handle out of
>>>>>>>> it, that's when the handle is no longer first class and certain uAPI
>>>>>>>> [the dumb buffer API in particular] fail.
>>>>>>>>
>>>>>>>> It's not brilliant, as you say, it requires the user to remember the
>>>>>>>> difference between the handles, but at the same time it does prevent
>>>>>>>> them falling into coherency traps by forcing them to use the right
>>>>>>>> driver to handle the object, and have to consider the additional ioctls
>>>>>>>> that go along with that access.
>>>>>>> Yes, Chris is right. Mapping DMA-buf through the mmap() APIs of an importer
>>>>>>> is illegal.
>>>>>>>
>>>>>>> What we could maybe try to do is to redirect this mmap() API call on the
>>>>>>> importer to the exporter, but I'm pretty sure that the fs layer wouldn't
>>>>>>> like that without changes.
>>>>>> We already do that, there's a full helper-ified path from I think shmem
>>>>>> helpers through prime helpers to forward this all. Including handling
>>>>>> buffer offsets and all the other lolz back&forth.
>>>>> Oh, that most likely won't work correctly with unpinned DMA-bufs and
>>>>> needs to be avoided.
>>>>>
>>>>> Each file descriptor is associated with an struct address_space. And
>>>>> when you mmap() through the importer by redirecting the system call to
>>>>> the exporter you end up with the wrong struct address_space in your VMA.
>>>>>
>>>>> That in turn can go up easily in flames when the exporter tries to
>>>>> invalidate the CPU mappings for its DMA-buf while moving it.
>>>>>
>>>>> Where are we doing this? My last status was that this is forbidden.
>>>> Hm I thought we're doing all that already, but looking at the code
>>>> again we're only doing this when opening a new drm fd or dma-buf fd.
>>>> So the right file->f_mapping is always set at file creation time.
>>>>
>>>> And we indeed don't frob this more when going another indirection ...
>>>> Maybe we screwed up something somewhere :-/
>>>>
>>>> Also I thought the mapping is only taken after the vma is instatiated,
>>>> otherwise the tricks we're playing with dma-buf already wouldn't work:
>>>> dma-buf has the buffer always at offset 0, whereas gem drm_fd mmap has
>>>> it somewhere else. We already adjust vma->vm_pgoff, so I'm wondering
>>>> whether we could adjust vm_file too. Or is that the thing that's
>>>> forbidden?
>>> Yes, exactly. Modifying vm_pgoff is harmless, tons of code does that.
>>>
>>> But changing vma->vm_file, that's something I haven't seen before and
>>> most likely could blow up badly.
>> Ok, I read the shmem helpers again, I think those are the only ones
>> which do the importer mmap -> dma_buf_mmap() forwarding, and hence
>> break stuff all around here.
>>
>> They also remove the vma->vm_pgoff offset, which means
>> unmap_mapping_range wont work anyway. I guess for drivers which use
>> shmem helpers the hard assumption is that a) can't use p2p dma and b)
>> pin everything into system memory.
>>
>> So not a problem. But something to keep in mind. I'll try to do a
>> kerneldoc patch to note this somewhere. btw on that, did the
>> timeline/syncobj documentation patch land by now? Just trying to make
>> sure that doesn't get lost for another few months or so :-/
> Ok, so maybe it is a problem. Because there is a drm_gem_shmem_purge()
> which uses unmap_mapping_range underneath, and that's used by
> panfrost. And panfrost also uses the mmap helper. Kinda wondering
> whether we broke some stuff here, or whether the reverse map is
> installed before we touch vma->vm_pgoff.

I think the key problem here is that unmap_mapping_range() doesn't blow 
up immediately when this is wrong.

E.g. we are just left with a stale CPU page table entry which allows
userspace to write to freed memory, and we don't really notice that
immediately.

Maybe we should stop allowing mmap() on a DMA-buf through the importer's
file descriptor altogether and only allow mapping it through its own fd
or through the exporter.
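A hedged sketch of what that policy could look like on the importer
side (the function name is illustrative): the GEM mmap path simply
refuses objects that carry an import attachment instead of forwarding
to the exporter, so userspace has to mmap() the dma-buf fd itself.

#include <linux/mm.h>
#include <drm/drm_gem.h>

static int example_no_import_mmap(struct drm_gem_object *obj,
				  struct vm_area_struct *vma)
{
	/* Imported buffers: map the dma-buf fd (or the exporter) instead */
	if (obj->import_attach)
		return -EINVAL;

	return drm_gem_mmap_obj(obj, obj->size, vma);
}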

Christian.

> panfrost folks, does panfrost purged buffer handling of mmap still
> work correctly? Do you have some kind of igt or similar for this?
>
> Cheers, Daniel
>
>> Cheers, Daniel
>>
>>> Christian.
>>>
>>>> -Daniel
>>>>
>>>>> Christian.
>>>>>
>>>>>> Of course there's still the problem that many drivers don't forward the
>>>>>> cache coherency calls for begin/end cpu access, so in a bunch of cases
>>>>>> you'll cache cacheline dirt soup. But that's kinda standard procedure for
>>>>>> dma-buf :-P
>>>>>>
>>>>>> But yeah trying to handle the mmap as an importer, bypassing the export:
>>>>>> nope. The one exception is if you have some kind of fancy gart with
>>>>>> cpu-visible pci bar (like at least integrated intel gpus have). But in
>>>>>> that case the mmap very much looks&acts like device access in every way.
>>>>>>
>>>>>> Cheers, Daniel
>>>>>>
>>>>>>> Regards,
>>>>>>> Christian.
>>>>>>>
>>>>>>>
>>>>>>>> -Chris
>>
>> --
>> Daniel Vetter
>> Software Engineer, Intel Corporation
>> http://blog.ffwll.ch
>
>


^ permalink raw reply	[flat|nested] 56+ messages in thread
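As background for the uAPI split Chris describes in the quoted
discussion above, here is a rough userspace sketch of the two mmap
paths; error handling is trimmed and the handle/size are assumed to
come from earlier DUMB_CREATE / PRIME ioctls:

#include <stdint.h>
#include <stddef.h>
#include <sys/mman.h>
#include <xf86drm.h>

/* Path 1: a native dumb-buffer handle is mapped through the DRM fd,
 * using the fake offset returned by DRM_IOCTL_MODE_MAP_DUMB. */
static void *map_dumb_bo(int drm_fd, uint32_t handle, size_t size)
{
	struct drm_mode_map_dumb map = { .handle = handle };

	if (drmIoctl(drm_fd, DRM_IOCTL_MODE_MAP_DUMB, &map))
		return MAP_FAILED;

	return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    drm_fd, map.offset);
}

/* Path 2: a dma-buf is mapped through its own fd, always at offset 0;
 * an imported handle must not be pushed through path 1. */
static void *map_dmabuf(int dmabuf_fd, size_t size)
{
	return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    dmabuf_fd, 0);
}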

* Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object
  2020-07-08 16:11                     ` Daniel Vetter
  (?)
@ 2020-07-09  8:49                       ` Christian König
  -1 siblings, 0 replies; 56+ messages in thread
From: Christian König @ 2020-07-09  8:49 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Chris Wilson, lepton, dri-devel, intel-gfx,
	Thomas Hellström, # v4.10+

Am 08.07.20 um 18:11 schrieb Daniel Vetter:
> On Wed, Jul 8, 2020 at 5:05 PM Christian König <christian.koenig@amd.com> wrote:
>> Am 08.07.20 um 17:01 schrieb Daniel Vetter:
>>> On Wed, Jul 8, 2020 at 4:37 PM Christian König <christian.koenig@amd.com> wrote:
>>>> Am 08.07.20 um 11:54 schrieb Daniel Vetter:
>>>>> On Wed, Jul 08, 2020 at 11:22:00AM +0200, Christian König wrote:
>>>>>> Am 07.07.20 um 20:35 schrieb Chris Wilson:
>>>>>>> Quoting lepton (2020-07-07 19:17:51)
>>>>>>>> On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson <chris@chris-wilson.co.uk> wrote:
>>>>>>>>> Quoting lepton (2020-07-07 18:05:21)
>>>>>>>>>> On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson <chris@chris-wilson.co.uk> wrote:
>>>>>>>>>>> If we assign obj->filp, we believe that the create vgem bo is native and
>>>>>>>>>>> allow direct operations like mmap() assuming it behaves as backed by a
>>>>>>>>>>> shmemfs inode. When imported from a dmabuf, the obj->pages are
>>>>>>>>>>> not always meaningful and the shmemfs backing store misleading.
>>>>>>>>>>>
>>>>>>>>>>> Note, that regular mmap access to a vgem bo is via the dumb buffer API,
>>>>>>>>>>> and that rejects attempts to mmap an imported dmabuf,
>>>>>>>>>> What do you mean by "regular mmap access" here?  It looks like vgem is
>>>>>>>>>> using vgem_gem_dumb_map as .dumb_map_offset callback then it doesn't call
>>>>>>>>>> drm_gem_dumb_map_offset
>>>>>>>>> As I too found out, and so had to correct my story telling.
>>>>>>>>>
>>>>>>>>> By regular mmap() access I mean mmap on the vgem bo [via the dumb buffer
>>>>>>>>> API] as opposed to mmap() via an exported dma-buf fd. I had to look at
>>>>>>>>> igt to see how it was being used.
>>>>>>>> Now it seems your fix is to disable "regular mmap" on imported dma buf
>>>>>>>> for vgem. I am not really a graphic guy, but then the api looks like:
>>>>>>>> for a gem handle, user space has to guess to find out the way to mmap
>>>>>>>> it. If user space guess wrong, then it will fail to mmap. Is this the
>>>>>>>> expected way
>>>>>>>> for people to handle gpu buffer?
>>>>>>> You either have a dumb buffer handle, or a dma-buf fd. If you have the
>>>>>>> handle, you have to use the dumb buffer API, there's no other way to
>>>>>>> mmap it. If you have the dma-buf fd, you should mmap it directly. Those
>>>>>>> two are clear.
>>>>>>>
>>>>>>> It's when you import the dma-buf into vgem and create a handle out of
>>>>>>> it, that's when the handle is no longer first class and certain uAPI
>>>>>>> [the dumb buffer API in particular] fail.
>>>>>>>
>>>>>>> It's not brilliant, as you say, it requires the user to remember the
>>>>>>> difference between the handles, but at the same time it does prevent
>>>>>>> them falling into coherency traps by forcing them to use the right
>>>>>>> driver to handle the object, and have to consider the additional ioctls
>>>>>>> that go along with that access.
>>>>>> Yes, Chris is right. Mapping DMA-buf through the mmap() APIs of an importer
>>>>>> is illegal.
>>>>>>
>>>>>> What we could maybe try to do is to redirect this mmap() API call on the
>>>>>> importer to the exporter, but I'm pretty sure that the fs layer wouldn't
>>>>>> like that without changes.
>>>>> We already do that, there's a full helper-ified path from I think shmem
>>>>> helpers through prime helpers to forward this all. Including handling
>>>>> buffer offsets and all the other lolz back&forth.
>>>> Oh, that most likely won't work correctly with unpinned DMA-bufs and
>>>> needs to be avoided.
>>>>
>>>> Each file descriptor is associated with an struct address_space. And
>>>> when you mmap() through the importer by redirecting the system call to
>>>> the exporter you end up with the wrong struct address_space in your VMA.
>>>>
>>>> That in turn can go up easily in flames when the exporter tries to
>>>> invalidate the CPU mappings for its DMA-buf while moving it.
>>>>
>>>> Where are we doing this? My last status was that this is forbidden.
>>> Hm I thought we're doing all that already, but looking at the code
>>> again we're only doing this when opening a new drm fd or dma-buf fd.
>>> So the right file->f_mapping is always set at file creation time.
>>>
>>> And we indeed don't frob this more when going another indirection ...
>>> Maybe we screwed up something somewhere :-/
>>>
>>> Also I thought the mapping is only taken after the vma is instatiated,
>>> otherwise the tricks we're playing with dma-buf already wouldn't work:
>>> dma-buf has the buffer always at offset 0, whereas gem drm_fd mmap has
>>> it somewhere else. We already adjust vma->vm_pgoff, so I'm wondering
>>> whether we could adjust vm_file too. Or is that the thing that's
>>> forbidden?
>> Yes, exactly. Modifying vm_pgoff is harmless, tons of code does that.
>>
>> But changing vma->vm_file, that's something I haven't seen before and
>> most likely could blow up badly.
> Ok, I read the shmem helpers again, I think those are the only ones
> which do the importer mmap -> dma_buf_mmap() forwarding, and hence
> break stuff all around here.
>
> They also remove the vma->vm_pgoff offset, which means
> unmap_mapping_range wont work anyway. I guess for drivers which use
> shmem helpers the hard assumption is that a) can't use p2p dma and b)
> pin everything into system memory.
>
> So not a problem. But something to keep in mind. I'll try to do a
> kerneldoc patch to note this somewhere. btw on that, did the
> timeline/syncobj documentation patch land by now? Just trying to make
> sure that doesn't get lost for another few months or so :-/

I still haven't found time to double-check the documentation and most
likely won't in quite a while.

Sorry for that, but yeah you know :)

Christian.

>
> Cheers, Daniel
>
>> Christian.
>>
>>> -Daniel
>>>
>>>> Christian.
>>>>
>>>>> Of course there's still the problem that many drivers don't forward the
>>>>> cache coherency calls for begin/end cpu access, so in a bunch of cases
>>>>> you'll cache cacheline dirt soup. But that's kinda standard procedure for
>>>>> dma-buf :-P
>>>>>
>>>>> But yeah trying to handle the mmap as an importer, bypassing the export:
>>>>> nope. The one exception is if you have some kind of fancy gart with
>>>>> cpu-visible pci bar (like at least integrated intel gpus have). But in
>>>>> that case the mmap very much looks&acts like device access in every way.
>>>>>
>>>>> Cheers, Daniel
>>>>>
>>>>>> Regards,
>>>>>> Christian.
>>>>>>
>>>>>>
>>>>>>> -Chris
>


^ permalink raw reply	[flat|nested] 56+ messages in thread
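For context on the "file->f_mapping is set at file creation time" point
quoted above, a simplified illustration of the pattern (stand-in
function names, not the exact kernel code): both the DRM fd and the
dma-buf fd point their f_mapping at the backing inode when the file is
created, so VMAs created through either fd carry an address_space that
the owner can later invalidate against.

#include <linux/fs.h>
#include <drm/drm_device.h>

/* Stand-in for the DRM open path: mmap offsets on the DRM fd resolve
 * against the device's anonymous inode mapping. */
static void example_drm_open_setup(struct file *filp, struct drm_device *dev)
{
	filp->f_mapping = dev->anon_inode->i_mapping;
}

/* Stand-in for dma-buf file creation: mmap() on the dma-buf fd resolves
 * against the dma-buf's own inode mapping. */
static void example_dmabuf_file_setup(struct file *file, struct inode *inode)
{
	file->f_mapping = inode->i_mapping;
}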

* Re: [Intel-gfx] [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object
@ 2020-07-09  8:49                       ` Christian König
  0 siblings, 0 replies; 56+ messages in thread
From: Christian König @ 2020-07-09  8:49 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: # v4.10+, lepton, dri-devel, Chris Wilson, intel-gfx

Am 08.07.20 um 18:11 schrieb Daniel Vetter:
> On Wed, Jul 8, 2020 at 5:05 PM Christian König <christian.koenig@amd.com> wrote:
>> Am 08.07.20 um 17:01 schrieb Daniel Vetter:
>>> On Wed, Jul 8, 2020 at 4:37 PM Christian König <christian.koenig@amd.com> wrote:
>>>> Am 08.07.20 um 11:54 schrieb Daniel Vetter:
>>>>> On Wed, Jul 08, 2020 at 11:22:00AM +0200, Christian König wrote:
>>>>>> Am 07.07.20 um 20:35 schrieb Chris Wilson:
>>>>>>> Quoting lepton (2020-07-07 19:17:51)
>>>>>>>> On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson <chris@chris-wilson.co.uk> wrote:
>>>>>>>>> Quoting lepton (2020-07-07 18:05:21)
>>>>>>>>>> On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson <chris@chris-wilson.co.uk> wrote:
>>>>>>>>>>> If we assign obj->filp, we believe that the create vgem bo is native and
>>>>>>>>>>> allow direct operations like mmap() assuming it behaves as backed by a
>>>>>>>>>>> shmemfs inode. When imported from a dmabuf, the obj->pages are
>>>>>>>>>>> not always meaningful and the shmemfs backing store misleading.
>>>>>>>>>>>
>>>>>>>>>>> Note, that regular mmap access to a vgem bo is via the dumb buffer API,
>>>>>>>>>>> and that rejects attempts to mmap an imported dmabuf,
>>>>>>>>>> What do you mean by "regular mmap access" here?  It looks like vgem is
>>>>>>>>>> using vgem_gem_dumb_map as .dumb_map_offset callback then it doesn't call
>>>>>>>>>> drm_gem_dumb_map_offset
>>>>>>>>> As I too found out, and so had to correct my story telling.
>>>>>>>>>
>>>>>>>>> By regular mmap() access I mean mmap on the vgem bo [via the dumb buffer
>>>>>>>>> API] as opposed to mmap() via an exported dma-buf fd. I had to look at
>>>>>>>>> igt to see how it was being used.
>>>>>>>> Now it seems your fix is to disable "regular mmap" on imported dma buf
>>>>>>>> for vgem. I am not really a graphic guy, but then the api looks like:
>>>>>>>> for a gem handle, user space has to guess to find out the way to mmap
>>>>>>>> it. If user space guess wrong, then it will fail to mmap. Is this the
>>>>>>>> expected way
>>>>>>>> for people to handle gpu buffer?
>>>>>>> You either have a dumb buffer handle, or a dma-buf fd. If you have the
>>>>>>> handle, you have to use the dumb buffer API, there's no other way to
>>>>>>> mmap it. If you have the dma-buf fd, you should mmap it directly. Those
>>>>>>> two are clear.
>>>>>>>
>>>>>>> It's when you import the dma-buf into vgem and create a handle out of
>>>>>>> it, that's when the handle is no longer first class and certain uAPI
>>>>>>> [the dumb buffer API in particular] fail.
>>>>>>>
>>>>>>> It's not brilliant, as you say, it requires the user to remember the
>>>>>>> difference between the handles, but at the same time it does prevent
>>>>>>> them falling into coherency traps by forcing them to use the right
>>>>>>> driver to handle the object, and have to consider the additional ioctls
>>>>>>> that go along with that access.
>>>>>> Yes, Chris is right. Mapping DMA-buf through the mmap() APIs of an importer
>>>>>> is illegal.
>>>>>>
>>>>>> What we could maybe try to do is to redirect this mmap() API call on the
>>>>>> importer to the exporter, but I'm pretty sure that the fs layer wouldn't
>>>>>> like that without changes.
>>>>> We already do that, there's a full helper-ified path from I think shmem
>>>>> helpers through prime helpers to forward this all. Including handling
>>>>> buffer offsets and all the other lolz back&forth.
>>>> Oh, that most likely won't work correctly with unpinned DMA-bufs and
>>>> needs to be avoided.
>>>>
>>>> Each file descriptor is associated with a struct address_space. And
>>>> when you mmap() through the importer by redirecting the system call to
>>>> the exporter you end up with the wrong struct address_space in your VMA.
>>>>
>>>> That in turn can go up easily in flames when the exporter tries to
>>>> invalidate the CPU mappings for its DMA-buf while moving it.
>>>>
>>>> Where are we doing this? My last status was that this is forbidden.
>>> Hm I thought we're doing all that already, but looking at the code
>>> again we're only doing this when opening a new drm fd or dma-buf fd.
>>> So the right file->f_mapping is always set at file creation time.
>>>
>>> And we indeed don't frob this more when going another indirection ...
>>> Maybe we screwed up something somewhere :-/
>>>
>>> Also I thought the mapping is only taken after the vma is instantiated,
>>> otherwise the tricks we're playing with dma-buf already wouldn't work:
>>> dma-buf has the buffer always at offset 0, whereas gem drm_fd mmap has
>>> it somewhere else. We already adjust vma->vm_pgoff, so I'm wondering
>>> whether we could adjust vm_file too. Or is that the thing that's
>>> forbidden?
>> Yes, exactly. Modifying vm_pgoff is harmless, tons of code does that.
>>
>> But changing vma->vm_file, that's something I haven't seen before and
>> most likely could blow up badly.
> Ok, I read the shmem helpers again, I think those are the only ones
> which do the importer mmap -> dma_buf_mmap() forwarding, and hence
> break stuff all around here.
>
> They also remove the vma->vm_pgoff offset, which means
> unmap_mapping_range won't work anyway. I guess for drivers which use
> shmem helpers the hard assumption is that a) can't use p2p dma and b)
> pin everything into system memory.
>
> So not a problem. But something to keep in mind. I'll try to do a
> kerneldoc patch to note this somewhere. btw on that, did the
> timeline/syncobj documentation patch land by now? Just trying to make
> sure that doesn't get lost for another few months or so :-/

I still haven't found time to double check the documentation and most 
likely won't in quite a while.

Sorry for that, but yeah you know :)

Christian.

>
> Cheers, Daniel
>
>> Christian.
>>
>>> -Daniel
>>>
>>>> Christian.
>>>>
>>>>> Of course there's still the problem that many drivers don't forward the
>>>>> cache coherency calls for begin/end cpu access, so in a bunch of cases
>>>>> you'll cache cacheline dirt soup. But that's kinda standard procedure for
>>>>> dma-buf :-P
>>>>>
>>>>> But yeah trying to handle the mmap as an importer, bypassing the export:
>>>>> nope. The one exception is if you have some kind of fancy gart with
>>>>> cpu-visible pci bar (like at least integrated intel gpus have). But in
>>>>> that case the mmap very much looks&acts like device access in every way.
>>>>>
>>>>> Cheers, Daniel
>>>>>
>>>>>> Regards,
>>>>>> Christian.
>>>>>>
>>>>>>
>>>>>>> -Chris
>

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object
  2020-07-09  8:48                         ` Christian König
  (?)
@ 2020-07-09 13:54                           ` Steven Price
  -1 siblings, 0 replies; 56+ messages in thread
From: Steven Price @ 2020-07-09 13:54 UTC (permalink / raw)
  To: Christian König, Daniel Vetter, Rob Herring, Tomeu Vizoso,
	Alyssa Rosenzweig
  Cc: Thomas Hellström, # v4.10+,
	lepton, dri-devel, Chris Wilson, intel-gfx

On 09/07/2020 09:48, Christian König wrote:
> Am 08.07.20 um 18:19 schrieb Daniel Vetter:
>> On Wed, Jul 8, 2020 at 6:11 PM Daniel Vetter <daniel@ffwll.ch> wrote:
>>> On Wed, Jul 8, 2020 at 5:05 PM Christian König 
>>> <christian.koenig@amd.com> wrote:
>>>> Am 08.07.20 um 17:01 schrieb Daniel Vetter:
>>>>> On Wed, Jul 8, 2020 at 4:37 PM Christian König 
>>>>> <christian.koenig@amd.com> wrote:
>>>>>> Am 08.07.20 um 11:54 schrieb Daniel Vetter:
>>>>>>> On Wed, Jul 08, 2020 at 11:22:00AM +0200, Christian König wrote:
>>>>>>>> Am 07.07.20 um 20:35 schrieb Chris Wilson:
>>>>>>>>> Quoting lepton (2020-07-07 19:17:51)
>>>>>>>>>> On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson 
>>>>>>>>>> <chris@chris-wilson.co.uk> wrote:
>>>>>>>>>>> Quoting lepton (2020-07-07 18:05:21)
>>>>>>>>>>>> On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson 
>>>>>>>>>>>> <chris@chris-wilson.co.uk> wrote:
>>>>>>>>>>>>> If we assign obj->filp, we believe that the create vgem bo 
>>>>>>>>>>>>> is native and
>>>>>>>>>>>>> allow direct operations like mmap() assuming it behaves as 
>>>>>>>>>>>>> backed by a
>>>>>>>>>>>>> shmemfs inode. When imported from a dmabuf, the obj->pages are
>>>>>>>>>>>>> not always meaningful and the shmemfs backing store 
>>>>>>>>>>>>> misleading.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Note, that regular mmap access to a vgem bo is via the dumb 
>>>>>>>>>>>>> buffer API,
>>>>>>>>>>>>> and that rejects attempts to mmap an imported dmabuf,
>>>>>>>>>>>> What do you mean by "regular mmap access" here?  It looks 
>>>>>>>>>>>> like vgem is
>>>>>>>>>>>> using vgem_gem_dumb_map as .dumb_map_offset callback then it 
>>>>>>>>>>>> doesn't call
>>>>>>>>>>>> drm_gem_dumb_map_offset
>>>>>>>>>>> As I too found out, and so had to correct my story telling.
>>>>>>>>>>>
>>>>>>>>>>> By regular mmap() access I mean mmap on the vgem bo [via the 
>>>>>>>>>>> dumb buffer
>>>>>>>>>>> API] as opposed to mmap() via an exported dma-buf fd. I had 
>>>>>>>>>>> to look at
>>>>>>>>>>> igt to see how it was being used.
>>>>>>>>>> Now it seems your fix is to disable "regular mmap" on imported 
>>>>>>>>>> dma buf
>>>>>>>>>> for vgem. I am not really a graphic guy, but then the api 
>>>>>>>>>> looks like:
>>>>>>>>>> for a gem handle, user space has to guess to find out the way 
>>>>>>>>>> to mmap
>>>>>>>>>> it. If user space guess wrong, then it will fail to mmap. Is 
>>>>>>>>>> this the
>>>>>>>>>> expected way
>>>>>>>>>> for people to handle gpu buffer?
>>>>>>>>> You either have a dumb buffer handle, or a dma-buf fd. If you 
>>>>>>>>> have the
>>>>>>>>> handle, you have to use the dumb buffer API, there's no other 
>>>>>>>>> way to
>>>>>>>>> mmap it. If you have the dma-buf fd, you should mmap it 
>>>>>>>>> directly. Those
>>>>>>>>> two are clear.
>>>>>>>>>
>>>>>>>>> It's when you import the dma-buf into vgem and create a handle 
>>>>>>>>> out of
>>>>>>>>> it, that's when the handle is no longer first class and certain 
>>>>>>>>> uAPI
>>>>>>>>> [the dumb buffer API in particular] fail.
>>>>>>>>>
>>>>>>>>> It's not brilliant, as you say, it requires the user to 
>>>>>>>>> remember the
>>>>>>>>> difference between the handles, but at the same time it does 
>>>>>>>>> prevent
>>>>>>>>> them falling into coherency traps by forcing them to use the right
>>>>>>>>> driver to handle the object, and have to consider the 
>>>>>>>>> additional ioctls
>>>>>>>>> that go along with that access.
>>>>>>>> Yes, Chris is right. Mapping DMA-buf through the mmap() APIs of 
>>>>>>>> an importer
>>>>>>>> is illegal.
>>>>>>>>
>>>>>>>> What we could maybe try to do is to redirect this mmap() API 
>>>>>>>> call on the
>>>>>>>> importer to the exporter, but I'm pretty sure that the fs layer 
>>>>>>>> wouldn't
>>>>>>>> like that without changes.
>>>>>>> We already do that, there's a full helper-ified path from I think 
>>>>>>> shmem
>>>>>>> helpers through prime helpers to forward this all. Including 
>>>>>>> handling
>>>>>>> buffer offsets and all the other lolz back&forth.
>>>>>> Oh, that most likely won't work correctly with unpinned DMA-bufs and
>>>>>> needs to be avoided.
>>>>>>
>>>>>> Each file descriptor is associated with a struct address_space. And
>>>>>> when you mmap() through the importer by redirecting the system 
>>>>>> call to
>>>>>> the exporter you end up with the wrong struct address_space in 
>>>>>> your VMA.
>>>>>>
>>>>>> That in turn can go up easily in flames when the exporter tries to
>>>>>> invalidate the CPU mappings for its DMA-buf while moving it.
>>>>>>
>>>>>> Where are we doing this? My last status was that this is forbidden.
>>>>> Hm I thought we're doing all that already, but looking at the code
>>>>> again we're only doing this when opening a new drm fd or dma-buf fd.
>>>>> So the right file->f_mapping is always set at file creation time.
>>>>>
>>>>> And we indeed don't frob this more when going another indirection ...
>>>>> Maybe we screwed up something somewhere :-/
>>>>>
>>>>> Also I thought the mapping is only taken after the vma is instantiated,
>>>>> otherwise the tricks we're playing with dma-buf already wouldn't work:
>>>>> dma-buf has the buffer always at offset 0, whereas gem drm_fd mmap has
>>>>> it somewhere else. We already adjust vma->vm_pgoff, so I'm wondering
>>>>> whether we could adjust vm_file too. Or is that the thing that's
>>>>> forbidden?
>>>> Yes, exactly. Modifying vm_pgoff is harmless, tons of code does that.
>>>>
>>>> But changing vma->vm_file, that's something I haven't seen before and
>>>> most likely could blow up badly.
>>> Ok, I read the shmem helpers again, I think those are the only ones
>>> which do the importer mmap -> dma_buf_mmap() forwarding, and hence
>>> break stuff all around here.
>>>
>>> They also remove the vma->vm_pgoff offset, which means
>>> unmap_mapping_range won't work anyway. I guess for drivers which use
>>> shmem helpers the hard assumption is that a) can't use p2p dma and b)
>>> pin everything into system memory.
>>>
>>> So not a problem. But something to keep in mind. I'll try to do a
>>> kerneldoc patch to note this somewhere. btw on that, did the
>>> timeline/syncobj documentation patch land by now? Just trying to make
>>> sure that doesn't get lost for another few months or so :-/
>> Ok, so maybe it is a problem. Because there is a drm_gem_shmem_purge()
>> which uses unmap_mapping_range underneath, and that's used by
>> panfrost. And panfrost also uses the mmap helper. Kinda wondering
>> whether we broke some stuff here, or whether the reverse map is
>> installed before we touch vma->vm_pgoff.
> 
> I think the key problem here is that unmap_mapping_range() doesn't blow 
> up immediately when this is wrong.
> 
> E.g. we just have a stale CPU page table entry which allows userspace to 
> write to freed up memory, but we don't really notice that immediately....
> 
> Maybe we should stop allowing to mmap() DMA-buf through the importer 
> file descriptor altogether and only allow mapping it through its own fd 
> or the exporter.

That is what I tried to do with panfrost a while ago:

    583bbf46133c drm/panfrost: Use drm_gem_map_offset()

    panfrost_ioctl_mmap_bo() contains a reimplementation of
    drm_gem_map_offset() but with a bug - it allows mapping imported
    objects (without going through the exporter). Fix this by switching to
    use the newly renamed drm_gem_map_offset() function instead which has
    the bonus of simplifying the code.

Sadly it was followed by:

    be855382bacb Revert "drm/panfrost: Use drm_gem_map_offset()"
    This reverts commit 583bbf46133c726bae277e8f4e32bfba2a528c7f.

    Turns out we need mmap to work on imported BOs even if the current code
    is buggy.

> Christian.
> 
>> panfrost folks, does panfrost purged buffer handling of mmap still
>> work correctly? Do you have some kind of igt or similar for this?

I'm not aware of any real testing of this. And I fear it probably isn't 
getting much in the way of real-world testing either, otherwise someone
would have grown tired of the "Purging xx bytes" message [1].

I'm a little bit lost on this thread - are you expecting this patch to 
break panfrost? We shouldn't be purging imported memory. Although I'm 
not sure what (if anything) stops you trying to mark imported memory as 
"don't need". Or indeed what would happen.

Steve

[1] I have a patch silencing that, but recently haven't had much time 
for working on Panfrost, and don't have my WFH setup quite as slick as I 
did when I was in the office.
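
[For context on the two commits quoted above, the check at stake looks roughly like the sketch below, assuming the common drm_gem_dumb_map_offset()-style pattern. The function name is illustrative and the exact reference-put helper differs between kernel versions; this is not the actual panfrost or core DRM code.]

#include <drm/drm_gem.h>
#include <drm/drm_vma_manager.h>

/* Sketch only: produce an mmap offset for a GEM handle, refusing objects
 * that were imported from a dma-buf (those must be mmapped via the
 * exporter, i.e. the dma-buf fd). */
static int example_map_offset(struct drm_file *file, struct drm_device *dev,
			      u32 handle, u64 *offset)
{
	struct drm_gem_object *obj;
	int ret;

	obj = drm_gem_object_lookup(file, handle);
	if (!obj)
		return -ENOENT;

	if (obj->import_attach) {
		ret = -EINVAL;		/* imported: no dumb/native mmap */
		goto out;
	}

	ret = drm_gem_create_mmap_offset(obj);
	if (!ret)
		*offset = drm_vma_node_offset_addr(&obj->vma_node);
out:
	drm_gem_object_put(obj);	/* put-helper name varies across kernels */
	return ret;
}

[The revert quoted above is the catch: at least for panfrost, userspace had come to rely on mmap of imported BOs working through the importer, so such a check cannot simply be reinstated without breaking it.]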

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object
@ 2020-07-09 13:54                           ` Steven Price
  0 siblings, 0 replies; 56+ messages in thread
From: Steven Price @ 2020-07-09 13:54 UTC (permalink / raw)
  To: Christian König, Daniel Vetter, Rob Herring, Tomeu Vizoso,
	Alyssa Rosenzweig
  Cc: Thomas Hellström, dri-devel, lepton, # v4.10+,
	Chris Wilson, intel-gfx

On 09/07/2020 09:48, Christian König wrote:
> Am 08.07.20 um 18:19 schrieb Daniel Vetter:
>> On Wed, Jul 8, 2020 at 6:11 PM Daniel Vetter <daniel@ffwll.ch> wrote:
>>> On Wed, Jul 8, 2020 at 5:05 PM Christian König 
>>> <christian.koenig@amd.com> wrote:
>>>> Am 08.07.20 um 17:01 schrieb Daniel Vetter:
>>>>> On Wed, Jul 8, 2020 at 4:37 PM Christian König 
>>>>> <christian.koenig@amd.com> wrote:
>>>>>> Am 08.07.20 um 11:54 schrieb Daniel Vetter:
>>>>>>> On Wed, Jul 08, 2020 at 11:22:00AM +0200, Christian König wrote:
>>>>>>>> Am 07.07.20 um 20:35 schrieb Chris Wilson:
>>>>>>>>> Quoting lepton (2020-07-07 19:17:51)
>>>>>>>>>> On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson 
>>>>>>>>>> <chris@chris-wilson.co.uk> wrote:
>>>>>>>>>>> Quoting lepton (2020-07-07 18:05:21)
>>>>>>>>>>>> On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson 
>>>>>>>>>>>> <chris@chris-wilson.co.uk> wrote:
>>>>>>>>>>>>> If we assign obj->filp, we believe that the create vgem bo 
>>>>>>>>>>>>> is native and
>>>>>>>>>>>>> allow direct operations like mmap() assuming it behaves as 
>>>>>>>>>>>>> backed by a
>>>>>>>>>>>>> shmemfs inode. When imported from a dmabuf, the obj->pages are
>>>>>>>>>>>>> not always meaningful and the shmemfs backing store 
>>>>>>>>>>>>> misleading.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Note, that regular mmap access to a vgem bo is via the dumb 
>>>>>>>>>>>>> buffer API,
>>>>>>>>>>>>> and that rejects attempts to mmap an imported dmabuf,
>>>>>>>>>>>> What do you mean by "regular mmap access" here?  It looks 
>>>>>>>>>>>> like vgem is
>>>>>>>>>>>> using vgem_gem_dumb_map as .dumb_map_offset callback then it 
>>>>>>>>>>>> doesn't call
>>>>>>>>>>>> drm_gem_dumb_map_offset
>>>>>>>>>>> As I too found out, and so had to correct my story telling.
>>>>>>>>>>>
>>>>>>>>>>> By regular mmap() access I mean mmap on the vgem bo [via the 
>>>>>>>>>>> dumb buffer
>>>>>>>>>>> API] as opposed to mmap() via an exported dma-buf fd. I had 
>>>>>>>>>>> to look at
>>>>>>>>>>> igt to see how it was being used.
>>>>>>>>>> Now it seems your fix is to disable "regular mmap" on imported 
>>>>>>>>>> dma buf
>>>>>>>>>> for vgem. I am not really a graphic guy, but then the api 
>>>>>>>>>> looks like:
>>>>>>>>>> for a gem handle, user space has to guess to find out the way 
>>>>>>>>>> to mmap
>>>>>>>>>> it. If user space guess wrong, then it will fail to mmap. Is 
>>>>>>>>>> this the
>>>>>>>>>> expected way
>>>>>>>>>> for people to handle gpu buffer?
>>>>>>>>> You either have a dumb buffer handle, or a dma-buf fd. If you 
>>>>>>>>> have the
>>>>>>>>> handle, you have to use the dumb buffer API, there's no other 
>>>>>>>>> way to
>>>>>>>>> mmap it. If you have the dma-buf fd, you should mmap it 
>>>>>>>>> directly. Those
>>>>>>>>> two are clear.
>>>>>>>>>
>>>>>>>>> It's when you import the dma-buf into vgem and create a handle 
>>>>>>>>> out of
>>>>>>>>> it, that's when the handle is no longer first class and certain 
>>>>>>>>> uAPI
>>>>>>>>> [the dumb buffer API in particular] fail.
>>>>>>>>>
>>>>>>>>> It's not brilliant, as you say, it requires the user to 
>>>>>>>>> remember the
>>>>>>>>> difference between the handles, but at the same time it does 
>>>>>>>>> prevent
>>>>>>>>> them falling into coherency traps by forcing them to use the right
>>>>>>>>> driver to handle the object, and have to consider the 
>>>>>>>>> additional ioctls
>>>>>>>>> that go along with that access.
>>>>>>>> Yes, Chris is right. Mapping DMA-buf through the mmap() APIs of 
>>>>>>>> an importer
>>>>>>>> is illegal.
>>>>>>>>
>>>>>>>> What we could maybe try to do is to redirect this mmap() API 
>>>>>>>> call on the
>>>>>>>> importer to the exporter, but I'm pretty sure that the fs layer 
>>>>>>>> wouldn't
>>>>>>>> like that without changes.
>>>>>>> We already do that, there's a full helper-ified path from I think 
>>>>>>> shmem
>>>>>>> helpers through prime helpers to forward this all. Including 
>>>>>>> handling
>>>>>>> buffer offsets and all the other lolz back&forth.
>>>>>> Oh, that most likely won't work correctly with unpinned DMA-bufs and
>>>>>> needs to be avoided.
>>>>>>
>>>>>> Each file descriptor is associated with a struct address_space. And
>>>>>> when you mmap() through the importer by redirecting the system 
>>>>>> call to
>>>>>> the exporter you end up with the wrong struct address_space in 
>>>>>> your VMA.
>>>>>>
>>>>>> That in turn can go up easily in flames when the exporter tries to
>>>>>> invalidate the CPU mappings for its DMA-buf while moving it.
>>>>>>
>>>>>> Where are we doing this? My last status was that this is forbidden.
>>>>> Hm I thought we're doing all that already, but looking at the code
>>>>> again we're only doing this when opening a new drm fd or dma-buf fd.
>>>>> So the right file->f_mapping is always set at file creation time.
>>>>>
>>>>> And we indeed don't frob this more when going another indirection ...
>>>>> Maybe we screwed up something somewhere :-/
>>>>>
>>>>> Also I thought the mapping is only taken after the vma is instantiated,
>>>>> otherwise the tricks we're playing with dma-buf already wouldn't work:
>>>>> dma-buf has the buffer always at offset 0, whereas gem drm_fd mmap has
>>>>> it somewhere else. We already adjust vma->vm_pgoff, so I'm wondering
>>>>> whether we could adjust vm_file too. Or is that the thing that's
>>>>> forbidden?
>>>> Yes, exactly. Modifying vm_pgoff is harmless, tons of code does that.
>>>>
>>>> But changing vma->vm_file, that's something I haven't seen before and
>>>> most likely could blow up badly.
>>> Ok, I read the shmem helpers again, I think those are the only ones
>>> which do the importer mmap -> dma_buf_mmap() forwarding, and hence
>>> break stuff all around here.
>>>
>>> They also remove the vma->vm_pgoff offset, which means
>>> unmap_mapping_range won't work anyway. I guess for drivers which use
>>> shmem helpers the hard assumption is that a) can't use p2p dma and b)
>>> pin everything into system memory.
>>>
>>> So not a problem. But something to keep in mind. I'll try to do a
>>> kerneldoc patch to note this somewhere. btw on that, did the
>>> timeline/syncobj documentation patch land by now? Just trying to make
>>> sure that doesn't get lost for another few months or so :-/
>> Ok, so maybe it is a problem. Because there is a drm_gem_shmem_purge()
>> which uses unmap_mapping_range underneath, and that's used by
>> panfrost. And panfrost also uses the mmap helper. Kinda wondering
>> whether we broke some stuff here, or whether the reverse map is
>> installed before we touch vma->vm_pgoff.
> 
> I think the key problem here is that unmap_mapping_range() doesn't blow 
> up immediately when this is wrong.
> 
> E.g. we just have a stale CPU page table entry which allows userspace to 
> write to freed up memory, but we don't really notice that immediately....
> 
> Maybe we should stop allowing to mmap() DMA-buf through the importer 
> file descriptor altogether and only allow mapping it through its own fd 
> or the exporter.

That is what I tried to do with panfrost a while ago:

    583bbf46133c drm/panfrost: Use drm_gem_map_offset()

    panfrost_ioctl_mmap_bo() contains a reimplementation of
    drm_gem_map_offset() but with a bug - it allows mapping imported
    objects (without going through the exporter). Fix this by switching to
    use the newly renamed drm_gem_map_offset() function instead which has
    the bonus of simplifying the code.

Sadly it was followed by:

    be855382bacb Revert "drm/panfrost: Use drm_gem_map_offset()"
    This reverts commit 583bbf46133c726bae277e8f4e32bfba2a528c7f.

    Turns out we need mmap to work on imported BOs even if the current code
    is buggy.

> Christian.
> 
>> panfrost folks, does panfrost purged buffer handling of mmap still
>> work correctly? Do you have some kind of igt or similar for this?

I'm not aware of any real testing of this. And I fear it probably isn't 
getting much in the way of real-world testing either, otherwise someone
would have grown tired of the "Purging xx bytes" message [1].

I'm a little bit lost on this thread - are you expecting this patch to 
break panfrost? We shouldn't be purging imported memory. Although I'm 
not sure what (if anything) stops you trying to mark imported memory as 
"don't need". Or indeed what would happen.

Steve

[1] I have a patch silencing that, but recently haven't had much time 
for working on Panfrost, and don't have my WFH setup quite as slick as I 
did when I was in the office.
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [Intel-gfx] [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object
@ 2020-07-09 13:54                           ` Steven Price
  0 siblings, 0 replies; 56+ messages in thread
From: Steven Price @ 2020-07-09 13:54 UTC (permalink / raw)
  To: Christian König, Daniel Vetter, Rob Herring, Tomeu Vizoso,
	Alyssa Rosenzweig
  Cc: dri-devel, lepton, # v4.10+, Chris Wilson, intel-gfx

On 09/07/2020 09:48, Christian König wrote:
> Am 08.07.20 um 18:19 schrieb Daniel Vetter:
>> On Wed, Jul 8, 2020 at 6:11 PM Daniel Vetter <daniel@ffwll.ch> wrote:
>>> On Wed, Jul 8, 2020 at 5:05 PM Christian König 
>>> <christian.koenig@amd.com> wrote:
>>>> Am 08.07.20 um 17:01 schrieb Daniel Vetter:
>>>>> On Wed, Jul 8, 2020 at 4:37 PM Christian König 
>>>>> <christian.koenig@amd.com> wrote:
>>>>>> Am 08.07.20 um 11:54 schrieb Daniel Vetter:
>>>>>>> On Wed, Jul 08, 2020 at 11:22:00AM +0200, Christian König wrote:
>>>>>>>> Am 07.07.20 um 20:35 schrieb Chris Wilson:
>>>>>>>>> Quoting lepton (2020-07-07 19:17:51)
>>>>>>>>>> On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson 
>>>>>>>>>> <chris@chris-wilson.co.uk> wrote:
>>>>>>>>>>> Quoting lepton (2020-07-07 18:05:21)
>>>>>>>>>>>> On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson 
>>>>>>>>>>>> <chris@chris-wilson.co.uk> wrote:
>>>>>>>>>>>>> If we assign obj->filp, we believe that the create vgem bo 
>>>>>>>>>>>>> is native and
>>>>>>>>>>>>> allow direct operations like mmap() assuming it behaves as 
>>>>>>>>>>>>> backed by a
>>>>>>>>>>>>> shmemfs inode. When imported from a dmabuf, the obj->pages are
>>>>>>>>>>>>> not always meaningful and the shmemfs backing store 
>>>>>>>>>>>>> misleading.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Note, that regular mmap access to a vgem bo is via the dumb 
>>>>>>>>>>>>> buffer API,
>>>>>>>>>>>>> and that rejects attempts to mmap an imported dmabuf,
>>>>>>>>>>>> What do you mean by "regular mmap access" here?  It looks 
>>>>>>>>>>>> like vgem is
>>>>>>>>>>>> using vgem_gem_dumb_map as .dumb_map_offset callback then it 
>>>>>>>>>>>> doesn't call
>>>>>>>>>>>> drm_gem_dumb_map_offset
>>>>>>>>>>> As I too found out, and so had to correct my story telling.
>>>>>>>>>>>
>>>>>>>>>>> By regular mmap() access I mean mmap on the vgem bo [via the 
>>>>>>>>>>> dumb buffer
>>>>>>>>>>> API] as opposed to mmap() via an exported dma-buf fd. I had 
>>>>>>>>>>> to look at
>>>>>>>>>>> igt to see how it was being used.
>>>>>>>>>> Now it seems your fix is to disable "regular mmap" on imported 
>>>>>>>>>> dma buf
>>>>>>>>>> for vgem. I am not really a graphic guy, but then the api 
>>>>>>>>>> looks like:
>>>>>>>>>> for a gem handle, user space has to guess to find out the way 
>>>>>>>>>> to mmap
>>>>>>>>>> it. If user space guess wrong, then it will fail to mmap. Is 
>>>>>>>>>> this the
>>>>>>>>>> expected way
>>>>>>>>>> for people to handle gpu buffer?
>>>>>>>>> You either have a dumb buffer handle, or a dma-buf fd. If you 
>>>>>>>>> have the
>>>>>>>>> handle, you have to use the dumb buffer API, there's no other 
>>>>>>>>> way to
>>>>>>>>> mmap it. If you have the dma-buf fd, you should mmap it 
>>>>>>>>> directly. Those
>>>>>>>>> two are clear.
>>>>>>>>>
>>>>>>>>> It's when you import the dma-buf into vgem and create a handle 
>>>>>>>>> out of
>>>>>>>>> it, that's when the handle is no longer first class and certain 
>>>>>>>>> uAPI
>>>>>>>>> [the dumb buffer API in particular] fail.
>>>>>>>>>
>>>>>>>>> It's not brilliant, as you say, it requires the user to 
>>>>>>>>> remember the
>>>>>>>>> difference between the handles, but at the same time it does 
>>>>>>>>> prevent
>>>>>>>>> them falling into coherency traps by forcing them to use the right
>>>>>>>>> driver to handle the object, and have to consider the 
>>>>>>>>> additional ioctls
>>>>>>>>> that go along with that access.
>>>>>>>> Yes, Chris is right. Mapping DMA-buf through the mmap() APIs of 
>>>>>>>> an importer
>>>>>>>> is illegal.
>>>>>>>>
>>>>>>>> What we could maybe try to do is to redirect this mmap() API 
>>>>>>>> call on the
>>>>>>>> importer to the exporter, but I'm pretty sure that the fs layer 
>>>>>>>> wouldn't
>>>>>>>> like that without changes.
>>>>>>> We already do that, there's a full helper-ified path from I think 
>>>>>>> shmem
>>>>>>> helpers through prime helpers to forward this all. Including 
>>>>>>> handling
>>>>>>> buffer offsets and all the other lolz back&forth.
>>>>>> Oh, that most likely won't work correctly with unpinned DMA-bufs and
>>>>>> needs to be avoided.
>>>>>>
>>>>>> Each file descriptor is associated with a struct address_space. And
>>>>>> when you mmap() through the importer by redirecting the system 
>>>>>> call to
>>>>>> the exporter you end up with the wrong struct address_space in 
>>>>>> your VMA.
>>>>>>
>>>>>> That in turn can go up easily in flames when the exporter tries to
>>>>>> invalidate the CPU mappings for its DMA-buf while moving it.
>>>>>>
>>>>>> Where are we doing this? My last status was that this is forbidden.
>>>>> Hm I thought we're doing all that already, but looking at the code
>>>>> again we're only doing this when opening a new drm fd or dma-buf fd.
>>>>> So the right file->f_mapping is always set at file creation time.
>>>>>
>>>>> And we indeed don't frob this more when going another indirection ...
>>>>> Maybe we screwed up something somewhere :-/
>>>>>
>>>>> Also I thought the mapping is only taken after the vma is instantiated,
>>>>> otherwise the tricks we're playing with dma-buf already wouldn't work:
>>>>> dma-buf has the buffer always at offset 0, whereas gem drm_fd mmap has
>>>>> it somewhere else. We already adjust vma->vm_pgoff, so I'm wondering
>>>>> whether we could adjust vm_file too. Or is that the thing that's
>>>>> forbidden?
>>>> Yes, exactly. Modifying vm_pgoff is harmless, tons of code does that.
>>>>
>>>> But changing vma->vm_file, that's something I haven't seen before and
>>>> most likely could blow up badly.
>>> Ok, I read the shmem helpers again, I think those are the only ones
>>> which do the importer mmap -> dma_buf_mmap() forwarding, and hence
>>> break stuff all around here.
>>>
>>> They also remove the vma->vm_pgoff offset, which means
>>> unmap_mapping_range won't work anyway. I guess for drivers which use
>>> shmem helpers the hard assumption is that a) can't use p2p dma and b)
>>> pin everything into system memory.
>>>
>>> So not a problem. But something to keep in mind. I'll try to do a
>>> kerneldoc patch to note this somewhere. btw on that, did the
>>> timeline/syncobj documentation patch land by now? Just trying to make
>>> sure that doesn't get lost for another few months or so :-/
>> Ok, so maybe it is a problem. Because there is a drm_gem_shmem_purge()
>> which uses unmap_mapping_range underneath, and that's used by
>> panfrost. And panfrost also uses the mmap helper. Kinda wondering
>> whether we broke some stuff here, or whether the reverse map is
>> installed before we touch vma->vm_pgoff.
> 
> I think the key problem here is that unmap_mapping_range() doesn't blow 
> up immediately when this is wrong.
> 
> E.g. we just have a stale CPU page table entry which allows userspace to 
> write to freed up memory, but we don't really notice that immediately....
> 
> Maybe we should stop allowing to mmap() DMA-buf through the importer 
> file descriptor altogether and only allow mapping it through its own fd 
> or the exporter.

That is what I tried to do with panfrost a while ago:

    583bbf46133c drm/panfrost: Use drm_gem_map_offset()

    panfrost_ioctl_mmap_bo() contains a reimplementation of
    drm_gem_map_offset() but with a bug - it allows mapping imported
    objects (without going through the exporter). Fix this by switching to
    use the newly renamed drm_gem_map_offset() function instead which has
    the bonus of simplifying the code.

Sadly it was followed by:

    be855382bacb Revert "drm/panfrost: Use drm_gem_map_offset()"
    This reverts commit 583bbf46133c726bae277e8f4e32bfba2a528c7f.

    Turns out we need mmap to work on imported BOs even if the current code
    is buggy.

> Christian.
> 
>> panfrost folks, does panfrost purged buffer handling of mmap still
>> work correctly? Do you have some kind of igt or similar for this?

I'm not aware of any real testing of this. And I fear it probably isn't 
getting much in the way of real-world testing either, otherwise someone
would have grown tired of the "Purging xx bytes" message [1].

I'm a little bit lost on this thread - are you expecting this patch to 
break panfrost? We shouldn't be purging imported memory. Although I'm 
not sure what (if anything) stops you trying to mark imported memory as 
"don't need". Or indeed what would happen.

Steve

[1] I have a patch silencing that, but recently haven't had much time 
for working on Panfrost, and don't have my WFH setup quite as slick as I 
did when I was in the office.
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object
  2020-07-09 13:54                           ` Steven Price
  (?)
@ 2020-07-09 14:15                             ` Christian König
  -1 siblings, 0 replies; 56+ messages in thread
From: Christian König @ 2020-07-09 14:15 UTC (permalink / raw)
  To: Steven Price, Daniel Vetter, Rob Herring, Tomeu Vizoso,
	Alyssa Rosenzweig
  Cc: Thomas Hellström, # v4.10+,
	lepton, dri-devel, Chris Wilson, intel-gfx

Am 09.07.20 um 15:54 schrieb Steven Price:
> On 09/07/2020 09:48, Christian König wrote:
>> Am 08.07.20 um 18:19 schrieb Daniel Vetter:
>>> On Wed, Jul 8, 2020 at 6:11 PM Daniel Vetter <daniel@ffwll.ch> wrote:
>>>> On Wed, Jul 8, 2020 at 5:05 PM Christian König 
>>>> <christian.koenig@amd.com> wrote:
>>>>> Am 08.07.20 um 17:01 schrieb Daniel Vetter:
>>>>>> On Wed, Jul 8, 2020 at 4:37 PM Christian König 
>>>>>> <christian.koenig@amd.com> wrote:
>>>>>>> Am 08.07.20 um 11:54 schrieb Daniel Vetter:
>>>>>>>> On Wed, Jul 08, 2020 at 11:22:00AM +0200, Christian König wrote:
>>>>>>>>> Am 07.07.20 um 20:35 schrieb Chris Wilson:
>>>>>>>>>> Quoting lepton (2020-07-07 19:17:51)
>>>>>>>>>>> On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson 
>>>>>>>>>>> <chris@chris-wilson.co.uk> wrote:
>>>>>>>>>>>> Quoting lepton (2020-07-07 18:05:21)
>>>>>>>>>>>>> On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson 
>>>>>>>>>>>>> <chris@chris-wilson.co.uk> wrote:
>>>>>>>>>>>>>> If we assign obj->filp, we believe that the create vgem 
>>>>>>>>>>>>>> bo is native and
>>>>>>>>>>>>>> allow direct operations like mmap() assuming it behaves 
>>>>>>>>>>>>>> as backed by a
>>>>>>>>>>>>>> shmemfs inode. When imported from a dmabuf, the 
>>>>>>>>>>>>>> obj->pages are
>>>>>>>>>>>>>> not always meaningful and the shmemfs backing store 
>>>>>>>>>>>>>> misleading.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Note, that regular mmap access to a vgem bo is via the 
>>>>>>>>>>>>>> dumb buffer API,
>>>>>>>>>>>>>> and that rejects attempts to mmap an imported dmabuf,
>>>>>>>>>>>>> What do you mean by "regular mmap access" here?  It looks 
>>>>>>>>>>>>> like vgem is
>>>>>>>>>>>>> using vgem_gem_dumb_map as .dumb_map_offset callback then 
>>>>>>>>>>>>> it doesn't call
>>>>>>>>>>>>> drm_gem_dumb_map_offset
>>>>>>>>>>>> As I too found out, and so had to correct my story telling.
>>>>>>>>>>>>
>>>>>>>>>>>> By regular mmap() access I mean mmap on the vgem bo [via 
>>>>>>>>>>>> the dumb buffer
>>>>>>>>>>>> API] as opposed to mmap() via an exported dma-buf fd. I had 
>>>>>>>>>>>> to look at
>>>>>>>>>>>> igt to see how it was being used.
>>>>>>>>>>> Now it seems your fix is to disable "regular mmap" on 
>>>>>>>>>>> imported dma buf
>>>>>>>>>>> for vgem. I am not really a graphic guy, but then the api 
>>>>>>>>>>> looks like:
>>>>>>>>>>> for a gem handle, user space has to guess to find out the 
>>>>>>>>>>> way to mmap
>>>>>>>>>>> it. If user space guess wrong, then it will fail to mmap. Is 
>>>>>>>>>>> this the
>>>>>>>>>>> expected way
>>>>>>>>>>> for people to handle gpu buffer?
>>>>>>>>>> You either have a dumb buffer handle, or a dma-buf fd. If you 
>>>>>>>>>> have the
>>>>>>>>>> handle, you have to use the dumb buffer API, there's no other 
>>>>>>>>>> way to
>>>>>>>>>> mmap it. If you have the dma-buf fd, you should mmap it 
>>>>>>>>>> directly. Those
>>>>>>>>>> two are clear.
>>>>>>>>>>
>>>>>>>>>> It's when you import the dma-buf into vgem and create a 
>>>>>>>>>> handle out of
>>>>>>>>>> it, that's when the handle is no longer first class and 
>>>>>>>>>> certain uAPI
>>>>>>>>>> [the dumb buffer API in particular] fail.
>>>>>>>>>>
>>>>>>>>>> It's not brilliant, as you say, it requires the user to 
>>>>>>>>>> remember the
>>>>>>>>>> difference between the handles, but at the same time it does 
>>>>>>>>>> prevent
>>>>>>>>>> them falling into coherency traps by forcing them to use the 
>>>>>>>>>> right
>>>>>>>>>> driver to handle the object, and have to consider the 
>>>>>>>>>> additional ioctls
>>>>>>>>>> that go along with that access.
>>>>>>>>> Yes, Chris is right. Mapping DMA-buf through the mmap() APIs 
>>>>>>>>> of an importer
>>>>>>>>> is illegal.
>>>>>>>>>
>>>>>>>>> What we could maybe try to do is to redirect this mmap() API 
>>>>>>>>> call on the
>>>>>>>>> importer to the exporter, but I'm pretty sure that the fs 
>>>>>>>>> layer wouldn't
>>>>>>>>> like that without changes.
>>>>>>>> We already do that, there's a full helper-ified path from I 
>>>>>>>> think shmem
>>>>>>>> helpers through prime helpers to forward this all. Including 
>>>>>>>> handling
>>>>>>>> buffer offsets and all the other lolz back&forth.
>>>>>>> Oh, that most likely won't work correctly with unpinned DMA-bufs 
>>>>>>> and
>>>>>>> needs to be avoided.
>>>>>>>
>>>>>>> Each file descriptor is associated with a struct address_space.
>>>>>>> And
>>>>>>> when you mmap() through the importer by redirecting the system 
>>>>>>> call to
>>>>>>> the exporter you end up with the wrong struct address_space in 
>>>>>>> your VMA.
>>>>>>>
>>>>>>> That in turn can go up easily in flames when the exporter tries to
>>>>>>> invalidate the CPU mappings for its DMA-buf while moving it.
>>>>>>>
>>>>>>> Where are we doing this? My last status was that this is forbidden.
>>>>>> Hm I thought we're doing all that already, but looking at the code
>>>>>> again we're only doing this when opening a new drm fd or dma-buf fd.
>>>>>> So the right file->f_mapping is always set at file creation time.
>>>>>>
>>>>>> And we indeed don't frob this more when going another indirection 
>>>>>> ...
>>>>>> Maybe we screwed up something somewhere :-/
>>>>>>
>>>>>> Also I thought the mapping is only taken after the vma is 
>>>>>> instantiated,
>>>>>> otherwise the tricks we're playing with dma-buf already wouldn't 
>>>>>> work:
>>>>>> dma-buf has the buffer always at offset 0, whereas gem drm_fd 
>>>>>> mmap has
>>>>>> it somewhere else. We already adjust vma->vm_pgoff, so I'm wondering
>>>>>> whether we could adjust vm_file too. Or is that the thing that's
>>>>>> forbidden?
>>>>> Yes, exactly. Modifying vm_pgoff is harmless, tons of code does that.
>>>>>
>>>>> But changing vma->vm_file, that's something I haven't seen before and
>>>>> most likely could blow up badly.
>>>> Ok, I read the shmem helpers again, I think those are the only ones
>>>> which do the importer mmap -> dma_buf_mmap() forwarding, and hence
>>>> break stuff all around here.
>>>>
>>>> They also remove the vma->vm_pgoff offset, which means
>>>> unmap_mapping_range won't work anyway. I guess for drivers which use
>>>> shmem helpers the hard assumption is that a) can't use p2p dma and b)
>>>> pin everything into system memory.
>>>>
>>>> So not a problem. But something to keep in mind. I'll try to do a
>>>> kerneldoc patch to note this somewhere. btw on that, did the
>>>> timeline/syncobj documentation patch land by now? Just trying to make
>>>> sure that doesn't get lost for another few months or so :-/
>>> Ok, so maybe it is a problem. Because there is a drm_gem_shmem_purge()
>>> which uses unmap_mapping_range underneath, and that's used by
>>> panfrost. And panfrost also uses the mmap helper. Kinda wondering
>>> whether we broke some stuff here, or whether the reverse map is
>>> installed before we touch vma->vm_pgoff.
>>
>> I think the key problem here is that unmap_mapping_range() doesn't 
>> blow up immediately when this is wrong.
>>
>> E.g. we just have a stale CPU page table entry which allows userspace 
>> to write to freed up memory, but we don't really notice that 
>> immediately....
>>
>> Maybe we should stop allowing to mmap() DMA-buf through the importer 
>> file descriptor altogether and only allow mapping it through its own 
>> fd or the exporter.
>
> That is what I tried to do with panfrost a while ago:
>
>    583bbf46133c drm/panfrost: Use drm_gem_map_offset()
>
>    panfrost_ioctl_mmap_bo() contains a reimplementation of
>    drm_gem_map_offset() but with a bug - it allows mapping imported
>    objects (without going through the exporter). Fix this by switching to
>    use the newly renamed drm_gem_map_offset() function instead which has
>    the bonus of simplifying the code.
>
> Sadly it was followed by:
>
>    be855382bacb Revert "drm/panfrost: Use drm_gem_map_offset()"
>    This reverts commit 583bbf46133c726bae277e8f4e32bfba2a528c7f.
>
>    Turns out we need mmap to work on imported BOs even if the current 
> code
>    is buggy.

Mhm, we might need to push back on the revert.

>> Christian.
>>
>>> panfrost folks, does panfrost purged buffer handling of mmap still
>>> work correctly? Do you have some kind of igt or similar for this?
>
> I'm not aware of any real testing of this. And I fear it probably 
> isn't getting much in the way of real-world testing either, otherwise
> someone would have grown tired of the "Purging xx bytes" message [1].
>
> I'm a little bit lost on this thread - are you expecting this patch to 
> break panfrost? We shouldn't be purging imported memory. Although I'm 
> not sure what (if anything) stops you trying to mark imported memory 
> as "don't need". Or indeed what would happen.

As long as panfrost keeps the imported DMA-bufs pinned everything should 
be fine as it is.

But this can open up a security hole you can push an elephant through. 
So you need to keep this in the back of your mind if this is ever 
implemented.

Christian.

>
> Steve
>
> [1] I have a patch silencing that, but recently haven't had much time 
> for working on Panfrost, and don't have my WFH setup quite as slick as 
> I did when I was in the office.
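
[A minimal sketch of the guard implied by Christian's last two paragraphs, assuming a madvise/shrinker-style purge path; the function name is illustrative and this is not panfrost's actual code.]

#include <drm/drm_gem.h>

/* Sketch only: never treat an imported dma-buf as purgeable. Its backing
 * pages belong to the exporter and may be mapped by other users, so
 * dropping them from the importer would be the kind of hole described
 * above. */
static bool example_bo_is_purgeable(struct drm_gem_object *obj)
{
	if (obj->import_attach)
		return false;

	/* Driver-specific state (e.g. a madvise DONTNEED flag) goes here. */
	return true;
}

[When a purge does happen, the shmem helpers zap userspace mappings with something like drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping), and that only reaches VMAs whose vm_file/f_mapping and vm_pgoff still describe the DRM device file, which is exactly why the forwarded-mmap case earlier in the thread is worrying.]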


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object
@ 2020-07-09 14:15                             ` Christian König
  0 siblings, 0 replies; 56+ messages in thread
From: Christian König @ 2020-07-09 14:15 UTC (permalink / raw)
  To: Steven Price, Daniel Vetter, Rob Herring, Tomeu Vizoso,
	Alyssa Rosenzweig
  Cc: Thomas Hellström, dri-devel, lepton, # v4.10+,
	Chris Wilson, intel-gfx

Am 09.07.20 um 15:54 schrieb Steven Price:
> On 09/07/2020 09:48, Christian König wrote:
>> Am 08.07.20 um 18:19 schrieb Daniel Vetter:
>>> On Wed, Jul 8, 2020 at 6:11 PM Daniel Vetter <daniel@ffwll.ch> wrote:
>>>> On Wed, Jul 8, 2020 at 5:05 PM Christian König 
>>>> <christian.koenig@amd.com> wrote:
>>>>> Am 08.07.20 um 17:01 schrieb Daniel Vetter:
>>>>>> On Wed, Jul 8, 2020 at 4:37 PM Christian König 
>>>>>> <christian.koenig@amd.com> wrote:
>>>>>>> Am 08.07.20 um 11:54 schrieb Daniel Vetter:
>>>>>>>> On Wed, Jul 08, 2020 at 11:22:00AM +0200, Christian König wrote:
>>>>>>>>> Am 07.07.20 um 20:35 schrieb Chris Wilson:
>>>>>>>>>> Quoting lepton (2020-07-07 19:17:51)
>>>>>>>>>>> On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson 
>>>>>>>>>>> <chris@chris-wilson.co.uk> wrote:
>>>>>>>>>>>> Quoting lepton (2020-07-07 18:05:21)
>>>>>>>>>>>>> On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson 
>>>>>>>>>>>>> <chris@chris-wilson.co.uk> wrote:
>>>>>>>>>>>>>> If we assign obj->filp, we believe that the create vgem 
>>>>>>>>>>>>>> bo is native and
>>>>>>>>>>>>>> allow direct operations like mmap() assuming it behaves 
>>>>>>>>>>>>>> as backed by a
>>>>>>>>>>>>>> shmemfs inode. When imported from a dmabuf, the 
>>>>>>>>>>>>>> obj->pages are
>>>>>>>>>>>>>> not always meaningful and the shmemfs backing store 
>>>>>>>>>>>>>> misleading.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Note, that regular mmap access to a vgem bo is via the 
>>>>>>>>>>>>>> dumb buffer API,
>>>>>>>>>>>>>> and that rejects attempts to mmap an imported dmabuf,
>>>>>>>>>>>>> What do you mean by "regular mmap access" here?  It looks 
>>>>>>>>>>>>> like vgem is
>>>>>>>>>>>>> using vgem_gem_dumb_map as .dumb_map_offset callback then 
>>>>>>>>>>>>> it doesn't call
>>>>>>>>>>>>> drm_gem_dumb_map_offset
>>>>>>>>>>>> As I too found out, and so had to correct my story telling.
>>>>>>>>>>>>
>>>>>>>>>>>> By regular mmap() access I mean mmap on the vgem bo [via 
>>>>>>>>>>>> the dumb buffer
>>>>>>>>>>>> API] as opposed to mmap() via an exported dma-buf fd. I had 
>>>>>>>>>>>> to look at
>>>>>>>>>>>> igt to see how it was being used.
>>>>>>>>>>> Now it seems your fix is to disable "regular mmap" on 
>>>>>>>>>>> imported dma buf
>>>>>>>>>>> for vgem. I am not really a graphic guy, but then the api 
>>>>>>>>>>> looks like:
>>>>>>>>>>> for a gem handle, user space has to guess to find out the 
>>>>>>>>>>> way to mmap
>>>>>>>>>>> it. If user space guess wrong, then it will fail to mmap. Is 
>>>>>>>>>>> this the
>>>>>>>>>>> expected way
>>>>>>>>>>> for people to handle gpu buffer?
>>>>>>>>>> You either have a dumb buffer handle, or a dma-buf fd. If you 
>>>>>>>>>> have the
>>>>>>>>>> handle, you have to use the dumb buffer API, there's no other 
>>>>>>>>>> way to
>>>>>>>>>> mmap it. If you have the dma-buf fd, you should mmap it 
>>>>>>>>>> directly. Those
>>>>>>>>>> two are clear.
>>>>>>>>>>
>>>>>>>>>> It's when you import the dma-buf into vgem and create a 
>>>>>>>>>> handle out of
>>>>>>>>>> it, that's when the handle is no longer first class and 
>>>>>>>>>> certain uAPI
>>>>>>>>>> [the dumb buffer API in particular] fail.
>>>>>>>>>>
>>>>>>>>>> It's not brilliant, as you say, it requires the user to 
>>>>>>>>>> remember the
>>>>>>>>>> difference between the handles, but at the same time it does 
>>>>>>>>>> prevent
>>>>>>>>>> them falling into coherency traps by forcing them to use the 
>>>>>>>>>> right
>>>>>>>>>> driver to handle the object, and have to consider the 
>>>>>>>>>> additional ioctls
>>>>>>>>>> that go along with that access.
>>>>>>>>> Yes, Chris is right. Mapping DMA-buf through the mmap() APIs 
>>>>>>>>> of an importer
>>>>>>>>> is illegal.
>>>>>>>>>
>>>>>>>>> What we could maybe try to do is to redirect this mmap() API 
>>>>>>>>> call on the
>>>>>>>>> importer to the exporter, but I'm pretty sure that the fs 
>>>>>>>>> layer wouldn't
>>>>>>>>> like that without changes.
>>>>>>>> We already do that, there's a full helper-ified path from I 
>>>>>>>> think shmem
>>>>>>>> helpers through prime helpers to forward this all. Including 
>>>>>>>> handling
>>>>>>>> buffer offsets and all the other lolz back&forth.
>>>>>>> Oh, that most likely won't work correctly with unpinned DMA-bufs 
>>>>>>> and
>>>>>>> needs to be avoided.
>>>>>>>
>>>>>>> Each file descriptor is associated with a struct address_space.
>>>>>>> And
>>>>>>> when you mmap() through the importer by redirecting the system 
>>>>>>> call to
>>>>>>> the exporter you end up with the wrong struct address_space in 
>>>>>>> your VMA.
>>>>>>>
>>>>>>> That in turn can go up easily in flames when the exporter tries to
>>>>>>> invalidate the CPU mappings for its DMA-buf while moving it.
>>>>>>>
>>>>>>> Where are we doing this? My last status was that this is forbidden.
>>>>>> Hm I thought we're doing all that already, but looking at the code
>>>>>> again we're only doing this when opening a new drm fd or dma-buf fd.
>>>>>> So the right file->f_mapping is always set at file creation time.
>>>>>>
>>>>>> And we indeed don't frob this more when going another indirection 
>>>>>> ...
>>>>>> Maybe we screwed up something somewhere :-/
>>>>>>
>>>>>> Also I thought the mapping is only taken after the vma is 
>>>>>> instantiated,
>>>>>> otherwise the tricks we're playing with dma-buf already wouldn't 
>>>>>> work:
>>>>>> dma-buf has the buffer always at offset 0, whereas gem drm_fd 
>>>>>> mmap has
>>>>>> it somewhere else. We already adjust vma->vm_pgoff, so I'm wondering
>>>>>> whether we could adjust vm_file too. Or is that the thing that's
>>>>>> forbidden?
>>>>> Yes, exactly. Modifying vm_pgoff is harmless, tons of code does that.
>>>>>
>>>>> But changing vma->vm_file, that's something I haven't seen before and
>>>>> most likely could blow up badly.
>>>> Ok, I read the shmem helpers again, I think those are the only ones
>>>> which do the importer mmap -> dma_buf_mmap() forwarding, and hence
>>>> break stuff all around here.
>>>>
>>>> They also remove the vma->vm_pgoff offset, which means
>>>> unmap_mapping_range won't work anyway. I guess for drivers which use
>>>> shmem helpers the hard assumption is that a) can't use p2p dma and b)
>>>> pin everything into system memory.
>>>>
>>>> So not a problem. But something to keep in mind. I'll try to do a
>>>> kerneldoc patch to note this somewhere. btw on that, did the
>>>> timeline/syncobj documentation patch land by now? Just trying to make
>>>> sure that doesn't get lost for another few months or so :-/
>>> Ok, so maybe it is a problem. Because there is a drm_gem_shmem_purge()
>>> which uses unmap_mapping_range underneath, and that's used by
>>> panfrost. And panfrost also uses the mmap helper. Kinda wondering
>>> whether we broke some stuff here, or whether the reverse map is
>>> installed before we touch vma->vm_pgoff.
>>
>> I think the key problem here is that unmap_mapping_range() doesn't 
>> blow up immediately when this is wrong.
>>
>> E.g. we just have a stale CPU page table entry which allows userspace 
>> to write to freed up memory, but we don't really notice that 
>> immediately....
>>
>> Maybe we should stop allowing to mmap() DMA-buf through the importer 
>> file descriptor altogether and only allow mapping it through its own 
>> fd or the exporter.
>
> That is what I tried to do with panfrost a while ago:
>
>    583bbf46133c drm/panfrost: Use drm_gem_map_offset()
>
>    panfrost_ioctl_mmap_bo() contains a reimplementation of
>    drm_gem_map_offset() but with a bug - it allows mapping imported
>    objects (without going through the exporter). Fix this by switching to
>    use the newly renamed drm_gem_map_offset() function instead which has
>    the bonus of simplifying the code.
>
> Sadly it was followed by:
>
>    be855382bacb Revert "drm/panfrost: Use drm_gem_map_offset()"
>    This reverts commit 583bbf46133c726bae277e8f4e32bfba2a528c7f.
>
>    Turns out we need mmap to work on imported BOs even if the current 
> code
>    is buggy.

Mhm, we might need to push back on the revert.

>> Christian.
>>
>>> panfrost folks, does panfrost purged buffer handling of mmap still
>>> work correctly? Do you have some kind of igt or similar for this?
>
> I'm not aware of any real testing of this. And I fear it probably 
> isn't getting much in the way of real-world testing either, otherwise
> someone would have grown tired of the "Purging xx bytes" message [1].
>
> I'm a little bit lost on this thread - are you expecting this patch to 
> break panfrost? We shouldn't be purging imported memory. Although I'm 
> not sure what (if anything) stops you trying to mark imported memory 
> as "don't need". Or indeed what would happen.

As long as panfrost keeps the imported DMA-bufs pinned everything should 
be fine as it is.

But this can open up a security hole you can push an elephant through. 
So you need to keep this in the back of your mind if this is ever 
implemented.

Christian.

>
> Steve
>
> [1] I have a patch silencing that, but recently haven't had much time 
> for working on Panfrost, and don't have my WFH setup quite as slick as 
> I did when I was in the office.

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [Intel-gfx] [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object
@ 2020-07-09 14:15                             ` Christian König
  0 siblings, 0 replies; 56+ messages in thread
From: Christian König @ 2020-07-09 14:15 UTC (permalink / raw)
  To: Steven Price, Daniel Vetter, Rob Herring, Tomeu Vizoso,
	Alyssa Rosenzweig
  Cc: dri-devel, lepton, # v4.10+, Chris Wilson, intel-gfx

Am 09.07.20 um 15:54 schrieb Steven Price:
> On 09/07/2020 09:48, Christian König wrote:
>> Am 08.07.20 um 18:19 schrieb Daniel Vetter:
>>> On Wed, Jul 8, 2020 at 6:11 PM Daniel Vetter <daniel@ffwll.ch> wrote:
>>>> On Wed, Jul 8, 2020 at 5:05 PM Christian König 
>>>> <christian.koenig@amd.com> wrote:
>>>>> Am 08.07.20 um 17:01 schrieb Daniel Vetter:
>>>>>> On Wed, Jul 8, 2020 at 4:37 PM Christian König 
>>>>>> <christian.koenig@amd.com> wrote:
>>>>>>> Am 08.07.20 um 11:54 schrieb Daniel Vetter:
>>>>>>>> On Wed, Jul 08, 2020 at 11:22:00AM +0200, Christian König wrote:
>>>>>>>>> Am 07.07.20 um 20:35 schrieb Chris Wilson:
>>>>>>>>>> Quoting lepton (2020-07-07 19:17:51)
>>>>>>>>>>> On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson 
>>>>>>>>>>> <chris@chris-wilson.co.uk> wrote:
>>>>>>>>>>>> Quoting lepton (2020-07-07 18:05:21)
>>>>>>>>>>>>> On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson 
>>>>>>>>>>>>> <chris@chris-wilson.co.uk> wrote:
>>>>>>>>>>>>>> If we assign obj->filp, we believe that the create vgem 
>>>>>>>>>>>>>> bo is native and
>>>>>>>>>>>>>> allow direct operations like mmap() assuming it behaves 
>>>>>>>>>>>>>> as backed by a
>>>>>>>>>>>>>> shmemfs inode. When imported from a dmabuf, the 
>>>>>>>>>>>>>> obj->pages are
>>>>>>>>>>>>>> not always meaningful and the shmemfs backing store 
>>>>>>>>>>>>>> misleading.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Note, that regular mmap access to a vgem bo is via the 
>>>>>>>>>>>>>> dumb buffer API,
>>>>>>>>>>>>>> and that rejects attempts to mmap an imported dmabuf,
>>>>>>>>>>>>> What do you mean by "regular mmap access" here?  It looks 
>>>>>>>>>>>>> like vgem is
>>>>>>>>>>>>> using vgem_gem_dumb_map as .dumb_map_offset callback then 
>>>>>>>>>>>>> it doesn't call
>>>>>>>>>>>>> drm_gem_dumb_map_offset
>>>>>>>>>>>> As I too found out, and so had to correct my story telling.
>>>>>>>>>>>>
>>>>>>>>>>>> By regular mmap() access I mean mmap on the vgem bo [via 
>>>>>>>>>>>> the dumb buffer
>>>>>>>>>>>> API] as opposed to mmap() via an exported dma-buf fd. I had 
>>>>>>>>>>>> to look at
>>>>>>>>>>>> igt to see how it was being used.
>>>>>>>>>>> Now it seems your fix is to disable "regular mmap" on 
>>>>>>>>>>> imported dma buf
>>>>>>>>>>> for vgem. I am not really a graphic guy, but then the api 
>>>>>>>>>>> looks like:
>>>>>>>>>>> for a gem handle, user space has to guess to find out the 
>>>>>>>>>>> way to mmap
>>>>>>>>>>> it. If user space guess wrong, then it will fail to mmap. Is 
>>>>>>>>>>> this the
>>>>>>>>>>> expected way
>>>>>>>>>>> for people to handle gpu buffer?
>>>>>>>>>> You either have a dumb buffer handle, or a dma-buf fd. If you 
>>>>>>>>>> have the
>>>>>>>>>> handle, you have to use the dumb buffer API, there's no other 
>>>>>>>>>> way to
>>>>>>>>>> mmap it. If you have the dma-buf fd, you should mmap it 
>>>>>>>>>> directly. Those
>>>>>>>>>> two are clear.
>>>>>>>>>>
>>>>>>>>>> It's when you import the dma-buf into vgem and create a 
>>>>>>>>>> handle out of
>>>>>>>>>> it, that's when the handle is no longer first class and 
>>>>>>>>>> certain uAPI
>>>>>>>>>> [the dumb buffer API in particular] fail.
>>>>>>>>>>
>>>>>>>>>> It's not brilliant, as you say, it requires the user to 
>>>>>>>>>> remember the
>>>>>>>>>> difference between the handles, but at the same time it does 
>>>>>>>>>> prevent
>>>>>>>>>> them falling into coherency traps by forcing them to use the 
>>>>>>>>>> right
>>>>>>>>>> driver to handle the object, and have to consider the 
>>>>>>>>>> additional ioctls
>>>>>>>>>> that go along with that access.
>>>>>>>>> Yes, Chris is right. Mapping DMA-buf through the mmap() APIs 
>>>>>>>>> of an importer
>>>>>>>>> is illegal.
>>>>>>>>>
>>>>>>>>> What we could maybe try to do is to redirect this mmap() API 
>>>>>>>>> call on the
>>>>>>>>> importer to the exporter, but I'm pretty sure that the fs 
>>>>>>>>> layer wouldn't
>>>>>>>>> like that without changes.
>>>>>>>> We already do that, there's a full helper-ified path from I 
>>>>>>>> think shmem
>>>>>>>> helpers through prime helpers to forward this all. Including 
>>>>>>>> handling
>>>>>>>> buffer offsets and all the other lolz back&forth.
>>>>>>> Oh, that most likely won't work correctly with unpinned DMA-bufs 
>>>>>>> and
>>>>>>> needs to be avoided.
>>>>>>>
>>>>>>> Each file descriptor is associated with a struct address_space.
>>>>>>> And
>>>>>>> when you mmap() through the importer by redirecting the system 
>>>>>>> call to
>>>>>>> the exporter you end up with the wrong struct address_space in 
>>>>>>> your VMA.
>>>>>>>
>>>>>>> That in turn can go up easily in flames when the exporter tries to
>>>>>>> invalidate the CPU mappings for its DMA-buf while moving it.
>>>>>>>
>>>>>>> Where are we doing this? My last status was that this is forbidden.
>>>>>> Hm I thought we're doing all that already, but looking at the code
>>>>>> again we're only doing this when opening a new drm fd or dma-buf fd.
>>>>>> So the right file->f_mapping is always set at file creation time.
>>>>>>
>>>>>> And we indeed don't frob this more when going another indirection 
>>>>>> ...
>>>>>> Maybe we screwed up something somewhere :-/
>>>>>>
>>>>>> Also I thought the mapping is only taken after the vma is 
>>>>>> instantiated,
>>>>>> otherwise the tricks we're playing with dma-buf already wouldn't 
>>>>>> work:
>>>>>> dma-buf has the buffer always at offset 0, whereas gem drm_fd 
>>>>>> mmap has
>>>>>> it somewhere else. We already adjust vma->vm_pgoff, so I'm wondering
>>>>>> whether we could adjust vm_file too. Or is that the thing that's
>>>>>> forbidden?
>>>>> Yes, exactly. Modifying vm_pgoff is harmless, tons of code does that.
>>>>>
>>>>> But changing vma->vm_file, that's something I haven't seen before and
>>>>> most likely could blow up badly.
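
The pgoff fixup under discussion looks roughly like this (a hedged sketch of
importer-to-exporter mmap forwarding, not a verbatim copy of the shmem or
prime helpers):

	#include <linux/dma-buf.h>
	#include <drm/drm_gem.h>
	#include <drm/drm_vma_manager.h>

	static int importer_forward_mmap(struct drm_gem_object *obj,
					 struct vm_area_struct *vma)
	{
		/* GEM mmap addresses the object through a fake offset, while
		 * the dma-buf side expects the buffer at offset 0, so the
		 * fixup is pure vm_pgoff arithmetic. */
		unsigned long pgoff =
			vma->vm_pgoff - drm_vma_node_start(&obj->vma_node);

		/* What (if anything) may be done to vma->vm_file is exactly
		 * the open question above; it is left to dma_buf_mmap() here. */
		return dma_buf_mmap(obj->import_attach->dmabuf, vma, pgoff);
	}
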
>>>> Ok, I read the shmem helpers again; I think those are the only ones
>>>> which do the importer mmap -> dma_buf_mmap() forwarding, and hence
>>>> break stuff all around here.
>>>>
>>>> They also remove the vma->vm_pgoff offset, which means
>>>> unmap_mapping_range won't work anyway. I guess for drivers which use
>>>> the shmem helpers the hard assumption is that a) they can't use p2p dma
>>>> and b) they pin everything into system memory.
>>>>
>>>> So not a problem. But something to keep in mind. I'll try to do a
>>>> kerneldoc patch to note this somewhere. btw on that, did the
>>>> timeline/syncobj documentation patch land by now? Just trying to make
>>>> sure that doesn't get lost for another few months or so :-/
>>> Ok, so maybe it is a problem. Because there is a drm_gem_shmem_purge()
>>> which uses unmap_mapping_range underneath, and that's used by
>>> panfrost. And panfrost also uses the mmap helper. Kinda wondering
>>> whether we broke some stuff here, or whether the reverse map is
>>> installed before we touch vma->vm_pgoff.
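
For context, the purge path in question boils down to roughly the following
(condensed sketch of the shmem-helper code, not a verbatim copy); the zap is
keyed on the fake offset, so a VMA whose vm_pgoff was rewritten would not be
covered by it:

	#include <linux/fs.h>
	#include <linux/shmem_fs.h>
	#include <drm/drm_gem.h>
	#include <drm/drm_vma_manager.h>

	/* Condensed sketch of the purge path: zap any CPU mappings of the
	 * object's fake offset, then punch out the shmem backing store
	 * (page release and bookkeeping omitted). */
	static void purge_sketch(struct drm_gem_object *obj)
	{
		drm_vma_node_unmap(&obj->vma_node,
				   obj->dev->anon_inode->i_mapping);
		shmem_truncate_range(file_inode(obj->filp), 0, (loff_t)-1);
	}
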
>>
>> I think the key problem here is that unmap_mapping_range() doesn't 
>> blow up immediately when this is wrong.
>>
>> E.g. we just have a stale CPU page table entry which allows userspace
>> to write to freed-up memory, but we don't really notice that
>> immediately...
>>
>> Maybe we should stop allowing mmap() of a DMA-buf through the importer's
>> file descriptor altogether and only allow mapping it through its own fd
>> or the exporter.
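
A check along those lines would simply refuse CPU mappings of imported
objects on the importer side, something like this sketch:

	#include <drm/drm_gem.h>

	/* Hedged sketch of the suggested guard: wherever the importer hands
	 * out CPU mappings (mmap-offset lookup or mmap itself), refuse
	 * imported objects so userspace has to map the dma-buf fd, or go
	 * through the exporter, instead. */
	static bool importer_may_mmap(const struct drm_gem_object *obj)
	{
		return !obj->import_attach;
	}
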
>
> That is what I tried to do with panfrost a while ago:
>
>    583bbf46133c drm/panfrost: Use drm_gem_map_offset()
>
>    panfrost_ioctl_mmap_bo() contains a reimplementation of
>    drm_gem_map_offset() but with a bug - it allows mapping imported
>    objects (without going through the exporter). Fix this by switching to
>    use the newly renamed drm_gem_map_offset() function instead which has
>    the bonus of simplifying the code.
>
> Sadly it was followed by:
>
>    be855382bacb Revert "drm/panfrost: Use drm_gem_map_offset()"
>    This reverts commit 583bbf46133c726bae277e8f4e32bfba2a528c7f.
>
>    Turns out we need mmap to work on imported BOs even if the current 
> code
>    is buggy.

Mhm, we might need to push back on the revert.

>> Christian.
>>
>>> panfrost folks, does panfrost purged buffer handling of mmap still
>>> work correctly? Do you have some kind of igt or similar for this?
>
> I'm not aware of any real testing of this. And I fear it probably
> isn't getting much in the way of real-world testing either, otherwise
> someone would have grown tired of the "Purging xx bytes" message[1].
>
> I'm a little bit lost on this thread - are you expecting this patch to 
> break panfrost? We shouldn't be purging imported memory. Although I'm 
> not sure what (if anything) stops you trying to mark imported memory 
> as "don't need". Or indeed what would happen.

As long as panfrost keeps the imported DMA-bufs pinned, everything should
be fine as it is.

But this could open up a security hole you can push an elephant through,
so you need to keep it in the back of your mind if marking imported
memory as purgeable is ever implemented.

Christian.

>
> Steve
>
> [1] I have a patch silencing that, but recently haven't had much time 
> for working on Panfrost, and don't have my WFH setup quite as slick as 
> I did when I was in the office.
