* [PATCH v6 01/18] drm/virtio: pass gem reservation object to ttm init
       [not found] <20190702141903.1131-1-kraxel@redhat.com>
@ 2019-07-02 14:18 ` Gerd Hoffmann
  2019-07-02 14:18 ` [PATCH v6 02/18] drm/virtio: switch virtio_gpu_wait_ioctl() to gem helper Gerd Hoffmann
                   ` (16 subsequent siblings)
  17 siblings, 0 replies; 43+ messages in thread
From: Gerd Hoffmann @ 2019-07-02 14:18 UTC (permalink / raw)
  To: dri-devel
  Cc: olvaffe, gurchetansingh, Gerd Hoffmann, David Airlie,
	Daniel Vetter, open list:VIRTIO GPU DRIVER, open list

With this change gem and ttm will use the same reservation object,
so mixing and matching ttm / gem reservation helpers should
work fine.
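
Illustrative sketch (not part of the patch; example_same_resv is a
made-up helper) of what sharing the reservation object means:

#include "virtgpu_drv.h"

/* Both the gem helpers and ttm now end up on the very same lock,
 * because ttm_bo_init() is handed bo->gem_base.resv. */
static void example_same_resv(struct virtio_gpu_object *bo)
{
	if (reservation_object_lock(bo->gem_base.resv, NULL) == 0)
		reservation_object_unlock(bo->gem_base.resv);

	WARN_ON(bo->tbo.resv != bo->gem_base.resv);
}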

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 drivers/gpu/drm/virtio/virtgpu_object.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index b2da31310d24..242766d644a7 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -132,7 +132,8 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 	virtio_gpu_init_ttm_placement(bo);
 	ret = ttm_bo_init(&vgdev->mman.bdev, &bo->tbo, params->size,
 			  ttm_bo_type_device, &bo->placement, 0,
-			  true, acc_size, NULL, NULL,
+			  true, acc_size, NULL,
+			  bo->gem_base.resv,
 			  &virtio_gpu_ttm_bo_destroy);
 	/* ttm_bo_init failure will call the destroy */
 	if (ret != 0)
-- 
2.18.1



* [PATCH v6 02/18] drm/virtio: switch virtio_gpu_wait_ioctl() to gem helper.
       [not found] <20190702141903.1131-1-kraxel@redhat.com>
  2019-07-02 14:18 ` [PATCH v6 01/18] drm/virtio: pass gem reservation object to ttm init Gerd Hoffmann
@ 2019-07-02 14:18 ` Gerd Hoffmann
  2019-07-02 14:18 ` [PATCH v6 03/18] drm/virtio: simplify cursor updates Gerd Hoffmann
                   ` (15 subsequent siblings)
  17 siblings, 0 replies; 43+ messages in thread
From: Gerd Hoffmann @ 2019-07-02 14:18 UTC (permalink / raw)
  To: dri-devel
  Cc: olvaffe, gurchetansingh, Gerd Hoffmann, David Airlie,
	Daniel Vetter, open list:VIRTIO GPU DRIVER, open list

Use drm_gem_reservation_object_wait() in virtio_gpu_wait_ioctl().
This also makes the ioctl run lockless.

v5: handle lookup failure.
v2: use reservation_object_test_signaled_rcu for VIRTGPU_WAIT_NOWAIT.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 drivers/gpu/drm/virtio/virtgpu_ioctl.c | 28 ++++++++++++--------------
 1 file changed, 13 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index 1b50c34a29dc..c06dde541491 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -464,23 +464,21 @@ static int virtio_gpu_wait_ioctl(struct drm_device *dev, void *data,
 			    struct drm_file *file)
 {
 	struct drm_virtgpu_3d_wait *args = data;
-	struct drm_gem_object *gobj = NULL;
-	struct virtio_gpu_object *qobj = NULL;
+	struct drm_gem_object *obj;
+	long timeout = 15 * HZ;
 	int ret;
-	bool nowait = false;
 
-	gobj = drm_gem_object_lookup(file, args->handle);
-	if (gobj == NULL)
-		return -ENOENT;
-
-	qobj = gem_to_virtio_gpu_obj(gobj);
-
-	if (args->flags & VIRTGPU_WAIT_NOWAIT)
-		nowait = true;
-	ret = virtio_gpu_object_wait(qobj, nowait);
-
-	drm_gem_object_put_unlocked(gobj);
-	return ret;
+	if (args->flags & VIRTGPU_WAIT_NOWAIT) {
+		obj = drm_gem_object_lookup(file, args->handle);
+		if (obj == NULL)
+			return -ENOENT;
+		ret = reservation_object_test_signaled_rcu(obj->resv, true);
+		drm_gem_object_put_unlocked(obj);
+		return ret ? 0 : -EBUSY;
+	}
+
+	return drm_gem_reservation_object_wait(file, args->handle,
+					       true, timeout);
 }
 
 static int virtio_gpu_get_caps_ioctl(struct drm_device *dev,
-- 
2.18.1



* [PATCH v6 03/18] drm/virtio: simplify cursor updates
       [not found] <20190702141903.1131-1-kraxel@redhat.com>
  2019-07-02 14:18 ` [PATCH v6 01/18] drm/virtio: pass gem reservation object to ttm init Gerd Hoffmann
  2019-07-02 14:18 ` [PATCH v6 02/18] drm/virtio: switch virtio_gpu_wait_ioctl() to gem helper Gerd Hoffmann
@ 2019-07-02 14:18 ` Gerd Hoffmann
  2019-07-02 14:18 ` [PATCH v6 04/18] drm/virtio: remove virtio_gpu_object_wait Gerd Hoffmann
                   ` (14 subsequent siblings)
  17 siblings, 0 replies; 43+ messages in thread
From: Gerd Hoffmann @ 2019-07-02 14:18 UTC (permalink / raw)
  To: dri-devel
  Cc: olvaffe, gurchetansingh, Gerd Hoffmann, David Airlie,
	Daniel Vetter, open list:VIRTIO GPU DRIVER, open list

No need to do the reservation dance;
we can just wait on the fence directly.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 drivers/gpu/drm/virtio/virtgpu_plane.c | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
index 024c2aa0c929..4b805bf466d3 100644
--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
+++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
@@ -184,7 +184,6 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
 	struct virtio_gpu_framebuffer *vgfb;
 	struct virtio_gpu_object *bo = NULL;
 	uint32_t handle;
-	int ret = 0;
 
 	if (plane->state->crtc)
 		output = drm_crtc_to_virtio_gpu_output(plane->state->crtc);
@@ -208,15 +207,9 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
 			 cpu_to_le32(plane->state->crtc_w),
 			 cpu_to_le32(plane->state->crtc_h),
 			 0, 0, vgfb->fence);
-		ret = virtio_gpu_object_reserve(bo, false);
-		if (!ret) {
-			reservation_object_add_excl_fence(bo->tbo.resv,
-							  &vgfb->fence->f);
-			dma_fence_put(&vgfb->fence->f);
-			vgfb->fence = NULL;
-			virtio_gpu_object_unreserve(bo);
-			virtio_gpu_object_wait(bo, false);
-		}
+		dma_fence_wait(&vgfb->fence->f, true);
+		dma_fence_put(&vgfb->fence->f);
+		vgfb->fence = NULL;
 	}
 
 	if (plane->state->fb != old_state->fb) {
-- 
2.18.1



* [PATCH v6 04/18] drm/virtio: remove virtio_gpu_object_wait
       [not found] <20190702141903.1131-1-kraxel@redhat.com>
                   ` (2 preceding siblings ...)
  2019-07-02 14:18 ` [PATCH v6 03/18] drm/virtio: simplify cursor updates Gerd Hoffmann
@ 2019-07-02 14:18 ` Gerd Hoffmann
  2019-07-02 14:18 ` [PATCH v6 05/18] drm/virtio: drop no_wait argument from virtio_gpu_object_reserve Gerd Hoffmann
                   ` (13 subsequent siblings)
  17 siblings, 0 replies; 43+ messages in thread
From: Gerd Hoffmann @ 2019-07-02 14:18 UTC (permalink / raw)
  To: dri-devel
  Cc: olvaffe, gurchetansingh, Gerd Hoffmann, David Airlie,
	Daniel Vetter, open list:VIRTIO GPU DRIVER, open list

No users left.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 drivers/gpu/drm/virtio/virtgpu_drv.h    |  1 -
 drivers/gpu/drm/virtio/virtgpu_object.c | 13 -------------
 2 files changed, 14 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 9e2d3062b01d..2cd96256ba37 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -364,7 +364,6 @@ int virtio_gpu_object_kmap(struct virtio_gpu_object *bo);
 int virtio_gpu_object_get_sg_table(struct virtio_gpu_device *qdev,
 				   struct virtio_gpu_object *bo);
 void virtio_gpu_object_free_sg_table(struct virtio_gpu_object *bo);
-int virtio_gpu_object_wait(struct virtio_gpu_object *bo, bool no_wait);
 
 /* virtgpu_prime.c */
 struct sg_table *virtgpu_gem_prime_get_sg_table(struct drm_gem_object *obj);
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 242766d644a7..82bfbf983fd2 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -233,16 +233,3 @@ void virtio_gpu_object_free_sg_table(struct virtio_gpu_object *bo)
 	kfree(bo->pages);
 	bo->pages = NULL;
 }
-
-int virtio_gpu_object_wait(struct virtio_gpu_object *bo, bool no_wait)
-{
-	int r;
-
-	r = ttm_bo_reserve(&bo->tbo, true, no_wait, NULL);
-	if (unlikely(r != 0))
-		return r;
-	r = ttm_bo_wait(&bo->tbo, true, no_wait);
-	ttm_bo_unreserve(&bo->tbo);
-	return r;
-}
-
-- 
2.18.1



* [PATCH v6 05/18] drm/virtio: drop no_wait argument from virtio_gpu_object_reserve
       [not found] <20190702141903.1131-1-kraxel@redhat.com>
                   ` (3 preceding siblings ...)
  2019-07-02 14:18 ` [PATCH v6 04/18] drm/virtio: remove virtio_gpu_object_wait Gerd Hoffmann
@ 2019-07-02 14:18 ` Gerd Hoffmann
  2019-07-02 14:18 ` [PATCH v6 06/18] drm/virtio: remove ttm calls from in virtio_gpu_object_{reserve,unreserve} Gerd Hoffmann
                   ` (12 subsequent siblings)
  17 siblings, 0 replies; 43+ messages in thread
From: Gerd Hoffmann @ 2019-07-02 14:18 UTC (permalink / raw)
  To: dri-devel
  Cc: olvaffe, gurchetansingh, Gerd Hoffmann, David Airlie,
	Daniel Vetter, open list:VIRTIO GPU DRIVER, open list

All callers pass no_wait = false.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 drivers/gpu/drm/virtio/virtgpu_drv.h   | 5 ++---
 drivers/gpu/drm/virtio/virtgpu_gem.c   | 4 ++--
 drivers/gpu/drm/virtio/virtgpu_ioctl.c | 4 ++--
 3 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 2cd96256ba37..06cc0e961df6 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -398,12 +398,11 @@ static inline u64 virtio_gpu_object_mmap_offset(struct virtio_gpu_object *bo)
 	return drm_vma_node_offset_addr(&bo->tbo.vma_node);
 }
 
-static inline int virtio_gpu_object_reserve(struct virtio_gpu_object *bo,
-					 bool no_wait)
+static inline int virtio_gpu_object_reserve(struct virtio_gpu_object *bo)
 {
 	int r;
 
-	r = ttm_bo_reserve(&bo->tbo, true, no_wait, NULL);
+	r = ttm_bo_reserve(&bo->tbo, true, false, NULL);
 	if (unlikely(r != 0)) {
 		if (r != -ERESTARTSYS) {
 			struct virtio_gpu_device *qdev =
diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
index 1e49e08dd545..9c9ad3b14080 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -140,7 +140,7 @@ int virtio_gpu_gem_object_open(struct drm_gem_object *obj,
 	if (!vgdev->has_virgl_3d)
 		return 0;
 
-	r = virtio_gpu_object_reserve(qobj, false);
+	r = virtio_gpu_object_reserve(qobj);
 	if (r)
 		return r;
 
@@ -161,7 +161,7 @@ void virtio_gpu_gem_object_close(struct drm_gem_object *obj,
 	if (!vgdev->has_virgl_3d)
 		return;
 
-	r = virtio_gpu_object_reserve(qobj, false);
+	r = virtio_gpu_object_reserve(qobj);
 	if (r)
 		return;
 
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index c06dde541491..0caff3fa623e 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -375,7 +375,7 @@ static int virtio_gpu_transfer_from_host_ioctl(struct drm_device *dev,
 
 	qobj = gem_to_virtio_gpu_obj(gobj);
 
-	ret = virtio_gpu_object_reserve(qobj, false);
+	ret = virtio_gpu_object_reserve(qobj);
 	if (ret)
 		goto out;
 
@@ -425,7 +425,7 @@ static int virtio_gpu_transfer_to_host_ioctl(struct drm_device *dev, void *data,
 
 	qobj = gem_to_virtio_gpu_obj(gobj);
 
-	ret = virtio_gpu_object_reserve(qobj, false);
+	ret = virtio_gpu_object_reserve(qobj);
 	if (ret)
 		goto out;
 
-- 
2.18.1



* [PATCH v6 06/18] drm/virtio: remove ttm calls from in virtio_gpu_object_{reserve,unreserve}
       [not found] <20190702141903.1131-1-kraxel@redhat.com>
                   ` (4 preceding siblings ...)
  2019-07-02 14:18 ` [PATCH v6 05/18] drm/virtio: drop no_wait argument from virtio_gpu_object_reserve Gerd Hoffmann
@ 2019-07-02 14:18 ` Gerd Hoffmann
  2019-07-03 18:02   ` Chia-I Wu
  2019-07-02 14:18 ` [PATCH v6 07/18] drm/virtio: add virtio_gpu_object_array & helpers Gerd Hoffmann
                   ` (11 subsequent siblings)
  17 siblings, 1 reply; 43+ messages in thread
From: Gerd Hoffmann @ 2019-07-02 14:18 UTC (permalink / raw)
  To: dri-devel
  Cc: olvaffe, gurchetansingh, Gerd Hoffmann, David Airlie,
	Daniel Vetter, open list:VIRTIO GPU DRIVER, open list

Call reservation_object_* directly instead
of using ttm_bo_{reserve,unreserve}.

v4: check for EINTR only.
v3: check for EINTR too.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 drivers/gpu/drm/virtio/virtgpu_drv.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 06cc0e961df6..07f6001ea91e 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -402,9 +402,9 @@ static inline int virtio_gpu_object_reserve(struct virtio_gpu_object *bo)
 {
 	int r;
 
-	r = ttm_bo_reserve(&bo->tbo, true, false, NULL);
+	r = reservation_object_lock_interruptible(bo->gem_base.resv, NULL);
 	if (unlikely(r != 0)) {
-		if (r != -ERESTARTSYS) {
+		if (r != -EINTR) {
 			struct virtio_gpu_device *qdev =
 				bo->gem_base.dev->dev_private;
 			dev_err(qdev->dev, "%p reserve failed\n", bo);
@@ -416,7 +416,7 @@ static inline int virtio_gpu_object_reserve(struct virtio_gpu_object *bo)
 
 static inline void virtio_gpu_object_unreserve(struct virtio_gpu_object *bo)
 {
-	ttm_bo_unreserve(&bo->tbo);
+	reservation_object_unlock(bo->gem_base.resv);
 }
 
 /* virgl debufs */
-- 
2.18.1



* [PATCH v6 07/18] drm/virtio: add virtio_gpu_object_array & helpers
       [not found] <20190702141903.1131-1-kraxel@redhat.com>
                   ` (5 preceding siblings ...)
  2019-07-02 14:18 ` [PATCH v6 06/18] drm/virtio: remove ttm calls from in virtio_gpu_object_{reserve,unreserve} Gerd Hoffmann
@ 2019-07-02 14:18 ` Gerd Hoffmann
  2019-07-03 18:31   ` Chia-I Wu
  2019-07-02 14:18 ` [PATCH v6 08/18] drm/virtio: rework virtio_gpu_execbuffer_ioctl fencing Gerd Hoffmann
                   ` (10 subsequent siblings)
  17 siblings, 1 reply; 43+ messages in thread
From: Gerd Hoffmann @ 2019-07-02 14:18 UTC (permalink / raw)
  To: dri-devel
  Cc: olvaffe, gurchetansingh, Gerd Hoffmann, David Airlie,
	Daniel Vetter, open list:VIRTIO GPU DRIVER, open list

Some helper functions to manage an array of gem objects.
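
To show how the helpers are meant to be chained, here is a hypothetical
caller (example_use_array is made up, error paths are trimmed; the real
users arrive with the following patches):

#include "virtgpu_drv.h"

static int example_use_array(struct drm_file *file, u32 *handles,
			     u32 nents, struct dma_fence *fence)
{
	struct virtio_gpu_object_array *objs;
	int ret;

	/* look up the gem objects, taking a reference on each */
	objs = virtio_gpu_array_from_handles(file, handles, nents);
	if (!objs)
		return -ENOENT;

	/* lock all reservation objects; the embedded ticket handles
	 * the ww_mutex ordering */
	ret = virtio_gpu_array_lock_resv(objs);
	if (ret) {
		virtio_gpu_array_put_free(objs);
		return ret;
	}

	/* ... queue the virtio command here ... */

	/* publish the fence on every object, then drop the locks */
	virtio_gpu_array_add_fence(objs, fence);
	virtio_gpu_array_unlock_resv(objs);

	/* drop the references (the driver will do this from the command
	 * completion handler, see the following patches) */
	virtio_gpu_array_put_free(objs);
	return 0;
}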

v6:
 - add ticket to struct virtio_gpu_object_array.
 - add virtio_gpu_array_{lock,unlock}_resv helpers.
 - add virtio_gpu_array_add_fence helper.
v5: some small optimizations (Chia-I Wu).
v4: make them virtio-private instead of generic helpers.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 drivers/gpu/drm/virtio/virtgpu_drv.h | 17 ++++++
 drivers/gpu/drm/virtio/virtgpu_gem.c | 83 ++++++++++++++++++++++++++++
 2 files changed, 100 insertions(+)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 07f6001ea91e..abb078a5dedf 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -84,6 +84,12 @@ struct virtio_gpu_object {
 #define gem_to_virtio_gpu_obj(gobj) \
 	container_of((gobj), struct virtio_gpu_object, gem_base)
 
+struct virtio_gpu_object_array {
+	struct ww_acquire_ctx ticket;
+	u32 nents, total;
+	struct drm_gem_object *objs[];
+};
+
 struct virtio_gpu_vbuffer;
 struct virtio_gpu_device;
 
@@ -251,6 +257,17 @@ int virtio_gpu_mode_dumb_mmap(struct drm_file *file_priv,
 			      struct drm_device *dev,
 			      uint32_t handle, uint64_t *offset_p);
 
+struct virtio_gpu_object_array *virtio_gpu_array_alloc(u32 nents);
+struct virtio_gpu_object_array*
+virtio_gpu_array_from_handles(struct drm_file *drm_file, u32 *handles, u32 nents);
+void virtio_gpu_array_add_obj(struct virtio_gpu_object_array *objs,
+			      struct drm_gem_object *obj);
+int virtio_gpu_array_lock_resv(struct virtio_gpu_object_array *objs);
+void virtio_gpu_array_unlock_resv(struct virtio_gpu_object_array *objs);
+void virtio_gpu_array_add_fence(struct virtio_gpu_object_array *objs,
+				struct dma_fence *fence);
+void virtio_gpu_array_put_free(struct virtio_gpu_object_array *objs);
+
 /* virtio vg */
 int virtio_gpu_alloc_vbufs(struct virtio_gpu_device *vgdev);
 void virtio_gpu_free_vbufs(struct virtio_gpu_device *vgdev);
diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
index 9c9ad3b14080..e88df5e06d06 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -169,3 +169,86 @@ void virtio_gpu_gem_object_close(struct drm_gem_object *obj,
 						qobj->hw_res_handle);
 	virtio_gpu_object_unreserve(qobj);
 }
+
+struct virtio_gpu_object_array *virtio_gpu_array_alloc(u32 nents)
+{
+	struct virtio_gpu_object_array *objs;
+	size_t size = sizeof(*objs) + sizeof(objs->objs[0]) * nents;
+
+	objs = kmalloc(size, GFP_KERNEL);
+	if (!objs)
+		return NULL;
+
+	objs->nents = 0;
+	objs->total = nents;
+	return objs;
+}
+
+static void virtio_gpu_array_free(struct virtio_gpu_object_array *objs)
+{
+	kfree(objs);
+}
+
+struct virtio_gpu_object_array*
+virtio_gpu_array_from_handles(struct drm_file *drm_file, u32 *handles, u32 nents)
+{
+	struct virtio_gpu_object_array *objs;
+	u32 i;
+
+	objs = virtio_gpu_array_alloc(nents);
+	if (!objs)
+		return NULL;
+
+	for (i = 0; i < nents; i++) {
+		objs->nents = i;
+		objs->objs[i] = drm_gem_object_lookup(drm_file, handles[i]);
+		if (!objs->objs[i]) {
+			virtio_gpu_array_put_free(objs);
+			return NULL;
+		}
+	}
+	objs->nents = i;
+	return objs;
+}
+
+void virtio_gpu_array_add_obj(struct virtio_gpu_object_array *objs,
+			      struct drm_gem_object *obj)
+{
+	if (WARN_ON_ONCE(objs->nents == objs->total))
+		return;
+
+	drm_gem_object_get(obj);
+	objs->objs[objs->nents] = obj;
+	objs->nents++;
+}
+
+int virtio_gpu_array_lock_resv(struct virtio_gpu_object_array *objs)
+{
+	return drm_gem_lock_reservations(objs->objs, objs->nents,
+					 &objs->ticket);
+}
+
+void virtio_gpu_array_unlock_resv(struct virtio_gpu_object_array *objs)
+{
+	drm_gem_unlock_reservations(objs->objs, objs->nents,
+				    &objs->ticket);
+}
+
+void virtio_gpu_array_add_fence(struct virtio_gpu_object_array *objs,
+				struct dma_fence *fence)
+{
+	int i;
+
+	for (i = 0; i < objs->nents; i++)
+		reservation_object_add_excl_fence(objs->objs[i]->resv,
+						  fence);
+}
+
+void virtio_gpu_array_put_free(struct virtio_gpu_object_array *objs)
+{
+	u32 i;
+
+	for (i = 0; i < objs->nents; i++)
+		drm_gem_object_put_unlocked(objs->objs[i]);
+	virtio_gpu_array_free(objs);
+}
-- 
2.18.1



* [PATCH v6 08/18] drm/virtio: rework virtio_gpu_execbuffer_ioctl fencing
       [not found] <20190702141903.1131-1-kraxel@redhat.com>
                   ` (6 preceding siblings ...)
  2019-07-02 14:18 ` [PATCH v6 07/18] drm/virtio: add virtio_gpu_object_array & helpers Gerd Hoffmann
@ 2019-07-02 14:18 ` Gerd Hoffmann
  2019-07-03 18:49   ` Chia-I Wu
  2019-07-02 14:18 ` [PATCH v6 09/18] drm/virtio: rework virtio_gpu_object_create fencing Gerd Hoffmann
                   ` (9 subsequent siblings)
  17 siblings, 1 reply; 43+ messages in thread
From: Gerd Hoffmann @ 2019-07-02 14:18 UTC (permalink / raw)
  To: dri-devel
  Cc: olvaffe, gurchetansingh, Gerd Hoffmann, David Airlie,
	Daniel Vetter, open list:VIRTIO GPU DRIVER, open list

Rework the fencing workflow, starting with virtio_gpu_execbuffer_ioctl.
Stop using ttm helpers; use the virtio_gpu_array_* helpers (which work
on the reservation objects directly) instead.

New workflow:

 (1) All gem objects needed by a command are added to a
     virtio_gpu_object_array.
 (2) All reservation objects will be locked (virtio_gpu_array_lock_resv).
 (3) virtio_gpu_fence_emit() completes fence initialization.
 (4) fence gets added to the objects, reservation objects are unlocked
     (virtio_gpu_array_add_fence, virtio_gpu_array_unlock_resv).
 (5) virtio command is submitted to the host.
 (6) The completion callback (virtio_gpu_dequeue_ctrl_func)
     will drop object references and free virtio_gpu_object_array.
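
Condensed into code, steps (3) to (6) look roughly like this (sketch
only, not verbatim; compare the virtgpu_vq.c hunk below):

#include "virtgpu_drv.h"

static void example_queue_fenced(struct virtio_gpu_device *vgdev,
				 struct virtio_gpu_vbuffer *vbuf,
				 struct virtio_gpu_ctrl_hdr *hdr,
				 struct virtio_gpu_fence *fence)
{
	spin_lock(&vgdev->ctrlq.qlock);

	/* (3) the fence gets its seqno while the reservation locks
	 * taken in (2) are still held ... */
	virtio_gpu_fence_emit(vgdev, hdr, fence);

	/* (4) ... so it can be attached to every bo before anyone else
	 * can sneak in a conflicting fence; then drop the locks */
	if (vbuf->objs) {
		virtio_gpu_array_add_fence(vbuf->objs, &fence->f);
		virtio_gpu_array_unlock_resv(vbuf->objs);
	}

	/* (5) hand the command to the host; (6) the dequeue worker
	 * later calls virtio_gpu_array_put_free(vbuf->objs) */
	virtio_gpu_queue_ctrl_buffer_locked(vgdev, vbuf);
	spin_unlock(&vgdev->ctrlq.qlock);
}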

v6: rewrite most of the patch.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 drivers/gpu/drm/virtio/virtgpu_drv.h   |  6 ++-
 drivers/gpu/drm/virtio/virtgpu_ioctl.c | 56 +++++++++-----------------
 drivers/gpu/drm/virtio/virtgpu_vq.c    | 21 +++++++---
 3 files changed, 38 insertions(+), 45 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index abb078a5dedf..98511d1dfff2 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -121,9 +121,9 @@ struct virtio_gpu_vbuffer {
 
 	char *resp_buf;
 	int resp_size;
-
 	virtio_gpu_resp_cb resp_cb;
 
+	struct virtio_gpu_object_array *objs;
 	struct list_head list;
 };
 
@@ -318,7 +318,9 @@ void virtio_gpu_cmd_context_detach_resource(struct virtio_gpu_device *vgdev,
 					    uint32_t resource_id);
 void virtio_gpu_cmd_submit(struct virtio_gpu_device *vgdev,
 			   void *data, uint32_t data_size,
-			   uint32_t ctx_id, struct virtio_gpu_fence *fence);
+			   uint32_t ctx_id,
+			   struct virtio_gpu_object_array *objs,
+			   struct virtio_gpu_fence *fence);
 void virtio_gpu_cmd_transfer_from_host_3d(struct virtio_gpu_device *vgdev,
 					  uint32_t resource_id, uint32_t ctx_id,
 					  uint64_t offset, uint32_t level,
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index 0caff3fa623e..9735d7e5899b 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -105,16 +105,11 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
 	struct drm_virtgpu_execbuffer *exbuf = data;
 	struct virtio_gpu_device *vgdev = dev->dev_private;
 	struct virtio_gpu_fpriv *vfpriv = drm_file->driver_priv;
-	struct drm_gem_object *gobj;
 	struct virtio_gpu_fence *out_fence;
-	struct virtio_gpu_object *qobj;
 	int ret;
 	uint32_t *bo_handles = NULL;
 	void __user *user_bo_handles = NULL;
-	struct list_head validate_list;
-	struct ttm_validate_buffer *buflist = NULL;
-	int i;
-	struct ww_acquire_ctx ticket;
+	struct virtio_gpu_object_array *buflist = NULL;
 	struct sync_file *sync_file;
 	int in_fence_fd = exbuf->fence_fd;
 	int out_fence_fd = -1;
@@ -155,15 +150,10 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
 			return out_fence_fd;
 	}
 
-	INIT_LIST_HEAD(&validate_list);
 	if (exbuf->num_bo_handles) {
-
 		bo_handles = kvmalloc_array(exbuf->num_bo_handles,
-					   sizeof(uint32_t), GFP_KERNEL);
-		buflist = kvmalloc_array(exbuf->num_bo_handles,
-					   sizeof(struct ttm_validate_buffer),
-					   GFP_KERNEL | __GFP_ZERO);
-		if (!bo_handles || !buflist) {
+					    sizeof(uint32_t), GFP_KERNEL);
+		if (!bo_handles) {
 			ret = -ENOMEM;
 			goto out_unused_fd;
 		}
@@ -175,25 +165,21 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
 			goto out_unused_fd;
 		}
 
-		for (i = 0; i < exbuf->num_bo_handles; i++) {
-			gobj = drm_gem_object_lookup(drm_file, bo_handles[i]);
-			if (!gobj) {
-				ret = -ENOENT;
-				goto out_unused_fd;
-			}
-
-			qobj = gem_to_virtio_gpu_obj(gobj);
-			buflist[i].bo = &qobj->tbo;
-
-			list_add(&buflist[i].head, &validate_list);
+		buflist = virtio_gpu_array_from_handles(drm_file, bo_handles,
+							exbuf->num_bo_handles);
+		if (!buflist) {
+			ret = -ENOENT;
+			goto out_unused_fd;
 		}
 		kvfree(bo_handles);
 		bo_handles = NULL;
 	}
 
-	ret = virtio_gpu_object_list_validate(&ticket, &validate_list);
-	if (ret)
-		goto out_free;
+	if (buflist) {
+		ret = virtio_gpu_array_lock_resv(buflist);
+		if (ret)
+			goto out_unused_fd;
+	}
 
 	buf = memdup_user(u64_to_user_ptr(exbuf->command), exbuf->size);
 	if (IS_ERR(buf)) {
@@ -220,24 +206,18 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
 	}
 
 	virtio_gpu_cmd_submit(vgdev, buf, exbuf->size,
-			      vfpriv->ctx_id, out_fence);
-
-	ttm_eu_fence_buffer_objects(&ticket, &validate_list, &out_fence->f);
-
-	/* fence the command bo */
-	virtio_gpu_unref_list(&validate_list);
-	kvfree(buflist);
+			      vfpriv->ctx_id, buflist, out_fence);
 	return 0;
 
 out_memdup:
 	kfree(buf);
 out_unresv:
-	ttm_eu_backoff_reservation(&ticket, &validate_list);
-out_free:
-	virtio_gpu_unref_list(&validate_list);
+	if (buflist)
+		virtio_gpu_array_unlock_resv(buflist);
 out_unused_fd:
 	kvfree(bo_handles);
-	kvfree(buflist);
+	if (buflist)
+		virtio_gpu_array_put_free(buflist);
 
 	if (out_fence_fd >= 0)
 		put_unused_fd(out_fence_fd);
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index 6c1a90717535..dbe329801e84 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -191,7 +191,7 @@ void virtio_gpu_dequeue_ctrl_func(struct work_struct *work)
 	} while (!virtqueue_enable_cb(vgdev->ctrlq.vq));
 	spin_unlock(&vgdev->ctrlq.qlock);
 
-	list_for_each_entry_safe(entry, tmp, &reclaim_list, list) {
+	list_for_each_entry(entry, &reclaim_list, list) {
 		resp = (struct virtio_gpu_ctrl_hdr *)entry->resp_buf;
 
 		trace_virtio_gpu_cmd_response(vgdev->ctrlq.vq, resp);
@@ -218,14 +218,18 @@ void virtio_gpu_dequeue_ctrl_func(struct work_struct *work)
 		}
 		if (entry->resp_cb)
 			entry->resp_cb(vgdev, entry);
-
-		list_del(&entry->list);
-		free_vbuf(vgdev, entry);
 	}
 	wake_up(&vgdev->ctrlq.ack_queue);
 
 	if (fence_id)
 		virtio_gpu_fence_event_process(vgdev, fence_id);
+
+	list_for_each_entry_safe(entry, tmp, &reclaim_list, list) {
+		if (entry->objs)
+			virtio_gpu_array_put_free(entry->objs);
+		list_del(&entry->list);
+		free_vbuf(vgdev, entry);
+	}
 }
 
 void virtio_gpu_dequeue_cursor_func(struct work_struct *work)
@@ -337,6 +341,10 @@ static int virtio_gpu_queue_fenced_ctrl_buffer(struct virtio_gpu_device *vgdev,
 
 	if (fence)
 		virtio_gpu_fence_emit(vgdev, hdr, fence);
+	if (vbuf->objs) {
+		virtio_gpu_array_add_fence(vbuf->objs, &fence->f);
+		virtio_gpu_array_unlock_resv(vbuf->objs);
+	}
 	rc = virtio_gpu_queue_ctrl_buffer_locked(vgdev, vbuf);
 	spin_unlock(&vgdev->ctrlq.qlock);
 	return rc;
@@ -939,7 +947,9 @@ void virtio_gpu_cmd_transfer_from_host_3d(struct virtio_gpu_device *vgdev,
 
 void virtio_gpu_cmd_submit(struct virtio_gpu_device *vgdev,
 			   void *data, uint32_t data_size,
-			   uint32_t ctx_id, struct virtio_gpu_fence *fence)
+			   uint32_t ctx_id,
+			   struct virtio_gpu_object_array *objs,
+			   struct virtio_gpu_fence *fence)
 {
 	struct virtio_gpu_cmd_submit *cmd_p;
 	struct virtio_gpu_vbuffer *vbuf;
@@ -949,6 +959,7 @@ void virtio_gpu_cmd_submit(struct virtio_gpu_device *vgdev,
 
 	vbuf->data_buf = data;
 	vbuf->data_size = data_size;
+	vbuf->objs = objs;
 
 	cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_SUBMIT_3D);
 	cmd_p->hdr.ctx_id = cpu_to_le32(ctx_id);
-- 
2.18.1



* [PATCH v6 09/18] drm/virtio: rework virtio_gpu_object_create fencing
       [not found] <20190702141903.1131-1-kraxel@redhat.com>
                   ` (7 preceding siblings ...)
  2019-07-02 14:18 ` [PATCH v6 08/18] drm/virtio: rework virtio_gpu_execbuffer_ioctl fencing Gerd Hoffmann
@ 2019-07-02 14:18 ` Gerd Hoffmann
  2019-07-02 14:18 ` [PATCH v6 10/18] drm/virtio: drop virtio_gpu_object_list_validate/virtio_gpu_unref_list Gerd Hoffmann
                   ` (8 subsequent siblings)
  17 siblings, 0 replies; 43+ messages in thread
From: Gerd Hoffmann @ 2019-07-02 14:18 UTC (permalink / raw)
  To: dri-devel
  Cc: olvaffe, gurchetansingh, Gerd Hoffmann, David Airlie,
	Daniel Vetter, open list:VIRTIO GPU DRIVER, open list

Rework the fencing workflow.  Stop using ttm helpers; use the
virtio_gpu_array_* helpers instead.

Because the gem reservation object is used now, it is initialized and
ready for use before ttm_bo_init is called.  So we can simply use the
standard fencing workflow and drop the tricky logic which checks
whether the create command is still in flight.
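
Condensed sketch of the resulting create path (example_create_fenced is
illustrative, error unwinding is trimmed; the real code is in the diff
below):

#include "virtgpu_drv.h"

static int example_create_fenced(struct virtio_gpu_device *vgdev,
				 struct virtio_gpu_object *bo,
				 struct virtio_gpu_object_params *params,
				 struct virtio_gpu_fence *fence)
{
	struct virtio_gpu_object_array *objs = NULL;
	int ret;

	if (fence) {
		/* the gem resv already exists here, so it can simply be
		 * locked up front */
		objs = virtio_gpu_array_alloc(1);
		if (!objs)
			return -ENOMEM;
		virtio_gpu_array_add_obj(objs, &bo->gem_base);
		ret = virtio_gpu_array_lock_resv(objs);
		if (ret) {
			virtio_gpu_array_put_free(objs);
			return ret;
		}
	}

	/* the queue path adds the fence to the bo and unlocks the resv,
	 * exactly like for any other fenced command */
	virtio_gpu_cmd_create_resource(vgdev, bo, params, objs, fence);
	return 0;
}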

v6: rewrite most of the patch.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 drivers/gpu/drm/virtio/virtgpu_drv.h    |  2 +
 drivers/gpu/drm/virtio/virtgpu_object.c | 74 +++++++++++--------------
 drivers/gpu/drm/virtio/virtgpu_vq.c     |  4 ++
 3 files changed, 37 insertions(+), 43 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 98511d1dfff2..bc8cb9218732 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -274,6 +274,7 @@ void virtio_gpu_free_vbufs(struct virtio_gpu_device *vgdev);
 void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev,
 				    struct virtio_gpu_object *bo,
 				    struct virtio_gpu_object_params *params,
+				    struct virtio_gpu_object_array *objs,
 				    struct virtio_gpu_fence *fence);
 void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev,
 				   uint32_t resource_id);
@@ -336,6 +337,7 @@ void
 virtio_gpu_cmd_resource_create_3d(struct virtio_gpu_device *vgdev,
 				  struct virtio_gpu_object *bo,
 				  struct virtio_gpu_object_params *params,
+				  struct virtio_gpu_object_array *objs,
 				  struct virtio_gpu_fence *fence);
 void virtio_gpu_ctrl_ack(struct virtqueue *vq);
 void virtio_gpu_cursor_ack(struct virtqueue *vq);
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 82bfbf983fd2..2902487f051a 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -97,6 +97,7 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 			     struct virtio_gpu_object **bo_ptr,
 			     struct virtio_gpu_fence *fence)
 {
+	struct virtio_gpu_object_array *objs = NULL;
 	struct virtio_gpu_object *bo;
 	size_t acc_size;
 	int ret;
@@ -110,23 +111,34 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 	if (bo == NULL)
 		return -ENOMEM;
 	ret = virtio_gpu_resource_id_get(vgdev, &bo->hw_res_handle);
-	if (ret < 0) {
-		kfree(bo);
-		return ret;
-	}
+	if (ret < 0)
+		goto err_free_gem;
+
 	params->size = roundup(params->size, PAGE_SIZE);
 	ret = drm_gem_object_init(vgdev->ddev, &bo->gem_base, params->size);
-	if (ret != 0) {
-		virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
-		kfree(bo);
-		return ret;
-	}
+	if (ret != 0)
+		goto err_put_id;
+
 	bo->dumb = params->dumb;
 
+	if (fence) {
+		ret = -ENOMEM;
+		objs = virtio_gpu_array_alloc(1);
+		if (!objs)
+			goto err_put_id;
+		virtio_gpu_array_add_obj(objs, &bo->gem_base);
+
+		ret = virtio_gpu_array_lock_resv(objs);
+		if (ret != 0)
+			goto err_put_objs;
+	}
+
 	if (params->virgl) {
-		virtio_gpu_cmd_resource_create_3d(vgdev, bo, params, fence);
+		virtio_gpu_cmd_resource_create_3d(vgdev, bo, params,
+						  objs, fence);
 	} else {
-		virtio_gpu_cmd_create_resource(vgdev, bo, params, fence);
+		virtio_gpu_cmd_create_resource(vgdev, bo, params,
+					       objs, fence);
 	}
 
 	virtio_gpu_init_ttm_placement(bo);
@@ -139,40 +151,16 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 	if (ret != 0)
 		return ret;
 
-	if (fence) {
-		struct virtio_gpu_fence_driver *drv = &vgdev->fence_drv;
-		struct list_head validate_list;
-		struct ttm_validate_buffer mainbuf;
-		struct ww_acquire_ctx ticket;
-		unsigned long irq_flags;
-		bool signaled;
-
-		INIT_LIST_HEAD(&validate_list);
-		memset(&mainbuf, 0, sizeof(struct ttm_validate_buffer));
-
-		/* use a gem reference since unref list undoes them */
-		drm_gem_object_get(&bo->gem_base);
-		mainbuf.bo = &bo->tbo;
-		list_add(&mainbuf.head, &validate_list);
-
-		ret = virtio_gpu_object_list_validate(&ticket, &validate_list);
-		if (ret == 0) {
-			spin_lock_irqsave(&drv->lock, irq_flags);
-			signaled = virtio_fence_signaled(&fence->f);
-			if (!signaled)
-				/* virtio create command still in flight */
-				ttm_eu_fence_buffer_objects(&ticket, &validate_list,
-							    &fence->f);
-			spin_unlock_irqrestore(&drv->lock, irq_flags);
-			if (signaled)
-				/* virtio create command finished */
-				ttm_eu_backoff_reservation(&ticket, &validate_list);
-		}
-		virtio_gpu_unref_list(&validate_list);
-	}
-
 	*bo_ptr = bo;
 	return 0;
+
+err_put_objs:
+	virtio_gpu_array_put_free(objs);
+err_put_id:
+	virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
+err_free_gem:
+	kfree(bo);
+	return ret;
 }
 
 void virtio_gpu_object_kunmap(struct virtio_gpu_object *bo)
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index dbe329801e84..03b7e0845d98 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -395,6 +395,7 @@ static int virtio_gpu_queue_cursor(struct virtio_gpu_device *vgdev,
 void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev,
 				    struct virtio_gpu_object *bo,
 				    struct virtio_gpu_object_params *params,
+				    struct virtio_gpu_object_array *objs,
 				    struct virtio_gpu_fence *fence)
 {
 	struct virtio_gpu_resource_create_2d *cmd_p;
@@ -402,6 +403,7 @@ void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev,
 
 	cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
 	memset(cmd_p, 0, sizeof(*cmd_p));
+	vbuf->objs = objs;
 
 	cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_CREATE_2D);
 	cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle);
@@ -868,6 +870,7 @@ void
 virtio_gpu_cmd_resource_create_3d(struct virtio_gpu_device *vgdev,
 				  struct virtio_gpu_object *bo,
 				  struct virtio_gpu_object_params *params,
+				  struct virtio_gpu_object_array *objs,
 				  struct virtio_gpu_fence *fence)
 {
 	struct virtio_gpu_resource_create_3d *cmd_p;
@@ -875,6 +878,7 @@ virtio_gpu_cmd_resource_create_3d(struct virtio_gpu_device *vgdev,
 
 	cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
 	memset(cmd_p, 0, sizeof(*cmd_p));
+	vbuf->objs = objs;
 
 	cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_CREATE_3D);
 	cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle);
-- 
2.18.1



* [PATCH v6 10/18] drm/virtio: drop virtio_gpu_object_list_validate/virtio_gpu_unref_list
       [not found] <20190702141903.1131-1-kraxel@redhat.com>
                   ` (8 preceding siblings ...)
  2019-07-02 14:18 ` [PATCH v6 09/18] drm/virtio: rework virtio_gpu_object_create fencing Gerd Hoffmann
@ 2019-07-02 14:18 ` Gerd Hoffmann
  2019-07-02 14:18 ` [PATCH v6 11/18] drm/virtio: switch from ttm to gem shmem helpers Gerd Hoffmann
                   ` (7 subsequent siblings)
  17 siblings, 0 replies; 43+ messages in thread
From: Gerd Hoffmann @ 2019-07-02 14:18 UTC (permalink / raw)
  To: dri-devel
  Cc: olvaffe, gurchetansingh, Gerd Hoffmann, David Airlie,
	Daniel Vetter, open list:VIRTIO GPU DRIVER, open list

No users left.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 drivers/gpu/drm/virtio/virtgpu_drv.h   |  3 --
 drivers/gpu/drm/virtio/virtgpu_ioctl.c | 39 --------------------------
 2 files changed, 42 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index bc8cb9218732..12168067a874 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -223,9 +223,6 @@ struct virtio_gpu_fpriv {
 /* virtio_ioctl.c */
 #define DRM_VIRTIO_NUM_IOCTLS 10
 extern struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS];
-int virtio_gpu_object_list_validate(struct ww_acquire_ctx *ticket,
-				    struct list_head *head);
-void virtio_gpu_unref_list(struct list_head *head);
 
 /* virtio_kms.c */
 int virtio_gpu_init(struct drm_device *dev);
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index 9735d7e5899b..1fa8aa91d590 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -54,45 +54,6 @@ static int virtio_gpu_map_ioctl(struct drm_device *dev, void *data,
 					 &virtio_gpu_map->offset);
 }
 
-int virtio_gpu_object_list_validate(struct ww_acquire_ctx *ticket,
-				    struct list_head *head)
-{
-	struct ttm_operation_ctx ctx = { false, false };
-	struct ttm_validate_buffer *buf;
-	struct ttm_buffer_object *bo;
-	struct virtio_gpu_object *qobj;
-	int ret;
-
-	ret = ttm_eu_reserve_buffers(ticket, head, true, NULL, true);
-	if (ret != 0)
-		return ret;
-
-	list_for_each_entry(buf, head, head) {
-		bo = buf->bo;
-		qobj = container_of(bo, struct virtio_gpu_object, tbo);
-		ret = ttm_bo_validate(bo, &qobj->placement, &ctx);
-		if (ret) {
-			ttm_eu_backoff_reservation(ticket, head);
-			return ret;
-		}
-	}
-	return 0;
-}
-
-void virtio_gpu_unref_list(struct list_head *head)
-{
-	struct ttm_validate_buffer *buf;
-	struct ttm_buffer_object *bo;
-	struct virtio_gpu_object *qobj;
-
-	list_for_each_entry(buf, head, head) {
-		bo = buf->bo;
-		qobj = container_of(bo, struct virtio_gpu_object, tbo);
-
-		drm_gem_object_put_unlocked(&qobj->gem_base);
-	}
-}
-
 /*
  * Usage of execbuffer:
  * Relocations need to take into account the full VIRTIO_GPUDrawable size.
-- 
2.18.1



* [PATCH v6 11/18] drm/virtio: switch from ttm to gem shmem helpers
       [not found] <20190702141903.1131-1-kraxel@redhat.com>
                   ` (9 preceding siblings ...)
  2019-07-02 14:18 ` [PATCH v6 10/18] drm/virtio: drop virtio_gpu_object_list_validate/virtio_gpu_unref_list Gerd Hoffmann
@ 2019-07-02 14:18 ` Gerd Hoffmann
  2019-07-04 13:33   ` Emil Velikov
  2019-07-17  6:04   ` Chia-I Wu
  2019-07-02 14:18 ` [PATCH v6 12/18] drm/virtio: remove virtio_gpu_alloc_object Gerd Hoffmann
                   ` (6 subsequent siblings)
  17 siblings, 2 replies; 43+ messages in thread
From: Gerd Hoffmann @ 2019-07-02 14:18 UTC (permalink / raw)
  To: dri-devel
  Cc: olvaffe, gurchetansingh, Gerd Hoffmann, David Airlie,
	Daniel Vetter, open list, open list:VIRTIO GPU DRIVER

virtio-gpu basically needs an sg_table for the bo, to tell the host where
the backing pages for the object are.  So the gem shmem helpers are a
perfect fit.  Some drm_gem_object_funcs need thin wrappers to update the
host state, but otherwise the helpers handle everything just fine.

Once the fencing was sorted out, the switch was surprisingly easy and
for the most part amounted to removing the ttm code.
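
For the sg_table point, a minimal sketch (example_backing_sgt is a
made-up helper; it just leans on the shmem helper wired up below):

#include <drm/drm_gem_shmem_helper.h>
#include "virtgpu_drv.h"

/* the scatterlist describing the shmem backing pages is exactly what
 * gets reported to the host */
static struct sg_table *example_backing_sgt(struct virtio_gpu_object *bo)
{
	return drm_gem_shmem_get_sg_table(&bo->base.base);
}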

v4: fix drm_gem_object_funcs name.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 drivers/gpu/drm/virtio/virtgpu_drv.h    |  52 +---
 drivers/gpu/drm/virtio/virtgpu_drv.c    |  20 +-
 drivers/gpu/drm/virtio/virtgpu_gem.c    |  16 +-
 drivers/gpu/drm/virtio/virtgpu_ioctl.c  |  19 +-
 drivers/gpu/drm/virtio/virtgpu_kms.c    |   9 -
 drivers/gpu/drm/virtio/virtgpu_object.c | 146 ++++--------
 drivers/gpu/drm/virtio/virtgpu_prime.c  |  37 ---
 drivers/gpu/drm/virtio/virtgpu_ttm.c    | 304 ------------------------
 drivers/gpu/drm/virtio/virtgpu_vq.c     |  24 +-
 drivers/gpu/drm/virtio/Kconfig          |   2 +-
 drivers/gpu/drm/virtio/Makefile         |   2 +-
 11 files changed, 82 insertions(+), 549 deletions(-)
 delete mode 100644 drivers/gpu/drm/virtio/virtgpu_ttm.c

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 12168067a874..f8a586029400 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -33,14 +33,11 @@
 
 #include <drm/drmP.h>
 #include <drm/drm_gem.h>
+#include <drm/drm_gem_shmem_helper.h>
 #include <drm/drm_atomic.h>
 #include <drm/drm_encoder.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_probe_helper.h>
-#include <drm/ttm/ttm_bo_api.h>
-#include <drm/ttm/ttm_bo_driver.h>
-#include <drm/ttm/ttm_placement.h>
-#include <drm/ttm/ttm_module.h>
 
 #define DRIVER_NAME "virtio_gpu"
 #define DRIVER_DESC "virtio GPU"
@@ -68,21 +65,16 @@ struct virtio_gpu_object_params {
 };
 
 struct virtio_gpu_object {
-	struct drm_gem_object gem_base;
+	struct drm_gem_shmem_object base;
 	uint32_t hw_res_handle;
 
 	struct sg_table *pages;
 	uint32_t mapped;
-	void *vmap;
 	bool dumb;
-	struct ttm_place                placement_code;
-	struct ttm_placement		placement;
-	struct ttm_buffer_object	tbo;
-	struct ttm_bo_kmap_obj		kmap;
 	bool created;
 };
 #define gem_to_virtio_gpu_obj(gobj) \
-	container_of((gobj), struct virtio_gpu_object, gem_base)
+	container_of((gobj), struct virtio_gpu_object, base.base)
 
 struct virtio_gpu_object_array {
 	struct ww_acquire_ctx ticket;
@@ -153,10 +145,6 @@ struct virtio_gpu_framebuffer {
 #define to_virtio_gpu_framebuffer(x) \
 	container_of(x, struct virtio_gpu_framebuffer, base)
 
-struct virtio_gpu_mman {
-	struct ttm_bo_device		bdev;
-};
-
 struct virtio_gpu_queue {
 	struct virtqueue *vq;
 	spinlock_t qlock;
@@ -185,8 +173,6 @@ struct virtio_gpu_device {
 
 	struct virtio_device *vdev;
 
-	struct virtio_gpu_mman mman;
-
 	struct virtio_gpu_output outputs[VIRTIO_GPU_MAX_SCANOUTS];
 	uint32_t num_scanouts;
 
@@ -357,11 +343,6 @@ struct drm_plane *virtio_gpu_plane_init(struct virtio_gpu_device *vgdev,
 					enum drm_plane_type type,
 					int index);
 
-/* virtio_gpu_ttm.c */
-int virtio_gpu_ttm_init(struct virtio_gpu_device *vgdev);
-void virtio_gpu_ttm_fini(struct virtio_gpu_device *vgdev);
-int virtio_gpu_mmap(struct file *filp, struct vm_area_struct *vma);
-
 /* virtio_gpu_fence.c */
 bool virtio_fence_signaled(struct dma_fence *f);
 struct virtio_gpu_fence *virtio_gpu_fence_alloc(
@@ -373,58 +354,47 @@ void virtio_gpu_fence_event_process(struct virtio_gpu_device *vdev,
 				    u64 last_seq);
 
 /* virtio_gpu_object */
+struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev,
+						size_t size);
 int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 			     struct virtio_gpu_object_params *params,
 			     struct virtio_gpu_object **bo_ptr,
 			     struct virtio_gpu_fence *fence);
-void virtio_gpu_object_kunmap(struct virtio_gpu_object *bo);
-int virtio_gpu_object_kmap(struct virtio_gpu_object *bo);
-int virtio_gpu_object_get_sg_table(struct virtio_gpu_device *qdev,
-				   struct virtio_gpu_object *bo);
-void virtio_gpu_object_free_sg_table(struct virtio_gpu_object *bo);
 
 /* virtgpu_prime.c */
-struct sg_table *virtgpu_gem_prime_get_sg_table(struct drm_gem_object *obj);
 struct drm_gem_object *virtgpu_gem_prime_import_sg_table(
 	struct drm_device *dev, struct dma_buf_attachment *attach,
 	struct sg_table *sgt);
-void *virtgpu_gem_prime_vmap(struct drm_gem_object *obj);
-void virtgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
-int virtgpu_gem_prime_mmap(struct drm_gem_object *obj,
-			   struct vm_area_struct *vma);
 
 static inline struct virtio_gpu_object*
 virtio_gpu_object_ref(struct virtio_gpu_object *bo)
 {
-	ttm_bo_get(&bo->tbo);
+	drm_gem_object_get(&bo->base.base);
 	return bo;
 }
 
 static inline void virtio_gpu_object_unref(struct virtio_gpu_object **bo)
 {
-	struct ttm_buffer_object *tbo;
-
 	if ((*bo) == NULL)
 		return;
-	tbo = &((*bo)->tbo);
-	ttm_bo_put(tbo);
+	drm_gem_object_put(&(*bo)->base.base);
 	*bo = NULL;
 }
 
 static inline u64 virtio_gpu_object_mmap_offset(struct virtio_gpu_object *bo)
 {
-	return drm_vma_node_offset_addr(&bo->tbo.vma_node);
+	return drm_vma_node_offset_addr(&bo->base.base.vma_node);
 }
 
 static inline int virtio_gpu_object_reserve(struct virtio_gpu_object *bo)
 {
 	int r;
 
-	r = reservation_object_lock_interruptible(bo->gem_base.resv, NULL);
+	r = reservation_object_lock_interruptible(bo->base.base.resv, NULL);
 	if (unlikely(r != 0)) {
 		if (r != -EINTR) {
 			struct virtio_gpu_device *qdev =
-				bo->gem_base.dev->dev_private;
+				bo->base.base.dev->dev_private;
 			dev_err(qdev->dev, "%p reserve failed\n", bo);
 		}
 		return r;
@@ -434,7 +404,7 @@ static inline int virtio_gpu_object_reserve(struct virtio_gpu_object *bo)
 
 static inline void virtio_gpu_object_unreserve(struct virtio_gpu_object *bo)
 {
-	reservation_object_unlock(bo->gem_base.resv);
+	reservation_object_unlock(bo->base.base.resv);
 }
 
 /* virgl debufs */
diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.c b/drivers/gpu/drm/virtio/virtgpu_drv.c
index 99bcd290f1fb..8ce193e2800b 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.c
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.c
@@ -182,17 +182,7 @@ MODULE_AUTHOR("Dave Airlie <airlied@redhat.com>");
 MODULE_AUTHOR("Gerd Hoffmann <kraxel@redhat.com>");
 MODULE_AUTHOR("Alon Levy");
 
-static const struct file_operations virtio_gpu_driver_fops = {
-	.owner = THIS_MODULE,
-	.open = drm_open,
-	.mmap = virtio_gpu_mmap,
-	.poll = drm_poll,
-	.read = drm_read,
-	.unlocked_ioctl	= drm_ioctl,
-	.release = drm_release,
-	.compat_ioctl = drm_compat_ioctl,
-	.llseek = noop_llseek,
-};
+DEFINE_DRM_GEM_SHMEM_FOPS(virtio_gpu_driver_fops);
 
 static struct drm_driver driver = {
 	.driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_RENDER | DRIVER_ATOMIC,
@@ -207,15 +197,9 @@ static struct drm_driver driver = {
 #endif
 	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
 	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
-	.gem_prime_get_sg_table = virtgpu_gem_prime_get_sg_table,
 	.gem_prime_import_sg_table = virtgpu_gem_prime_import_sg_table,
-	.gem_prime_vmap = virtgpu_gem_prime_vmap,
-	.gem_prime_vunmap = virtgpu_gem_prime_vunmap,
-	.gem_prime_mmap = virtgpu_gem_prime_mmap,
 
-	.gem_free_object_unlocked = virtio_gpu_gem_free_object,
-	.gem_open_object = virtio_gpu_gem_object_open,
-	.gem_close_object = virtio_gpu_gem_object_close,
+	.gem_create_object = virtio_gpu_create_object,
 	.fops = &virtio_gpu_driver_fops,
 
 	.ioctls = virtio_gpu_ioctls,
diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
index e88df5e06d06..8a95864404ca 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -26,14 +26,6 @@
 #include <drm/drmP.h>
 #include "virtgpu_drv.h"
 
-void virtio_gpu_gem_free_object(struct drm_gem_object *gem_obj)
-{
-	struct virtio_gpu_object *obj = gem_to_virtio_gpu_obj(gem_obj);
-
-	if (obj)
-		virtio_gpu_object_unref(&obj);
-}
-
 struct virtio_gpu_object*
 virtio_gpu_alloc_object(struct drm_device *dev,
 			struct virtio_gpu_object_params *params,
@@ -64,16 +56,16 @@ int virtio_gpu_gem_create(struct drm_file *file,
 	if (IS_ERR(obj))
 		return PTR_ERR(obj);
 
-	ret = drm_gem_handle_create(file, &obj->gem_base, &handle);
+	ret = drm_gem_handle_create(file, &obj->base.base, &handle);
 	if (ret) {
-		drm_gem_object_release(&obj->gem_base);
+		drm_gem_object_release(&obj->base.base);
 		return ret;
 	}
 
-	*obj_p = &obj->gem_base;
+	*obj_p = &obj->base.base;
 
 	/* drop reference from allocate - handle holds it now */
-	drm_gem_object_put_unlocked(&obj->gem_base);
+	drm_gem_object_put_unlocked(&obj->base.base);
 
 	*handle_p = handle;
 	return 0;
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index 1fa8aa91d590..107057816e1f 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -27,7 +27,6 @@
 
 #include <drm/drmP.h>
 #include <drm/virtgpu_drm.h>
-#include <drm/ttm/ttm_execbuf_util.h>
 #include <linux/sync_file.h>
 
 #include "virtgpu_drv.h"
@@ -259,7 +258,7 @@ static int virtio_gpu_resource_create_ioctl(struct drm_device *dev, void *data,
 	dma_fence_put(&fence->f);
 	if (IS_ERR(qobj))
 		return PTR_ERR(qobj);
-	obj = &qobj->gem_base;
+	obj = &qobj->base.base;
 
 	ret = drm_gem_handle_create(file_priv, obj, &handle);
 	if (ret) {
@@ -286,7 +285,7 @@ static int virtio_gpu_resource_info_ioctl(struct drm_device *dev, void *data,
 
 	qobj = gem_to_virtio_gpu_obj(gobj);
 
-	ri->size = qobj->gem_base.size;
+	ri->size = qobj->base.base.size;
 	ri->res_handle = qobj->hw_res_handle;
 	drm_gem_object_put_unlocked(gobj);
 	return 0;
@@ -299,7 +298,6 @@ static int virtio_gpu_transfer_from_host_ioctl(struct drm_device *dev,
 	struct virtio_gpu_device *vgdev = dev->dev_private;
 	struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
 	struct drm_virtgpu_3d_transfer_from_host *args = data;
-	struct ttm_operation_ctx ctx = { true, false };
 	struct drm_gem_object *gobj = NULL;
 	struct virtio_gpu_object *qobj = NULL;
 	struct virtio_gpu_fence *fence;
@@ -320,10 +318,6 @@ static int virtio_gpu_transfer_from_host_ioctl(struct drm_device *dev,
 	if (ret)
 		goto out;
 
-	ret = ttm_bo_validate(&qobj->tbo, &qobj->placement, &ctx);
-	if (unlikely(ret))
-		goto out_unres;
-
 	convert_to_hw_box(&box, &args->box);
 
 	fence = virtio_gpu_fence_alloc(vgdev);
@@ -335,7 +329,7 @@ static int virtio_gpu_transfer_from_host_ioctl(struct drm_device *dev,
 		(vgdev, qobj->hw_res_handle,
 		 vfpriv->ctx_id, offset, args->level,
 		 &box, fence);
-	reservation_object_add_excl_fence(qobj->tbo.resv,
+	reservation_object_add_excl_fence(qobj->base.base.resv,
 					  &fence->f);
 
 	dma_fence_put(&fence->f);
@@ -352,7 +346,6 @@ static int virtio_gpu_transfer_to_host_ioctl(struct drm_device *dev, void *data,
 	struct virtio_gpu_device *vgdev = dev->dev_private;
 	struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
 	struct drm_virtgpu_3d_transfer_to_host *args = data;
-	struct ttm_operation_ctx ctx = { true, false };
 	struct drm_gem_object *gobj = NULL;
 	struct virtio_gpu_object *qobj = NULL;
 	struct virtio_gpu_fence *fence;
@@ -370,10 +363,6 @@ static int virtio_gpu_transfer_to_host_ioctl(struct drm_device *dev, void *data,
 	if (ret)
 		goto out;
 
-	ret = ttm_bo_validate(&qobj->tbo, &qobj->placement, &ctx);
-	if (unlikely(ret))
-		goto out_unres;
-
 	convert_to_hw_box(&box, &args->box);
 	if (!vgdev->has_virgl_3d) {
 		virtio_gpu_cmd_transfer_to_host_2d
@@ -389,7 +378,7 @@ static int virtio_gpu_transfer_to_host_ioctl(struct drm_device *dev, void *data,
 			(vgdev, qobj,
 			 vfpriv ? vfpriv->ctx_id : 0, offset,
 			 args->level, &box, fence);
-		reservation_object_add_excl_fence(qobj->tbo.resv,
+		reservation_object_add_excl_fence(qobj->base.base.resv,
 						  &fence->f);
 		dma_fence_put(&fence->f);
 	}
diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c
index 84b6a6bf00c6..0bc6abaeafca 100644
--- a/drivers/gpu/drm/virtio/virtgpu_kms.c
+++ b/drivers/gpu/drm/virtio/virtgpu_kms.c
@@ -171,12 +171,6 @@ int virtio_gpu_init(struct drm_device *dev)
 		goto err_vbufs;
 	}
 
-	ret = virtio_gpu_ttm_init(vgdev);
-	if (ret) {
-		DRM_ERROR("failed to init ttm %d\n", ret);
-		goto err_ttm;
-	}
-
 	/* get display info */
 	virtio_cread(vgdev->vdev, struct virtio_gpu_config,
 		     num_scanouts, &num_scanouts);
@@ -208,8 +202,6 @@ int virtio_gpu_init(struct drm_device *dev)
 	return 0;
 
 err_scanouts:
-	virtio_gpu_ttm_fini(vgdev);
-err_ttm:
 	virtio_gpu_free_vbufs(vgdev);
 err_vbufs:
 	vgdev->vdev->config->del_vqs(vgdev->vdev);
@@ -240,7 +232,6 @@ void virtio_gpu_deinit(struct drm_device *dev)
 	vgdev->vdev->config->del_vqs(vgdev->vdev);
 
 	virtio_gpu_modeset_fini(vgdev);
-	virtio_gpu_ttm_fini(vgdev);
 	virtio_gpu_free_vbufs(vgdev);
 	virtio_gpu_cleanup_cap_cache(vgdev);
 	kfree(vgdev->capsets);
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 2902487f051a..fb9d05a68f4c 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -23,8 +23,6 @@
  * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
  */
 
-#include <drm/ttm/ttm_execbuf_util.h>
-
 #include "virtgpu_drv.h"
 
 static int virtio_gpu_resource_id_get(struct virtio_gpu_device *vgdev,
@@ -57,39 +55,45 @@ static void virtio_gpu_resource_id_put(struct virtio_gpu_device *vgdev, uint32_t
 #endif
 }
 
-static void virtio_gpu_ttm_bo_destroy(struct ttm_buffer_object *tbo)
+static void virtio_gpu_free_object(struct drm_gem_object *obj)
 {
-	struct virtio_gpu_object *bo;
-	struct virtio_gpu_device *vgdev;
-
-	bo = container_of(tbo, struct virtio_gpu_object, tbo);
-	vgdev = (struct virtio_gpu_device *)bo->gem_base.dev->dev_private;
+	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
+	struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private;
 
+	if (bo->pages)
+		virtio_gpu_object_detach(vgdev, bo);
 	if (bo->created)
 		virtio_gpu_cmd_unref_resource(vgdev, bo->hw_res_handle);
-	if (bo->pages)
-		virtio_gpu_object_free_sg_table(bo);
-	if (bo->vmap)
-		virtio_gpu_object_kunmap(bo);
-	drm_gem_object_release(&bo->gem_base);
 	virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
-	kfree(bo);
+
+	drm_gem_shmem_free_object(obj);
 }
 
-static void virtio_gpu_init_ttm_placement(struct virtio_gpu_object *vgbo)
+static const struct drm_gem_object_funcs virtio_gpu_gem_funcs = {
+	.free = virtio_gpu_free_object,
+	.open = virtio_gpu_gem_object_open,
+	.close = virtio_gpu_gem_object_close,
+
+	.print_info = drm_gem_shmem_print_info,
+	.pin = drm_gem_shmem_pin,
+	.unpin = drm_gem_shmem_unpin,
+	.get_sg_table = drm_gem_shmem_get_sg_table,
+	.vmap = drm_gem_shmem_vmap,
+	.vunmap = drm_gem_shmem_vunmap,
+	.vm_ops = &drm_gem_shmem_vm_ops,
+};
+
+struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev,
+						size_t size)
 {
-	u32 c = 1;
+	struct virtio_gpu_object *bo;
 
-	vgbo->placement.placement = &vgbo->placement_code;
-	vgbo->placement.busy_placement = &vgbo->placement_code;
-	vgbo->placement_code.fpfn = 0;
-	vgbo->placement_code.lpfn = 0;
-	vgbo->placement_code.flags =
-		TTM_PL_MASK_CACHING | TTM_PL_FLAG_TT |
-		TTM_PL_FLAG_NO_EVICT;
-	vgbo->placement.num_placement = c;
-	vgbo->placement.num_busy_placement = c;
+	bo = kzalloc(sizeof(*bo), GFP_KERNEL);
+	if (!bo)
+		return NULL;
 
+	bo->base.base.funcs = &virtio_gpu_gem_funcs;
+	return &bo->base.base;
 }
 
 int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
@@ -98,27 +102,22 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 			     struct virtio_gpu_fence *fence)
 {
 	struct virtio_gpu_object_array *objs = NULL;
+	struct drm_gem_shmem_object *shmem_obj;
 	struct virtio_gpu_object *bo;
-	size_t acc_size;
 	int ret;
 
 	*bo_ptr = NULL;
 
-	acc_size = ttm_bo_dma_acc_size(&vgdev->mman.bdev, params->size,
-				       sizeof(struct virtio_gpu_object));
+	params->size = roundup(params->size, PAGE_SIZE);
+	shmem_obj = drm_gem_shmem_create(vgdev->ddev, params->size);
+	if (IS_ERR(shmem_obj))
+		return PTR_ERR(shmem_obj);
+	bo = gem_to_virtio_gpu_obj(&shmem_obj->base);
 
-	bo = kzalloc(sizeof(struct virtio_gpu_object), GFP_KERNEL);
-	if (bo == NULL)
-		return -ENOMEM;
 	ret = virtio_gpu_resource_id_get(vgdev, &bo->hw_res_handle);
 	if (ret < 0)
 		goto err_free_gem;
 
-	params->size = roundup(params->size, PAGE_SIZE);
-	ret = drm_gem_object_init(vgdev->ddev, &bo->gem_base, params->size);
-	if (ret != 0)
-		goto err_put_id;
-
 	bo->dumb = params->dumb;
 
 	if (fence) {
@@ -126,7 +125,7 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 		objs = virtio_gpu_array_alloc(1);
 		if (!objs)
 			goto err_put_id;
-		virtio_gpu_array_add_obj(objs, &bo->gem_base);
+		virtio_gpu_array_add_obj(objs, &bo->base.base);
 
 		ret = virtio_gpu_array_lock_resv(objs);
 		if (ret != 0)
@@ -141,15 +140,11 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 					       objs, fence);
 	}
 
-	virtio_gpu_init_ttm_placement(bo);
-	ret = ttm_bo_init(&vgdev->mman.bdev, &bo->tbo, params->size,
-			  ttm_bo_type_device, &bo->placement, 0,
-			  true, acc_size, NULL,
-			  bo->gem_base.resv,
-			  &virtio_gpu_ttm_bo_destroy);
-	/* ttm_bo_init failure will call the destroy */
-	if (ret != 0)
+	ret = virtio_gpu_object_attach(vgdev, bo, NULL);
+	if (ret != 0) {
+		virtio_gpu_free_object(&shmem_obj->base);
 		return ret;
+	}
 
 	*bo_ptr = bo;
 	return 0;
@@ -159,65 +154,6 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 err_put_id:
 	virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
 err_free_gem:
-	kfree(bo);
+	drm_gem_shmem_free_object(&shmem_obj->base);
 	return ret;
 }
-
-void virtio_gpu_object_kunmap(struct virtio_gpu_object *bo)
-{
-	bo->vmap = NULL;
-	ttm_bo_kunmap(&bo->kmap);
-}
-
-int virtio_gpu_object_kmap(struct virtio_gpu_object *bo)
-{
-	bool is_iomem;
-	int r;
-
-	WARN_ON(bo->vmap);
-
-	r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap);
-	if (r)
-		return r;
-	bo->vmap = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
-	return 0;
-}
-
-int virtio_gpu_object_get_sg_table(struct virtio_gpu_device *qdev,
-				   struct virtio_gpu_object *bo)
-{
-	int ret;
-	struct page **pages = bo->tbo.ttm->pages;
-	int nr_pages = bo->tbo.num_pages;
-	struct ttm_operation_ctx ctx = {
-		.interruptible = false,
-		.no_wait_gpu = false
-	};
-
-	/* wtf swapping */
-	if (bo->pages)
-		return 0;
-
-	if (bo->tbo.ttm->state == tt_unpopulated)
-		bo->tbo.ttm->bdev->driver->ttm_tt_populate(bo->tbo.ttm, &ctx);
-	bo->pages = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
-	if (!bo->pages)
-		goto out;
-
-	ret = sg_alloc_table_from_pages(bo->pages, pages, nr_pages, 0,
-					nr_pages << PAGE_SHIFT, GFP_KERNEL);
-	if (ret)
-		goto out;
-	return 0;
-out:
-	kfree(bo->pages);
-	bo->pages = NULL;
-	return -ENOMEM;
-}
-
-void virtio_gpu_object_free_sg_table(struct virtio_gpu_object *bo)
-{
-	sg_free_table(bo->pages);
-	kfree(bo->pages);
-	bo->pages = NULL;
-}
diff --git a/drivers/gpu/drm/virtio/virtgpu_prime.c b/drivers/gpu/drm/virtio/virtgpu_prime.c
index 8fbf71bd0c5e..18a155cd08d5 100644
--- a/drivers/gpu/drm/virtio/virtgpu_prime.c
+++ b/drivers/gpu/drm/virtio/virtgpu_prime.c
@@ -28,46 +28,9 @@
  * device that might share buffers with virtgpu
  */
 
-struct sg_table *virtgpu_gem_prime_get_sg_table(struct drm_gem_object *obj)
-{
-	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
-
-	if (!bo->tbo.ttm->pages || !bo->tbo.ttm->num_pages)
-		/* should not happen */
-		return ERR_PTR(-EINVAL);
-
-	return drm_prime_pages_to_sg(bo->tbo.ttm->pages,
-				     bo->tbo.ttm->num_pages);
-}
-
 struct drm_gem_object *virtgpu_gem_prime_import_sg_table(
 	struct drm_device *dev, struct dma_buf_attachment *attach,
 	struct sg_table *table)
 {
 	return ERR_PTR(-ENODEV);
 }
-
-void *virtgpu_gem_prime_vmap(struct drm_gem_object *obj)
-{
-	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
-	int ret;
-
-	ret = virtio_gpu_object_kmap(bo);
-	if (ret)
-		return NULL;
-	return bo->vmap;
-}
-
-void virtgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
-	virtio_gpu_object_kunmap(gem_to_virtio_gpu_obj(obj));
-}
-
-int virtgpu_gem_prime_mmap(struct drm_gem_object *obj,
-			   struct vm_area_struct *vma)
-{
-	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
-
-	bo->gem_base.vma_node.vm_node.start = bo->tbo.vma_node.vm_node.start;
-	return drm_gem_prime_mmap(obj, vma);
-}
diff --git a/drivers/gpu/drm/virtio/virtgpu_ttm.c b/drivers/gpu/drm/virtio/virtgpu_ttm.c
deleted file mode 100644
index 300ef3a83538..000000000000
--- a/drivers/gpu/drm/virtio/virtgpu_ttm.c
+++ /dev/null
@@ -1,304 +0,0 @@
-/*
- * Copyright (C) 2015 Red Hat, Inc.
- * All Rights Reserved.
- *
- * Authors:
- *    Dave Airlie
- *    Alon Levy
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
- * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
- * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
- * OTHER DEALINGS IN THE SOFTWARE.
- */
-
-#include <drm/ttm/ttm_bo_api.h>
-#include <drm/ttm/ttm_bo_driver.h>
-#include <drm/ttm/ttm_placement.h>
-#include <drm/ttm/ttm_page_alloc.h>
-#include <drm/ttm/ttm_module.h>
-#include <drm/drmP.h>
-#include <drm/drm.h>
-#include <drm/virtgpu_drm.h>
-#include "virtgpu_drv.h"
-
-#include <linux/delay.h>
-
-static struct
-virtio_gpu_device *virtio_gpu_get_vgdev(struct ttm_bo_device *bdev)
-{
-	struct virtio_gpu_mman *mman;
-	struct virtio_gpu_device *vgdev;
-
-	mman = container_of(bdev, struct virtio_gpu_mman, bdev);
-	vgdev = container_of(mman, struct virtio_gpu_device, mman);
-	return vgdev;
-}
-
-int virtio_gpu_mmap(struct file *filp, struct vm_area_struct *vma)
-{
-	struct drm_file *file_priv;
-	struct virtio_gpu_device *vgdev;
-	int r;
-
-	file_priv = filp->private_data;
-	vgdev = file_priv->minor->dev->dev_private;
-	if (vgdev == NULL) {
-		DRM_ERROR(
-		 "filp->private_data->minor->dev->dev_private == NULL\n");
-		return -EINVAL;
-	}
-	r = ttm_bo_mmap(filp, vma, &vgdev->mman.bdev);
-
-	return r;
-}
-
-static int virtio_gpu_invalidate_caches(struct ttm_bo_device *bdev,
-					uint32_t flags)
-{
-	return 0;
-}
-
-static int ttm_bo_man_get_node(struct ttm_mem_type_manager *man,
-			       struct ttm_buffer_object *bo,
-			       const struct ttm_place *place,
-			       struct ttm_mem_reg *mem)
-{
-	mem->mm_node = (void *)1;
-	return 0;
-}
-
-static void ttm_bo_man_put_node(struct ttm_mem_type_manager *man,
-				struct ttm_mem_reg *mem)
-{
-	mem->mm_node = (void *)NULL;
-}
-
-static int ttm_bo_man_init(struct ttm_mem_type_manager *man,
-			   unsigned long p_size)
-{
-	return 0;
-}
-
-static int ttm_bo_man_takedown(struct ttm_mem_type_manager *man)
-{
-	return 0;
-}
-
-static void ttm_bo_man_debug(struct ttm_mem_type_manager *man,
-			     struct drm_printer *printer)
-{
-}
-
-static const struct ttm_mem_type_manager_func virtio_gpu_bo_manager_func = {
-	.init = ttm_bo_man_init,
-	.takedown = ttm_bo_man_takedown,
-	.get_node = ttm_bo_man_get_node,
-	.put_node = ttm_bo_man_put_node,
-	.debug = ttm_bo_man_debug
-};
-
-static int virtio_gpu_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
-				    struct ttm_mem_type_manager *man)
-{
-	switch (type) {
-	case TTM_PL_SYSTEM:
-		/* System memory */
-		man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
-		man->available_caching = TTM_PL_MASK_CACHING;
-		man->default_caching = TTM_PL_FLAG_CACHED;
-		break;
-	case TTM_PL_TT:
-		man->func = &virtio_gpu_bo_manager_func;
-		man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
-		man->available_caching = TTM_PL_MASK_CACHING;
-		man->default_caching = TTM_PL_FLAG_CACHED;
-		break;
-	default:
-		DRM_ERROR("Unsupported memory type %u\n", (unsigned int)type);
-		return -EINVAL;
-	}
-	return 0;
-}
-
-static void virtio_gpu_evict_flags(struct ttm_buffer_object *bo,
-				struct ttm_placement *placement)
-{
-	static const struct ttm_place placements = {
-		.fpfn  = 0,
-		.lpfn  = 0,
-		.flags = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM,
-	};
-
-	placement->placement = &placements;
-	placement->busy_placement = &placements;
-	placement->num_placement = 1;
-	placement->num_busy_placement = 1;
-}
-
-static int virtio_gpu_verify_access(struct ttm_buffer_object *bo,
-				    struct file *filp)
-{
-	return 0;
-}
-
-static int virtio_gpu_ttm_io_mem_reserve(struct ttm_bo_device *bdev,
-					 struct ttm_mem_reg *mem)
-{
-	struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type];
-
-	mem->bus.addr = NULL;
-	mem->bus.offset = 0;
-	mem->bus.size = mem->num_pages << PAGE_SHIFT;
-	mem->bus.base = 0;
-	mem->bus.is_iomem = false;
-	if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE))
-		return -EINVAL;
-	switch (mem->mem_type) {
-	case TTM_PL_SYSTEM:
-	case TTM_PL_TT:
-		/* system memory */
-		return 0;
-	default:
-		return -EINVAL;
-	}
-	return 0;
-}
-
-static void virtio_gpu_ttm_io_mem_free(struct ttm_bo_device *bdev,
-				       struct ttm_mem_reg *mem)
-{
-}
-
-/*
- * TTM backend functions.
- */
-struct virtio_gpu_ttm_tt {
-	struct ttm_dma_tt		ttm;
-	struct virtio_gpu_object        *obj;
-};
-
-static int virtio_gpu_ttm_tt_bind(struct ttm_tt *ttm,
-				  struct ttm_mem_reg *bo_mem)
-{
-	struct virtio_gpu_ttm_tt *gtt =
-		container_of(ttm, struct virtio_gpu_ttm_tt, ttm.ttm);
-	struct virtio_gpu_device *vgdev =
-		virtio_gpu_get_vgdev(gtt->obj->tbo.bdev);
-
-	virtio_gpu_object_attach(vgdev, gtt->obj, NULL);
-	return 0;
-}
-
-static int virtio_gpu_ttm_tt_unbind(struct ttm_tt *ttm)
-{
-	struct virtio_gpu_ttm_tt *gtt =
-		container_of(ttm, struct virtio_gpu_ttm_tt, ttm.ttm);
-	struct virtio_gpu_device *vgdev =
-		virtio_gpu_get_vgdev(gtt->obj->tbo.bdev);
-
-	virtio_gpu_object_detach(vgdev, gtt->obj);
-	return 0;
-}
-
-static void virtio_gpu_ttm_tt_destroy(struct ttm_tt *ttm)
-{
-	struct virtio_gpu_ttm_tt *gtt =
-		container_of(ttm, struct virtio_gpu_ttm_tt, ttm.ttm);
-
-	ttm_dma_tt_fini(&gtt->ttm);
-	kfree(gtt);
-}
-
-static struct ttm_backend_func virtio_gpu_tt_func = {
-	.bind = &virtio_gpu_ttm_tt_bind,
-	.unbind = &virtio_gpu_ttm_tt_unbind,
-	.destroy = &virtio_gpu_ttm_tt_destroy,
-};
-
-static struct ttm_tt *virtio_gpu_ttm_tt_create(struct ttm_buffer_object *bo,
-					       uint32_t page_flags)
-{
-	struct virtio_gpu_device *vgdev;
-	struct virtio_gpu_ttm_tt *gtt;
-
-	vgdev = virtio_gpu_get_vgdev(bo->bdev);
-	gtt = kzalloc(sizeof(struct virtio_gpu_ttm_tt), GFP_KERNEL);
-	if (gtt == NULL)
-		return NULL;
-	gtt->ttm.ttm.func = &virtio_gpu_tt_func;
-	gtt->obj = container_of(bo, struct virtio_gpu_object, tbo);
-	if (ttm_dma_tt_init(&gtt->ttm, bo, page_flags)) {
-		kfree(gtt);
-		return NULL;
-	}
-	return &gtt->ttm.ttm;
-}
-
-static void virtio_gpu_bo_swap_notify(struct ttm_buffer_object *tbo)
-{
-	struct virtio_gpu_object *bo;
-
-	bo = container_of(tbo, struct virtio_gpu_object, tbo);
-
-	if (bo->pages)
-		virtio_gpu_object_free_sg_table(bo);
-}
-
-static struct ttm_bo_driver virtio_gpu_bo_driver = {
-	.ttm_tt_create = &virtio_gpu_ttm_tt_create,
-	.invalidate_caches = &virtio_gpu_invalidate_caches,
-	.init_mem_type = &virtio_gpu_init_mem_type,
-	.eviction_valuable = ttm_bo_eviction_valuable,
-	.evict_flags = &virtio_gpu_evict_flags,
-	.verify_access = &virtio_gpu_verify_access,
-	.io_mem_reserve = &virtio_gpu_ttm_io_mem_reserve,
-	.io_mem_free = &virtio_gpu_ttm_io_mem_free,
-	.swap_notify = &virtio_gpu_bo_swap_notify,
-};
-
-int virtio_gpu_ttm_init(struct virtio_gpu_device *vgdev)
-{
-	int r;
-
-	/* No others user of address space so set it to 0 */
-	r = ttm_bo_device_init(&vgdev->mman.bdev,
-			       &virtio_gpu_bo_driver,
-			       vgdev->ddev->anon_inode->i_mapping,
-			       false);
-	if (r) {
-		DRM_ERROR("failed initializing buffer object driver(%d).\n", r);
-		goto err_dev_init;
-	}
-
-	r = ttm_bo_init_mm(&vgdev->mman.bdev, TTM_PL_TT, 0);
-	if (r) {
-		DRM_ERROR("Failed initializing GTT heap.\n");
-		goto err_mm_init;
-	}
-	return 0;
-
-err_mm_init:
-	ttm_bo_device_release(&vgdev->mman.bdev);
-err_dev_init:
-	return r;
-}
-
-void virtio_gpu_ttm_fini(struct virtio_gpu_device *vgdev)
-{
-	ttm_bo_device_release(&vgdev->mman.bdev);
-	DRM_INFO("virtio_gpu: ttm finalized\n");
-}
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index 03b7e0845d98..fc908d5cb217 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -979,17 +979,21 @@ int virtio_gpu_object_attach(struct virtio_gpu_device *vgdev,
 	bool use_dma_api = !virtio_has_iommu_quirk(vgdev->vdev);
 	struct virtio_gpu_mem_entry *ents;
 	struct scatterlist *sg;
-	int si, nents;
+	int si, nents, ret;
 
 	if (WARN_ON_ONCE(!obj->created))
 		return -EINVAL;
+	if (WARN_ON_ONCE(obj->pages))
+		return -EINVAL;
 
-	if (!obj->pages) {
-		int ret;
+	ret = drm_gem_shmem_pin(&obj->base.base);
+	if (ret < 0)
+		return -EINVAL;
 
-		ret = virtio_gpu_object_get_sg_table(vgdev, obj);
-		if (ret)
-			return ret;
+	obj->pages = drm_gem_shmem_get_sg_table(&obj->base.base);
+	if (obj->pages == NULL) {
+		drm_gem_shmem_unpin(&obj->base.base);
+		return -EINVAL;
 	}
 
 	if (use_dma_api) {
@@ -1028,6 +1032,9 @@ void virtio_gpu_object_detach(struct virtio_gpu_device *vgdev,
 {
 	bool use_dma_api = !virtio_has_iommu_quirk(vgdev->vdev);
 
+	if (WARN_ON_ONCE(!obj->pages))
+		return;
+
 	if (use_dma_api && obj->mapped) {
 		struct virtio_gpu_fence *fence = virtio_gpu_fence_alloc(vgdev);
 		/* detach backing and wait for the host process it ... */
@@ -1043,6 +1050,11 @@ void virtio_gpu_object_detach(struct virtio_gpu_device *vgdev,
 	} else {
 		virtio_gpu_cmd_resource_inval_backing(vgdev, obj->hw_res_handle, NULL);
 	}
+
+	sg_free_table(obj->pages);
+	obj->pages = NULL;
+
+	drm_gem_shmem_unpin(&obj->base.base);
 }
 
 void virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev,
diff --git a/drivers/gpu/drm/virtio/Kconfig b/drivers/gpu/drm/virtio/Kconfig
index ba36e933bb49..eff3047052d4 100644
--- a/drivers/gpu/drm/virtio/Kconfig
+++ b/drivers/gpu/drm/virtio/Kconfig
@@ -3,7 +3,7 @@ config DRM_VIRTIO_GPU
 	tristate "Virtio GPU driver"
 	depends on DRM && VIRTIO && MMU
 	select DRM_KMS_HELPER
-	select DRM_TTM
+	select DRM_GEM_SHMEM_HELPER
 	help
 	   This is the virtual GPU driver for virtio.  It can be used with
 	   QEMU based VMMs (like KVM or Xen).
diff --git a/drivers/gpu/drm/virtio/Makefile b/drivers/gpu/drm/virtio/Makefile
index 458e606a936f..92aa2b3d349d 100644
--- a/drivers/gpu/drm/virtio/Makefile
+++ b/drivers/gpu/drm/virtio/Makefile
@@ -4,7 +4,7 @@
 # Direct Rendering Infrastructure (DRI) in XFree86 4.1.0 and higher.
 
 virtio-gpu-y := virtgpu_drv.o virtgpu_kms.o virtgpu_gem.o \
-	virtgpu_display.o virtgpu_vq.o virtgpu_ttm.o \
+	virtgpu_display.o virtgpu_vq.o \
 	virtgpu_fence.o virtgpu_object.o virtgpu_debugfs.o virtgpu_plane.o \
 	virtgpu_ioctl.o virtgpu_prime.o virtgpu_trace_points.o
 
-- 
2.18.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v6 12/18] drm/virtio: remove virtio_gpu_alloc_object
       [not found] <20190702141903.1131-1-kraxel@redhat.com>
                   ` (10 preceding siblings ...)
  2019-07-02 14:18 ` [PATCH v6 11/18] drm/virtio: switch from ttm to gem shmem helpers Gerd Hoffmann
@ 2019-07-02 14:18 ` Gerd Hoffmann
  2019-07-02 14:18 ` [PATCH v6 13/18] drm/virtio: drop virtio_gpu_object_{ref,unref} Gerd Hoffmann
                   ` (5 subsequent siblings)
  17 siblings, 0 replies; 43+ messages in thread
From: Gerd Hoffmann @ 2019-07-02 14:18 UTC (permalink / raw)
  To: dri-devel
  Cc: olvaffe, gurchetansingh, Gerd Hoffmann, David Airlie,
	Daniel Vetter, open list:VIRTIO GPU DRIVER, open list

virtio_gpu_alloc_object() is just a thin wrapper around
virtio_gpu_object_create(); calling that directly works equally well.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 drivers/gpu/drm/virtio/virtgpu_drv.h   |  4 ----
 drivers/gpu/drm/virtio/virtgpu_gem.c   | 23 ++++-------------------
 drivers/gpu/drm/virtio/virtgpu_ioctl.c |  6 +++---
 3 files changed, 7 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index f8a586029400..577a8103670e 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -229,10 +229,6 @@ int virtio_gpu_gem_object_open(struct drm_gem_object *obj,
 			       struct drm_file *file);
 void virtio_gpu_gem_object_close(struct drm_gem_object *obj,
 				 struct drm_file *file);
-struct virtio_gpu_object*
-virtio_gpu_alloc_object(struct drm_device *dev,
-			struct virtio_gpu_object_params *params,
-			struct virtio_gpu_fence *fence);
 int virtio_gpu_mode_dumb_create(struct drm_file *file_priv,
 				struct drm_device *dev,
 				struct drm_mode_create_dumb *args);
diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
index 8a95864404ca..6baf64141645 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -26,35 +26,20 @@
 #include <drm/drmP.h>
 #include "virtgpu_drv.h"
 
-struct virtio_gpu_object*
-virtio_gpu_alloc_object(struct drm_device *dev,
-			struct virtio_gpu_object_params *params,
-			struct virtio_gpu_fence *fence)
-{
-	struct virtio_gpu_device *vgdev = dev->dev_private;
-	struct virtio_gpu_object *obj;
-	int ret;
-
-	ret = virtio_gpu_object_create(vgdev, params, &obj, fence);
-	if (ret)
-		return ERR_PTR(ret);
-
-	return obj;
-}
-
 int virtio_gpu_gem_create(struct drm_file *file,
 			  struct drm_device *dev,
 			  struct virtio_gpu_object_params *params,
 			  struct drm_gem_object **obj_p,
 			  uint32_t *handle_p)
 {
+	struct virtio_gpu_device *vgdev = dev->dev_private;
 	struct virtio_gpu_object *obj;
 	int ret;
 	u32 handle;
 
-	obj = virtio_gpu_alloc_object(dev, params, NULL);
-	if (IS_ERR(obj))
-		return PTR_ERR(obj);
+	ret = virtio_gpu_object_create(vgdev, params, &obj, NULL);
+	if (ret < 0)
+		return ret;
 
 	ret = drm_gem_handle_create(file, &obj->base.base, &handle);
 	if (ret) {
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index 107057816e1f..0d0acf0b85ed 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -254,10 +254,10 @@ static int virtio_gpu_resource_create_ioctl(struct drm_device *dev, void *data,
 	fence = virtio_gpu_fence_alloc(vgdev);
 	if (!fence)
 		return -ENOMEM;
-	qobj = virtio_gpu_alloc_object(dev, &params, fence);
+	ret = virtio_gpu_object_create(vgdev, &params, &qobj, fence);
 	dma_fence_put(&fence->f);
-	if (IS_ERR(qobj))
-		return PTR_ERR(qobj);
+	if (ret < 0)
+		return ret;
 	obj = &qobj->base.base;
 
 	ret = drm_gem_handle_create(file_priv, obj, &handle);
-- 
2.18.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v6 13/18] drm/virtio: drop virtio_gpu_object_{ref,unref}
       [not found] <20190702141903.1131-1-kraxel@redhat.com>
                   ` (11 preceding siblings ...)
  2019-07-02 14:18 ` [PATCH v6 12/18] drm/virtio: remove virtio_gpu_alloc_object Gerd Hoffmann
@ 2019-07-02 14:18 ` Gerd Hoffmann
  2019-07-02 14:18 ` [PATCH v6 14/18] drm/virtio: rework virtio_gpu_transfer_from_host_ioctl fencing Gerd Hoffmann
                   ` (4 subsequent siblings)
  17 siblings, 0 replies; 43+ messages in thread
From: Gerd Hoffmann @ 2019-07-02 14:18 UTC (permalink / raw)
  To: dri-devel
  Cc: olvaffe, gurchetansingh, Gerd Hoffmann, David Airlie,
	Daniel Vetter, open list:VIRTIO GPU DRIVER, open list

No users left.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 drivers/gpu/drm/virtio/virtgpu_drv.h | 15 ---------------
 1 file changed, 15 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 577a8103670e..78dc5a19a358 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -362,21 +362,6 @@ struct drm_gem_object *virtgpu_gem_prime_import_sg_table(
 	struct drm_device *dev, struct dma_buf_attachment *attach,
 	struct sg_table *sgt);
 
-static inline struct virtio_gpu_object*
-virtio_gpu_object_ref(struct virtio_gpu_object *bo)
-{
-	drm_gem_object_get(&bo->base.base);
-	return bo;
-}
-
-static inline void virtio_gpu_object_unref(struct virtio_gpu_object **bo)
-{
-	if ((*bo) == NULL)
-		return;
-	drm_gem_object_put(&(*bo)->base.base);
-	*bo = NULL;
-}
-
 static inline u64 virtio_gpu_object_mmap_offset(struct virtio_gpu_object *bo)
 {
 	return drm_vma_node_offset_addr(&bo->base.base.vma_node);
-- 
2.18.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v6 14/18] drm/virtio: rework virtio_gpu_transfer_from_host_ioctl fencing
       [not found] <20190702141903.1131-1-kraxel@redhat.com>
                   ` (12 preceding siblings ...)
  2019-07-02 14:18 ` [PATCH v6 13/18] drm/virtio: drop virtio_gpu_object_{ref,unref} Gerd Hoffmann
@ 2019-07-02 14:18 ` Gerd Hoffmann
  2019-07-03 20:05   ` Chia-I Wu
  2019-07-02 14:19 ` [PATCH v6 15/18] drm/virtio: rework virtio_gpu_transfer_to_host_ioctl fencing Gerd Hoffmann
                   ` (3 subsequent siblings)
  17 siblings, 1 reply; 43+ messages in thread
From: Gerd Hoffmann @ 2019-07-02 14:18 UTC (permalink / raw)
  To: dri-devel
  Cc: olvaffe, gurchetansingh, Gerd Hoffmann, David Airlie,
	Daniel Vetter, open list:VIRTIO GPU DRIVER, open list

Switch to the virtio_gpu_array_* helper workflow.
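
For reference, the resulting ioctl flow looks roughly like this (a
simplified sketch of the change below, error paths trimmed):

	objs = virtio_gpu_array_from_handles(file, &args->bo_handle, 1);
	ret = virtio_gpu_array_lock_resv(objs);
	fence = virtio_gpu_fence_alloc(vgdev);
	virtio_gpu_cmd_transfer_from_host_3d(vgdev, vfpriv->ctx_id, offset,
					     args->level, &box, objs, fence);
	/* queueing the command adds the fence to the objects and unlocks
	 * their reservation objects; the completion callback drops the
	 * object references and frees the array */
	dma_fence_put(&fence->f);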

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 drivers/gpu/drm/virtio/virtgpu_drv.h   |  3 ++-
 drivers/gpu/drm/virtio/virtgpu_ioctl.c | 35 +++++++++++---------------
 drivers/gpu/drm/virtio/virtgpu_vq.c    |  8 ++++--
 3 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 78dc5a19a358..4df760ba018e 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -302,9 +302,10 @@ void virtio_gpu_cmd_submit(struct virtio_gpu_device *vgdev,
 			   struct virtio_gpu_object_array *objs,
 			   struct virtio_gpu_fence *fence);
 void virtio_gpu_cmd_transfer_from_host_3d(struct virtio_gpu_device *vgdev,
-					  uint32_t resource_id, uint32_t ctx_id,
+					  uint32_t ctx_id,
 					  uint64_t offset, uint32_t level,
 					  struct virtio_gpu_box *box,
+					  struct virtio_gpu_object_array *objs,
 					  struct virtio_gpu_fence *fence);
 void virtio_gpu_cmd_transfer_to_host_3d(struct virtio_gpu_device *vgdev,
 					struct virtio_gpu_object *bo,
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index 0d0acf0b85ed..56182abdbf36 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -298,8 +298,7 @@ static int virtio_gpu_transfer_from_host_ioctl(struct drm_device *dev,
 	struct virtio_gpu_device *vgdev = dev->dev_private;
 	struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
 	struct drm_virtgpu_3d_transfer_from_host *args = data;
-	struct drm_gem_object *gobj = NULL;
-	struct virtio_gpu_object *qobj = NULL;
+	struct virtio_gpu_object_array *objs;
 	struct virtio_gpu_fence *fence;
 	int ret;
 	u32 offset = args->offset;
@@ -308,35 +307,31 @@ static int virtio_gpu_transfer_from_host_ioctl(struct drm_device *dev,
 	if (vgdev->has_virgl_3d == false)
 		return -ENOSYS;
 
-	gobj = drm_gem_object_lookup(file, args->bo_handle);
-	if (gobj == NULL)
+	objs = virtio_gpu_array_from_handles(file, &args->bo_handle, 1);
+	if (objs == NULL)
 		return -ENOENT;
 
-	qobj = gem_to_virtio_gpu_obj(gobj);
-
-	ret = virtio_gpu_object_reserve(qobj);
-	if (ret)
-		goto out;
+	ret = virtio_gpu_array_lock_resv(objs);
+	if (ret != 0)
+		goto err_put_free;
 
 	convert_to_hw_box(&box, &args->box);
 
 	fence = virtio_gpu_fence_alloc(vgdev);
 	if (!fence) {
 		ret = -ENOMEM;
-		goto out_unres;
+		goto err_unlock;
 	}
 	virtio_gpu_cmd_transfer_from_host_3d
-		(vgdev, qobj->hw_res_handle,
-		 vfpriv->ctx_id, offset, args->level,
-		 &box, fence);
-	reservation_object_add_excl_fence(qobj->base.base.resv,
-					  &fence->f);
-
+		(vgdev, vfpriv->ctx_id, offset, args->level,
+		 &box, objs, fence);
 	dma_fence_put(&fence->f);
-out_unres:
-	virtio_gpu_object_unreserve(qobj);
-out:
-	drm_gem_object_put_unlocked(gobj);
+	return 0;
+
+err_unlock:
+	virtio_gpu_array_unlock_resv(objs);
+err_put_free:
+	virtio_gpu_array_put_free(objs);
 	return ret;
 }
 
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index fc908d5cb217..bef7036f4508 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -928,20 +928,24 @@ void virtio_gpu_cmd_transfer_to_host_3d(struct virtio_gpu_device *vgdev,
 }
 
 void virtio_gpu_cmd_transfer_from_host_3d(struct virtio_gpu_device *vgdev,
-					  uint32_t resource_id, uint32_t ctx_id,
+					  uint32_t ctx_id,
 					  uint64_t offset, uint32_t level,
 					  struct virtio_gpu_box *box,
+					  struct virtio_gpu_object_array *objs,
 					  struct virtio_gpu_fence *fence)
 {
+	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(objs->objs[0]);
 	struct virtio_gpu_transfer_host_3d *cmd_p;
 	struct virtio_gpu_vbuffer *vbuf;
 
 	cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
 	memset(cmd_p, 0, sizeof(*cmd_p));
 
+	vbuf->objs = objs;
+
 	cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_TRANSFER_FROM_HOST_3D);
 	cmd_p->hdr.ctx_id = cpu_to_le32(ctx_id);
-	cmd_p->resource_id = cpu_to_le32(resource_id);
+	cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle);
 	cmd_p->box = *box;
 	cmd_p->offset = cpu_to_le64(offset);
 	cmd_p->level = cpu_to_le32(level);
-- 
2.18.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v6 15/18] drm/virtio: rework virtio_gpu_transfer_to_host_ioctl fencing
       [not found] <20190702141903.1131-1-kraxel@redhat.com>
                   ` (13 preceding siblings ...)
  2019-07-02 14:18 ` [PATCH v6 14/18] drm/virtio: rework virtio_gpu_transfer_from_host_ioctl fencing Gerd Hoffmann
@ 2019-07-02 14:19 ` Gerd Hoffmann
  2019-07-03 19:55   ` Chia-I Wu
  2019-07-02 14:19 ` [PATCH v6 16/18] drm/virtio: rework virtio_gpu_cmd_context_{attach,detach}_resource Gerd Hoffmann
                   ` (2 subsequent siblings)
  17 siblings, 1 reply; 43+ messages in thread
From: Gerd Hoffmann @ 2019-07-02 14:19 UTC (permalink / raw)
  To: dri-devel
  Cc: olvaffe, gurchetansingh, Gerd Hoffmann, David Airlie,
	Daniel Vetter, open list:VIRTIO GPU DRIVER, open list

Switch to the virtio_gpu_array_* helper workflow.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 drivers/gpu/drm/virtio/virtgpu_drv.h   |  2 +-
 drivers/gpu/drm/virtio/virtgpu_ioctl.c | 43 ++++++++++++--------------
 drivers/gpu/drm/virtio/virtgpu_vq.c    |  5 ++-
 3 files changed, 25 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 4df760ba018e..b1f63a21abb6 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -308,10 +308,10 @@ void virtio_gpu_cmd_transfer_from_host_3d(struct virtio_gpu_device *vgdev,
 					  struct virtio_gpu_object_array *objs,
 					  struct virtio_gpu_fence *fence);
 void virtio_gpu_cmd_transfer_to_host_3d(struct virtio_gpu_device *vgdev,
-					struct virtio_gpu_object *bo,
 					uint32_t ctx_id,
 					uint64_t offset, uint32_t level,
 					struct virtio_gpu_box *box,
+					struct virtio_gpu_object_array *objs,
 					struct virtio_gpu_fence *fence);
 void
 virtio_gpu_cmd_resource_create_3d(struct virtio_gpu_device *vgdev,
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index 56182abdbf36..b220918df6a1 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -341,47 +341,44 @@ static int virtio_gpu_transfer_to_host_ioctl(struct drm_device *dev, void *data,
 	struct virtio_gpu_device *vgdev = dev->dev_private;
 	struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
 	struct drm_virtgpu_3d_transfer_to_host *args = data;
-	struct drm_gem_object *gobj = NULL;
-	struct virtio_gpu_object *qobj = NULL;
+	struct virtio_gpu_object_array *objs;
 	struct virtio_gpu_fence *fence;
 	struct virtio_gpu_box box;
 	int ret;
 	u32 offset = args->offset;
 
-	gobj = drm_gem_object_lookup(file, args->bo_handle);
-	if (gobj == NULL)
+	objs = virtio_gpu_array_from_handles(file, &args->bo_handle, 1);
+	if (objs == NULL)
 		return -ENOENT;
 
-	qobj = gem_to_virtio_gpu_obj(gobj);
-
-	ret = virtio_gpu_object_reserve(qobj);
-	if (ret)
-		goto out;
-
 	convert_to_hw_box(&box, &args->box);
 	if (!vgdev->has_virgl_3d) {
 		virtio_gpu_cmd_transfer_to_host_2d
-			(vgdev, qobj, offset,
+			(vgdev, gem_to_virtio_gpu_obj(objs->objs[0]), offset,
 			 box.w, box.h, box.x, box.y, NULL);
+		virtio_gpu_array_put_free(objs);
 	} else {
+		ret = virtio_gpu_array_lock_resv(objs);
+		if (ret != 0)
+			goto err_put_free;
+
+		ret = -ENOMEM;
 		fence = virtio_gpu_fence_alloc(vgdev);
-		if (!fence) {
-			ret = -ENOMEM;
-			goto out_unres;
-		}
+		if (!fence)
+			goto err_unlock;
+
 		virtio_gpu_cmd_transfer_to_host_3d
-			(vgdev, qobj,
+			(vgdev,
 			 vfpriv ? vfpriv->ctx_id : 0, offset,
-			 args->level, &box, fence);
-		reservation_object_add_excl_fence(qobj->base.base.resv,
-						  &fence->f);
+			 args->level, &box, objs, fence);
 		dma_fence_put(&fence->f);
 	}
+	return 0;
 
-out_unres:
-	virtio_gpu_object_unreserve(qobj);
-out:
-	drm_gem_object_put_unlocked(gobj);
+err_unlock:
+	virtio_gpu_array_unlock_resv(objs);
+err_put_free:
+	virtio_gpu_array_put_free(objs);
 	return ret;
 }
 
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index bef7036f4508..1c0097de419a 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -899,12 +899,13 @@ virtio_gpu_cmd_resource_create_3d(struct virtio_gpu_device *vgdev,
 }
 
 void virtio_gpu_cmd_transfer_to_host_3d(struct virtio_gpu_device *vgdev,
-					struct virtio_gpu_object *bo,
 					uint32_t ctx_id,
 					uint64_t offset, uint32_t level,
 					struct virtio_gpu_box *box,
+					struct virtio_gpu_object_array *objs,
 					struct virtio_gpu_fence *fence)
 {
+	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(objs->objs[0]);
 	struct virtio_gpu_transfer_host_3d *cmd_p;
 	struct virtio_gpu_vbuffer *vbuf;
 	bool use_dma_api = !virtio_has_iommu_quirk(vgdev->vdev);
@@ -917,6 +918,8 @@ void virtio_gpu_cmd_transfer_to_host_3d(struct virtio_gpu_device *vgdev,
 	cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
 	memset(cmd_p, 0, sizeof(*cmd_p));
 
+	vbuf->objs = objs;
+
 	cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D);
 	cmd_p->hdr.ctx_id = cpu_to_le32(ctx_id);
 	cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle);
-- 
2.18.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v6 16/18] drm/virtio: rework virtio_gpu_cmd_context_{attach,detach}_resource
       [not found] <20190702141903.1131-1-kraxel@redhat.com>
                   ` (14 preceding siblings ...)
  2019-07-02 14:19 ` [PATCH v6 15/18] drm/virtio: rework virtio_gpu_transfer_to_host_ioctl fencing Gerd Hoffmann
@ 2019-07-02 14:19 ` Gerd Hoffmann
       [not found]   ` <CAAfnVBmKotCfkrM4hph4++FDrVUYR8WKpomP7Y0-aergqHTSyA@mail.gmail.com>
  2019-07-02 14:19 ` [PATCH v6 17/18] drm/virtio: drop virtio_gpu_object_{reserve,unreserve} Gerd Hoffmann
  2019-07-02 14:19 ` [PATCH v6 18/18] drm/virtio: add fence sanity check Gerd Hoffmann
  17 siblings, 1 reply; 43+ messages in thread
From: Gerd Hoffmann @ 2019-07-02 14:19 UTC (permalink / raw)
  To: dri-devel
  Cc: olvaffe, gurchetansingh, Gerd Hoffmann, David Airlie,
	Daniel Vetter, open list:VIRTIO GPU DRIVER, open list

Switch to the virtio_gpu_array_* helper workflow.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 drivers/gpu/drm/virtio/virtgpu_drv.h |  4 ++--
 drivers/gpu/drm/virtio/virtgpu_gem.c | 24 +++++++++++-------------
 drivers/gpu/drm/virtio/virtgpu_vq.c  | 10 ++++++----
 3 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index b1f63a21abb6..d99c54abd090 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -292,10 +292,10 @@ void virtio_gpu_cmd_context_destroy(struct virtio_gpu_device *vgdev,
 				    uint32_t id);
 void virtio_gpu_cmd_context_attach_resource(struct virtio_gpu_device *vgdev,
 					    uint32_t ctx_id,
-					    uint32_t resource_id);
+					    struct virtio_gpu_object_array *objs);
 void virtio_gpu_cmd_context_detach_resource(struct virtio_gpu_device *vgdev,
 					    uint32_t ctx_id,
-					    uint32_t resource_id);
+					    struct virtio_gpu_object_array *objs);
 void virtio_gpu_cmd_submit(struct virtio_gpu_device *vgdev,
 			   void *data, uint32_t data_size,
 			   uint32_t ctx_id,
diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
index 6baf64141645..e75819dbba80 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -111,19 +111,18 @@ int virtio_gpu_gem_object_open(struct drm_gem_object *obj,
 {
 	struct virtio_gpu_device *vgdev = obj->dev->dev_private;
 	struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
-	struct virtio_gpu_object *qobj = gem_to_virtio_gpu_obj(obj);
-	int r;
+	struct virtio_gpu_object_array *objs;
 
 	if (!vgdev->has_virgl_3d)
 		return 0;
 
-	r = virtio_gpu_object_reserve(qobj);
-	if (r)
-		return r;
+	objs = virtio_gpu_array_alloc(1);
+	if (!objs)
+		return -ENOMEM;
+	virtio_gpu_array_add_obj(objs, obj);
 
 	virtio_gpu_cmd_context_attach_resource(vgdev, vfpriv->ctx_id,
-					       qobj->hw_res_handle);
-	virtio_gpu_object_unreserve(qobj);
+					       objs);
 	return 0;
 }
 
@@ -132,19 +131,18 @@ void virtio_gpu_gem_object_close(struct drm_gem_object *obj,
 {
 	struct virtio_gpu_device *vgdev = obj->dev->dev_private;
 	struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
-	struct virtio_gpu_object *qobj = gem_to_virtio_gpu_obj(obj);
-	int r;
+	struct virtio_gpu_object_array *objs;
 
 	if (!vgdev->has_virgl_3d)
 		return;
 
-	r = virtio_gpu_object_reserve(qobj);
-	if (r)
+	objs = virtio_gpu_array_alloc(1);
+	if (!objs)
 		return;
+	virtio_gpu_array_add_obj(objs, obj);
 
 	virtio_gpu_cmd_context_detach_resource(vgdev, vfpriv->ctx_id,
-						qobj->hw_res_handle);
-	virtio_gpu_object_unreserve(qobj);
+					       objs);
 }
 
 struct virtio_gpu_object_array *virtio_gpu_array_alloc(u32 nents)
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index 1c0097de419a..799d1339ee0f 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -835,8 +835,9 @@ void virtio_gpu_cmd_context_destroy(struct virtio_gpu_device *vgdev,
 
 void virtio_gpu_cmd_context_attach_resource(struct virtio_gpu_device *vgdev,
 					    uint32_t ctx_id,
-					    uint32_t resource_id)
+					    struct virtio_gpu_object_array *objs)
 {
+	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(objs->objs[0]);
 	struct virtio_gpu_ctx_resource *cmd_p;
 	struct virtio_gpu_vbuffer *vbuf;
 
@@ -845,15 +846,16 @@ void virtio_gpu_cmd_context_attach_resource(struct virtio_gpu_device *vgdev,
 
 	cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE);
 	cmd_p->hdr.ctx_id = cpu_to_le32(ctx_id);
-	cmd_p->resource_id = cpu_to_le32(resource_id);
+	cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle);
 	virtio_gpu_queue_ctrl_buffer(vgdev, vbuf);
 
 }
 
 void virtio_gpu_cmd_context_detach_resource(struct virtio_gpu_device *vgdev,
 					    uint32_t ctx_id,
-					    uint32_t resource_id)
+					    struct virtio_gpu_object_array *objs)
 {
+	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(objs->objs[0]);
 	struct virtio_gpu_ctx_resource *cmd_p;
 	struct virtio_gpu_vbuffer *vbuf;
 
@@ -862,7 +864,7 @@ void virtio_gpu_cmd_context_detach_resource(struct virtio_gpu_device *vgdev,
 
 	cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_CTX_DETACH_RESOURCE);
 	cmd_p->hdr.ctx_id = cpu_to_le32(ctx_id);
-	cmd_p->resource_id = cpu_to_le32(resource_id);
+	cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle);
 	virtio_gpu_queue_ctrl_buffer(vgdev, vbuf);
 }
 
-- 
2.18.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v6 17/18] drm/virtio: drop virtio_gpu_object_{reserve,unreserve}
       [not found] <20190702141903.1131-1-kraxel@redhat.com>
                   ` (15 preceding siblings ...)
  2019-07-02 14:19 ` [PATCH v6 16/18] drm/virtio: rework virtio_gpu_cmd_context_{attach,detach}_resource Gerd Hoffmann
@ 2019-07-02 14:19 ` Gerd Hoffmann
  2019-07-02 14:19 ` [PATCH v6 18/18] drm/virtio: add fence sanity check Gerd Hoffmann
  17 siblings, 0 replies; 43+ messages in thread
From: Gerd Hoffmann @ 2019-07-02 14:19 UTC (permalink / raw)
  To: dri-devel
  Cc: olvaffe, gurchetansingh, Gerd Hoffmann, David Airlie,
	Daniel Vetter, open list:VIRTIO GPU DRIVER, open list

No users left.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 drivers/gpu/drm/virtio/virtgpu_drv.h | 21 ---------------------
 1 file changed, 21 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index d99c54abd090..23670109d167 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -368,27 +368,6 @@ static inline u64 virtio_gpu_object_mmap_offset(struct virtio_gpu_object *bo)
 	return drm_vma_node_offset_addr(&bo->base.base.vma_node);
 }
 
-static inline int virtio_gpu_object_reserve(struct virtio_gpu_object *bo)
-{
-	int r;
-
-	r = reservation_object_lock_interruptible(bo->base.base.resv, NULL);
-	if (unlikely(r != 0)) {
-		if (r != -EINTR) {
-			struct virtio_gpu_device *qdev =
-				bo->base.base.dev->dev_private;
-			dev_err(qdev->dev, "%p reserve failed\n", bo);
-		}
-		return r;
-	}
-	return 0;
-}
-
-static inline void virtio_gpu_object_unreserve(struct virtio_gpu_object *bo)
-{
-	reservation_object_unlock(bo->base.base.resv);
-}
-
 /* virgl debufs */
 int virtio_gpu_debugfs_init(struct drm_minor *minor);
 
-- 
2.18.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v6 18/18] drm/virtio: add fence sanity check
       [not found] <20190702141903.1131-1-kraxel@redhat.com>
                   ` (16 preceding siblings ...)
  2019-07-02 14:19 ` [PATCH v6 17/18] drm/virtio: drop virtio_gpu_object_{reserve,unreserve} Gerd Hoffmann
@ 2019-07-02 14:19 ` Gerd Hoffmann
  17 siblings, 0 replies; 43+ messages in thread
From: Gerd Hoffmann @ 2019-07-02 14:19 UTC (permalink / raw)
  To: dri-devel
  Cc: olvaffe, gurchetansingh, Gerd Hoffmann, David Airlie,
	Daniel Vetter, open list:VIRTIO GPU DRIVER, open list

Make sure we don't leak half-initialized fences outside the driver.
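
For context, the expected fence lifecycle is roughly (sketch, not part
of the patch):

	fence = virtio_gpu_fence_alloc(vgdev);    /* f.seqno still 0 here */
	/* ... build the virtio command ... */
	virtio_gpu_fence_emit(vgdev, hdr, fence); /* assigns the real seqno */
	/* only now may the fence become visible outside the driver */

A fence with seqno 0 therefore has not been emitted yet, and
virtio_fence_signaled() must not report it as signaled.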

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 drivers/gpu/drm/virtio/virtgpu_fence.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/virtio/virtgpu_fence.c b/drivers/gpu/drm/virtio/virtgpu_fence.c
index 70d6c4329778..98358ff9996c 100644
--- a/drivers/gpu/drm/virtio/virtgpu_fence.c
+++ b/drivers/gpu/drm/virtio/virtgpu_fence.c
@@ -41,6 +41,10 @@ bool virtio_fence_signaled(struct dma_fence *f)
 {
 	struct virtio_gpu_fence *fence = to_virtio_fence(f);
 
+	if (WARN_ON_ONCE(fence->f.seqno == 0))
+		/* leaked fence outside driver before completing
+		 * initialization with virtio_gpu_fence_emit */
+		return false;
 	if (atomic64_read(&fence->drv->last_seq) >= fence->f.seqno)
 		return true;
 	return false;
-- 
2.18.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* Re: [PATCH v6 06/18] drm/virtio: remove ttm calls from in virtio_gpu_object_{reserve,unreserve}
  2019-07-02 14:18 ` [PATCH v6 06/18] drm/virtio: remove ttm calls from in virtio_gpu_object_{reserve,unreserve} Gerd Hoffmann
@ 2019-07-03 18:02   ` Chia-I Wu
  2019-07-04 11:10     ` Gerd Hoffmann
  0 siblings, 1 reply; 43+ messages in thread
From: Chia-I Wu @ 2019-07-03 18:02 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: ML dri-devel, Gurchetan Singh, David Airlie, Daniel Vetter,
	open list:VIRTIO GPU DRIVER, open list

On Tue, Jul 2, 2019 at 7:19 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
>
> Call reservation_object_* directly instead
> of using ttm_bo_{reserve,unreserve}.
>
> v4: check for EINTR only.
> v3: check for EINTR too.
>
> Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> ---
>  drivers/gpu/drm/virtio/virtgpu_drv.h | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
> index 06cc0e961df6..07f6001ea91e 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_drv.h
> +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
> @@ -402,9 +402,9 @@ static inline int virtio_gpu_object_reserve(struct virtio_gpu_object *bo)
>  {
>         int r;
>
> -       r = ttm_bo_reserve(&bo->tbo, true, false, NULL);
> +       r = reservation_object_lock_interruptible(bo->gem_base.resv, NULL);
Can you elaborate a bit about how TTM keeps the BOs alive in, for
example, virtio_gpu_transfer_from_host_ioctl?  In that function, only
three TTM functions are called: ttm_bo_reserve, ttm_bo_validate, and
ttm_bo_unreserve.  I am curious how they keep the BO alive.

>         if (unlikely(r != 0)) {
> -               if (r != -ERESTARTSYS) {
> +               if (r != -EINTR) {
>                         struct virtio_gpu_device *qdev =
>                                 bo->gem_base.dev->dev_private;
>                         dev_err(qdev->dev, "%p reserve failed\n", bo);
> @@ -416,7 +416,7 @@ static inline int virtio_gpu_object_reserve(struct virtio_gpu_object *bo)
>
>  static inline void virtio_gpu_object_unreserve(struct virtio_gpu_object *bo)
>  {
> -       ttm_bo_unreserve(&bo->tbo);
> +       reservation_object_unlock(bo->gem_base.resv);
>  }
>
>  /* virgl debufs */
> --
> 2.18.1
>

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v6 07/18] drm/virtio: add virtio_gpu_object_array & helpers
  2019-07-02 14:18 ` [PATCH v6 07/18] drm/virtio: add virtio_gpu_object_array & helpers Gerd Hoffmann
@ 2019-07-03 18:31   ` Chia-I Wu
  2019-07-03 19:52     ` Chia-I Wu
  2019-07-04 11:11     ` Gerd Hoffmann
  0 siblings, 2 replies; 43+ messages in thread
From: Chia-I Wu @ 2019-07-03 18:31 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: ML dri-devel, Gurchetan Singh, David Airlie, Daniel Vetter,
	open list:VIRTIO GPU DRIVER, open list

On Tue, Jul 2, 2019 at 7:19 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
>
> Some helper functions to manage an array of gem objects.
>
> v6:
>  - add ticket to struct virtio_gpu_object_array.
>  - add virtio_gpu_array_{lock,unlock}_resv helpers.
>  - add virtio_gpu_array_add_fence helper.
> v5: some small optimizations (Chia-I Wu).
> v4: make them virtio-private instead of generic helpers.
>
> Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
> ---
>  drivers/gpu/drm/virtio/virtgpu_drv.h | 17 ++++++
>  drivers/gpu/drm/virtio/virtgpu_gem.c | 83 ++++++++++++++++++++++++++++
>  2 files changed, 100 insertions(+)
>
> diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
> index 07f6001ea91e..abb078a5dedf 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_drv.h
> +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
> @@ -84,6 +84,12 @@ struct virtio_gpu_object {
>  #define gem_to_virtio_gpu_obj(gobj) \
>         container_of((gobj), struct virtio_gpu_object, gem_base)
>
> +struct virtio_gpu_object_array {
> +       struct ww_acquire_ctx ticket;
> +       u32 nents, total;
> +       struct drm_gem_object *objs[];
> +};
> +
>  struct virtio_gpu_vbuffer;
>  struct virtio_gpu_device;
>
> @@ -251,6 +257,17 @@ int virtio_gpu_mode_dumb_mmap(struct drm_file *file_priv,
>                               struct drm_device *dev,
>                               uint32_t handle, uint64_t *offset_p);
>
> +struct virtio_gpu_object_array *virtio_gpu_array_alloc(u32 nents);
> +struct virtio_gpu_object_array*
> +virtio_gpu_array_from_handles(struct drm_file *drm_file, u32 *handles, u32 nents);
> +void virtio_gpu_array_add_obj(struct virtio_gpu_object_array *objs,
> +                             struct drm_gem_object *obj);
> +int virtio_gpu_array_lock_resv(struct virtio_gpu_object_array *objs);
> +void virtio_gpu_array_unlock_resv(struct virtio_gpu_object_array *objs);
> +void virtio_gpu_array_add_fence(struct virtio_gpu_object_array *objs,
> +                               struct dma_fence *fence);
> +void virtio_gpu_array_put_free(struct virtio_gpu_object_array *objs);
> +
>  /* virtio vg */
>  int virtio_gpu_alloc_vbufs(struct virtio_gpu_device *vgdev);
>  void virtio_gpu_free_vbufs(struct virtio_gpu_device *vgdev);
> diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
> index 9c9ad3b14080..e88df5e06d06 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_gem.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
> @@ -169,3 +169,86 @@ void virtio_gpu_gem_object_close(struct drm_gem_object *obj,
>                                                 qobj->hw_res_handle);
>         virtio_gpu_object_unreserve(qobj);
>  }
> +
> +struct virtio_gpu_object_array *virtio_gpu_array_alloc(u32 nents)
> +{
> +       struct virtio_gpu_object_array *objs;
> +       size_t size = sizeof(*objs) + sizeof(objs->objs[0]) * nents;
> +
> +       objs = kmalloc(size, GFP_KERNEL);
> +       if (!objs)
> +               return NULL;
> +
> +       objs->nents = 0;
> +       objs->total = nents;
> +       return objs;
> +}
> +
> +static void virtio_gpu_array_free(struct virtio_gpu_object_array *objs)
> +{
> +       kfree(objs);
> +}
> +
> +struct virtio_gpu_object_array*
> +virtio_gpu_array_from_handles(struct drm_file *drm_file, u32 *handles, u32 nents)
> +{
> +       struct virtio_gpu_object_array *objs;
> +       u32 i;
> +
> +       objs = virtio_gpu_array_alloc(nents);
> +       if (!objs)
> +               return NULL;
> +
> +       for (i = 0; i < nents; i++) {
> +               objs->nents = i;
This line can be moved into the if-block just below.
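Something like this, I think (untested sketch):

		objs->objs[i] = drm_gem_object_lookup(drm_file, handles[i]);
		if (!objs->objs[i]) {
			objs->nents = i;
			virtio_gpu_array_put_free(objs);
			return NULL;
		}
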
> +               objs->objs[i] = drm_gem_object_lookup(drm_file, handles[i]);
> +               if (!objs->objs[i]) {
> +                       virtio_gpu_array_put_free(objs);
> +                       return NULL;
> +               }
> +       }
> +       objs->nents = i;
> +       return objs;
> +}
> +
> +void virtio_gpu_array_add_obj(struct virtio_gpu_object_array *objs,
> +                             struct drm_gem_object *obj)
> +{
> +       if (WARN_ON_ONCE(objs->nents == objs->total))
> +               return;
> +
> +       drm_gem_object_get(obj);
> +       objs->objs[objs->nents] = obj;
> +       objs->nents++;
> +}
> +
> +int virtio_gpu_array_lock_resv(struct virtio_gpu_object_array *objs)
> +{
> +       return drm_gem_lock_reservations(objs->objs, objs->nents,
> +                                        &objs->ticket);
> +}
> +
> +void virtio_gpu_array_unlock_resv(struct virtio_gpu_object_array *objs)
> +{
> +       drm_gem_unlock_reservations(objs->objs, objs->nents,
> +                                   &objs->ticket);
> +}
> +
> +void virtio_gpu_array_add_fence(struct virtio_gpu_object_array *objs,
> +                               struct dma_fence *fence)
> +{
> +       int i;
> +
> +       for (i = 0; i < objs->nents; i++)
> +               reservation_object_add_excl_fence(objs->objs[i]->resv,
> +                                                 fence);
> +}
> +
> +void virtio_gpu_array_put_free(struct virtio_gpu_object_array *objs)
> +{
> +       u32 i;
> +
> +       for (i = 0; i < objs->nents; i++)
> +               drm_gem_object_put_unlocked(objs->objs[i]);
> +       virtio_gpu_array_free(objs);
> +}
> --
> 2.18.1
>

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v6 08/18] drm/virtio: rework virtio_gpu_execbuffer_ioctl fencing
  2019-07-02 14:18 ` [PATCH v6 08/18] drm/virtio: rework virtio_gpu_execbuffer_ioctl fencing Gerd Hoffmann
@ 2019-07-03 18:49   ` Chia-I Wu
  2019-07-04 11:25     ` Gerd Hoffmann
  0 siblings, 1 reply; 43+ messages in thread
From: Chia-I Wu @ 2019-07-03 18:49 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: ML dri-devel, Gurchetan Singh, David Airlie, Daniel Vetter,
	open list:VIRTIO GPU DRIVER, open list

On Tue, Jul 2, 2019 at 7:19 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
>
> Rework fencing workflow, starting with virtio_gpu_execbuffer_ioctl.
> Stop using ttm helpers, use the virtio_gpu_array_* helpers (which work
> on the reservation objects directly) instead.
>
> New workflow:
>
>  (1) All gem objects needed by a command are added to a
>      virtio_gpu_object_array.
>  (2) All reservation objects will be locked (virtio_gpu_array_lock_resv).
>  (3) virtio_gpu_fence_emit() completes fence initialization.
>  (4) fence gets added to the objects, reservation objects are unlocked
>      (virtio_gpu_array_add_fence, virtio_gpu_array_unlock_resv).
>  (5) virtio command is submitted to the host.
>  (6) The completion callback (virtio_gpu_dequeue_ctrl_func)
>      will drop object references and free virtio_gpu_object_array.
>
> v6: rewrite most of the patch.
>
> Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
> ---
>  drivers/gpu/drm/virtio/virtgpu_drv.h   |  6 ++-
>  drivers/gpu/drm/virtio/virtgpu_ioctl.c | 56 +++++++++-----------------
>  drivers/gpu/drm/virtio/virtgpu_vq.c    | 21 +++++++---
>  3 files changed, 38 insertions(+), 45 deletions(-)
>
> diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
> index abb078a5dedf..98511d1dfff2 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_drv.h
> +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
> @@ -121,9 +121,9 @@ struct virtio_gpu_vbuffer {
>
>         char *resp_buf;
>         int resp_size;
> -
>         virtio_gpu_resp_cb resp_cb;
>
> +       struct virtio_gpu_object_array *objs;
>         struct list_head list;
>  };
>
> @@ -318,7 +318,9 @@ void virtio_gpu_cmd_context_detach_resource(struct virtio_gpu_device *vgdev,
>                                             uint32_t resource_id);
>  void virtio_gpu_cmd_submit(struct virtio_gpu_device *vgdev,
>                            void *data, uint32_t data_size,
> -                          uint32_t ctx_id, struct virtio_gpu_fence *fence);
> +                          uint32_t ctx_id,
> +                          struct virtio_gpu_object_array *objs,
> +                          struct virtio_gpu_fence *fence);
>  void virtio_gpu_cmd_transfer_from_host_3d(struct virtio_gpu_device *vgdev,
>                                           uint32_t resource_id, uint32_t ctx_id,
>                                           uint64_t offset, uint32_t level,
> diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> index 0caff3fa623e..9735d7e5899b 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> @@ -105,16 +105,11 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
>         struct drm_virtgpu_execbuffer *exbuf = data;
>         struct virtio_gpu_device *vgdev = dev->dev_private;
>         struct virtio_gpu_fpriv *vfpriv = drm_file->driver_priv;
> -       struct drm_gem_object *gobj;
>         struct virtio_gpu_fence *out_fence;
> -       struct virtio_gpu_object *qobj;
>         int ret;
>         uint32_t *bo_handles = NULL;
>         void __user *user_bo_handles = NULL;
> -       struct list_head validate_list;
> -       struct ttm_validate_buffer *buflist = NULL;
> -       int i;
> -       struct ww_acquire_ctx ticket;
> +       struct virtio_gpu_object_array *buflist = NULL;
>         struct sync_file *sync_file;
>         int in_fence_fd = exbuf->fence_fd;
>         int out_fence_fd = -1;
> @@ -155,15 +150,10 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
>                         return out_fence_fd;
>         }
>
> -       INIT_LIST_HEAD(&validate_list);
>         if (exbuf->num_bo_handles) {
> -
>                 bo_handles = kvmalloc_array(exbuf->num_bo_handles,
> -                                          sizeof(uint32_t), GFP_KERNEL);
> -               buflist = kvmalloc_array(exbuf->num_bo_handles,
> -                                          sizeof(struct ttm_validate_buffer),
> -                                          GFP_KERNEL | __GFP_ZERO);
> -               if (!bo_handles || !buflist) {
> +                                           sizeof(uint32_t), GFP_KERNEL);
> +               if (!bo_handles) {
>                         ret = -ENOMEM;
>                         goto out_unused_fd;
>                 }
> @@ -175,25 +165,21 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
>                         goto out_unused_fd;
>                 }
>
> -               for (i = 0; i < exbuf->num_bo_handles; i++) {
> -                       gobj = drm_gem_object_lookup(drm_file, bo_handles[i]);
> -                       if (!gobj) {
> -                               ret = -ENOENT;
> -                               goto out_unused_fd;
> -                       }
> -
> -                       qobj = gem_to_virtio_gpu_obj(gobj);
> -                       buflist[i].bo = &qobj->tbo;
> -
> -                       list_add(&buflist[i].head, &validate_list);
> +               buflist = virtio_gpu_array_from_handles(drm_file, bo_handles,
> +                                                       exbuf->num_bo_handles);
> +               if (!buflist) {
> +                       ret = -ENOENT;
> +                       goto out_unused_fd;
>                 }
>                 kvfree(bo_handles);
>                 bo_handles = NULL;
>         }
>
> -       ret = virtio_gpu_object_list_validate(&ticket, &validate_list);
> -       if (ret)
> -               goto out_free;
> +       if (buflist) {
> +               ret = virtio_gpu_array_lock_resv(buflist);
> +               if (ret)
> +                       goto out_unused_fd;
> +       }
>
>         buf = memdup_user(u64_to_user_ptr(exbuf->command), exbuf->size);
>         if (IS_ERR(buf)) {
> @@ -220,24 +206,18 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
>         }
>
>         virtio_gpu_cmd_submit(vgdev, buf, exbuf->size,
> -                             vfpriv->ctx_id, out_fence);
> -
> -       ttm_eu_fence_buffer_objects(&ticket, &validate_list, &out_fence->f);
> -
> -       /* fence the command bo */
> -       virtio_gpu_unref_list(&validate_list);
> -       kvfree(buflist);
> +                             vfpriv->ctx_id, buflist, out_fence);
>         return 0;
>
>  out_memdup:
>         kfree(buf);
>  out_unresv:
> -       ttm_eu_backoff_reservation(&ticket, &validate_list);
> -out_free:
> -       virtio_gpu_unref_list(&validate_list);
> +       if (buflist)
> +               virtio_gpu_array_unlock_resv(buflist);
>  out_unused_fd:
>         kvfree(bo_handles);
> -       kvfree(buflist);
> +       if (buflist)
> +               virtio_gpu_array_put_free(buflist);
>
>         if (out_fence_fd >= 0)
>                 put_unused_fd(out_fence_fd);
> diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
> index 6c1a90717535..dbe329801e84 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_vq.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
> @@ -191,7 +191,7 @@ void virtio_gpu_dequeue_ctrl_func(struct work_struct *work)
>         } while (!virtqueue_enable_cb(vgdev->ctrlq.vq));
>         spin_unlock(&vgdev->ctrlq.qlock);
>
> -       list_for_each_entry_safe(entry, tmp, &reclaim_list, list) {
> +       list_for_each_entry(entry, &reclaim_list, list) {
>                 resp = (struct virtio_gpu_ctrl_hdr *)entry->resp_buf;
>
>                 trace_virtio_gpu_cmd_response(vgdev->ctrlq.vq, resp);
> @@ -218,14 +218,18 @@ void virtio_gpu_dequeue_ctrl_func(struct work_struct *work)
>                 }
>                 if (entry->resp_cb)
>                         entry->resp_cb(vgdev, entry);
> -
> -               list_del(&entry->list);
> -               free_vbuf(vgdev, entry);
>         }
>         wake_up(&vgdev->ctrlq.ack_queue);
>
>         if (fence_id)
>                 virtio_gpu_fence_event_process(vgdev, fence_id);
> +
> +       list_for_each_entry_safe(entry, tmp, &reclaim_list, list) {
> +               if (entry->objs)
> +                       virtio_gpu_array_put_free(entry->objs);
> +               list_del(&entry->list);
> +               free_vbuf(vgdev, entry);
> +       }
>  }
>
>  void virtio_gpu_dequeue_cursor_func(struct work_struct *work)
> @@ -337,6 +341,10 @@ static int virtio_gpu_queue_fenced_ctrl_buffer(struct virtio_gpu_device *vgdev,
>
>         if (fence)
>                 virtio_gpu_fence_emit(vgdev, hdr, fence);
> +       if (vbuf->objs) {
> +               virtio_gpu_array_add_fence(vbuf->objs, &fence->f);
> +               virtio_gpu_array_unlock_resv(vbuf->objs);
> +       }
This runs with the ctrlq spinlock held.  Maybe we should move the
virtio_gpu_array_unlock_resv call out of the critical section.

I am actually more concerned about virtio_gpu_array_add_fence, but it
is also harder to move.  Should we add a kref to the object array?

This bothers me because I recently ran into a CPU-bound game with very
bad lock contention here.

>         rc = virtio_gpu_queue_ctrl_buffer_locked(vgdev, vbuf);
>         spin_unlock(&vgdev->ctrlq.qlock);
>         return rc;
> @@ -939,7 +947,9 @@ void virtio_gpu_cmd_transfer_from_host_3d(struct virtio_gpu_device *vgdev,
>
>  void virtio_gpu_cmd_submit(struct virtio_gpu_device *vgdev,
>                            void *data, uint32_t data_size,
> -                          uint32_t ctx_id, struct virtio_gpu_fence *fence)
> +                          uint32_t ctx_id,
> +                          struct virtio_gpu_object_array *objs,
> +                          struct virtio_gpu_fence *fence)
>  {
>         struct virtio_gpu_cmd_submit *cmd_p;
>         struct virtio_gpu_vbuffer *vbuf;
> @@ -949,6 +959,7 @@ void virtio_gpu_cmd_submit(struct virtio_gpu_device *vgdev,
>
>         vbuf->data_buf = data;
>         vbuf->data_size = data_size;
> +       vbuf->objs = objs;
>
>         cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_SUBMIT_3D);
>         cmd_p->hdr.ctx_id = cpu_to_le32(ctx_id);
> --
> 2.18.1
>

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v6 07/18] drm/virtio: add virtio_gpu_object_array & helpers
  2019-07-03 18:31   ` Chia-I Wu
@ 2019-07-03 19:52     ` Chia-I Wu
  2019-07-04 11:19       ` Gerd Hoffmann
  2019-07-04 11:11     ` Gerd Hoffmann
  1 sibling, 1 reply; 43+ messages in thread
From: Chia-I Wu @ 2019-07-03 19:52 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: ML dri-devel, Gurchetan Singh, David Airlie, Daniel Vetter,
	open list:VIRTIO GPU DRIVER, open list

On Wed, Jul 3, 2019 at 11:31 AM Chia-I Wu <olvaffe@gmail.com> wrote:
>
> On Tue, Jul 2, 2019 at 7:19 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> >
> > Some helper functions to manage an array of gem objects.
> >
> > v6:
> >  - add ticket to struct virtio_gpu_object_array.
> >  - add virtio_gpu_array_{lock,unlock}_resv helpers.
> >  - add virtio_gpu_array_add_fence helper.
> > v5: some small optimizations (Chia-I Wu).
> > v4: make them virtio-private instead of generic helpers.
> >
> > Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
> > ---
> >  drivers/gpu/drm/virtio/virtgpu_drv.h | 17 ++++++
> >  drivers/gpu/drm/virtio/virtgpu_gem.c | 83 ++++++++++++++++++++++++++++
> >  2 files changed, 100 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
> > index 07f6001ea91e..abb078a5dedf 100644
> > --- a/drivers/gpu/drm/virtio/virtgpu_drv.h
> > +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
> > @@ -84,6 +84,12 @@ struct virtio_gpu_object {
> >  #define gem_to_virtio_gpu_obj(gobj) \
> >         container_of((gobj), struct virtio_gpu_object, gem_base)
> >
> > +struct virtio_gpu_object_array {
> > +       struct ww_acquire_ctx ticket;
> > +       u32 nents, total;
> > +       struct drm_gem_object *objs[];
> > +};
> > +
> >  struct virtio_gpu_vbuffer;
> >  struct virtio_gpu_device;
> >
> > @@ -251,6 +257,17 @@ int virtio_gpu_mode_dumb_mmap(struct drm_file *file_priv,
> >                               struct drm_device *dev,
> >                               uint32_t handle, uint64_t *offset_p);
> >
> > +struct virtio_gpu_object_array *virtio_gpu_array_alloc(u32 nents);
> > +struct virtio_gpu_object_array*
> > +virtio_gpu_array_from_handles(struct drm_file *drm_file, u32 *handles, u32 nents);
> > +void virtio_gpu_array_add_obj(struct virtio_gpu_object_array *objs,
> > +                             struct drm_gem_object *obj);
> > +int virtio_gpu_array_lock_resv(struct virtio_gpu_object_array *objs);
> > +void virtio_gpu_array_unlock_resv(struct virtio_gpu_object_array *objs);
> > +void virtio_gpu_array_add_fence(struct virtio_gpu_object_array *objs,
> > +                               struct dma_fence *fence);
> > +void virtio_gpu_array_put_free(struct virtio_gpu_object_array *objs);
> > +
> >  /* virtio vg */
> >  int virtio_gpu_alloc_vbufs(struct virtio_gpu_device *vgdev);
> >  void virtio_gpu_free_vbufs(struct virtio_gpu_device *vgdev);
> > diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
> > index 9c9ad3b14080..e88df5e06d06 100644
> > --- a/drivers/gpu/drm/virtio/virtgpu_gem.c
> > +++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
> > @@ -169,3 +169,86 @@ void virtio_gpu_gem_object_close(struct drm_gem_object *obj,
> >                                                 qobj->hw_res_handle);
> >         virtio_gpu_object_unreserve(qobj);
> >  }
> > +
> > +struct virtio_gpu_object_array *virtio_gpu_array_alloc(u32 nents)
> > +{
> > +       struct virtio_gpu_object_array *objs;
> > +       size_t size = sizeof(*objs) + sizeof(objs->objs[0]) * nents;
> > +
> > +       objs = kmalloc(size, GFP_KERNEL);
> > +       if (!objs)
> > +               return NULL;
> > +
> > +       objs->nents = 0;
> > +       objs->total = nents;
> > +       return objs;
> > +}
> > +
> > +static void virtio_gpu_array_free(struct virtio_gpu_object_array *objs)
> > +{
> > +       kfree(objs);
> > +}
> > +
> > +struct virtio_gpu_object_array*
> > +virtio_gpu_array_from_handles(struct drm_file *drm_file, u32 *handles, u32 nents)
> > +{
> > +       struct virtio_gpu_object_array *objs;
> > +       u32 i;
> > +
> > +       objs = virtio_gpu_array_alloc(nents);
> > +       if (!objs)
> > +               return NULL;
> > +
> > +       for (i = 0; i < nents; i++) {
> > +               objs->nents = i;
> This line can be moved into the if-block just below.
> > +               objs->objs[i] = drm_gem_object_lookup(drm_file, handles[i]);
> > +               if (!objs->objs[i]) {
> > +                       virtio_gpu_array_put_free(objs);
> > +                       return NULL;
> > +               }
> > +       }
> > +       objs->nents = i;
> > +       return objs;
> > +}
> > +
> > +void virtio_gpu_array_add_obj(struct virtio_gpu_object_array *objs,
> > +                             struct drm_gem_object *obj)
> > +{
> > +       if (WARN_ON_ONCE(objs->nents == objs->total))
> > +               return;
> > +
> > +       drm_gem_object_get(obj);
> > +       objs->objs[objs->nents] = obj;
> > +       objs->nents++;
> > +}
> > +
> > +int virtio_gpu_array_lock_resv(struct virtio_gpu_object_array *objs)
> > +{
> > +       return drm_gem_lock_reservations(objs->objs, objs->nents,
> > +                                        &objs->ticket);
Unlike in other drivers where an "object array" is only needed in
execbuffer, we will use this in several places, and often with only 1
object in the array.  Can we special case that and do a quick
reservation_object_lock?

> > +}
> > +
> > +void virtio_gpu_array_unlock_resv(struct virtio_gpu_object_array *objs)
> > +{
> > +       drm_gem_unlock_reservations(objs->objs, objs->nents,
> > +                                   &objs->ticket);
> > +}
> > +
> > +void virtio_gpu_array_add_fence(struct virtio_gpu_object_array *objs,
> > +                               struct dma_fence *fence)
> > +{
> > +       int i;
> > +
> > +       for (i = 0; i < objs->nents; i++)
> > +               reservation_object_add_excl_fence(objs->objs[i]->resv,
> > +                                                 fence);
> > +}
> > +
> > +void virtio_gpu_array_put_free(struct virtio_gpu_object_array *objs)
> > +{
> > +       u32 i;
> > +
> > +       for (i = 0; i < objs->nents; i++)
> > +               drm_gem_object_put_unlocked(objs->objs[i]);
> > +       virtio_gpu_array_free(objs);
> > +}
> > --
> > 2.18.1
> >

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v6 15/18] drm/virtio: rework virtio_gpu_transfer_to_host_ioctl fencing
  2019-07-02 14:19 ` [PATCH v6 15/18] drm/virtio: rework virtio_gpu_transfer_to_host_ioctl fencing Gerd Hoffmann
@ 2019-07-03 19:55   ` Chia-I Wu
  2019-07-04 11:51     ` Gerd Hoffmann
  0 siblings, 1 reply; 43+ messages in thread
From: Chia-I Wu @ 2019-07-03 19:55 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: ML dri-devel, Gurchetan Singh, David Airlie, Daniel Vetter,
	open list:VIRTIO GPU DRIVER, open list

On Tue, Jul 2, 2019 at 7:19 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
>
> Switch to the virtio_gpu_array_* helper workflow.
>
> Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
> ---
>  drivers/gpu/drm/virtio/virtgpu_drv.h   |  2 +-
>  drivers/gpu/drm/virtio/virtgpu_ioctl.c | 43 ++++++++++++--------------
>  drivers/gpu/drm/virtio/virtgpu_vq.c    |  5 ++-
>  3 files changed, 25 insertions(+), 25 deletions(-)
>
> diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
> index 4df760ba018e..b1f63a21abb6 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_drv.h
> +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
> @@ -308,10 +308,10 @@ void virtio_gpu_cmd_transfer_from_host_3d(struct virtio_gpu_device *vgdev,
>                                           struct virtio_gpu_object_array *objs,
>                                           struct virtio_gpu_fence *fence);
>  void virtio_gpu_cmd_transfer_to_host_3d(struct virtio_gpu_device *vgdev,
> -                                       struct virtio_gpu_object *bo,
>                                         uint32_t ctx_id,
>                                         uint64_t offset, uint32_t level,
>                                         struct virtio_gpu_box *box,
> +                                       struct virtio_gpu_object_array *objs,
>                                         struct virtio_gpu_fence *fence);
>  void
>  virtio_gpu_cmd_resource_create_3d(struct virtio_gpu_device *vgdev,
> diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> index 56182abdbf36..b220918df6a1 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> @@ -341,47 +341,44 @@ static int virtio_gpu_transfer_to_host_ioctl(struct drm_device *dev, void *data,
>         struct virtio_gpu_device *vgdev = dev->dev_private;
>         struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
>         struct drm_virtgpu_3d_transfer_to_host *args = data;
> -       struct drm_gem_object *gobj = NULL;
> -       struct virtio_gpu_object *qobj = NULL;
> +       struct virtio_gpu_object_array *objs;
>         struct virtio_gpu_fence *fence;
>         struct virtio_gpu_box box;
>         int ret;
>         u32 offset = args->offset;
>
> -       gobj = drm_gem_object_lookup(file, args->bo_handle);
> -       if (gobj == NULL)
> +       objs = virtio_gpu_array_from_handles(file, &args->bo_handle, 1);
> +       if (objs == NULL)
>                 return -ENOENT;
>
> -       qobj = gem_to_virtio_gpu_obj(gobj);
> -
> -       ret = virtio_gpu_object_reserve(qobj);
> -       if (ret)
> -               goto out;
> -
>         convert_to_hw_box(&box, &args->box);
>         if (!vgdev->has_virgl_3d) {
>                 virtio_gpu_cmd_transfer_to_host_2d
> -                       (vgdev, qobj, offset,
> +                       (vgdev, gem_to_virtio_gpu_obj(objs->objs[0]), offset,
>                          box.w, box.h, box.x, box.y, NULL);
> +               virtio_gpu_array_put_free(objs);
Don't we need this in non-3D case as well?
>         } else {
> +               ret = virtio_gpu_array_lock_resv(objs);
> +               if (ret != 0)
> +                       goto err_put_free;
> +
> +               ret = -ENOMEM;
>                 fence = virtio_gpu_fence_alloc(vgdev);
> -               if (!fence) {
> -                       ret = -ENOMEM;
> -                       goto out_unres;
> -               }
> +               if (!fence)
> +                       goto err_unlock;
> +
>                 virtio_gpu_cmd_transfer_to_host_3d
> -                       (vgdev, qobj,
> +                       (vgdev,
>                          vfpriv ? vfpriv->ctx_id : 0, offset,
> -                        args->level, &box, fence);
> -               reservation_object_add_excl_fence(qobj->base.base.resv,
> -                                                 &fence->f);
> +                        args->level, &box, objs, fence);
>                 dma_fence_put(&fence->f);
>         }
> +       return 0;
>
> -out_unres:
> -       virtio_gpu_object_unreserve(qobj);
> -out:
> -       drm_gem_object_put_unlocked(gobj);
> +err_unlock:
> +       virtio_gpu_array_unlock_resv(objs);
> +err_put_free:
> +       virtio_gpu_array_put_free(objs);
>         return ret;
>  }
>
> diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
> index bef7036f4508..1c0097de419a 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_vq.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
> @@ -899,12 +899,13 @@ virtio_gpu_cmd_resource_create_3d(struct virtio_gpu_device *vgdev,
>  }
>
>  void virtio_gpu_cmd_transfer_to_host_3d(struct virtio_gpu_device *vgdev,
> -                                       struct virtio_gpu_object *bo,
>                                         uint32_t ctx_id,
>                                         uint64_t offset, uint32_t level,
>                                         struct virtio_gpu_box *box,
> +                                       struct virtio_gpu_object_array *objs,
>                                         struct virtio_gpu_fence *fence)
>  {
> +       struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(objs->objs[0]);
>         struct virtio_gpu_transfer_host_3d *cmd_p;
>         struct virtio_gpu_vbuffer *vbuf;
>         bool use_dma_api = !virtio_has_iommu_quirk(vgdev->vdev);
> @@ -917,6 +918,8 @@ void virtio_gpu_cmd_transfer_to_host_3d(struct virtio_gpu_device *vgdev,
>         cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
>         memset(cmd_p, 0, sizeof(*cmd_p));
>
> +       vbuf->objs = objs;
> +
>         cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D);
>         cmd_p->hdr.ctx_id = cpu_to_le32(ctx_id);
>         cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle);
> --
> 2.18.1
>

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v6 14/18] drm/virtio: rework virtio_gpu_transfer_from_host_ioctl fencing
  2019-07-02 14:18 ` [PATCH v6 14/18] drm/virtio: rework virtio_gpu_transfer_from_host_ioctl fencing Gerd Hoffmann
@ 2019-07-03 20:05   ` Chia-I Wu
  2019-07-04 11:47     ` Gerd Hoffmann
  0 siblings, 1 reply; 43+ messages in thread
From: Chia-I Wu @ 2019-07-03 20:05 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: ML dri-devel, Gurchetan Singh, David Airlie, Daniel Vetter,
	open list:VIRTIO GPU DRIVER, open list

On Tue, Jul 2, 2019 at 7:19 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
>
> Switch to the virtio_gpu_array_* helper workflow.
(just repeating my question on patch 6)

Does this fix the obj refcount issue?  When was the issue introduced?

>
> Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
> ---
>  drivers/gpu/drm/virtio/virtgpu_drv.h   |  3 ++-
>  drivers/gpu/drm/virtio/virtgpu_ioctl.c | 35 +++++++++++---------------
>  drivers/gpu/drm/virtio/virtgpu_vq.c    |  8 ++++--
>  3 files changed, 23 insertions(+), 23 deletions(-)
>
> diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
> index 78dc5a19a358..4df760ba018e 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_drv.h
> +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
> @@ -302,9 +302,10 @@ void virtio_gpu_cmd_submit(struct virtio_gpu_device *vgdev,
>                            struct virtio_gpu_object_array *objs,
>                            struct virtio_gpu_fence *fence);
>  void virtio_gpu_cmd_transfer_from_host_3d(struct virtio_gpu_device *vgdev,
> -                                         uint32_t resource_id, uint32_t ctx_id,
> +                                         uint32_t ctx_id,
>                                           uint64_t offset, uint32_t level,
>                                           struct virtio_gpu_box *box,
> +                                         struct virtio_gpu_object_array *objs,
>                                           struct virtio_gpu_fence *fence);
>  void virtio_gpu_cmd_transfer_to_host_3d(struct virtio_gpu_device *vgdev,
>                                         struct virtio_gpu_object *bo,
> diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> index 0d0acf0b85ed..56182abdbf36 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> @@ -298,8 +298,7 @@ static int virtio_gpu_transfer_from_host_ioctl(struct drm_device *dev,
>         struct virtio_gpu_device *vgdev = dev->dev_private;
>         struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
>         struct drm_virtgpu_3d_transfer_from_host *args = data;
> -       struct drm_gem_object *gobj = NULL;
> -       struct virtio_gpu_object *qobj = NULL;
> +       struct virtio_gpu_object_array *objs;
>         struct virtio_gpu_fence *fence;
>         int ret;
>         u32 offset = args->offset;
> @@ -308,35 +307,31 @@ static int virtio_gpu_transfer_from_host_ioctl(struct drm_device *dev,
>         if (vgdev->has_virgl_3d == false)
>                 return -ENOSYS;
>
> -       gobj = drm_gem_object_lookup(file, args->bo_handle);
> -       if (gobj == NULL)
> +       objs = virtio_gpu_array_from_handles(file, &args->bo_handle, 1);
> +       if (objs == NULL)
>                 return -ENOENT;
>
> -       qobj = gem_to_virtio_gpu_obj(gobj);
> -
> -       ret = virtio_gpu_object_reserve(qobj);
> -       if (ret)
> -               goto out;
> +       ret = virtio_gpu_array_lock_resv(objs);
> +       if (ret != 0)
> +               goto err_put_free;
>
>         convert_to_hw_box(&box, &args->box);
>
>         fence = virtio_gpu_fence_alloc(vgdev);
>         if (!fence) {
>                 ret = -ENOMEM;
> -               goto out_unres;
> +               goto err_unlock;
>         }
>         virtio_gpu_cmd_transfer_from_host_3d
> -               (vgdev, qobj->hw_res_handle,
> -                vfpriv->ctx_id, offset, args->level,
> -                &box, fence);
> -       reservation_object_add_excl_fence(qobj->base.base.resv,
> -                                         &fence->f);
> -
> +               (vgdev, vfpriv->ctx_id, offset, args->level,
> +                &box, objs, fence);
>         dma_fence_put(&fence->f);
> -out_unres:
> -       virtio_gpu_object_unreserve(qobj);
> -out:
> -       drm_gem_object_put_unlocked(gobj);
> +       return 0;
> +
> +err_unlock:
> +       virtio_gpu_array_unlock_resv(objs);
> +err_put_free:
> +       virtio_gpu_array_put_free(objs);
>         return ret;
>  }
>
> diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
> index fc908d5cb217..bef7036f4508 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_vq.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
> @@ -928,20 +928,24 @@ void virtio_gpu_cmd_transfer_to_host_3d(struct virtio_gpu_device *vgdev,
>  }
>
>  void virtio_gpu_cmd_transfer_from_host_3d(struct virtio_gpu_device *vgdev,
> -                                         uint32_t resource_id, uint32_t ctx_id,
> +                                         uint32_t ctx_id,
>                                           uint64_t offset, uint32_t level,
>                                           struct virtio_gpu_box *box,
> +                                         struct virtio_gpu_object_array *objs,
>                                           struct virtio_gpu_fence *fence)
>  {
> +       struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(objs->objs[0]);
>         struct virtio_gpu_transfer_host_3d *cmd_p;
>         struct virtio_gpu_vbuffer *vbuf;
>
>         cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
>         memset(cmd_p, 0, sizeof(*cmd_p));
>
> +       vbuf->objs = objs;
> +
>         cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_TRANSFER_FROM_HOST_3D);
>         cmd_p->hdr.ctx_id = cpu_to_le32(ctx_id);
> -       cmd_p->resource_id = cpu_to_le32(resource_id);
> +       cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle);
>         cmd_p->box = *box;
>         cmd_p->offset = cpu_to_le64(offset);
>         cmd_p->level = cpu_to_le32(level);
> --
> 2.18.1
>

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v6 06/18] drm/virtio: remove ttm calls from in virtio_gpu_object_{reserve,unreserve}
  2019-07-03 18:02   ` Chia-I Wu
@ 2019-07-04 11:10     ` Gerd Hoffmann
  2019-07-04 19:17       ` Chia-I Wu
  0 siblings, 1 reply; 43+ messages in thread
From: Gerd Hoffmann @ 2019-07-04 11:10 UTC (permalink / raw)
  To: Chia-I Wu
  Cc: ML dri-devel, Gurchetan Singh, David Airlie, Daniel Vetter,
	open list:VIRTIO GPU DRIVER, open list

  Hi,

> > -       r = ttm_bo_reserve(&bo->tbo, true, false, NULL);
> > +       r = reservation_object_lock_interruptible(bo->gem_base.resv, NULL);
> Can you elaborate a bit about how TTM keeps the BOs alive in, for
> example, virtio_gpu_transfer_from_host_ioctl?  In that function, only
> three TTM functions are called: ttm_bo_reserve, ttm_bo_validate, and
> ttm_bo_unreserve.  I am curious how they keep the BO alive.

It can't go away between reserve and unreserve, and I think it also
can't be evicted then.  Haven't checked how ttm implements that.

cheers,
  Gerd


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v6 07/18] drm/virtio: add virtio_gpu_object_array & helpers
  2019-07-03 18:31   ` Chia-I Wu
  2019-07-03 19:52     ` Chia-I Wu
@ 2019-07-04 11:11     ` Gerd Hoffmann
  1 sibling, 0 replies; 43+ messages in thread
From: Gerd Hoffmann @ 2019-07-04 11:11 UTC (permalink / raw)
  To: Chia-I Wu
  Cc: ML dri-devel, Gurchetan Singh, David Airlie, Daniel Vetter,
	open list:VIRTIO GPU DRIVER, open list

> > +       for (i = 0; i < nents; i++) {
> > +               objs->nents = i;
> This line can be moved into the if-block just below.
> > +               objs->objs[i] = drm_gem_object_lookup(drm_file, handles[i]);
> > +               if (!objs->objs[i]) {
> > +                       virtio_gpu_array_put_free(objs);
> > +                       return NULL;
> > +               }

Done.

cheers,
  Gerd


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v6 07/18] drm/virtio: add virtio_gpu_object_array & helpers
  2019-07-03 19:52     ` Chia-I Wu
@ 2019-07-04 11:19       ` Gerd Hoffmann
  0 siblings, 0 replies; 43+ messages in thread
From: Gerd Hoffmann @ 2019-07-04 11:19 UTC (permalink / raw)
  To: Chia-I Wu
  Cc: ML dri-devel, Gurchetan Singh, David Airlie, Daniel Vetter,
	open list:VIRTIO GPU DRIVER, open list

> > > +int virtio_gpu_array_lock_resv(struct virtio_gpu_object_array *objs)
> > > +{
> > > +       return drm_gem_lock_reservations(objs->objs, objs->nents,
> > > +                                        &objs->ticket);
> Unlike in other drivers where an "object array" is only needed in
> execbuffer, we will use this in several places, and often with only 1
> object in the array.  Can we special case that and do a quick
> reservation_object_lock?

Yes, done.

> > > +void virtio_gpu_array_unlock_resv(struct virtio_gpu_object_array *objs)
> > > +{
> > > +       drm_gem_unlock_reservations(objs->objs, objs->nents,
> > > +                                   &objs->ticket);
> > > +}
> > > +

Likewise here.
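
For the record, roughly what that special case could look like (just a
sketch, not necessarily the exact code that ends up in the next
respin):

int virtio_gpu_array_lock_resv(struct virtio_gpu_object_array *objs)
{
	if (objs->nents == 1)
		return reservation_object_lock_interruptible(objs->objs[0]->resv,
							     NULL);

	return drm_gem_lock_reservations(objs->objs, objs->nents,
					 &objs->ticket);
}

void virtio_gpu_array_unlock_resv(struct virtio_gpu_object_array *objs)
{
	if (objs->nents == 1) {
		reservation_object_unlock(objs->objs[0]->resv);
		return;
	}

	drm_gem_unlock_reservations(objs->objs, objs->nents,
				    &objs->ticket);
}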

cheers,
  Gerd


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v6 08/18] drm/virtio: rework virtio_gpu_execbuffer_ioctl fencing
  2019-07-03 18:49   ` Chia-I Wu
@ 2019-07-04 11:25     ` Gerd Hoffmann
  2019-07-04 18:46       ` Chia-I Wu
  0 siblings, 1 reply; 43+ messages in thread
From: Gerd Hoffmann @ 2019-07-04 11:25 UTC (permalink / raw)
  To: Chia-I Wu
  Cc: ML dri-devel, Gurchetan Singh, David Airlie, Daniel Vetter,
	open list:VIRTIO GPU DRIVER, open list

  Hi,

> >         if (fence)
> >                 virtio_gpu_fence_emit(vgdev, hdr, fence);
> > +       if (vbuf->objs) {
> > +               virtio_gpu_array_add_fence(vbuf->objs, &fence->f);
> > +               virtio_gpu_array_unlock_resv(vbuf->objs);
> > +       }
> This is with the spinlock held.  Maybe we should move the
> virtio_gpu_array_unlock_resv call out of the critical section.

That would bring back the race ...

> I am actually more concerned about virtio_gpu_array_add_fence, but it
> is also harder to move.  Should we add a kref to the object array?

Yep, refcounting would be the other way to fix the race.
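
Something along these lines maybe (only a sketch, the helper names are
made up for illustration; kref_init() in virtio_gpu_array_alloc() not
shown).  With a kref the submit path could hold its own reference
across add_fence/unlock, so the array can't be freed under it:

struct virtio_gpu_object_array {
	struct kref ref;
	struct ww_acquire_ctx ticket;
	u32 nents, total;
	struct drm_gem_object *objs[];
};

static void virtio_gpu_array_release(struct kref *ref)
{
	struct virtio_gpu_object_array *objs =
		container_of(ref, struct virtio_gpu_object_array, ref);

	/* drops the gem references and frees the array, like today */
	virtio_gpu_array_put_free(objs);
}

static void virtio_gpu_array_get(struct virtio_gpu_object_array *objs)
{
	kref_get(&objs->ref);
}

static void virtio_gpu_array_put(struct virtio_gpu_object_array *objs)
{
	kref_put(&objs->ref, virtio_gpu_array_release);
}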

> This bothers me because I recently ran into a CPU-bound game with very
> bad lock contention here.

Hmm.  Any clue where this comes from?  Multiple threads competing for
virtio buffers I guess?  Maybe we should have larger virtqueues?

cheers,
  Gerd


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v6 14/18] drm/virtio: rework virtio_gpu_transfer_from_host_ioctl fencing
  2019-07-03 20:05   ` Chia-I Wu
@ 2019-07-04 11:47     ` Gerd Hoffmann
  2019-07-04 18:55       ` Chia-I Wu
  0 siblings, 1 reply; 43+ messages in thread
From: Gerd Hoffmann @ 2019-07-04 11:47 UTC (permalink / raw)
  To: Chia-I Wu
  Cc: ML dri-devel, Gurchetan Singh, David Airlie, Daniel Vetter,
	open list:VIRTIO GPU DRIVER, open list

On Wed, Jul 03, 2019 at 01:05:12PM -0700, Chia-I Wu wrote:
> On Tue, Jul 2, 2019 at 7:19 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> >
> > Switch to the virtio_gpu_array_* helper workflow.
> (just repeating my question on patch 6)
> 
> Does this fix the obj refcount issue?  When was the issue introduced?

obj refcount should be fine in both old and new code.

old code:
  drm_gem_object_lookup
  drm_gem_object_put_unlocked

new code:
  virtio_gpu_array_from_handles
  virtio_gpu_array_put_free (in virtio_gpu_dequeue_ctrl_func).

Or did I miss something?

cheers,
  Gerd


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v6 15/18] drm/virtio: rework virtio_gpu_transfer_to_host_ioctl fencing
  2019-07-03 19:55   ` Chia-I Wu
@ 2019-07-04 11:51     ` Gerd Hoffmann
  2019-07-04 19:08       ` Chia-I Wu
  0 siblings, 1 reply; 43+ messages in thread
From: Gerd Hoffmann @ 2019-07-04 11:51 UTC (permalink / raw)
  To: Chia-I Wu
  Cc: ML dri-devel, Gurchetan Singh, David Airlie, Daniel Vetter,
	open list:VIRTIO GPU DRIVER, open list

  Hi,

> >         convert_to_hw_box(&box, &args->box);
> >         if (!vgdev->has_virgl_3d) {
> >                 virtio_gpu_cmd_transfer_to_host_2d
> > -                       (vgdev, qobj, offset,
> > +                       (vgdev, gem_to_virtio_gpu_obj(objs->objs[0]), offset,
> >                          box.w, box.h, box.x, box.y, NULL);
> > +               virtio_gpu_array_put_free(objs);
> Don't we need this in non-3D case as well?

No, ...

> >                 virtio_gpu_cmd_transfer_to_host_3d
> > -                       (vgdev, qobj,
> > +                       (vgdev,
> >                          vfpriv ? vfpriv->ctx_id : 0, offset,
> > -                        args->level, &box, fence);
> > -               reservation_object_add_excl_fence(qobj->base.base.resv,
> > -                                                 &fence->f);
> > +                        args->level, &box, objs, fence);

... 3d case passes the objs list to virtio_gpu_cmd_transfer_to_host_3d,
so it gets added to the vbuf and released when the command is finished.
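
Spelled out, the 3d path lifetime looks roughly like this (summary of
the code above, not new code):

  objs = virtio_gpu_array_from_handles(...);              /* takes gem refs */
  virtio_gpu_cmd_transfer_to_host_3d(..., objs, fence);   /* vbuf->objs = objs */
  ...
  virtio_gpu_dequeue_ctrl_func()
    -> virtio_gpu_array_put_free(entry->objs);            /* refs dropped when
                                                              the vbuf retires */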

cheers,
  Gerd


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v6 16/18] drm/virtio: rework virtio_gpu_cmd_context_{attach,detach}_resource
       [not found]   ` <CAAfnVBmKotCfkrM4hph4++FDrVUYR8WKpomP7Y0-aergqHTSyA@mail.gmail.com>
@ 2019-07-04 12:00     ` Gerd Hoffmann
  0 siblings, 0 replies; 43+ messages in thread
From: Gerd Hoffmann @ 2019-07-04 12:00 UTC (permalink / raw)
  To: Gurchetan Singh
  Cc: ML dri-devel, Chia-I Wu, David Airlie, Daniel Vetter,
	open list:VIRTIO GPU DRIVER, open list

On Tue, Jul 02, 2019 at 05:08:46PM -0700, Gurchetan Singh wrote:
> On Tue, Jul 2, 2019 at 7:19 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> 
> > Switch to the virtio_gpu_array_* helper workflow.
> >
> > Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
> > ---
> >  drivers/gpu/drm/virtio/virtgpu_drv.h |  4 ++--
> >  drivers/gpu/drm/virtio/virtgpu_gem.c | 24 +++++++++++-------------
> >  drivers/gpu/drm/virtio/virtgpu_vq.c  | 10 ++++++----
> >  3 files changed, 19 insertions(+), 19 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h
> > b/drivers/gpu/drm/virtio/virtgpu_drv.h
> > index b1f63a21abb6..d99c54abd090 100644
> > --- a/drivers/gpu/drm/virtio/virtgpu_drv.h
> > +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
> > @@ -292,10 +292,10 @@ void virtio_gpu_cmd_context_destroy(struct
> > virtio_gpu_device *vgdev,
> >                                     uint32_t id);
> >  void virtio_gpu_cmd_context_attach_resource(struct virtio_gpu_device
> > *vgdev,
> >                                             uint32_t ctx_id,
> > -                                           uint32_t resource_id);
> > +                                           struct virtio_gpu_object_array
> > *objs);
> >  void virtio_gpu_cmd_context_detach_resource(struct virtio_gpu_device
> > *vgdev,
> >                                             uint32_t ctx_id,
> > -                                           uint32_t resource_id);
> > +                                           struct virtio_gpu_object_array
> > *objs);
> >  void virtio_gpu_cmd_submit(struct virtio_gpu_device *vgdev,
> >                            void *data, uint32_t data_size,
> >                            uint32_t ctx_id,
> > diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c
> > b/drivers/gpu/drm/virtio/virtgpu_gem.c
> > index 6baf64141645..e75819dbba80 100644
> > --- a/drivers/gpu/drm/virtio/virtgpu_gem.c
> > +++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
> > @@ -111,19 +111,18 @@ int virtio_gpu_gem_object_open(struct drm_gem_object
> > *obj,
> >  {
> >         struct virtio_gpu_device *vgdev = obj->dev->dev_private;
> >         struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
> > -       struct virtio_gpu_object *qobj = gem_to_virtio_gpu_obj(obj);
> > -       int r;
> > +       struct virtio_gpu_object_array *objs;
> >
> >         if (!vgdev->has_virgl_3d)
> >                 return 0;
> >
> > -       r = virtio_gpu_object_reserve(qobj);
> > -       if (r)
> > -               return r;
> > +       objs = virtio_gpu_array_alloc(1);
> > +       if (!objs)
> > +               return -ENOMEM;
> > +       virtio_gpu_array_add_obj(objs, obj);
> >
> >         virtio_gpu_cmd_context_attach_resource(vgdev, vfpriv->ctx_id,
> > -                                              qobj->hw_res_handle);
> > -       virtio_gpu_object_unreserve(qobj);
> > +                                              objs);
> >         return 0;
> >  }
> >
> > @@ -132,19 +131,18 @@ void virtio_gpu_gem_object_close(struct
> > drm_gem_object *obj,
> >  {
> >         struct virtio_gpu_device *vgdev = obj->dev->dev_private;
> >         struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
> > -       struct virtio_gpu_object *qobj = gem_to_virtio_gpu_obj(obj);
> > -       int r;
> > +       struct virtio_gpu_object_array *objs;
> >
> >         if (!vgdev->has_virgl_3d)
> >                 return;
> >
> > -       r = virtio_gpu_object_reserve(qobj);
> > -       if (r)
> > +       objs = virtio_gpu_array_alloc(1);
> > +       if (!objs)
> >                 return;
> > +       virtio_gpu_array_add_obj(objs, obj);
> >
> 
> This seems to call drm_gem_object_get.  Without adding the objects to the
> vbuf, aren't we missing the corresponding drm_gem_object_put_unlocked?

Yes.  Fixed.

> Some miscellaneous comments:
> 1) Maybe virtio_gpu_array can have its own header and file?  virtgpu_drv.h
> is getting rather big.

Longer-term it might move out anyway due to becoming a generic drm helper.

> 2) What data are you trying to protect with the additional references?  Is
> it host side resources (i.e, the host GL texture or buffer object) or is it
> guest side resources (fences)?

Protect the (guest) gem object, specifically make sure the
bo->hw_res_handle stays valid.

cheers,
  Gerd


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v6 11/18] drm/virtio: switch from ttm to gem shmem helpers
  2019-07-02 14:18 ` [PATCH v6 11/18] drm/virtio: switch from ttm to gem shmem helpers Gerd Hoffmann
@ 2019-07-04 13:33   ` Emil Velikov
  2019-07-17  6:04   ` Chia-I Wu
  1 sibling, 0 replies; 43+ messages in thread
From: Emil Velikov @ 2019-07-04 13:33 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: ML dri-devel, David Airlie, open list, Gurchetan Singh,
	open list:VIRTIO GPU DRIVER

Hi Gerd,

On Tue, 2 Jul 2019 at 15:19, Gerd Hoffmann <kraxel@redhat.com> wrote:
>         .gem_prime_import_sg_table = virtgpu_gem_prime_import_sg_table,

I think that you can drop this entry-point. AFAICT its only purpose
is to return an error - something already handled by core DRM when the
function pointer is NULL.
It's not strictly related to this series, so feel free to keep it as a
separate standalone patch.

-Emil

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v6 08/18] drm/virtio: rework virtio_gpu_execbuffer_ioctl fencing
  2019-07-04 11:25     ` Gerd Hoffmann
@ 2019-07-04 18:46       ` Chia-I Wu
  2019-07-11  2:35         ` Chia-I Wu
  0 siblings, 1 reply; 43+ messages in thread
From: Chia-I Wu @ 2019-07-04 18:46 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: ML dri-devel, Gurchetan Singh, David Airlie, Daniel Vetter,
	open list:VIRTIO GPU DRIVER, open list

On Thu, Jul 4, 2019 at 4:25 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
>
>   Hi,
>
> > >         if (fence)
> > >                 virtio_gpu_fence_emit(vgdev, hdr, fence);
> > > +       if (vbuf->objs) {
> > > +               virtio_gpu_array_add_fence(vbuf->objs, &fence->f);
> > > +               virtio_gpu_array_unlock_resv(vbuf->objs);
> > > +       }
> > This is with the spinlock held.  Maybe we should move the
> > virtio_gpu_array_unlock_resv call out of the critical section.
>
> That would bring back the race ...
Right...
>
> > I am actually more concerned about virtio_gpu_array_add_fence, but it
> > is also harder to move.  Should we add a kref to the object array?
>
> Yep, refcounting would be the other way to fix the race.
>
> > This bothers me because I recently ran into a CPU-bound game with very
> > bad lock contention here.
>
> Hmm.  Any clue where this comes from?  Multiple threads competing for
> virtio buffers I guess?  Maybe we should have larger virtqueues?
The game was single-threaded.  I guess it was the game and Xorg
competing for virtio buffers.  That was also on an older kernel
without explicit fences.  The userspace had to create dummy resources
frequently to do VIRTIO_IOCTL_RESOURCE_WAIT.

I think this is fine for now as far as I am concerned.  I can look
into this more closely after this series lands.


>
> cheers,
>   Gerd
>

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v6 14/18] drm/virtio: rework virtio_gpu_transfer_from_host_ioctl fencing
  2019-07-04 11:47     ` Gerd Hoffmann
@ 2019-07-04 18:55       ` Chia-I Wu
  2019-07-05  9:01         ` Gerd Hoffmann
  0 siblings, 1 reply; 43+ messages in thread
From: Chia-I Wu @ 2019-07-04 18:55 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: ML dri-devel, Gurchetan Singh, David Airlie, Daniel Vetter,
	open list:VIRTIO GPU DRIVER, open list

On Thu, Jul 4, 2019 at 4:48 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
>
> On Wed, Jul 03, 2019 at 01:05:12PM -0700, Chia-I Wu wrote:
> > On Tue, Jul 2, 2019 at 7:19 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> > >
> > > Switch to the virtio_gpu_array_* helper workflow.
> > (just repeating my question on patch 6)
> >
> > Does this fix the obj refcount issue?  When was the issue introduced?
>
> obj refcount should be fine in both old and new code.
>
> old code:
>   drm_gem_object_lookup
>   drm_gem_object_put_unlocked
>
> new code:
>   virtio_gpu_array_from_handles
>   virtio_gpu_array_put_free (in virtio_gpu_dequeue_ctrl_func).
>
> Or did I miss something?
In the old code, drm_gem_object_put_unlocked is called before the vbuf
using the object is retired.  Isn't that what object array wants to
fix?

We get away with that because the host only sees  hw_res_handles, and
executes the commands in order.

Maybe it was me who missed something..?

>
> cheers,
>   Gerd
>

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v6 15/18] drm/virtio: rework virtio_gpu_transfer_to_host_ioctl fencing
  2019-07-04 11:51     ` Gerd Hoffmann
@ 2019-07-04 19:08       ` Chia-I Wu
  2019-07-05  9:05         ` Gerd Hoffmann
  0 siblings, 1 reply; 43+ messages in thread
From: Chia-I Wu @ 2019-07-04 19:08 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: ML dri-devel, Gurchetan Singh, David Airlie, Daniel Vetter,
	open list:VIRTIO GPU DRIVER, open list

On Thu, Jul 4, 2019 at 4:51 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
>
>   Hi,
>
> > >         convert_to_hw_box(&box, &args->box);
> > >         if (!vgdev->has_virgl_3d) {
> > >                 virtio_gpu_cmd_transfer_to_host_2d
> > > -                       (vgdev, qobj, offset,
> > > +                       (vgdev, gem_to_virtio_gpu_obj(objs->objs[0]), offset,
> > >                          box.w, box.h, box.x, box.y, NULL);
> > > +               virtio_gpu_array_put_free(objs);
> > Don't we need this in non-3D case as well?
>
> No, ...
>
> > >                 virtio_gpu_cmd_transfer_to_host_3d
> > > -                       (vgdev, qobj,
> > > +                       (vgdev,
> > >                          vfpriv ? vfpriv->ctx_id : 0, offset,
> > > -                        args->level, &box, fence);
> > > -               reservation_object_add_excl_fence(qobj->base.base.resv,
> > > -                                                 &fence->f);
> > > +                        args->level, &box, objs, fence);
>
> ... 3d case passes the objs list to virtio_gpu_cmd_transfer_to_host_3d,
> so it gets added to the vbuf and released when the command is finished.
Why doesn't this apply to virtio_gpu_cmd_transfer_to_host_2d?

When the object array was introduced, it was said that its purpose was
to keep the objects alive until the vbuf using them is retired.  That
sounded applicable to any vbuf that uses objects.


>
> cheers,
>   Gerd
>

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v6 06/18] drm/virtio: remove ttm calls from in virtio_gpu_object_{reserve,unreserve}
  2019-07-04 11:10     ` Gerd Hoffmann
@ 2019-07-04 19:17       ` Chia-I Wu
  2019-07-05  8:53         ` Gerd Hoffmann
  0 siblings, 1 reply; 43+ messages in thread
From: Chia-I Wu @ 2019-07-04 19:17 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: ML dri-devel, Gurchetan Singh, David Airlie, Daniel Vetter,
	open list:VIRTIO GPU DRIVER, open list

On Thu, Jul 4, 2019 at 4:10 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
>
>   Hi,
>
> > > -       r = ttm_bo_reserve(&bo->tbo, true, false, NULL);
> > > +       r = reservation_object_lock_interruptible(bo->gem_base.resv, NULL);
> > Can you elaborate a bit about how TTM keeps the BOs alive in, for
> > example, virtio_gpu_transfer_from_host_ioctl?  In that function, only
> > three TTM functions are called: ttm_bo_reserve, ttm_bo_validate, and
> > ttm_bo_unreserve.  I am curious how they keep the BO alive.
>
> It can't go away between reserve and unreserve, and I think it also
> can't be evicted then.  Havn't checked how ttm implements that.
Hm, but the vbuf using the BO outlives the reserve/unreserve section.
The NO_EVICT flag applies only when the BO is still alive.  Someone
needs to hold a reference to the BO to keep it alive, otherwise the BO
can go away before the vbuf is retired.

I may be wrong, but on the other hand, it seems fine for a BO to go
away before the vbuf using it is retired.  When that happens, the
driver emits a RESOURCE_UNREF vbuf which is *after* the original vbuf.


>
> cheers,
>   Gerd
>

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v6 06/18] drm/virtio: remove ttm calls from in virtio_gpu_object_{reserve,unreserve}
  2019-07-04 19:17       ` Chia-I Wu
@ 2019-07-05  8:53         ` Gerd Hoffmann
  2019-07-07  5:30           ` Chia-I Wu
  0 siblings, 1 reply; 43+ messages in thread
From: Gerd Hoffmann @ 2019-07-05  8:53 UTC (permalink / raw)
  To: Chia-I Wu
  Cc: ML dri-devel, Gurchetan Singh, David Airlie, Daniel Vetter,
	open list:VIRTIO GPU DRIVER, open list

On Thu, Jul 04, 2019 at 12:17:48PM -0700, Chia-I Wu wrote:
> On Thu, Jul 4, 2019 at 4:10 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> >
> >   Hi,
> >
> > > > -       r = ttm_bo_reserve(&bo->tbo, true, false, NULL);
> > > > +       r = reservation_object_lock_interruptible(bo->gem_base.resv, NULL);
> > > Can you elaborate a bit about how TTM keeps the BOs alive in, for
> > > example, virtio_gpu_transfer_from_host_ioctl?  In that function, only
> > > three TTM functions are called: ttm_bo_reserve, ttm_bo_validate, and
> > > ttm_bo_unreserve.  I am curious how they keep the BO alive.
> >
> > It can't go away between reserve and unreserve, and I think it also
> > can't be evicted then.  Havn't checked how ttm implements that.
> Hm, but the vbuf using the BO outlives the reserve/unreserve section.
> The NO_EVICT flag applies only when the BO is still alive.  Someone
> needs to hold a reference to the BO to keep it alive, otherwise the BO
> can go away before the vbuf is retired.

Note that patches 14+15 rework virtio_gpu_transfer_*_ioctl to keep
gem reference until the command is finished and patch 17 drops
virtio_gpu_object_{reserve,unreserve} altogether.

Maybe I should try to reorder the series, then squash 6+17 to reduce
confusion.  I suspect that'll cause quite a few conflicts though ...

cheers,
  Gerd


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v6 14/18] drm/virtio: rework virtio_gpu_transfer_from_host_ioctl fencing
  2019-07-04 18:55       ` Chia-I Wu
@ 2019-07-05  9:01         ` Gerd Hoffmann
  0 siblings, 0 replies; 43+ messages in thread
From: Gerd Hoffmann @ 2019-07-05  9:01 UTC (permalink / raw)
  To: Chia-I Wu
  Cc: ML dri-devel, Gurchetan Singh, David Airlie, Daniel Vetter,
	open list:VIRTIO GPU DRIVER, open list

On Thu, Jul 04, 2019 at 11:55:59AM -0700, Chia-I Wu wrote:
> On Thu, Jul 4, 2019 at 4:48 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> >
> > On Wed, Jul 03, 2019 at 01:05:12PM -0700, Chia-I Wu wrote:
> > > On Tue, Jul 2, 2019 at 7:19 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> > > >
> > > > Switch to the virtio_gpu_array_* helper workflow.
> > > (just repeating my question on patch 6)
> > >
> > > Does this fix the obj refcount issue?  When was the issue introduced?
> >
> > obj refcount should be fine in both old and new code.
> >
> > old code:
> >   drm_gem_object_lookup
> >   drm_gem_object_put_unlocked
> >
> > new code:
> >   virtio_gpu_array_from_handles
> >   virtio_gpu_array_put_free (in virtio_gpu_dequeue_ctrl_func).
> >
> > Or did I miss something?
> In the old code, drm_gem_object_put_unlocked is called before the vbuf
> using the object is retired.  Isn't that what object array wants to
> fix?

I think the fence keeps the bo alive.

cheers,
  Gerd


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v6 15/18] drm/virtio: rework virtio_gpu_transfer_to_host_ioctl fencing
  2019-07-04 19:08       ` Chia-I Wu
@ 2019-07-05  9:05         ` Gerd Hoffmann
  2019-07-05 14:07           ` Gerd Hoffmann
  0 siblings, 1 reply; 43+ messages in thread
From: Gerd Hoffmann @ 2019-07-05  9:05 UTC (permalink / raw)
  To: Chia-I Wu
  Cc: ML dri-devel, Gurchetan Singh, David Airlie, Daniel Vetter,
	open list:VIRTIO GPU DRIVER, open list

On Thu, Jul 04, 2019 at 12:08:14PM -0700, Chia-I Wu wrote:
> On Thu, Jul 4, 2019 at 4:51 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> >
> >   Hi,
> >
> > > >         convert_to_hw_box(&box, &args->box);
> > > >         if (!vgdev->has_virgl_3d) {
> > > >                 virtio_gpu_cmd_transfer_to_host_2d
> > > > -                       (vgdev, qobj, offset,
> > > > +                       (vgdev, gem_to_virtio_gpu_obj(objs->objs[0]), offset,
> > > >                          box.w, box.h, box.x, box.y, NULL);
> > > > +               virtio_gpu_array_put_free(objs);
> > > Don't we need this in non-3D case as well?
> >
> > No, ...
> >
> > > >                 virtio_gpu_cmd_transfer_to_host_3d
> > > > -                       (vgdev, qobj,
> > > > +                       (vgdev,
> > > >                          vfpriv ? vfpriv->ctx_id : 0, offset,
> > > > -                        args->level, &box, fence);
> > > > -               reservation_object_add_excl_fence(qobj->base.base.resv,
> > > > -                                                 &fence->f);
> > > > +                        args->level, &box, objs, fence);
> >
> > ... 3d case passes the objs list to virtio_gpu_cmd_transfer_to_host_3d,
> > so it gets added to the vbuf and released when the command is finished.
> Why doesn't this apply to virtio_gpu_cmd_transfer_to_host_2d?

Hmm, yes, makes sense to handle both the same way.

With virgl=off qemu processes the commands from the guest
synchronously, so it'll work fine as-is.  So you can't hit
the theoretical race window in practice.  But depending
on that host-side implementation detail isn't a good idea
indeed.
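
For the 2d path that would mean something like this (sketch only, it
assumes virtio_gpu_cmd_transfer_to_host_2d gets the same objs/fence
signature change as the 3d variant):

	virtio_gpu_cmd_transfer_to_host_2d
		(vgdev, offset,
		 box.w, box.h, box.x, box.y,
		 objs, NULL);

i.e. hand the object array to the vbuf there too instead of dropping
it right after queuing the command.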

cheers,
  Gerd


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v6 15/18] drm/virtio: rework virtio_gpu_transfer_to_host_ioctl fencing
  2019-07-05  9:05         ` Gerd Hoffmann
@ 2019-07-05 14:07           ` Gerd Hoffmann
  0 siblings, 0 replies; 43+ messages in thread
From: Gerd Hoffmann @ 2019-07-05 14:07 UTC (permalink / raw)
  To: Chia-I Wu
  Cc: ML dri-devel, Gurchetan Singh, David Airlie, Daniel Vetter,
	open list:VIRTIO GPU DRIVER, open list

> > > ... 3d case passes the objs list to virtio_gpu_cmd_transfer_to_host_3d,
> > > so it gets added to the vbuf and released when the command is finished.
> > Why doesn't this apply to virtio_gpu_cmd_transfer_to_host_2d?
> 
> Hmm, yes, makes sense to handle both the same way.
> 
> With virgl=off qemu processes the commands from the guest
> synchronously, so it'll work fine as-is.  So you can't hit
> the theoretical race window in practice.  But depending
> on that host-side implementation detail isn't a good idea
> indeed.

Branch with incremental fixes:
https://git.kraxel.org/cgit/linux/log/?h=drm-virtio-kill-ttm

I'll be offline three weeks now, will resume this when I'm back.

cheers,
  Gerd


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v6 06/18] drm/virtio: remove ttm calls from in virtio_gpu_object_{reserve,unreserve}
  2019-07-05  8:53         ` Gerd Hoffmann
@ 2019-07-07  5:30           ` Chia-I Wu
  0 siblings, 0 replies; 43+ messages in thread
From: Chia-I Wu @ 2019-07-07  5:30 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: ML dri-devel, Gurchetan Singh, David Airlie, Daniel Vetter,
	open list:VIRTIO GPU DRIVER, open list

[-- Attachment #1: Type: text/plain, Size: 2547 bytes --]

On Fri, Jul 5, 2019 at 1:53 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
>
> On Thu, Jul 04, 2019 at 12:17:48PM -0700, Chia-I Wu wrote:
> > On Thu, Jul 4, 2019 at 4:10 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> > >
> > >   Hi,
> > >
> > > > > -       r = ttm_bo_reserve(&bo->tbo, true, false, NULL);
> > > > > +       r = reservation_object_lock_interruptible(bo->gem_base.resv, NULL);
> > > > Can you elaborate a bit about how TTM keeps the BOs alive in, for
> > > > example, virtio_gpu_transfer_from_host_ioctl?  In that function, only
> > > > three TTM functions are called: ttm_bo_reserve, ttm_bo_validate, and
> > > > ttm_bo_unreserve.  I am curious how they keep the BO alive.
> > >
> > > It can't go away between reserve and unreserve, and I think it also
> > > can't be evicted then.  Havn't checked how ttm implements that.
> > Hm, but the vbuf using the BO outlives the reserve/unreserve section.
> > The NO_EVICT flag applies only when the BO is still alive.  Someone
> > needs to hold a reference to the BO to keep it alive, otherwise the BO
> > can go away before the vbuf is retired.
>
> Note that patches 14+15 rework virtio_gpu_transfer_*_ioctl to keep
> gem reference until the command is finished and patch 17 drops
> virtio_gpu_object_{reserve,unreserve} altogether.
>
> Maybe I should try to reorder the series, then squash 6+17 to reduce
> confusion.  I suspect that'll cause quite a few conflicts though ...
This may be well-known and is what you meant by "the fence keeps the
bo alive", but I finally realize that ttm_bo_put delays the deletion
of a BO when it is busy.

In the current design, vbuf does not hold references to its BOs.  Nor
do fences.  It is possible for a BO to lose all its references and
get virtio_gpu_gem_free_object()ed while it is still busy.  The key
is ttm_bo_put.

ttm_bo_put calls ttm_bo_cleanup_refs_or_queue to decide whether to
delete the BO immediately (when the BO is already idle) or to queue
the BO to a delayed delete list (when the BO is still busy).  If a BO
is queued to the delayed delete list, ttm_bo_delayed_delete is called
every 10ms (HZ/100 to be exact) to scan through the list and delete
idled BOs.
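
The flow, as I understand it (simplified from reading the TTM code, so
take the details with a grain of salt):

  ttm_bo_put(bo)
    -> kref_put(&bo->kref, ttm_bo_release)
      -> ttm_bo_cleanup_refs_or_queue(bo)
           BO idle: clean up and free right away
           BO busy: add to bdev->ddestroy and
                    schedule_delayed_work(&bdev->wq, HZ / 100)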

I wrote a simple test (attached) and added a bunch of printk's to confirm this.

Anyway, I believe the culprit is patch 11, when we switch from
ttm_bo_put to drm_gem_shmem_free_object to free a BO whose last
reference is gone.  The deletion becomes immediate after the switch.
We need to fix vbuf to refcount its BOs before we can do the switch.


>
> cheers,
>   Gerd
>

[-- Attachment #2: virtio-gpu-bo.c --]
[-- Type: text/x-c-code, Size: 2659 bytes --]

/* gcc -std=c11 -D_GNU_SOURCE -o virtio-gpu-bo virtio-gpu-bo.c */

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#include <fcntl.h>
#include <libdrm/drm.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

#define PIPE_BUFFER 0
#define VIRGL_FORMAT_R8_UNORM 64
#define VIRGL_BIND_CONSTANT_BUFFER (1 << 6)
#define DRM_VIRTGPU_RESOURCE_CREATE 0x04
#define DRM_IOCTL_VIRTGPU_RESOURCE_CREATE \
    DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_RESOURCE_CREATE, \
            struct drm_virtgpu_resource_create)
struct drm_virtgpu_resource_create {
    uint32_t target;
    uint32_t format;
    uint32_t bind;
    uint32_t width;
    uint32_t height;
    uint32_t depth;
    uint32_t array_size;
    uint32_t last_level;
    uint32_t nr_samples;
    uint32_t flags;
    uint32_t bo_handle;
    uint32_t res_handle;
    uint32_t size;
    uint32_t stride;
};

struct drm_virtgpu_3d_box { 
    uint32_t x, y, z;
    uint32_t w, h, d;
};

#define DRM_VIRTGPU_TRANSFER_TO_HOST 0x07
#define DRM_IOCTL_VIRTGPU_TRANSFER_TO_HOST \
    DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_TRANSFER_TO_HOST, \
            struct drm_virtgpu_3d_transfer_to_host)
struct drm_virtgpu_3d_transfer_to_host {
    uint32_t bo_handle;
    struct drm_virtgpu_3d_box box;
    uint32_t level;
    uint32_t offset; 
}; 

static uint32_t buffer_create(int fd, uint32_t size)
{
    struct drm_virtgpu_resource_create args = {
        .target = PIPE_BUFFER,
        .format = VIRGL_FORMAT_R8_UNORM,
        .bind = VIRGL_BIND_CONSTANT_BUFFER,
        .width = size,
        .height = 1,
        .depth = 1,
        .array_size = 1,
        .nr_samples = 1,
    };
    int ret = ioctl(fd, DRM_IOCTL_VIRTGPU_RESOURCE_CREATE, &args);
    assert(!ret);
    return args.bo_handle;
}

static void buffer_close(int fd, uint32_t bo)
{
    struct drm_gem_close args = {
        .handle = bo,
    };
    int ret = ioctl(fd, DRM_IOCTL_GEM_CLOSE, &args);
    assert(!ret);
}
static void transfer_to_host(int fd, uint32_t bo, uint32_t size)
{
    struct drm_virtgpu_3d_transfer_to_host args = {
        .bo_handle = bo,
        .box.w = size,
        .box.h = 1,
        .box.d = 1,
    };
    int ret = ioctl(fd, DRM_IOCTL_VIRTGPU_TRANSFER_TO_HOST, &args);
    assert(!ret);
}

int main()
{
    const uint32_t size = 1 * 1024 * 1024;

    int fd = open("/dev/dri/renderD128", O_RDWR);
    assert(fd >= 0);

    uint32_t bo = buffer_create(fd, size);
    printf("transfer and close the BO immediately...\n");
    transfer_to_host(fd, bo, size);
    buffer_close(fd, bo);

    printf("wait for 1 second...\n");
    usleep(1000 * 1000);

    close(fd);

    return 0;
}

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v6 08/18] drm/virtio: rework virtio_gpu_execbuffer_ioctl fencing
  2019-07-04 18:46       ` Chia-I Wu
@ 2019-07-11  2:35         ` Chia-I Wu
  0 siblings, 0 replies; 43+ messages in thread
From: Chia-I Wu @ 2019-07-11  2:35 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: ML dri-devel, Gurchetan Singh, David Airlie, Daniel Vetter,
	open list:VIRTIO GPU DRIVER, open list

On Thu, Jul 4, 2019 at 11:46 AM Chia-I Wu <olvaffe@gmail.com> wrote:
>
> On Thu, Jul 4, 2019 at 4:25 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> >
> >   Hi,
> >
> > > >         if (fence)
> > > >                 virtio_gpu_fence_emit(vgdev, hdr, fence);
> > > > +       if (vbuf->objs) {
> > > > +               virtio_gpu_array_add_fence(vbuf->objs, &fence->f);
> > > > +               virtio_gpu_array_unlock_resv(vbuf->objs);
> > > > +       }
> > > This is with the spinlock held.  Maybe we should move the
> > > virtio_gpu_array_unlock_resv call out of the critical section.
> >
> > That would bring back the race ...
> Right...
> >
> > > I am actually more concerned about virtio_gpu_array_add_fence, but it
> > > is also harder to move.  Should we add a kref to the object array?
> >
> > Yep, refcounting would be the other way to fix the race.
> >
> > > This bothers me because I recently ran into a CPU-bound game with very
> > > bad lock contention here.
> >
> > Hmm.  Any clue where this comes from?  Multiple threads competing for
> > virtio buffers I guess?  Maybe we should have larger virtqueues?
> The game was single-threaded.  I guess it was the game and Xorg
> competing for virtio buffers.  That was also on an older kernel
> without explicit fences.  The userspace had to create dummy resources
> frequently to do VIRTIO_IOCTL_RESOURCE_WAIT.
>
> I think this is fine for now as far as I am concerned.  I can look
> into this more closely after this series lands.
It was virtio_gpu_dequeue_ctrl_func who wanted to grab the lock to
handle the responses.  I sent a patch for it

  https://patchwork.freedesktop.org/series/63529/

>
>
> >
> > cheers,
> >   Gerd
> >

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v6 11/18] drm/virtio: switch from ttm to gem shmem helpers
  2019-07-02 14:18 ` [PATCH v6 11/18] drm/virtio: switch from ttm to gem shmem helpers Gerd Hoffmann
  2019-07-04 13:33   ` Emil Velikov
@ 2019-07-17  6:04   ` Chia-I Wu
  1 sibling, 0 replies; 43+ messages in thread
From: Chia-I Wu @ 2019-07-17  6:04 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: ML dri-devel, Gurchetan Singh, David Airlie, Daniel Vetter,
	open list, open list:VIRTIO GPU DRIVER

On Tue, Jul 2, 2019 at 7:19 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
>
> virtio-gpu basically needs a sg_table for the bo, to tell the host where
> the backing pages for the object are.  So the gem shmem helpers are a
> perfect fit.  Some drm_gem_object_funcs need thin wrappers to update the
> host state, but otherwise the helpers handle everything just fine.
>
> Once the fencing was sorted the switch was surprisingly easy and for the
> most part just removing the ttm code.
>
> v4: fix drm_gem_object_funcs name.
>
> Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> ---
>  drivers/gpu/drm/virtio/virtgpu_drv.h    |  52 +---
>  drivers/gpu/drm/virtio/virtgpu_drv.c    |  20 +-
>  drivers/gpu/drm/virtio/virtgpu_gem.c    |  16 +-
>  drivers/gpu/drm/virtio/virtgpu_ioctl.c  |  19 +-
>  drivers/gpu/drm/virtio/virtgpu_kms.c    |   9 -
>  drivers/gpu/drm/virtio/virtgpu_object.c | 146 ++++--------
>  drivers/gpu/drm/virtio/virtgpu_prime.c  |  37 ---
>  drivers/gpu/drm/virtio/virtgpu_ttm.c    | 304 ------------------------
>  drivers/gpu/drm/virtio/virtgpu_vq.c     |  24 +-
>  drivers/gpu/drm/virtio/Kconfig          |   2 +-
>  drivers/gpu/drm/virtio/Makefile         |   2 +-
>  11 files changed, 82 insertions(+), 549 deletions(-)
>  delete mode 100644 drivers/gpu/drm/virtio/virtgpu_ttm.c
>
> diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
> index 12168067a874..f8a586029400 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_drv.h
> +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
> @@ -33,14 +33,11 @@
>
>  #include <drm/drmP.h>
>  #include <drm/drm_gem.h>
> +#include <drm/drm_gem_shmem_helper.h>
>  #include <drm/drm_atomic.h>
>  #include <drm/drm_encoder.h>
>  #include <drm/drm_fb_helper.h>
>  #include <drm/drm_probe_helper.h>
> -#include <drm/ttm/ttm_bo_api.h>
> -#include <drm/ttm/ttm_bo_driver.h>
> -#include <drm/ttm/ttm_placement.h>
> -#include <drm/ttm/ttm_module.h>
>
>  #define DRIVER_NAME "virtio_gpu"
>  #define DRIVER_DESC "virtio GPU"
> @@ -68,21 +65,16 @@ struct virtio_gpu_object_params {
>  };
>
>  struct virtio_gpu_object {
> -       struct drm_gem_object gem_base;
> +       struct drm_gem_shmem_object base;
>         uint32_t hw_res_handle;
>
>         struct sg_table *pages;
>         uint32_t mapped;
> -       void *vmap;
>         bool dumb;
> -       struct ttm_place                placement_code;
> -       struct ttm_placement            placement;
> -       struct ttm_buffer_object        tbo;
> -       struct ttm_bo_kmap_obj          kmap;
>         bool created;
>  };
>  #define gem_to_virtio_gpu_obj(gobj) \
> -       container_of((gobj), struct virtio_gpu_object, gem_base)
> +       container_of((gobj), struct virtio_gpu_object, base.base)
>
>  struct virtio_gpu_object_array {
>         struct ww_acquire_ctx ticket;
> @@ -153,10 +145,6 @@ struct virtio_gpu_framebuffer {
>  #define to_virtio_gpu_framebuffer(x) \
>         container_of(x, struct virtio_gpu_framebuffer, base)
>
> -struct virtio_gpu_mman {
> -       struct ttm_bo_device            bdev;
> -};
> -
>  struct virtio_gpu_queue {
>         struct virtqueue *vq;
>         spinlock_t qlock;
> @@ -185,8 +173,6 @@ struct virtio_gpu_device {
>
>         struct virtio_device *vdev;
>
> -       struct virtio_gpu_mman mman;
> -
>         struct virtio_gpu_output outputs[VIRTIO_GPU_MAX_SCANOUTS];
>         uint32_t num_scanouts;
>
> @@ -357,11 +343,6 @@ struct drm_plane *virtio_gpu_plane_init(struct virtio_gpu_device *vgdev,
>                                         enum drm_plane_type type,
>                                         int index);
>
> -/* virtio_gpu_ttm.c */
> -int virtio_gpu_ttm_init(struct virtio_gpu_device *vgdev);
> -void virtio_gpu_ttm_fini(struct virtio_gpu_device *vgdev);
> -int virtio_gpu_mmap(struct file *filp, struct vm_area_struct *vma);
> -
>  /* virtio_gpu_fence.c */
>  bool virtio_fence_signaled(struct dma_fence *f);
>  struct virtio_gpu_fence *virtio_gpu_fence_alloc(
> @@ -373,58 +354,47 @@ void virtio_gpu_fence_event_process(struct virtio_gpu_device *vdev,
>                                     u64 last_seq);
>
>  /* virtio_gpu_object */
> +struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev,
> +                                               size_t size);
>  int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
>                              struct virtio_gpu_object_params *params,
>                              struct virtio_gpu_object **bo_ptr,
>                              struct virtio_gpu_fence *fence);
> -void virtio_gpu_object_kunmap(struct virtio_gpu_object *bo);
> -int virtio_gpu_object_kmap(struct virtio_gpu_object *bo);
> -int virtio_gpu_object_get_sg_table(struct virtio_gpu_device *qdev,
> -                                  struct virtio_gpu_object *bo);
> -void virtio_gpu_object_free_sg_table(struct virtio_gpu_object *bo);
>
>  /* virtgpu_prime.c */
> -struct sg_table *virtgpu_gem_prime_get_sg_table(struct drm_gem_object *obj);
>  struct drm_gem_object *virtgpu_gem_prime_import_sg_table(
>         struct drm_device *dev, struct dma_buf_attachment *attach,
>         struct sg_table *sgt);
> -void *virtgpu_gem_prime_vmap(struct drm_gem_object *obj);
> -void virtgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> -int virtgpu_gem_prime_mmap(struct drm_gem_object *obj,
> -                          struct vm_area_struct *vma);
>
>  static inline struct virtio_gpu_object*
>  virtio_gpu_object_ref(struct virtio_gpu_object *bo)
>  {
> -       ttm_bo_get(&bo->tbo);
> +       drm_gem_object_get(&bo->base.base);
>         return bo;
>  }
>
>  static inline void virtio_gpu_object_unref(struct virtio_gpu_object **bo)
>  {
> -       struct ttm_buffer_object *tbo;
> -
>         if ((*bo) == NULL)
>                 return;
> -       tbo = &((*bo)->tbo);
> -       ttm_bo_put(tbo);
> +       drm_gem_object_put(&(*bo)->base.base);
>         *bo = NULL;
>  }
>
>  static inline u64 virtio_gpu_object_mmap_offset(struct virtio_gpu_object *bo)
>  {
> -       return drm_vma_node_offset_addr(&bo->tbo.vma_node);
> +       return drm_vma_node_offset_addr(&bo->base.base.vma_node);
>  }
>
>  static inline int virtio_gpu_object_reserve(struct virtio_gpu_object *bo)
>  {
>         int r;
>
> -       r = reservation_object_lock_interruptible(bo->gem_base.resv, NULL);
> +       r = reservation_object_lock_interruptible(bo->base.base.resv, NULL);
>         if (unlikely(r != 0)) {
>                 if (r != -EINTR) {
>                         struct virtio_gpu_device *qdev =
> -                               bo->gem_base.dev->dev_private;
> +                               bo->base.base.dev->dev_private;
>                         dev_err(qdev->dev, "%p reserve failed\n", bo);
>                 }
>                 return r;
> @@ -434,7 +404,7 @@ static inline int virtio_gpu_object_reserve(struct virtio_gpu_object *bo)
>
>  static inline void virtio_gpu_object_unreserve(struct virtio_gpu_object *bo)
>  {
> -       reservation_object_unlock(bo->gem_base.resv);
> +       reservation_object_unlock(bo->base.base.resv);
>  }
>
>  /* virgl debufs */
> diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.c b/drivers/gpu/drm/virtio/virtgpu_drv.c
> index 99bcd290f1fb..8ce193e2800b 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_drv.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_drv.c
> @@ -182,17 +182,7 @@ MODULE_AUTHOR("Dave Airlie <airlied@redhat.com>");
>  MODULE_AUTHOR("Gerd Hoffmann <kraxel@redhat.com>");
>  MODULE_AUTHOR("Alon Levy");
>
> -static const struct file_operations virtio_gpu_driver_fops = {
> -       .owner = THIS_MODULE,
> -       .open = drm_open,
> -       .mmap = virtio_gpu_mmap,
> -       .poll = drm_poll,
> -       .read = drm_read,
> -       .unlocked_ioctl = drm_ioctl,
> -       .release = drm_release,
> -       .compat_ioctl = drm_compat_ioctl,
> -       .llseek = noop_llseek,
> -};
> +DEFINE_DRM_GEM_SHMEM_FOPS(virtio_gpu_driver_fops);
We used to have TTM_PL_FLAG_CACHED as the default caching, so BO mmaps were
cached.  With this change, mmap appears to go through drm_gem_mmap_obj
indirectly, which sets the mapping to write-combine.
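
A minimal sketch of what I think the new mmap path ends up doing to the vma
(the helper internals and the exact flag set are my assumption, not taken
from this patch):

#include <drm/drm_gem.h>
#include <linux/mm.h>

/*
 * Illustration only: roughly the vma setup I expect from the
 * drm_gem_mmap_obj() / drm_gem_shmem_mmap() path selected by
 * DEFINE_DRM_GEM_SHMEM_FOPS.
 */
static int virtio_gpu_mmap_sketch(struct drm_gem_object *obj,
                                  struct vm_area_struct *vma)
{
        vma->vm_private_data = obj;
        vma->vm_flags |= VM_MIXEDMAP | VM_DONTEXPAND | VM_DONTDUMP;
        vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
        /*
         * The old TTM path gave us a cached mapping here; the shmem
         * helper forces write-combine instead.
         */
        vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
        return 0;
}

If that is right, guest CPU access through a BO mmap is now uncached
write-combine, which seems worth double-checking for readback performance
and for any cached-mapping assumptions elsewhere in the driver.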

>
>  static struct drm_driver driver = {
>         .driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_RENDER | DRIVER_ATOMIC,
> @@ -207,15 +197,9 @@ static struct drm_driver driver = {
>  #endif
>         .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
>         .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
> -       .gem_prime_get_sg_table = virtgpu_gem_prime_get_sg_table,
>         .gem_prime_import_sg_table = virtgpu_gem_prime_import_sg_table,
> -       .gem_prime_vmap = virtgpu_gem_prime_vmap,
> -       .gem_prime_vunmap = virtgpu_gem_prime_vunmap,
> -       .gem_prime_mmap = virtgpu_gem_prime_mmap,
>
> -       .gem_free_object_unlocked = virtio_gpu_gem_free_object,
> -       .gem_open_object = virtio_gpu_gem_object_open,
> -       .gem_close_object = virtio_gpu_gem_object_close,
> +       .gem_create_object = virtio_gpu_create_object,
>         .fops = &virtio_gpu_driver_fops,
>
>         .ioctls = virtio_gpu_ioctls,
> diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
> index e88df5e06d06..8a95864404ca 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_gem.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
> @@ -26,14 +26,6 @@
>  #include <drm/drmP.h>
>  #include "virtgpu_drv.h"
>
> -void virtio_gpu_gem_free_object(struct drm_gem_object *gem_obj)
> -{
> -       struct virtio_gpu_object *obj = gem_to_virtio_gpu_obj(gem_obj);
> -
> -       if (obj)
> -               virtio_gpu_object_unref(&obj);
> -}
> -
>  struct virtio_gpu_object*
>  virtio_gpu_alloc_object(struct drm_device *dev,
>                         struct virtio_gpu_object_params *params,
> @@ -64,16 +56,16 @@ int virtio_gpu_gem_create(struct drm_file *file,
>         if (IS_ERR(obj))
>                 return PTR_ERR(obj);
>
> -       ret = drm_gem_handle_create(file, &obj->gem_base, &handle);
> +       ret = drm_gem_handle_create(file, &obj->base.base, &handle);
>         if (ret) {
> -               drm_gem_object_release(&obj->gem_base);
> +               drm_gem_object_release(&obj->base.base);
>                 return ret;
>         }
>
> -       *obj_p = &obj->gem_base;
> +       *obj_p = &obj->base.base;
>
>         /* drop reference from allocate - handle holds it now */
> -       drm_gem_object_put_unlocked(&obj->gem_base);
> +       drm_gem_object_put_unlocked(&obj->base.base);
>
>         *handle_p = handle;
>         return 0;
> diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> index 1fa8aa91d590..107057816e1f 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> @@ -27,7 +27,6 @@
>
>  #include <drm/drmP.h>
>  #include <drm/virtgpu_drm.h>
> -#include <drm/ttm/ttm_execbuf_util.h>
>  #include <linux/sync_file.h>
>
>  #include "virtgpu_drv.h"
> @@ -259,7 +258,7 @@ static int virtio_gpu_resource_create_ioctl(struct drm_device *dev, void *data,
>         dma_fence_put(&fence->f);
>         if (IS_ERR(qobj))
>                 return PTR_ERR(qobj);
> -       obj = &qobj->gem_base;
> +       obj = &qobj->base.base;
>
>         ret = drm_gem_handle_create(file_priv, obj, &handle);
>         if (ret) {
> @@ -286,7 +285,7 @@ static int virtio_gpu_resource_info_ioctl(struct drm_device *dev, void *data,
>
>         qobj = gem_to_virtio_gpu_obj(gobj);
>
> -       ri->size = qobj->gem_base.size;
> +       ri->size = qobj->base.base.size;
>         ri->res_handle = qobj->hw_res_handle;
>         drm_gem_object_put_unlocked(gobj);
>         return 0;
> @@ -299,7 +298,6 @@ static int virtio_gpu_transfer_from_host_ioctl(struct drm_device *dev,
>         struct virtio_gpu_device *vgdev = dev->dev_private;
>         struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
>         struct drm_virtgpu_3d_transfer_from_host *args = data;
> -       struct ttm_operation_ctx ctx = { true, false };
>         struct drm_gem_object *gobj = NULL;
>         struct virtio_gpu_object *qobj = NULL;
>         struct virtio_gpu_fence *fence;
> @@ -320,10 +318,6 @@ static int virtio_gpu_transfer_from_host_ioctl(struct drm_device *dev,
>         if (ret)
>                 goto out;
>
> -       ret = ttm_bo_validate(&qobj->tbo, &qobj->placement, &ctx);
> -       if (unlikely(ret))
> -               goto out_unres;
> -
>         convert_to_hw_box(&box, &args->box);
>
>         fence = virtio_gpu_fence_alloc(vgdev);
> @@ -335,7 +329,7 @@ static int virtio_gpu_transfer_from_host_ioctl(struct drm_device *dev,
>                 (vgdev, qobj->hw_res_handle,
>                  vfpriv->ctx_id, offset, args->level,
>                  &box, fence);
> -       reservation_object_add_excl_fence(qobj->tbo.resv,
> +       reservation_object_add_excl_fence(qobj->base.base.resv,
>                                           &fence->f);
>
>         dma_fence_put(&fence->f);
> @@ -352,7 +346,6 @@ static int virtio_gpu_transfer_to_host_ioctl(struct drm_device *dev, void *data,
>         struct virtio_gpu_device *vgdev = dev->dev_private;
>         struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
>         struct drm_virtgpu_3d_transfer_to_host *args = data;
> -       struct ttm_operation_ctx ctx = { true, false };
>         struct drm_gem_object *gobj = NULL;
>         struct virtio_gpu_object *qobj = NULL;
>         struct virtio_gpu_fence *fence;
> @@ -370,10 +363,6 @@ static int virtio_gpu_transfer_to_host_ioctl(struct drm_device *dev, void *data,
>         if (ret)
>                 goto out;
>
> -       ret = ttm_bo_validate(&qobj->tbo, &qobj->placement, &ctx);
> -       if (unlikely(ret))
> -               goto out_unres;
> -
>         convert_to_hw_box(&box, &args->box);
>         if (!vgdev->has_virgl_3d) {
>                 virtio_gpu_cmd_transfer_to_host_2d
> @@ -389,7 +378,7 @@ static int virtio_gpu_transfer_to_host_ioctl(struct drm_device *dev, void *data,
>                         (vgdev, qobj,
>                          vfpriv ? vfpriv->ctx_id : 0, offset,
>                          args->level, &box, fence);
> -               reservation_object_add_excl_fence(qobj->tbo.resv,
> +               reservation_object_add_excl_fence(qobj->base.base.resv,
>                                                   &fence->f);
>                 dma_fence_put(&fence->f);
>         }
> diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c
> index 84b6a6bf00c6..0bc6abaeafca 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_kms.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_kms.c
> @@ -171,12 +171,6 @@ int virtio_gpu_init(struct drm_device *dev)
>                 goto err_vbufs;
>         }
>
> -       ret = virtio_gpu_ttm_init(vgdev);
> -       if (ret) {
> -               DRM_ERROR("failed to init ttm %d\n", ret);
> -               goto err_ttm;
> -       }
> -
>         /* get display info */
>         virtio_cread(vgdev->vdev, struct virtio_gpu_config,
>                      num_scanouts, &num_scanouts);
> @@ -208,8 +202,6 @@ int virtio_gpu_init(struct drm_device *dev)
>         return 0;
>
>  err_scanouts:
> -       virtio_gpu_ttm_fini(vgdev);
> -err_ttm:
>         virtio_gpu_free_vbufs(vgdev);
>  err_vbufs:
>         vgdev->vdev->config->del_vqs(vgdev->vdev);
> @@ -240,7 +232,6 @@ void virtio_gpu_deinit(struct drm_device *dev)
>         vgdev->vdev->config->del_vqs(vgdev->vdev);
>
>         virtio_gpu_modeset_fini(vgdev);
> -       virtio_gpu_ttm_fini(vgdev);
>         virtio_gpu_free_vbufs(vgdev);
>         virtio_gpu_cleanup_cap_cache(vgdev);
>         kfree(vgdev->capsets);
> diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
> index 2902487f051a..fb9d05a68f4c 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_object.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_object.c
> @@ -23,8 +23,6 @@
>   * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
>   */
>
> -#include <drm/ttm/ttm_execbuf_util.h>
> -
>  #include "virtgpu_drv.h"
>
>  static int virtio_gpu_resource_id_get(struct virtio_gpu_device *vgdev,
> @@ -57,39 +55,45 @@ static void virtio_gpu_resource_id_put(struct virtio_gpu_device *vgdev, uint32_t
>  #endif
>  }
>
> -static void virtio_gpu_ttm_bo_destroy(struct ttm_buffer_object *tbo)
> +static void virtio_gpu_free_object(struct drm_gem_object *obj)
>  {
> -       struct virtio_gpu_object *bo;
> -       struct virtio_gpu_device *vgdev;
> -
> -       bo = container_of(tbo, struct virtio_gpu_object, tbo);
> -       vgdev = (struct virtio_gpu_device *)bo->gem_base.dev->dev_private;
> +       struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
> +       struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private;
>
> +       if (bo->pages)
> +               virtio_gpu_object_detach(vgdev, bo);
>         if (bo->created)
>                 virtio_gpu_cmd_unref_resource(vgdev, bo->hw_res_handle);
> -       if (bo->pages)
> -               virtio_gpu_object_free_sg_table(bo);
> -       if (bo->vmap)
> -               virtio_gpu_object_kunmap(bo);
> -       drm_gem_object_release(&bo->gem_base);
>         virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
> -       kfree(bo);
> +
> +       drm_gem_shmem_free_object(obj);
>  }
>
> -static void virtio_gpu_init_ttm_placement(struct virtio_gpu_object *vgbo)
> +static const struct drm_gem_object_funcs virtio_gpu_gem_funcs = {
> +       .free = virtio_gpu_free_object,
> +       .open = virtio_gpu_gem_object_open,
> +       .close = virtio_gpu_gem_object_close,
> +
> +       .print_info = drm_gem_shmem_print_info,
> +       .pin = drm_gem_shmem_pin,
> +       .unpin = drm_gem_shmem_unpin,
> +       .get_sg_table = drm_gem_shmem_get_sg_table,
> +       .vmap = drm_gem_shmem_vmap,
> +       .vunmap = drm_gem_shmem_vunmap,
> +       .vm_ops = &drm_gem_shmem_vm_ops,
> +};
> +
> +struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev,
> +                                               size_t size)
>  {
> -       u32 c = 1;
> +       struct virtio_gpu_object *bo;
>
> -       vgbo->placement.placement = &vgbo->placement_code;
> -       vgbo->placement.busy_placement = &vgbo->placement_code;
> -       vgbo->placement_code.fpfn = 0;
> -       vgbo->placement_code.lpfn = 0;
> -       vgbo->placement_code.flags =
> -               TTM_PL_MASK_CACHING | TTM_PL_FLAG_TT |
> -               TTM_PL_FLAG_NO_EVICT;
> -       vgbo->placement.num_placement = c;
> -       vgbo->placement.num_busy_placement = c;
> +       bo = kzalloc(sizeof(*bo), GFP_KERNEL);
> +       if (!bo)
> +               return NULL;
>
> +       bo->base.base.funcs = &virtio_gpu_gem_funcs;
> +       return &bo->base.base;
>  }
>
>  int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
> @@ -98,27 +102,22 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
>                              struct virtio_gpu_fence *fence)
>  {
>         struct virtio_gpu_object_array *objs = NULL;
> +       struct drm_gem_shmem_object *shmem_obj;
>         struct virtio_gpu_object *bo;
> -       size_t acc_size;
>         int ret;
>
>         *bo_ptr = NULL;
>
> -       acc_size = ttm_bo_dma_acc_size(&vgdev->mman.bdev, params->size,
> -                                      sizeof(struct virtio_gpu_object));
> +       params->size = roundup(params->size, PAGE_SIZE);
> +       shmem_obj = drm_gem_shmem_create(vgdev->ddev, params->size);
> +       if (IS_ERR(shmem_obj))
> +               return PTR_ERR(shmem_obj);
> +       bo = gem_to_virtio_gpu_obj(&shmem_obj->base);
>
> -       bo = kzalloc(sizeof(struct virtio_gpu_object), GFP_KERNEL);
> -       if (bo == NULL)
> -               return -ENOMEM;
>         ret = virtio_gpu_resource_id_get(vgdev, &bo->hw_res_handle);
>         if (ret < 0)
>                 goto err_free_gem;
>
> -       params->size = roundup(params->size, PAGE_SIZE);
> -       ret = drm_gem_object_init(vgdev->ddev, &bo->gem_base, params->size);
> -       if (ret != 0)
> -               goto err_put_id;
> -
>         bo->dumb = params->dumb;
>
>         if (fence) {
> @@ -126,7 +125,7 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
>                 objs = virtio_gpu_array_alloc(1);
>                 if (!objs)
>                         goto err_put_id;
> -               virtio_gpu_array_add_obj(objs, &bo->gem_base);
> +               virtio_gpu_array_add_obj(objs, &bo->base.base);
>
>                 ret = virtio_gpu_array_lock_resv(objs);
>                 if (ret != 0)
> @@ -141,15 +140,11 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
>                                                objs, fence);
>         }
>
> -       virtio_gpu_init_ttm_placement(bo);
> -       ret = ttm_bo_init(&vgdev->mman.bdev, &bo->tbo, params->size,
> -                         ttm_bo_type_device, &bo->placement, 0,
> -                         true, acc_size, NULL,
> -                         bo->gem_base.resv,
> -                         &virtio_gpu_ttm_bo_destroy);
> -       /* ttm_bo_init failure will call the destroy */
> -       if (ret != 0)
> +       ret = virtio_gpu_object_attach(vgdev, bo, NULL);
> +       if (ret != 0) {
> +               virtio_gpu_free_object(&shmem_obj->base);
>                 return ret;
> +       }
>
>         *bo_ptr = bo;
>         return 0;
> @@ -159,65 +154,6 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
>  err_put_id:
>         virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
>  err_free_gem:
> -       kfree(bo);
> +       drm_gem_shmem_free_object(&shmem_obj->base);
>         return ret;
>  }
> -
> -void virtio_gpu_object_kunmap(struct virtio_gpu_object *bo)
> -{
> -       bo->vmap = NULL;
> -       ttm_bo_kunmap(&bo->kmap);
> -}
> -
> -int virtio_gpu_object_kmap(struct virtio_gpu_object *bo)
> -{
> -       bool is_iomem;
> -       int r;
> -
> -       WARN_ON(bo->vmap);
> -
> -       r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap);
> -       if (r)
> -               return r;
> -       bo->vmap = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
> -       return 0;
> -}
> -
> -int virtio_gpu_object_get_sg_table(struct virtio_gpu_device *qdev,
> -                                  struct virtio_gpu_object *bo)
> -{
> -       int ret;
> -       struct page **pages = bo->tbo.ttm->pages;
> -       int nr_pages = bo->tbo.num_pages;
> -       struct ttm_operation_ctx ctx = {
> -               .interruptible = false,
> -               .no_wait_gpu = false
> -       };
> -
> -       /* wtf swapping */
> -       if (bo->pages)
> -               return 0;
> -
> -       if (bo->tbo.ttm->state == tt_unpopulated)
> -               bo->tbo.ttm->bdev->driver->ttm_tt_populate(bo->tbo.ttm, &ctx);
> -       bo->pages = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
> -       if (!bo->pages)
> -               goto out;
> -
> -       ret = sg_alloc_table_from_pages(bo->pages, pages, nr_pages, 0,
> -                                       nr_pages << PAGE_SHIFT, GFP_KERNEL);
> -       if (ret)
> -               goto out;
> -       return 0;
> -out:
> -       kfree(bo->pages);
> -       bo->pages = NULL;
> -       return -ENOMEM;
> -}
> -
> -void virtio_gpu_object_free_sg_table(struct virtio_gpu_object *bo)
> -{
> -       sg_free_table(bo->pages);
> -       kfree(bo->pages);
> -       bo->pages = NULL;
> -}
> diff --git a/drivers/gpu/drm/virtio/virtgpu_prime.c b/drivers/gpu/drm/virtio/virtgpu_prime.c
> index 8fbf71bd0c5e..18a155cd08d5 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_prime.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_prime.c
> @@ -28,46 +28,9 @@
>   * device that might share buffers with virtgpu
>   */
>
> -struct sg_table *virtgpu_gem_prime_get_sg_table(struct drm_gem_object *obj)
> -{
> -       struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
> -
> -       if (!bo->tbo.ttm->pages || !bo->tbo.ttm->num_pages)
> -               /* should not happen */
> -               return ERR_PTR(-EINVAL);
> -
> -       return drm_prime_pages_to_sg(bo->tbo.ttm->pages,
> -                                    bo->tbo.ttm->num_pages);
> -}
> -
>  struct drm_gem_object *virtgpu_gem_prime_import_sg_table(
>         struct drm_device *dev, struct dma_buf_attachment *attach,
>         struct sg_table *table)
>  {
>         return ERR_PTR(-ENODEV);
>  }
> -
> -void *virtgpu_gem_prime_vmap(struct drm_gem_object *obj)
> -{
> -       struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
> -       int ret;
> -
> -       ret = virtio_gpu_object_kmap(bo);
> -       if (ret)
> -               return NULL;
> -       return bo->vmap;
> -}
> -
> -void virtgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> -       virtio_gpu_object_kunmap(gem_to_virtio_gpu_obj(obj));
> -}
> -
> -int virtgpu_gem_prime_mmap(struct drm_gem_object *obj,
> -                          struct vm_area_struct *vma)
> -{
> -       struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
> -
> -       bo->gem_base.vma_node.vm_node.start = bo->tbo.vma_node.vm_node.start;
> -       return drm_gem_prime_mmap(obj, vma);
> -}
> diff --git a/drivers/gpu/drm/virtio/virtgpu_ttm.c b/drivers/gpu/drm/virtio/virtgpu_ttm.c
> deleted file mode 100644
> index 300ef3a83538..000000000000
> --- a/drivers/gpu/drm/virtio/virtgpu_ttm.c
> +++ /dev/null
> @@ -1,304 +0,0 @@
> -/*
> - * Copyright (C) 2015 Red Hat, Inc.
> - * All Rights Reserved.
> - *
> - * Authors:
> - *    Dave Airlie
> - *    Alon Levy
> - *
> - * Permission is hereby granted, free of charge, to any person obtaining a
> - * copy of this software and associated documentation files (the "Software"),
> - * to deal in the Software without restriction, including without limitation
> - * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> - * and/or sell copies of the Software, and to permit persons to whom the
> - * Software is furnished to do so, subject to the following conditions:
> - *
> - * The above copyright notice and this permission notice shall be included in
> - * all copies or substantial portions of the Software.
> - *
> - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> - * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> - * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> - * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> - * OTHER DEALINGS IN THE SOFTWARE.
> - */
> -
> -#include <drm/ttm/ttm_bo_api.h>
> -#include <drm/ttm/ttm_bo_driver.h>
> -#include <drm/ttm/ttm_placement.h>
> -#include <drm/ttm/ttm_page_alloc.h>
> -#include <drm/ttm/ttm_module.h>
> -#include <drm/drmP.h>
> -#include <drm/drm.h>
> -#include <drm/virtgpu_drm.h>
> -#include "virtgpu_drv.h"
> -
> -#include <linux/delay.h>
> -
> -static struct
> -virtio_gpu_device *virtio_gpu_get_vgdev(struct ttm_bo_device *bdev)
> -{
> -       struct virtio_gpu_mman *mman;
> -       struct virtio_gpu_device *vgdev;
> -
> -       mman = container_of(bdev, struct virtio_gpu_mman, bdev);
> -       vgdev = container_of(mman, struct virtio_gpu_device, mman);
> -       return vgdev;
> -}
> -
> -int virtio_gpu_mmap(struct file *filp, struct vm_area_struct *vma)
> -{
> -       struct drm_file *file_priv;
> -       struct virtio_gpu_device *vgdev;
> -       int r;
> -
> -       file_priv = filp->private_data;
> -       vgdev = file_priv->minor->dev->dev_private;
> -       if (vgdev == NULL) {
> -               DRM_ERROR(
> -                "filp->private_data->minor->dev->dev_private == NULL\n");
> -               return -EINVAL;
> -       }
> -       r = ttm_bo_mmap(filp, vma, &vgdev->mman.bdev);
> -
> -       return r;
> -}
> -
> -static int virtio_gpu_invalidate_caches(struct ttm_bo_device *bdev,
> -                                       uint32_t flags)
> -{
> -       return 0;
> -}
> -
> -static int ttm_bo_man_get_node(struct ttm_mem_type_manager *man,
> -                              struct ttm_buffer_object *bo,
> -                              const struct ttm_place *place,
> -                              struct ttm_mem_reg *mem)
> -{
> -       mem->mm_node = (void *)1;
> -       return 0;
> -}
> -
> -static void ttm_bo_man_put_node(struct ttm_mem_type_manager *man,
> -                               struct ttm_mem_reg *mem)
> -{
> -       mem->mm_node = (void *)NULL;
> -}
> -
> -static int ttm_bo_man_init(struct ttm_mem_type_manager *man,
> -                          unsigned long p_size)
> -{
> -       return 0;
> -}
> -
> -static int ttm_bo_man_takedown(struct ttm_mem_type_manager *man)
> -{
> -       return 0;
> -}
> -
> -static void ttm_bo_man_debug(struct ttm_mem_type_manager *man,
> -                            struct drm_printer *printer)
> -{
> -}
> -
> -static const struct ttm_mem_type_manager_func virtio_gpu_bo_manager_func = {
> -       .init = ttm_bo_man_init,
> -       .takedown = ttm_bo_man_takedown,
> -       .get_node = ttm_bo_man_get_node,
> -       .put_node = ttm_bo_man_put_node,
> -       .debug = ttm_bo_man_debug
> -};
> -
> -static int virtio_gpu_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
> -                                   struct ttm_mem_type_manager *man)
> -{
> -       switch (type) {
> -       case TTM_PL_SYSTEM:
> -               /* System memory */
> -               man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
> -               man->available_caching = TTM_PL_MASK_CACHING;
> -               man->default_caching = TTM_PL_FLAG_CACHED;
> -               break;
> -       case TTM_PL_TT:
> -               man->func = &virtio_gpu_bo_manager_func;
> -               man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
> -               man->available_caching = TTM_PL_MASK_CACHING;
> -               man->default_caching = TTM_PL_FLAG_CACHED;
> -               break;
> -       default:
> -               DRM_ERROR("Unsupported memory type %u\n", (unsigned int)type);
> -               return -EINVAL;
> -       }
> -       return 0;
> -}
> -
> -static void virtio_gpu_evict_flags(struct ttm_buffer_object *bo,
> -                               struct ttm_placement *placement)
> -{
> -       static const struct ttm_place placements = {
> -               .fpfn  = 0,
> -               .lpfn  = 0,
> -               .flags = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM,
> -       };
> -
> -       placement->placement = &placements;
> -       placement->busy_placement = &placements;
> -       placement->num_placement = 1;
> -       placement->num_busy_placement = 1;
> -}
> -
> -static int virtio_gpu_verify_access(struct ttm_buffer_object *bo,
> -                                   struct file *filp)
> -{
> -       return 0;
> -}
> -
> -static int virtio_gpu_ttm_io_mem_reserve(struct ttm_bo_device *bdev,
> -                                        struct ttm_mem_reg *mem)
> -{
> -       struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type];
> -
> -       mem->bus.addr = NULL;
> -       mem->bus.offset = 0;
> -       mem->bus.size = mem->num_pages << PAGE_SHIFT;
> -       mem->bus.base = 0;
> -       mem->bus.is_iomem = false;
> -       if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE))
> -               return -EINVAL;
> -       switch (mem->mem_type) {
> -       case TTM_PL_SYSTEM:
> -       case TTM_PL_TT:
> -               /* system memory */
> -               return 0;
> -       default:
> -               return -EINVAL;
> -       }
> -       return 0;
> -}
> -
> -static void virtio_gpu_ttm_io_mem_free(struct ttm_bo_device *bdev,
> -                                      struct ttm_mem_reg *mem)
> -{
> -}
> -
> -/*
> - * TTM backend functions.
> - */
> -struct virtio_gpu_ttm_tt {
> -       struct ttm_dma_tt               ttm;
> -       struct virtio_gpu_object        *obj;
> -};
> -
> -static int virtio_gpu_ttm_tt_bind(struct ttm_tt *ttm,
> -                                 struct ttm_mem_reg *bo_mem)
> -{
> -       struct virtio_gpu_ttm_tt *gtt =
> -               container_of(ttm, struct virtio_gpu_ttm_tt, ttm.ttm);
> -       struct virtio_gpu_device *vgdev =
> -               virtio_gpu_get_vgdev(gtt->obj->tbo.bdev);
> -
> -       virtio_gpu_object_attach(vgdev, gtt->obj, NULL);
> -       return 0;
> -}
> -
> -static int virtio_gpu_ttm_tt_unbind(struct ttm_tt *ttm)
> -{
> -       struct virtio_gpu_ttm_tt *gtt =
> -               container_of(ttm, struct virtio_gpu_ttm_tt, ttm.ttm);
> -       struct virtio_gpu_device *vgdev =
> -               virtio_gpu_get_vgdev(gtt->obj->tbo.bdev);
> -
> -       virtio_gpu_object_detach(vgdev, gtt->obj);
> -       return 0;
> -}
> -
> -static void virtio_gpu_ttm_tt_destroy(struct ttm_tt *ttm)
> -{
> -       struct virtio_gpu_ttm_tt *gtt =
> -               container_of(ttm, struct virtio_gpu_ttm_tt, ttm.ttm);
> -
> -       ttm_dma_tt_fini(&gtt->ttm);
> -       kfree(gtt);
> -}
> -
> -static struct ttm_backend_func virtio_gpu_tt_func = {
> -       .bind = &virtio_gpu_ttm_tt_bind,
> -       .unbind = &virtio_gpu_ttm_tt_unbind,
> -       .destroy = &virtio_gpu_ttm_tt_destroy,
> -};
> -
> -static struct ttm_tt *virtio_gpu_ttm_tt_create(struct ttm_buffer_object *bo,
> -                                              uint32_t page_flags)
> -{
> -       struct virtio_gpu_device *vgdev;
> -       struct virtio_gpu_ttm_tt *gtt;
> -
> -       vgdev = virtio_gpu_get_vgdev(bo->bdev);
> -       gtt = kzalloc(sizeof(struct virtio_gpu_ttm_tt), GFP_KERNEL);
> -       if (gtt == NULL)
> -               return NULL;
> -       gtt->ttm.ttm.func = &virtio_gpu_tt_func;
> -       gtt->obj = container_of(bo, struct virtio_gpu_object, tbo);
> -       if (ttm_dma_tt_init(&gtt->ttm, bo, page_flags)) {
> -               kfree(gtt);
> -               return NULL;
> -       }
> -       return &gtt->ttm.ttm;
> -}
> -
> -static void virtio_gpu_bo_swap_notify(struct ttm_buffer_object *tbo)
> -{
> -       struct virtio_gpu_object *bo;
> -
> -       bo = container_of(tbo, struct virtio_gpu_object, tbo);
> -
> -       if (bo->pages)
> -               virtio_gpu_object_free_sg_table(bo);
> -}
> -
> -static struct ttm_bo_driver virtio_gpu_bo_driver = {
> -       .ttm_tt_create = &virtio_gpu_ttm_tt_create,
> -       .invalidate_caches = &virtio_gpu_invalidate_caches,
> -       .init_mem_type = &virtio_gpu_init_mem_type,
> -       .eviction_valuable = ttm_bo_eviction_valuable,
> -       .evict_flags = &virtio_gpu_evict_flags,
> -       .verify_access = &virtio_gpu_verify_access,
> -       .io_mem_reserve = &virtio_gpu_ttm_io_mem_reserve,
> -       .io_mem_free = &virtio_gpu_ttm_io_mem_free,
> -       .swap_notify = &virtio_gpu_bo_swap_notify,
> -};
> -
> -int virtio_gpu_ttm_init(struct virtio_gpu_device *vgdev)
> -{
> -       int r;
> -
> -       /* No others user of address space so set it to 0 */
> -       r = ttm_bo_device_init(&vgdev->mman.bdev,
> -                              &virtio_gpu_bo_driver,
> -                              vgdev->ddev->anon_inode->i_mapping,
> -                              false);
> -       if (r) {
> -               DRM_ERROR("failed initializing buffer object driver(%d).\n", r);
> -               goto err_dev_init;
> -       }
> -
> -       r = ttm_bo_init_mm(&vgdev->mman.bdev, TTM_PL_TT, 0);
> -       if (r) {
> -               DRM_ERROR("Failed initializing GTT heap.\n");
> -               goto err_mm_init;
> -       }
> -       return 0;
> -
> -err_mm_init:
> -       ttm_bo_device_release(&vgdev->mman.bdev);
> -err_dev_init:
> -       return r;
> -}
> -
> -void virtio_gpu_ttm_fini(struct virtio_gpu_device *vgdev)
> -{
> -       ttm_bo_device_release(&vgdev->mman.bdev);
> -       DRM_INFO("virtio_gpu: ttm finalized\n");
> -}
> diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
> index 03b7e0845d98..fc908d5cb217 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_vq.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
> @@ -979,17 +979,21 @@ int virtio_gpu_object_attach(struct virtio_gpu_device *vgdev,
>         bool use_dma_api = !virtio_has_iommu_quirk(vgdev->vdev);
>         struct virtio_gpu_mem_entry *ents;
>         struct scatterlist *sg;
> -       int si, nents;
> +       int si, nents, ret;
>
>         if (WARN_ON_ONCE(!obj->created))
>                 return -EINVAL;
> +       if (WARN_ON_ONCE(obj->pages))
> +               return -EINVAL;
>
> -       if (!obj->pages) {
> -               int ret;
> +       ret = drm_gem_shmem_pin(&obj->base.base);
> +       if (ret < 0)
> +               return -EINVAL;
>
> -               ret = virtio_gpu_object_get_sg_table(vgdev, obj);
> -               if (ret)
> -                       return ret;
> +       obj->pages = drm_gem_shmem_get_sg_table(&obj->base.base);
> +       if (obj->pages == NULL) {
> +               drm_gem_shmem_unpin(&obj->base.base);
> +               return -EINVAL;
>         }
>
>         if (use_dma_api) {
> @@ -1028,6 +1032,9 @@ void virtio_gpu_object_detach(struct virtio_gpu_device *vgdev,
>  {
>         bool use_dma_api = !virtio_has_iommu_quirk(vgdev->vdev);
>
> +       if (WARN_ON_ONCE(!obj->pages))
> +               return;
> +
>         if (use_dma_api && obj->mapped) {
>                 struct virtio_gpu_fence *fence = virtio_gpu_fence_alloc(vgdev);
>                 /* detach backing and wait for the host process it ... */
> @@ -1043,6 +1050,11 @@ void virtio_gpu_object_detach(struct virtio_gpu_device *vgdev,
>         } else {
>                 virtio_gpu_cmd_resource_inval_backing(vgdev, obj->hw_res_handle, NULL);
>         }
> +
> +       sg_free_table(obj->pages);
> +       obj->pages = NULL;
> +
> +       drm_gem_shmem_unpin(&obj->base.base);
>  }
>
>  void virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev,
> diff --git a/drivers/gpu/drm/virtio/Kconfig b/drivers/gpu/drm/virtio/Kconfig
> index ba36e933bb49..eff3047052d4 100644
> --- a/drivers/gpu/drm/virtio/Kconfig
> +++ b/drivers/gpu/drm/virtio/Kconfig
> @@ -3,7 +3,7 @@ config DRM_VIRTIO_GPU
>         tristate "Virtio GPU driver"
>         depends on DRM && VIRTIO && MMU
>         select DRM_KMS_HELPER
> -       select DRM_TTM
> +       select DRM_GEM_SHMEM_HELPER
>         help
>            This is the virtual GPU driver for virtio.  It can be used with
>            QEMU based VMMs (like KVM or Xen).
> diff --git a/drivers/gpu/drm/virtio/Makefile b/drivers/gpu/drm/virtio/Makefile
> index 458e606a936f..92aa2b3d349d 100644
> --- a/drivers/gpu/drm/virtio/Makefile
> +++ b/drivers/gpu/drm/virtio/Makefile
> @@ -4,7 +4,7 @@
>  # Direct Rendering Infrastructure (DRI) in XFree86 4.1.0 and higher.
>
>  virtio-gpu-y := virtgpu_drv.o virtgpu_kms.o virtgpu_gem.o \
> -       virtgpu_display.o virtgpu_vq.o virtgpu_ttm.o \
> +       virtgpu_display.o virtgpu_vq.o \
>         virtgpu_fence.o virtgpu_object.o virtgpu_debugfs.o virtgpu_plane.o \
>         virtgpu_ioctl.o virtgpu_prime.o virtgpu_trace_points.o
>
> --
> 2.18.1
>



Thread overview: 43+ messages (download: mbox.gz / follow: Atom feed)
     [not found] <20190702141903.1131-1-kraxel@redhat.com>
2019-07-02 14:18 ` [PATCH v6 01/18] drm/virtio: pass gem reservation object to ttm init Gerd Hoffmann
2019-07-02 14:18 ` [PATCH v6 02/18] drm/virtio: switch virtio_gpu_wait_ioctl() to gem helper Gerd Hoffmann
2019-07-02 14:18 ` [PATCH v6 03/18] drm/virtio: simplify cursor updates Gerd Hoffmann
2019-07-02 14:18 ` [PATCH v6 04/18] drm/virtio: remove virtio_gpu_object_wait Gerd Hoffmann
2019-07-02 14:18 ` [PATCH v6 05/18] drm/virtio: drop no_wait argument from virtio_gpu_object_reserve Gerd Hoffmann
2019-07-02 14:18 ` [PATCH v6 06/18] drm/virtio: remove ttm calls from in virtio_gpu_object_{reserve,unreserve} Gerd Hoffmann
2019-07-03 18:02   ` Chia-I Wu
2019-07-04 11:10     ` Gerd Hoffmann
2019-07-04 19:17       ` Chia-I Wu
2019-07-05  8:53         ` Gerd Hoffmann
2019-07-07  5:30           ` Chia-I Wu
2019-07-02 14:18 ` [PATCH v6 07/18] drm/virtio: add virtio_gpu_object_array & helpers Gerd Hoffmann
2019-07-03 18:31   ` Chia-I Wu
2019-07-03 19:52     ` Chia-I Wu
2019-07-04 11:19       ` Gerd Hoffmann
2019-07-04 11:11     ` Gerd Hoffmann
2019-07-02 14:18 ` [PATCH v6 08/18] drm/virtio: rework virtio_gpu_execbuffer_ioctl fencing Gerd Hoffmann
2019-07-03 18:49   ` Chia-I Wu
2019-07-04 11:25     ` Gerd Hoffmann
2019-07-04 18:46       ` Chia-I Wu
2019-07-11  2:35         ` Chia-I Wu
2019-07-02 14:18 ` [PATCH v6 09/18] drm/virtio: rework virtio_gpu_object_create fencing Gerd Hoffmann
2019-07-02 14:18 ` [PATCH v6 10/18] drm/virtio: drop virtio_gpu_object_list_validate/virtio_gpu_unref_list Gerd Hoffmann
2019-07-02 14:18 ` [PATCH v6 11/18] drm/virtio: switch from ttm to gem shmem helpers Gerd Hoffmann
2019-07-04 13:33   ` Emil Velikov
2019-07-17  6:04   ` Chia-I Wu
2019-07-02 14:18 ` [PATCH v6 12/18] drm/virtio: remove virtio_gpu_alloc_object Gerd Hoffmann
2019-07-02 14:18 ` [PATCH v6 13/18] drm/virtio: drop virtio_gpu_object_{ref,unref} Gerd Hoffmann
2019-07-02 14:18 ` [PATCH v6 14/18] drm/virtio: rework virtio_gpu_transfer_from_host_ioctl fencing Gerd Hoffmann
2019-07-03 20:05   ` Chia-I Wu
2019-07-04 11:47     ` Gerd Hoffmann
2019-07-04 18:55       ` Chia-I Wu
2019-07-05  9:01         ` Gerd Hoffmann
2019-07-02 14:19 ` [PATCH v6 15/18] drm/virtio: rework virtio_gpu_transfer_to_host_ioctl fencing Gerd Hoffmann
2019-07-03 19:55   ` Chia-I Wu
2019-07-04 11:51     ` Gerd Hoffmann
2019-07-04 19:08       ` Chia-I Wu
2019-07-05  9:05         ` Gerd Hoffmann
2019-07-05 14:07           ` Gerd Hoffmann
2019-07-02 14:19 ` [PATCH v6 16/18] drm/virtio: rework virtio_gpu_cmd_context_{attach,detach}_resource Gerd Hoffmann
     [not found]   ` <CAAfnVBmKotCfkrM4hph4++FDrVUYR8WKpomP7Y0-aergqHTSyA@mail.gmail.com>
2019-07-04 12:00     ` Gerd Hoffmann
2019-07-02 14:19 ` [PATCH v6 17/18] drm/virtio: drop virtio_gpu_object_{reserve,unreserve} Gerd Hoffmann
2019-07-02 14:19 ` [PATCH v6 18/18] drm/virtio: add fence sanity check Gerd Hoffmann
