* [PATCH WW 00/13] Convert TTM to Wound/wait mutexes.
@ 2013-06-27 11:48 Maarten Lankhorst
  2013-06-27 11:48 ` [PATCH WW 01/13] reservation: cross-device reservation support, v4 Maarten Lankhorst
                   ` (12 more replies)
  0 siblings, 13 replies; 21+ messages in thread
From: Maarten Lankhorst @ 2013-06-27 11:48 UTC (permalink / raw)
  To: Dave Airlie; +Cc: dri-devel

With all the previous fixes in place, and my previous patch series applied
to prevent fallout, it's time to throw the switch!

Thanks to everyone who made this possible, in particular danvet,
robclark, airlied and peterz.

The first 4 patches are the real meat; the rest are just cleanups.

Maarten Lankhorst (13):
  reservation: cross-device reservation support, v4
  drm/ttm: make ttm reservation calls behave like reservation calls
  drm/nouveau: make flipping lockdep safe
  drm/ttm: convert to the reservation api
  drm/ast: inline reservations
  drm/cirrus: inline reservations
  drm/mgag200: inline reservations
  drm/radeon: inline reservations
  drm/ttm: inline ttm_bo_reserve and related calls
  drm/ttm: get rid of ttm_bo_is_reserved usage
  drm/radeon: get rid of ttm_bo_is_reserved usage
  drm/vmwgfx: get rid of ttm_bo_is_reserved usage
  drm/ttm: get rid of ttm_bo_is_reserved

 Documentation/DocBook/device-drivers.tmpl |   2 +
 drivers/base/Makefile                     |   2 +-
 drivers/base/reservation.c                |  39 +++++
 drivers/gpu/drm/ast/ast_drv.h             |  20 ++-
 drivers/gpu/drm/ast/ast_ttm.c             |  18 ---
 drivers/gpu/drm/cirrus/cirrus_drv.h       |  21 ++-
 drivers/gpu/drm/cirrus/cirrus_ttm.c       |  18 ---
 drivers/gpu/drm/mgag200/mgag200_drv.h     |  20 ++-
 drivers/gpu/drm/mgag200/mgag200_ttm.c     |  18 ---
 drivers/gpu/drm/nouveau/nouveau_display.c | 103 ++++++-------
 drivers/gpu/drm/nouveau/nouveau_gem.c     |  40 +++--
 drivers/gpu/drm/qxl/qxl_object.h          |   5 -
 drivers/gpu/drm/radeon/radeon.h           |   1 +
 drivers/gpu/drm/radeon/radeon_cs.c        |  18 ++-
 drivers/gpu/drm/radeon/radeon_object.c    |  36 +----
 drivers/gpu/drm/radeon/radeon_object.h    |  30 +++-
 drivers/gpu/drm/radeon/radeon_test.c      |  75 +++++-----
 drivers/gpu/drm/radeon/radeon_uvd.c       |  27 ++--
 drivers/gpu/drm/ttm/ttm_bo.c              | 233 +++++-------------------------
 drivers/gpu/drm/ttm/ttm_bo_util.c         |   6 +-
 drivers/gpu/drm/ttm/ttm_execbuf_util.c    |  86 +++++------
 drivers/gpu/drm/vmwgfx/vmwgfx_dmabuf.c    |   2 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c   |  14 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_resource.c  |  27 ++--
 include/drm/ttm/ttm_bo_api.h              |  37 +----
 include/drm/ttm/ttm_bo_driver.h           | 169 ++++++++++++++--------
 include/drm/ttm/ttm_execbuf_util.h        |  12 +-
 include/linux/reservation.h               |  62 ++++++++
 28 files changed, 549 insertions(+), 592 deletions(-)
 create mode 100644 drivers/base/reservation.c
 create mode 100644 include/linux/reservation.h

-- 
1.8.3.1


* [PATCH WW 01/13] reservation: cross-device reservation support, v4
  2013-06-27 11:48 [PATCH WW 00/13] Convert TTM to Wound/wait mutexes Maarten Lankhorst
@ 2013-06-27 11:48 ` Maarten Lankhorst
  2013-06-27 21:45   ` Jerome Glisse
  2013-06-27 11:48 ` [PATCH WW 02/13] drm/ttm: make ttm reservation calls behave like reservation calls Maarten Lankhorst
                   ` (11 subsequent siblings)
  12 siblings, 1 reply; 21+ messages in thread
From: Maarten Lankhorst @ 2013-06-27 11:48 UTC (permalink / raw)
  To: Dave Airlie; +Cc: dri-devel

This adds support for a generic reservations framework that can be
hooked up to ttm and dma-buf and allows easy sharing of reservations
across devices.

The idea is that both dma-buf and ttm objects will get a pointer
to a struct reservation_object, which has to be reserved before
anything is done with the contents of the dma-buf.

Changes since v1:
 - Fix locking issue in ticket_reserve, which could cause mutex_unlock
   to be called too many times.
Changes since v2:
 - All fence related calls and members have been taken out for now,
   what's left is the bare minimum to be useful for ttm locking conversion.
Changes since v3:
 - Removed helper functions too. The documentation has an example
   implementation for locking. With the move to ww_mutex there is no
   need to have much logic any more.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
---
 Documentation/DocBook/device-drivers.tmpl |  2 +
 drivers/base/Makefile                     |  2 +-
 drivers/base/reservation.c                | 39 +++++++++++++++++++
 include/linux/reservation.h               | 62 +++++++++++++++++++++++++++++++
 4 files changed, 104 insertions(+), 1 deletion(-)
 create mode 100644 drivers/base/reservation.c
 create mode 100644 include/linux/reservation.h

diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl
index c36892c..f0648a8 100644
--- a/Documentation/DocBook/device-drivers.tmpl
+++ b/Documentation/DocBook/device-drivers.tmpl
@@ -126,6 +126,8 @@ X!Edrivers/base/interface.c
      </sect1>
      <sect1><title>Device Drivers DMA Management</title>
 !Edrivers/base/dma-buf.c
+!Edrivers/base/reservation.c
+!Iinclude/linux/reservation.h
 !Edrivers/base/dma-coherent.c
 !Edrivers/base/dma-mapping.c
      </sect1>
diff --git a/drivers/base/Makefile b/drivers/base/Makefile
index 4e22ce3..48029aa 100644
--- a/drivers/base/Makefile
+++ b/drivers/base/Makefile
@@ -10,7 +10,7 @@ obj-$(CONFIG_CMA) += dma-contiguous.o
 obj-y			+= power/
 obj-$(CONFIG_HAS_DMA)	+= dma-mapping.o
 obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o
-obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o
+obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o reservation.o
 obj-$(CONFIG_ISA)	+= isa.o
 obj-$(CONFIG_FW_LOADER)	+= firmware_class.o
 obj-$(CONFIG_NUMA)	+= node.o
diff --git a/drivers/base/reservation.c b/drivers/base/reservation.c
new file mode 100644
index 0000000..a73fbf3
--- /dev/null
+++ b/drivers/base/reservation.c
@@ -0,0 +1,39 @@
+/*
+ * Copyright (C) 2012-2013 Canonical Ltd
+ *
+ * Based on bo.c which bears the following copyright notice,
+ * but is dual licensed:
+ *
+ * Copyright (c) 2006-2009 VMware, Inc., Palo Alto, CA., USA
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the
+ * "Software"), to deal in the Software without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sub license, and/or sell copies of the Software, and to
+ * permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the
+ * next paragraph) shall be included in all copies or substantial portions
+ * of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
+ * USE OR OTHER DEALINGS IN THE SOFTWARE.
+ *
+ **************************************************************************/
+/*
+ * Authors: Thomas Hellstrom <thellstrom-at-vmware-dot-com>
+ */
+
+#include <linux/reservation.h>
+#include <linux/export.h>
+
+DEFINE_WW_CLASS(reservation_ww_class);
+EXPORT_SYMBOL(reservation_ww_class);
diff --git a/include/linux/reservation.h b/include/linux/reservation.h
new file mode 100644
index 0000000..e9ee806
--- /dev/null
+++ b/include/linux/reservation.h
@@ -0,0 +1,62 @@
+/*
+ * Header file for reservations for dma-buf and ttm
+ *
+ * Copyright(C) 2011 Linaro Limited. All rights reserved.
+ * Copyright (C) 2012-2013 Canonical Ltd
+ * Copyright (C) 2012 Texas Instruments
+ *
+ * Authors:
+ * Rob Clark <rob.clark@linaro.org>
+ * Maarten Lankhorst <maarten.lankhorst@canonical.com>
+ * Thomas Hellstrom <thellstrom-at-vmware-dot-com>
+ *
+ * Based on bo.c which bears the following copyright notice,
+ * but is dual licensed:
+ *
+ * Copyright (c) 2006-2009 VMware, Inc., Palo Alto, CA., USA
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the
+ * "Software"), to deal in the Software without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sub license, and/or sell copies of the Software, and to
+ * permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the
+ * next paragraph) shall be included in all copies or substantial portions
+ * of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
+ * USE OR OTHER DEALINGS IN THE SOFTWARE.
+ */
+#ifndef _LINUX_RESERVATION_H
+#define _LINUX_RESERVATION_H
+
+#include <linux/mutex.h>
+
+extern struct ww_class reservation_ww_class;
+
+struct reservation_object {
+	struct ww_mutex lock;
+};
+
+static inline void
+reservation_object_init(struct reservation_object *obj)
+{
+	ww_mutex_init(&obj->lock, &reservation_ww_class);
+}
+
+static inline void
+reservation_object_fini(struct reservation_object *obj)
+{
+	ww_mutex_destroy(&obj->lock);
+}
+
+#endif /* _LINUX_RESERVATION_H */
-- 
1.8.3.1


* [PATCH WW 02/13] drm/ttm: make ttm reservation calls behave like reservation calls
  2013-06-27 11:48 [PATCH WW 00/13] Convert TTM to Wound/wait mutexes Maarten Lankhorst
  2013-06-27 11:48 ` [PATCH WW 01/13] reservation: cross-device reservation support, v4 Maarten Lankhorst
@ 2013-06-27 11:48 ` Maarten Lankhorst
  2013-06-27 21:47   ` Jerome Glisse
  2013-06-27 11:48 ` [PATCH WW 03/13] drm/nouveau: make flipping lockdep safe Maarten Lankhorst
                   ` (10 subsequent siblings)
  12 siblings, 1 reply; 21+ messages in thread
From: Maarten Lankhorst @ 2013-06-27 11:48 UTC (permalink / raw)
  To: Dave Airlie; +Cc: dri-devel

This commit converts the source of the val_seq counter to
the ww_mutex API. The reservation objects are converted later,
because there is still a lockdep splat in nouveau that has to be
resolved first.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
---
 drivers/gpu/drm/nouveau/nouveau_gem.c    | 38 ++++++++++++++-------
 drivers/gpu/drm/radeon/radeon.h          |  1 +
 drivers/gpu/drm/radeon/radeon_cs.c       | 18 +++++-----
 drivers/gpu/drm/radeon/radeon_object.c   |  5 +--
 drivers/gpu/drm/radeon/radeon_object.h   |  3 +-
 drivers/gpu/drm/radeon/radeon_uvd.c      | 27 +++++++--------
 drivers/gpu/drm/ttm/ttm_bo.c             | 50 +++++++++++++++++----------
 drivers/gpu/drm/ttm/ttm_execbuf_util.c   | 58 +++++++++++++++++---------------
 drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c  | 14 ++++----
 drivers/gpu/drm/vmwgfx/vmwgfx_resource.c | 23 ++++++++-----
 include/drm/ttm/ttm_bo_api.h             |  2 +-
 include/drm/ttm/ttm_bo_driver.h          | 32 +++++++++++++-----
 include/drm/ttm/ttm_execbuf_util.h       | 13 +++++--
 13 files changed, 172 insertions(+), 112 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
index 7054706..e35d468 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
@@ -277,10 +277,12 @@ struct validate_op {
 	struct list_head vram_list;
 	struct list_head gart_list;
 	struct list_head both_list;
+	struct ww_acquire_ctx ticket;
 };
 
 static void
-validate_fini_list(struct list_head *list, struct nouveau_fence *fence)
+validate_fini_list(struct list_head *list, struct nouveau_fence *fence,
+		   struct ww_acquire_ctx *ticket)
 {
 	struct list_head *entry, *tmp;
 	struct nouveau_bo *nvbo;
@@ -297,17 +299,24 @@ validate_fini_list(struct list_head *list, struct nouveau_fence *fence)
 
 		list_del(&nvbo->entry);
 		nvbo->reserved_by = NULL;
-		ttm_bo_unreserve(&nvbo->bo);
+		ttm_bo_unreserve_ticket(&nvbo->bo, ticket);
 		drm_gem_object_unreference_unlocked(nvbo->gem);
 	}
 }
 
 static void
-validate_fini(struct validate_op *op, struct nouveau_fence* fence)
+validate_fini_no_ticket(struct validate_op *op, struct nouveau_fence *fence)
 {
-	validate_fini_list(&op->vram_list, fence);
-	validate_fini_list(&op->gart_list, fence);
-	validate_fini_list(&op->both_list, fence);
+	validate_fini_list(&op->vram_list, fence, &op->ticket);
+	validate_fini_list(&op->gart_list, fence, &op->ticket);
+	validate_fini_list(&op->both_list, fence, &op->ticket);
+}
+
+static void
+validate_fini(struct validate_op *op, struct nouveau_fence *fence)
+{
+	validate_fini_no_ticket(op, fence);
+	ww_acquire_fini(&op->ticket);
 }
 
 static int
@@ -317,13 +326,11 @@ validate_init(struct nouveau_channel *chan, struct drm_file *file_priv,
 {
 	struct nouveau_cli *cli = nouveau_cli(file_priv);
 	struct drm_device *dev = chan->drm->dev;
-	struct nouveau_drm *drm = nouveau_drm(dev);
-	uint32_t sequence;
 	int trycnt = 0;
 	int ret, i;
 	struct nouveau_bo *res_bo = NULL;
 
-	sequence = atomic_add_return(1, &drm->ttm.validate_sequence);
+	ww_acquire_init(&op->ticket, &reservation_ww_class);
 retry:
 	if (++trycnt > 100000) {
 		NV_ERROR(cli, "%s failed and gave up.\n", __func__);
@@ -338,6 +345,7 @@ retry:
 		gem = drm_gem_object_lookup(dev, file_priv, b->handle);
 		if (!gem) {
 			NV_ERROR(cli, "Unknown handle 0x%08x\n", b->handle);
+			ww_acquire_done(&op->ticket);
 			validate_fini(op, NULL);
 			return -ENOENT;
 		}
@@ -352,21 +360,23 @@ retry:
 			NV_ERROR(cli, "multiple instances of buffer %d on "
 				      "validation list\n", b->handle);
 			drm_gem_object_unreference_unlocked(gem);
+			ww_acquire_done(&op->ticket);
 			validate_fini(op, NULL);
 			return -EINVAL;
 		}
 
-		ret = ttm_bo_reserve(&nvbo->bo, true, false, true, sequence);
+		ret = ttm_bo_reserve(&nvbo->bo, true, false, true, &op->ticket);
 		if (ret) {
-			validate_fini(op, NULL);
+			validate_fini_no_ticket(op, NULL);
 			if (unlikely(ret == -EAGAIN)) {
-				sequence = atomic_add_return(1, &drm->ttm.validate_sequence);
 				ret = ttm_bo_reserve_slowpath(&nvbo->bo, true,
-							      sequence);
+							      &op->ticket);
 				if (!ret)
 					res_bo = nvbo;
 			}
 			if (unlikely(ret)) {
+				ww_acquire_done(&op->ticket);
+				ww_acquire_fini(&op->ticket);
 				drm_gem_object_unreference_unlocked(gem);
 				if (ret != -ERESTARTSYS)
 					NV_ERROR(cli, "fail reserve\n");
@@ -390,6 +400,7 @@ retry:
 			NV_ERROR(cli, "invalid valid domains: 0x%08x\n",
 				 b->valid_domains);
 			list_add_tail(&nvbo->entry, &op->both_list);
+			ww_acquire_done(&op->ticket);
 			validate_fini(op, NULL);
 			return -EINVAL;
 		}
@@ -397,6 +408,7 @@ retry:
 			goto retry;
 	}
 
+	ww_acquire_done(&op->ticket);
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 142ce6c..eaf4ff5 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -883,6 +883,7 @@ struct radeon_cs_parser {
 	u32			cs_flags;
 	u32			ring;
 	s32			priority;
+	struct ww_acquire_ctx	ticket;
 };
 
 extern int radeon_cs_finish_pages(struct radeon_cs_parser *p);
diff --git a/drivers/gpu/drm/radeon/radeon_cs.c b/drivers/gpu/drm/radeon/radeon_cs.c
index 7e265a5..8a91901 100644
--- a/drivers/gpu/drm/radeon/radeon_cs.c
+++ b/drivers/gpu/drm/radeon/radeon_cs.c
@@ -106,7 +106,7 @@ static int radeon_cs_parser_relocs(struct radeon_cs_parser *p)
 		radeon_bo_list_add_object(&p->relocs[i].lobj,
 					  &p->validated);
 	}
-	return radeon_bo_list_validate(&p->validated, p->ring);
+	return radeon_bo_list_validate(&p->ticket, &p->validated, p->ring);
 }
 
 static int radeon_cs_get_ring(struct radeon_cs_parser *p, u32 ring, s32 priority)
@@ -314,15 +314,17 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, void *data)
  * If error is set than unvalidate buffer, otherwise just free memory
  * used by parsing context.
  **/
-static void radeon_cs_parser_fini(struct radeon_cs_parser *parser, int error)
+static void radeon_cs_parser_fini(struct radeon_cs_parser *parser, int error, bool backoff)
 {
 	unsigned i;
 
 	if (!error) {
-		ttm_eu_fence_buffer_objects(&parser->validated,
+		ttm_eu_fence_buffer_objects(&parser->ticket,
+					    &parser->validated,
 					    parser->ib.fence);
-	} else {
-		ttm_eu_backoff_reservation(&parser->validated);
+	} else if (backoff) {
+		ttm_eu_backoff_reservation(&parser->ticket,
+					   &parser->validated);
 	}
 
 	if (parser->relocs != NULL) {
@@ -535,7 +537,7 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
 	r = radeon_cs_parser_init(&parser, data);
 	if (r) {
 		DRM_ERROR("Failed to initialize parser !\n");
-		radeon_cs_parser_fini(&parser, r);
+		radeon_cs_parser_fini(&parser, r, false);
 		up_read(&rdev->exclusive_lock);
 		r = radeon_cs_handle_lockup(rdev, r);
 		return r;
@@ -544,7 +546,7 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
 	if (r) {
 		if (r != -ERESTARTSYS)
 			DRM_ERROR("Failed to parse relocation %d!\n", r);
-		radeon_cs_parser_fini(&parser, r);
+		radeon_cs_parser_fini(&parser, r, false);
 		up_read(&rdev->exclusive_lock);
 		r = radeon_cs_handle_lockup(rdev, r);
 		return r;
@@ -562,7 +564,7 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
 		goto out;
 	}
 out:
-	radeon_cs_parser_fini(&parser, r);
+	radeon_cs_parser_fini(&parser, r, true);
 	up_read(&rdev->exclusive_lock);
 	r = radeon_cs_handle_lockup(rdev, r);
 	return r;
diff --git a/drivers/gpu/drm/radeon/radeon_object.c b/drivers/gpu/drm/radeon/radeon_object.c
index 07af5a9..71287bb 100644
--- a/drivers/gpu/drm/radeon/radeon_object.c
+++ b/drivers/gpu/drm/radeon/radeon_object.c
@@ -349,14 +349,15 @@ void radeon_bo_list_add_object(struct radeon_bo_list *lobj,
 	}
 }
 
-int radeon_bo_list_validate(struct list_head *head, int ring)
+int radeon_bo_list_validate(struct ww_acquire_ctx *ticket,
+			    struct list_head *head, int ring)
 {
 	struct radeon_bo_list *lobj;
 	struct radeon_bo *bo;
 	u32 domain;
 	int r;
 
-	r = ttm_eu_reserve_buffers(head);
+	r = ttm_eu_reserve_buffers(ticket, head);
 	if (unlikely(r != 0)) {
 		return r;
 	}
diff --git a/drivers/gpu/drm/radeon/radeon_object.h b/drivers/gpu/drm/radeon/radeon_object.h
index e2cb80a..3e62a3a 100644
--- a/drivers/gpu/drm/radeon/radeon_object.h
+++ b/drivers/gpu/drm/radeon/radeon_object.h
@@ -128,7 +128,8 @@ extern int radeon_bo_init(struct radeon_device *rdev);
 extern void radeon_bo_fini(struct radeon_device *rdev);
 extern void radeon_bo_list_add_object(struct radeon_bo_list *lobj,
 				struct list_head *head);
-extern int radeon_bo_list_validate(struct list_head *head, int ring);
+extern int radeon_bo_list_validate(struct ww_acquire_ctx *ticket,
+				   struct list_head *head, int ring);
 extern int radeon_bo_fbdev_mmap(struct radeon_bo *bo,
 				struct vm_area_struct *vma);
 extern int radeon_bo_set_tiling_flags(struct radeon_bo *bo,
diff --git a/drivers/gpu/drm/radeon/radeon_uvd.c b/drivers/gpu/drm/radeon/radeon_uvd.c
index cad735d..0078cdf 100644
--- a/drivers/gpu/drm/radeon/radeon_uvd.c
+++ b/drivers/gpu/drm/radeon/radeon_uvd.c
@@ -542,6 +542,7 @@ static int radeon_uvd_send_msg(struct radeon_device *rdev,
 			       struct radeon_fence **fence)
 {
 	struct ttm_validate_buffer tv;
+	struct ww_acquire_ctx ticket;
 	struct list_head head;
 	struct radeon_ib ib;
 	uint64_t addr;
@@ -553,7 +554,7 @@ static int radeon_uvd_send_msg(struct radeon_device *rdev,
 	INIT_LIST_HEAD(&head);
 	list_add(&tv.head, &head);
 
-	r = ttm_eu_reserve_buffers(&head);
+	r = ttm_eu_reserve_buffers(&ticket, &head);
 	if (r)
 		return r;
 
@@ -561,16 +562,12 @@ static int radeon_uvd_send_msg(struct radeon_device *rdev,
 	radeon_uvd_force_into_uvd_segment(bo);
 
 	r = ttm_bo_validate(&bo->tbo, &bo->placement, true, false);
-	if (r) {
-		ttm_eu_backoff_reservation(&head);
-		return r;
-	}
+	if (r)
+		goto err;
 
 	r = radeon_ib_get(rdev, ring, &ib, NULL, 16);
-	if (r) {
-		ttm_eu_backoff_reservation(&head);
-		return r;
-	}
+	if (r)
+		goto err;
 
 	addr = radeon_bo_gpu_offset(bo);
 	ib.ptr[0] = PACKET0(UVD_GPCOM_VCPU_DATA0, 0);
@@ -584,11 +581,9 @@ static int radeon_uvd_send_msg(struct radeon_device *rdev,
 	ib.length_dw = 16;
 
 	r = radeon_ib_schedule(rdev, &ib, NULL);
-	if (r) {
-		ttm_eu_backoff_reservation(&head);
-		return r;
-	}
-	ttm_eu_fence_buffer_objects(&head, ib.fence);
+	if (r)
+		goto err;
+	ttm_eu_fence_buffer_objects(&ticket, &head, ib.fence);
 
 	if (fence)
 		*fence = radeon_fence_ref(ib.fence);
@@ -596,6 +591,10 @@ static int radeon_uvd_send_msg(struct radeon_device *rdev,
 	radeon_ib_free(rdev, &ib);
 	radeon_bo_unref(&bo);
 	return 0;
+
+err:
+	ttm_eu_backoff_reservation(&ticket, &head);
+	return r;
 }
 
 /* multiple fence commands without any stream commands in between can
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 9b07b7d..b912375 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -215,7 +215,8 @@ int ttm_bo_del_from_lru(struct ttm_buffer_object *bo)
 
 int ttm_bo_reserve_nolru(struct ttm_buffer_object *bo,
 			  bool interruptible,
-			  bool no_wait, bool use_sequence, uint32_t sequence)
+			  bool no_wait, bool use_ticket,
+			  struct ww_acquire_ctx *ticket)
 {
 	int ret;
 
@@ -223,17 +224,17 @@ int ttm_bo_reserve_nolru(struct ttm_buffer_object *bo,
 		/**
 		 * Deadlock avoidance for multi-bo reserving.
 		 */
-		if (use_sequence && bo->seq_valid) {
+		if (use_ticket && bo->seq_valid) {
 			/**
 			 * We've already reserved this one.
 			 */
-			if (unlikely(sequence == bo->val_seq))
+			if (unlikely(ticket->stamp == bo->val_seq))
 				return -EDEADLK;
 			/**
 			 * Already reserved by a thread that will not back
 			 * off for us. We need to back off.
 			 */
-			if (unlikely(sequence - bo->val_seq < (1 << 31)))
+			if (unlikely(ticket->stamp - bo->val_seq <= LONG_MAX))
 				return -EAGAIN;
 		}
 
@@ -246,13 +247,14 @@ int ttm_bo_reserve_nolru(struct ttm_buffer_object *bo,
 			return ret;
 	}
 
-	if (use_sequence) {
+	if (use_ticket) {
 		bool wake_up = false;
+
 		/**
 		 * Wake up waiters that may need to recheck for deadlock,
 		 * if we decreased the sequence number.
 		 */
-		if (unlikely((bo->val_seq - sequence < (1 << 31))
+		if (unlikely((bo->val_seq - ticket->stamp <= LONG_MAX)
 			     || !bo->seq_valid))
 			wake_up = true;
 
@@ -266,7 +268,7 @@ int ttm_bo_reserve_nolru(struct ttm_buffer_object *bo,
 		 * written before val_seq was, and just means some slightly
 		 * increased cpu usage
 		 */
-		bo->val_seq = sequence;
+		bo->val_seq = ticket->stamp;
 		bo->seq_valid = true;
 		if (wake_up)
 			wake_up_all(&bo->event_queue);
@@ -292,14 +294,15 @@ void ttm_bo_list_ref_sub(struct ttm_buffer_object *bo, int count,
 
 int ttm_bo_reserve(struct ttm_buffer_object *bo,
 		   bool interruptible,
-		   bool no_wait, bool use_sequence, uint32_t sequence)
+		   bool no_wait, bool use_ticket,
+		   struct ww_acquire_ctx *ticket)
 {
 	struct ttm_bo_global *glob = bo->glob;
 	int put_count = 0;
 	int ret;
 
-	ret = ttm_bo_reserve_nolru(bo, interruptible, no_wait, use_sequence,
-				   sequence);
+	ret = ttm_bo_reserve_nolru(bo, interruptible, no_wait, use_ticket,
+				    ticket);
 	if (likely(ret == 0)) {
 		spin_lock(&glob->lru_lock);
 		put_count = ttm_bo_del_from_lru(bo);
@@ -311,13 +314,14 @@ int ttm_bo_reserve(struct ttm_buffer_object *bo,
 }
 
 int ttm_bo_reserve_slowpath_nolru(struct ttm_buffer_object *bo,
-				  bool interruptible, uint32_t sequence)
+				  bool interruptible,
+				  struct ww_acquire_ctx *ticket)
 {
 	bool wake_up = false;
 	int ret;
 
 	while (unlikely(atomic_xchg(&bo->reserved, 1) != 0)) {
-		WARN_ON(bo->seq_valid && sequence == bo->val_seq);
+		WARN_ON(bo->seq_valid && ticket->stamp == bo->val_seq);
 
 		ret = ttm_bo_wait_unreserved(bo, interruptible);
 
@@ -325,14 +329,14 @@ int ttm_bo_reserve_slowpath_nolru(struct ttm_buffer_object *bo,
 			return ret;
 	}
 
-	if ((bo->val_seq - sequence < (1 << 31)) || !bo->seq_valid)
+	if (bo->val_seq - ticket->stamp < LONG_MAX || !bo->seq_valid)
 		wake_up = true;
 
 	/**
 	 * Wake up waiters that may need to recheck for deadlock,
 	 * if we decreased the sequence number.
 	 */
-	bo->val_seq = sequence;
+	bo->val_seq = ticket->stamp;
 	bo->seq_valid = true;
 	if (wake_up)
 		wake_up_all(&bo->event_queue);
@@ -341,12 +345,12 @@ int ttm_bo_reserve_slowpath_nolru(struct ttm_buffer_object *bo,
 }
 
 int ttm_bo_reserve_slowpath(struct ttm_buffer_object *bo,
-			    bool interruptible, uint32_t sequence)
+			    bool interruptible, struct ww_acquire_ctx *ticket)
 {
 	struct ttm_bo_global *glob = bo->glob;
 	int put_count, ret;
 
-	ret = ttm_bo_reserve_slowpath_nolru(bo, interruptible, sequence);
+	ret = ttm_bo_reserve_slowpath_nolru(bo, interruptible, ticket);
 	if (likely(!ret)) {
 		spin_lock(&glob->lru_lock);
 		put_count = ttm_bo_del_from_lru(bo);
@@ -357,7 +361,7 @@ int ttm_bo_reserve_slowpath(struct ttm_buffer_object *bo,
 }
 EXPORT_SYMBOL(ttm_bo_reserve_slowpath);
 
-void ttm_bo_unreserve_locked(struct ttm_buffer_object *bo)
+void ttm_bo_unreserve_ticket_locked(struct ttm_buffer_object *bo, struct ww_acquire_ctx *ticket)
 {
 	ttm_bo_add_to_lru(bo);
 	atomic_set(&bo->reserved, 0);
@@ -369,11 +373,21 @@ void ttm_bo_unreserve(struct ttm_buffer_object *bo)
 	struct ttm_bo_global *glob = bo->glob;
 
 	spin_lock(&glob->lru_lock);
-	ttm_bo_unreserve_locked(bo);
+	ttm_bo_unreserve_ticket_locked(bo, NULL);
 	spin_unlock(&glob->lru_lock);
 }
 EXPORT_SYMBOL(ttm_bo_unreserve);
 
+void ttm_bo_unreserve_ticket(struct ttm_buffer_object *bo, struct ww_acquire_ctx *ticket)
+{
+	struct ttm_bo_global *glob = bo->glob;
+
+	spin_lock(&glob->lru_lock);
+	ttm_bo_unreserve_ticket_locked(bo, ticket);
+	spin_unlock(&glob->lru_lock);
+}
+EXPORT_SYMBOL(ttm_bo_unreserve_ticket);
+
 /*
  * Call bo->mutex locked.
  */
diff --git a/drivers/gpu/drm/ttm/ttm_execbuf_util.c b/drivers/gpu/drm/ttm/ttm_execbuf_util.c
index 7b90def..efcb734 100644
--- a/drivers/gpu/drm/ttm/ttm_execbuf_util.c
+++ b/drivers/gpu/drm/ttm/ttm_execbuf_util.c
@@ -32,7 +32,8 @@
 #include <linux/sched.h>
 #include <linux/module.h>
 
-static void ttm_eu_backoff_reservation_locked(struct list_head *list)
+static void ttm_eu_backoff_reservation_locked(struct list_head *list,
+					      struct ww_acquire_ctx *ticket)
 {
 	struct ttm_validate_buffer *entry;
 
@@ -41,14 +42,15 @@ static void ttm_eu_backoff_reservation_locked(struct list_head *list)
 		if (!entry->reserved)
 			continue;
 
+		entry->reserved = false;
 		if (entry->removed) {
-			ttm_bo_add_to_lru(bo);
+			ttm_bo_unreserve_ticket_locked(bo, ticket);
 			entry->removed = false;
 
+		} else {
+			atomic_set(&bo->reserved, 0);
+			wake_up_all(&bo->event_queue);
 		}
-		entry->reserved = false;
-		atomic_set(&bo->reserved, 0);
-		wake_up_all(&bo->event_queue);
 	}
 }
 
@@ -82,7 +84,8 @@ static void ttm_eu_list_ref_sub(struct list_head *list)
 	}
 }
 
-void ttm_eu_backoff_reservation(struct list_head *list)
+void ttm_eu_backoff_reservation(struct ww_acquire_ctx *ticket,
+				struct list_head *list)
 {
 	struct ttm_validate_buffer *entry;
 	struct ttm_bo_global *glob;
@@ -93,7 +96,8 @@ void ttm_eu_backoff_reservation(struct list_head *list)
 	entry = list_first_entry(list, struct ttm_validate_buffer, head);
 	glob = entry->bo->glob;
 	spin_lock(&glob->lru_lock);
-	ttm_eu_backoff_reservation_locked(list);
+	ttm_eu_backoff_reservation_locked(list, ticket);
+	ww_acquire_fini(ticket);
 	spin_unlock(&glob->lru_lock);
 }
 EXPORT_SYMBOL(ttm_eu_backoff_reservation);
@@ -110,12 +114,12 @@ EXPORT_SYMBOL(ttm_eu_backoff_reservation);
  * buffers in different orders.
  */
 
-int ttm_eu_reserve_buffers(struct list_head *list)
+int ttm_eu_reserve_buffers(struct ww_acquire_ctx *ticket,
+			   struct list_head *list)
 {
 	struct ttm_bo_global *glob;
 	struct ttm_validate_buffer *entry;
 	int ret;
-	uint32_t val_seq;
 
 	if (list_empty(list))
 		return 0;
@@ -129,8 +133,8 @@ int ttm_eu_reserve_buffers(struct list_head *list)
 	entry = list_first_entry(list, struct ttm_validate_buffer, head);
 	glob = entry->bo->glob;
 
+	ww_acquire_init(ticket, &reservation_ww_class);
 	spin_lock(&glob->lru_lock);
-	val_seq = entry->bo->bdev->val_seq++;
 
 retry:
 	list_for_each_entry(entry, list, head) {
@@ -140,7 +144,7 @@ retry:
 		if (entry->reserved)
 			continue;
 
-		ret = ttm_bo_reserve_nolru(bo, true, true, true, val_seq);
+		ret = ttm_bo_reserve_nolru(bo, true, true, true, ticket);
 		switch (ret) {
 		case 0:
 			break;
@@ -148,8 +152,9 @@ retry:
 			ttm_eu_del_from_lru_locked(list);
 			spin_unlock(&glob->lru_lock);
 			ret = ttm_bo_reserve_nolru(bo, true, false,
-						   true, val_seq);
+						   true, ticket);
 			spin_lock(&glob->lru_lock);
+
 			if (!ret)
 				break;
 
@@ -158,21 +163,13 @@ retry:
 
 			/* fallthrough */
 		case -EAGAIN:
-			ttm_eu_backoff_reservation_locked(list);
-
-			/*
-			 * temporarily increase sequence number every retry,
-			 * to prevent us from seeing our old reservation
-			 * sequence when someone else reserved the buffer,
-			 * but hasn't updated the seq_valid/seqno members yet.
-			 */
-			val_seq = entry->bo->bdev->val_seq++;
-
+			ttm_eu_backoff_reservation_locked(list, ticket);
 			spin_unlock(&glob->lru_lock);
 			ttm_eu_list_ref_sub(list);
-			ret = ttm_bo_reserve_slowpath_nolru(bo, true, val_seq);
+			ret = ttm_bo_reserve_slowpath_nolru(bo, true, ticket);
 			if (unlikely(ret != 0))
-				return ret;
+				goto err_fini;
+
 			spin_lock(&glob->lru_lock);
 			entry->reserved = true;
 			if (unlikely(atomic_read(&bo->cpu_writers) > 0)) {
@@ -191,21 +188,25 @@ retry:
 		}
 	}
 
+	ww_acquire_done(ticket);
 	ttm_eu_del_from_lru_locked(list);
 	spin_unlock(&glob->lru_lock);
 	ttm_eu_list_ref_sub(list);
-
 	return 0;
 
 err:
-	ttm_eu_backoff_reservation_locked(list);
+	ttm_eu_backoff_reservation_locked(list, ticket);
 	spin_unlock(&glob->lru_lock);
 	ttm_eu_list_ref_sub(list);
+err_fini:
+	ww_acquire_done(ticket);
+	ww_acquire_fini(ticket);
 	return ret;
 }
 EXPORT_SYMBOL(ttm_eu_reserve_buffers);
 
-void ttm_eu_fence_buffer_objects(struct list_head *list, void *sync_obj)
+void ttm_eu_fence_buffer_objects(struct ww_acquire_ctx *ticket,
+				 struct list_head *list, void *sync_obj)
 {
 	struct ttm_validate_buffer *entry;
 	struct ttm_buffer_object *bo;
@@ -228,11 +229,12 @@ void ttm_eu_fence_buffer_objects(struct list_head *list, void *sync_obj)
 		bo = entry->bo;
 		entry->old_sync_obj = bo->sync_obj;
 		bo->sync_obj = driver->sync_obj_ref(sync_obj);
-		ttm_bo_unreserve_locked(bo);
+		ttm_bo_unreserve_ticket_locked(bo, ticket);
 		entry->reserved = false;
 	}
 	spin_unlock(&bdev->fence_lock);
 	spin_unlock(&glob->lru_lock);
+	ww_acquire_fini(ticket);
 
 	list_for_each_entry(entry, list, head) {
 		if (entry->old_sync_obj)
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
index 394e647..599f646 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
@@ -1432,6 +1432,7 @@ int vmw_execbuf_process(struct drm_file *file_priv,
 	struct vmw_fence_obj *fence = NULL;
 	struct vmw_resource *error_resource;
 	struct list_head resource_list;
+	struct ww_acquire_ctx ticket;
 	uint32_t handle;
 	void *cmd;
 	int ret;
@@ -1488,7 +1489,7 @@ int vmw_execbuf_process(struct drm_file *file_priv,
 	if (unlikely(ret != 0))
 		goto out_err;
 
-	ret = ttm_eu_reserve_buffers(&sw_context->validate_nodes);
+	ret = ttm_eu_reserve_buffers(&ticket, &sw_context->validate_nodes);
 	if (unlikely(ret != 0))
 		goto out_err;
 
@@ -1537,7 +1538,7 @@ int vmw_execbuf_process(struct drm_file *file_priv,
 		DRM_ERROR("Fence submission error. Syncing.\n");
 
 	vmw_resource_list_unreserve(&sw_context->resource_list, false);
-	ttm_eu_fence_buffer_objects(&sw_context->validate_nodes,
+	ttm_eu_fence_buffer_objects(&ticket, &sw_context->validate_nodes,
 				    (void *) fence);
 
 	if (unlikely(dev_priv->pinned_bo != NULL &&
@@ -1570,7 +1571,7 @@ int vmw_execbuf_process(struct drm_file *file_priv,
 out_err:
 	vmw_resource_relocations_free(&sw_context->res_relocations);
 	vmw_free_relocations(sw_context);
-	ttm_eu_backoff_reservation(&sw_context->validate_nodes);
+	ttm_eu_backoff_reservation(&ticket, &sw_context->validate_nodes);
 	vmw_resource_list_unreserve(&sw_context->resource_list, true);
 	vmw_clear_validations(sw_context);
 	if (unlikely(dev_priv->pinned_bo != NULL &&
@@ -1644,6 +1645,7 @@ void __vmw_execbuf_release_pinned_bo(struct vmw_private *dev_priv,
 	struct list_head validate_list;
 	struct ttm_validate_buffer pinned_val, query_val;
 	struct vmw_fence_obj *lfence = NULL;
+	struct ww_acquire_ctx ticket;
 
 	if (dev_priv->pinned_bo == NULL)
 		goto out_unlock;
@@ -1657,7 +1659,7 @@ void __vmw_execbuf_release_pinned_bo(struct vmw_private *dev_priv,
 	list_add_tail(&query_val.head, &validate_list);
 
 	do {
-		ret = ttm_eu_reserve_buffers(&validate_list);
+		ret = ttm_eu_reserve_buffers(&ticket, &validate_list);
 	} while (ret == -ERESTARTSYS);
 
 	if (unlikely(ret != 0)) {
@@ -1684,7 +1686,7 @@ void __vmw_execbuf_release_pinned_bo(struct vmw_private *dev_priv,
 						  NULL);
 		fence = lfence;
 	}
-	ttm_eu_fence_buffer_objects(&validate_list, (void *) fence);
+	ttm_eu_fence_buffer_objects(&ticket, &validate_list, (void *) fence);
 	if (lfence != NULL)
 		vmw_fence_obj_unreference(&lfence);
 
@@ -1696,7 +1698,7 @@ out_unlock:
 	return;
 
 out_no_emit:
-	ttm_eu_backoff_reservation(&validate_list);
+	ttm_eu_backoff_reservation(&ticket, &validate_list);
 out_no_reserve:
 	ttm_bo_unref(&query_val.bo);
 	ttm_bo_unref(&pinned_val.bo);
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
index bc78425..ced7946 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
@@ -990,9 +990,11 @@ void vmw_resource_unreserve(struct vmw_resource *res,
  * @val_buf:        On successful return contains data about the
  *                  reserved and validated backup buffer.
  */
-int vmw_resource_check_buffer(struct vmw_resource *res,
-			      bool interruptible,
-			      struct ttm_validate_buffer *val_buf)
+static int
+vmw_resource_check_buffer(struct vmw_resource *res,
+			  struct ww_acquire_ctx *ticket,
+			  bool interruptible,
+			  struct ttm_validate_buffer *val_buf)
 {
 	struct list_head val_list;
 	bool backup_dirty = false;
@@ -1007,7 +1009,7 @@ int vmw_resource_check_buffer(struct vmw_resource *res,
 	INIT_LIST_HEAD(&val_list);
 	val_buf->bo = ttm_bo_reference(&res->backup->base);
 	list_add_tail(&val_buf->head, &val_list);
-	ret = ttm_eu_reserve_buffers(&val_list);
+	ret = ttm_eu_reserve_buffers(ticket, &val_list);
 	if (unlikely(ret != 0))
 		goto out_no_reserve;
 
@@ -1025,7 +1027,7 @@ int vmw_resource_check_buffer(struct vmw_resource *res,
 	return 0;
 
 out_no_validate:
-	ttm_eu_backoff_reservation(&val_list);
+	ttm_eu_backoff_reservation(ticket, &val_list);
 out_no_reserve:
 	ttm_bo_unref(&val_buf->bo);
 	if (backup_dirty)
@@ -1069,7 +1071,9 @@ int vmw_resource_reserve(struct vmw_resource *res, bool no_backup)
  *.
  * @val_buf:        Backup buffer information.
  */
-void vmw_resource_backoff_reservation(struct ttm_validate_buffer *val_buf)
+static void
+vmw_resource_backoff_reservation(struct ww_acquire_ctx *ticket,
+				 struct ttm_validate_buffer *val_buf)
 {
 	struct list_head val_list;
 
@@ -1078,7 +1082,7 @@ void vmw_resource_backoff_reservation(struct ttm_validate_buffer *val_buf)
 
 	INIT_LIST_HEAD(&val_list);
 	list_add_tail(&val_buf->head, &val_list);
-	ttm_eu_backoff_reservation(&val_list);
+	ttm_eu_backoff_reservation(ticket, &val_list);
 	ttm_bo_unref(&val_buf->bo);
 }
 
@@ -1092,12 +1096,13 @@ int vmw_resource_do_evict(struct vmw_resource *res)
 {
 	struct ttm_validate_buffer val_buf;
 	const struct vmw_res_func *func = res->func;
+	struct ww_acquire_ctx ticket;
 	int ret;
 
 	BUG_ON(!func->may_evict);
 
 	val_buf.bo = NULL;
-	ret = vmw_resource_check_buffer(res, true, &val_buf);
+	ret = vmw_resource_check_buffer(res, &ticket, true, &val_buf);
 	if (unlikely(ret != 0))
 		return ret;
 
@@ -1112,7 +1117,7 @@ int vmw_resource_do_evict(struct vmw_resource *res)
 	res->backup_dirty = true;
 	res->res_dirty = false;
 out_no_unbind:
-	vmw_resource_backoff_reservation(&val_buf);
+	vmw_resource_backoff_reservation(&ticket, &val_buf);
 
 	return ret;
 }
diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
index 3cb5d84..0a992b0 100644
--- a/include/drm/ttm/ttm_bo_api.h
+++ b/include/drm/ttm/ttm_bo_api.h
@@ -234,7 +234,7 @@ struct ttm_buffer_object {
 	struct list_head ddestroy;
 	struct list_head swap;
 	struct list_head io_reserve_lru;
-	uint32_t val_seq;
+	unsigned long val_seq;
 	bool seq_valid;
 
 	/**
diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h
index 9c8dca7..ec18c5f 100644
--- a/include/drm/ttm/ttm_bo_driver.h
+++ b/include/drm/ttm/ttm_bo_driver.h
@@ -38,6 +38,7 @@
 #include <linux/workqueue.h>
 #include <linux/fs.h>
 #include <linux/spinlock.h>
+#include <linux/reservation.h>
 
 struct ttm_backend_func {
 	/**
@@ -778,7 +779,7 @@ extern void ttm_mem_io_unlock(struct ttm_mem_type_manager *man);
  * @bo: A pointer to a struct ttm_buffer_object.
  * @interruptible: Sleep interruptible if waiting.
  * @no_wait: Don't sleep while trying to reserve, rather return -EBUSY.
- * @use_sequence: If @bo is already reserved, Only sleep waiting for
+ * @use_ticket: If @bo is already reserved, only sleep waiting for
  * it to become unreserved if @sequence < (@bo)->sequence.
  *
  * Locks a buffer object for validation. (Or prevents other processes from
@@ -819,7 +820,8 @@ extern void ttm_mem_io_unlock(struct ttm_mem_type_manager *man);
  */
 extern int ttm_bo_reserve(struct ttm_buffer_object *bo,
 			  bool interruptible,
-			  bool no_wait, bool use_sequence, uint32_t sequence);
+			  bool no_wait, bool use_ticket,
+			  struct ww_acquire_ctx *ticket);
 
 /**
  * ttm_bo_reserve_slowpath_nolru:
@@ -836,7 +838,7 @@ extern int ttm_bo_reserve(struct ttm_buffer_object *bo,
  */
 extern int ttm_bo_reserve_slowpath_nolru(struct ttm_buffer_object *bo,
 					 bool interruptible,
-					 uint32_t sequence);
+					 struct ww_acquire_ctx *ticket);
 
 
 /**
@@ -850,7 +852,8 @@ extern int ttm_bo_reserve_slowpath_nolru(struct ttm_buffer_object *bo,
  * held by us, this function cannot deadlock any more.
  */
 extern int ttm_bo_reserve_slowpath(struct ttm_buffer_object *bo,
-				   bool interruptible, uint32_t sequence);
+				   bool interruptible,
+				   struct ww_acquire_ctx *ticket);
 
 /**
  * ttm_bo_reserve_nolru:
@@ -876,8 +879,8 @@ extern int ttm_bo_reserve_slowpath(struct ttm_buffer_object *bo,
  */
 extern int ttm_bo_reserve_nolru(struct ttm_buffer_object *bo,
 				 bool interruptible,
-				 bool no_wait, bool use_sequence,
-				 uint32_t sequence);
+				 bool no_wait, bool use_ticket,
+				 struct ww_acquire_ctx *ticket);
 
 /**
  * ttm_bo_unreserve
@@ -889,14 +892,25 @@ extern int ttm_bo_reserve_nolru(struct ttm_buffer_object *bo,
 extern void ttm_bo_unreserve(struct ttm_buffer_object *bo);
 
 /**
- * ttm_bo_unreserve_locked
+ * ttm_bo_unreserve_ticket
+ * @bo: A pointer to a struct ttm_buffer_object.
+ * @ticket: ww_acquire_ctx used for reserving
  *
+ * Unreserve a previous reservation of @bo made with @ticket.
+ */
+extern void ttm_bo_unreserve_ticket(struct ttm_buffer_object *bo,
+				    struct ww_acquire_ctx *ticket);
+
+/**
+ * ttm_bo_unreserve_locked
  * @bo: A pointer to a struct ttm_buffer_object.
+ * @ticket: ww_acquire_ctx used for reserving, or NULL
  *
- * Unreserve a previous reservation of @bo.
+ * Unreserve a previous reservation of @bo made with @ticket.
  * Needs to be called with struct ttm_bo_global::lru_lock held.
  */
-extern void ttm_bo_unreserve_locked(struct ttm_buffer_object *bo);
+extern void ttm_bo_unreserve_ticket_locked(struct ttm_buffer_object *bo,
+					   struct ww_acquire_ctx *ticket);
 
 /*
  * ttm_bo_util.c
diff --git a/include/drm/ttm/ttm_execbuf_util.h b/include/drm/ttm/ttm_execbuf_util.h
index 547e19f..ba71ef9 100644
--- a/include/drm/ttm/ttm_execbuf_util.h
+++ b/include/drm/ttm/ttm_execbuf_util.h
@@ -33,6 +33,7 @@
 
 #include <ttm/ttm_bo_api.h>
 #include <linux/list.h>
+#include <linux/reservation.h>
 
 /**
  * struct ttm_validate_buffer
@@ -57,17 +58,20 @@ struct ttm_validate_buffer {
 /**
  * function ttm_eu_backoff_reservation
  *
+ * @ticket:   ww_acquire_ctx from reserve call
  * @list:     thread private list of ttm_validate_buffer structs.
  *
  * Undoes all buffer validation reservations for bos pointed to by
  * the list entries.
  */
 
-extern void ttm_eu_backoff_reservation(struct list_head *list);
+extern void ttm_eu_backoff_reservation(struct ww_acquire_ctx *ticket,
+				       struct list_head *list);
 
 /**
  * function ttm_eu_reserve_buffers
  *
+ * @ticket:  [out] ww_acquire_ctx returned by call.
  * @list:    thread private list of ttm_validate_buffer structs.
  *
  * Tries to reserve bos pointed to by the list entries for validation.
@@ -90,11 +94,13 @@ extern void ttm_eu_backoff_reservation(struct list_head *list);
  * has failed.
  */
 
-extern int ttm_eu_reserve_buffers(struct list_head *list);
+extern int ttm_eu_reserve_buffers(struct ww_acquire_ctx *ticket,
+				  struct list_head *list);
 
 /**
  * function ttm_eu_fence_buffer_objects.
  *
+ * @ticket:      ww_acquire_ctx from reserve call
  * @list:        thread private list of ttm_validate_buffer structs.
  * @sync_obj:    The new sync object for the buffers.
  *
@@ -104,6 +110,7 @@ extern int ttm_eu_reserve_buffers(struct list_head *list);
  *
  */
 
-extern void ttm_eu_fence_buffer_objects(struct list_head *list, void *sync_obj);
+extern void ttm_eu_fence_buffer_objects(struct ww_acquire_ctx *ticket,
+					struct list_head *list, void *sync_obj);
 
 #endif
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH WW 03/13] drm/nouveau: make flipping lockdep safe
  2013-06-27 11:48 [PATCH WW 00/13] Convert TTM to Wound/wait mutexes Maarten Lankhorst
  2013-06-27 11:48 ` [PATCH WW 01/13] reservation: cross-device reservation support, v4 Maarten Lankhorst
  2013-06-27 11:48 ` [PATCH WW 02/13] drm/ttm: make ttm reservation calls behave like reservation calls Maarten Lankhorst
@ 2013-06-27 11:48 ` Maarten Lankhorst
  2013-06-27 11:48 ` [PATCH WW 04/13] drm/ttm: convert to the reservation api Maarten Lankhorst
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Maarten Lankhorst @ 2013-06-27 11:48 UTC (permalink / raw)
  To: Dave Airlie; +Cc: dri-devel

cli->mutex was locked in inverted order with respect to reservations, and
multiple reservations were taken without a ticket; fix both. This commit had
to be done after the previous one, because otherwise the ttm_eu_* calls
would use a different seqno counter.
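For reference, the deadlock-avoidance rule that a single shared seqno counter
enables can be modeled in userspace as below (should_back_off is an
illustrative name, not TTM API; this mirrors the wraparound-safe stamp
comparison in the pre-ww_mutex TTM code):

```c
#include <limits.h>

/* Deadlock avoidance with sequence stamps, as in the homegrown TTM
 * reservation code: stamps are handed out monotonically from one
 * counter, and wraparound-safe unsigned subtraction decides which of
 * two contending contexts is "older".  If the current holder's stamp
 * is older (or equal), the contender must release everything it holds
 * and retry -- which is why all reservations taken for one operation
 * must use one ticket drawn from one counter. */
static int should_back_off(unsigned long my_stamp, unsigned long holder_stamp)
{
	/* my_stamp - holder_stamp <= LONG_MAX  <=>  holder is not newer */
	return my_stamp - holder_stamp <= LONG_MAX;
}
```

Because the comparison is done modulo ULONG_MAX + 1, it stays correct when
the counter wraps around.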

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
---
 drivers/gpu/drm/nouveau/nouveau_display.c | 103 +++++++++++++-----------------
 1 file changed, 45 insertions(+), 58 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_display.c b/drivers/gpu/drm/nouveau/nouveau_display.c
index f17dc2a..87afb0c 100644
--- a/drivers/gpu/drm/nouveau/nouveau_display.c
+++ b/drivers/gpu/drm/nouveau/nouveau_display.c
@@ -26,6 +26,7 @@
 
 #include <drm/drmP.h>
 #include <drm/drm_crtc_helper.h>
+#include <drm/ttm/ttm_execbuf_util.h>
 
 #include "nouveau_fbcon.h"
 #include "dispnv04/hw.h"
@@ -457,51 +458,6 @@ nouveau_display_resume(struct drm_device *dev)
 }
 
 static int
-nouveau_page_flip_reserve(struct nouveau_bo *old_bo,
-			  struct nouveau_bo *new_bo)
-{
-	int ret;
-
-	ret = nouveau_bo_pin(new_bo, TTM_PL_FLAG_VRAM);
-	if (ret)
-		return ret;
-
-	ret = ttm_bo_reserve(&new_bo->bo, false, false, false, 0);
-	if (ret)
-		goto fail;
-
-	if (likely(old_bo != new_bo)) {
-		ret = ttm_bo_reserve(&old_bo->bo, false, false, false, 0);
-		if (ret)
-			goto fail_unreserve;
-	}
-
-	return 0;
-
-fail_unreserve:
-	ttm_bo_unreserve(&new_bo->bo);
-fail:
-	nouveau_bo_unpin(new_bo);
-	return ret;
-}
-
-static void
-nouveau_page_flip_unreserve(struct nouveau_bo *old_bo,
-			    struct nouveau_bo *new_bo,
-			    struct nouveau_fence *fence)
-{
-	nouveau_bo_fence(new_bo, fence);
-	ttm_bo_unreserve(&new_bo->bo);
-
-	if (likely(old_bo != new_bo)) {
-		nouveau_bo_fence(old_bo, fence);
-		ttm_bo_unreserve(&old_bo->bo);
-	}
-
-	nouveau_bo_unpin(old_bo);
-}
-
-static int
 nouveau_page_flip_emit(struct nouveau_channel *chan,
 		       struct nouveau_bo *old_bo,
 		       struct nouveau_bo *new_bo,
@@ -563,6 +519,9 @@ nouveau_crtc_page_flip(struct drm_crtc *crtc, struct drm_framebuffer *fb,
 	struct nouveau_page_flip_state *s;
 	struct nouveau_channel *chan = NULL;
 	struct nouveau_fence *fence;
+	struct list_head res;
+	struct ttm_validate_buffer res_val[2];
+	struct ww_acquire_ctx ticket;
 	int ret;
 
 	if (!drm->channel)
@@ -572,25 +531,43 @@ nouveau_crtc_page_flip(struct drm_crtc *crtc, struct drm_framebuffer *fb,
 	if (!s)
 		return -ENOMEM;
 
-	/* Don't let the buffers go away while we flip */
-	ret = nouveau_page_flip_reserve(old_bo, new_bo);
-	if (ret)
-		goto fail_free;
-
-	/* Initialize a page flip struct */
-	*s = (struct nouveau_page_flip_state)
-		{ { }, event, nouveau_crtc(crtc)->index,
-		  fb->bits_per_pixel, fb->pitches[0], crtc->x, crtc->y,
-		  new_bo->bo.offset };
-
 	/* Choose the channel the flip will be handled in */
+	spin_lock(&old_bo->bo.bdev->fence_lock);
 	fence = new_bo->bo.sync_obj;
 	if (fence)
 		chan = fence->channel;
 	if (!chan)
 		chan = drm->channel;
+	spin_unlock(&old_bo->bo.bdev->fence_lock);
+
 	mutex_lock(&chan->cli->mutex);
 
+	if (new_bo != old_bo) {
+		ret = nouveau_bo_pin(new_bo, TTM_PL_FLAG_VRAM);
+		if (likely(!ret)) {
+			res_val[0].bo = &old_bo->bo;
+			res_val[1].bo = &new_bo->bo;
+			INIT_LIST_HEAD(&res);
+			list_add_tail(&res_val[0].head, &res);
+			list_add_tail(&res_val[1].head, &res);
+			ret = ttm_eu_reserve_buffers(&ticket, &res);
+			if (ret)
+				nouveau_bo_unpin(new_bo);
+		}
+	} else
+		ret = ttm_bo_reserve(&new_bo->bo, false, false, false, 0);
+
+	if (ret) {
+		mutex_unlock(&chan->cli->mutex);
+		goto fail_free;
+	}
+
+	/* Initialize a page flip struct */
+	*s = (struct nouveau_page_flip_state)
+		{ { }, event, nouveau_crtc(crtc)->index,
+		  fb->bits_per_pixel, fb->pitches[0], crtc->x, crtc->y,
+		  new_bo->bo.offset };
+
 	/* Emit a page flip */
 	if (nv_device(drm->device)->card_type >= NV_50) {
 		ret = nv50_display_flip_next(crtc, fb, chan, 0);
@@ -608,12 +585,22 @@ nouveau_crtc_page_flip(struct drm_crtc *crtc, struct drm_framebuffer *fb,
 	/* Update the crtc struct and cleanup */
 	crtc->fb = fb;
 
-	nouveau_page_flip_unreserve(old_bo, new_bo, fence);
+	if (old_bo != new_bo) {
+		ttm_eu_fence_buffer_objects(&ticket, &res, fence);
+		nouveau_bo_unpin(old_bo);
+	} else {
+		nouveau_bo_fence(new_bo, fence);
+		ttm_bo_unreserve(&new_bo->bo);
+	}
 	nouveau_fence_unref(&fence);
 	return 0;
 
 fail_unreserve:
-	nouveau_page_flip_unreserve(old_bo, new_bo, NULL);
+	if (old_bo != new_bo) {
+		ttm_eu_backoff_reservation(&ticket, &res);
+		nouveau_bo_unpin(new_bo);
+	} else
+		ttm_bo_unreserve(&new_bo->bo);
 fail_free:
 	kfree(s);
 	return ret;
-- 
1.8.3.1


* [PATCH WW 04/13] drm/ttm: convert to the reservation api
  2013-06-27 11:48 [PATCH WW 00/13] Convert TTM to Wound/wait mutexes Maarten Lankhorst
                   ` (2 preceding siblings ...)
  2013-06-27 11:48 ` [PATCH WW 03/13] drm/nouveau: make flipping lockdep safe Maarten Lankhorst
@ 2013-06-27 11:48 ` Maarten Lankhorst
  2013-06-27 22:03   ` Jerome Glisse
  2013-06-27 11:48 ` [PATCH WW 05/13] drm/ast: inline reservations Maarten Lankhorst
                   ` (8 subsequent siblings)
  12 siblings, 1 reply; 21+ messages in thread
From: Maarten Lankhorst @ 2013-06-27 11:48 UTC (permalink / raw)
  To: Dave Airlie; +Cc: dri-devel

Now that the semantics are compatible, flip the switch: use ww_mutex
instead of the homegrown implementation.

ww_mutex uses -EDEADLK to signal that the caller has to back off,
and -EALREADY to indicate that the buffer is already held by the caller.

TTM used -EAGAIN and -EDEADLK for those cases, respectively, so some
changes were needed to handle the new return values correctly.
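The error-code translation can be sketched as a small userspace model
(normalize_ww_err and needs_slowpath are illustrative names, not kernel API;
ERESTARTSYS is kernel-internal and is defined locally only to keep the
sketch self-contained):

```c
#include <errno.h>

/* ERESTARTSYS lives in the kernel-internal include/linux/errno.h;
 * defined here so this userspace sketch compiles on its own. */
#ifndef ERESTARTSYS
#define ERESTARTSYS 512
#endif

/* ww_mutex_lock_interruptible() returns -EINTR on signal, but TTM's
 * reservation API has always reported -ERESTARTSYS to its callers, so
 * the converted code translates the error before returning. */
static int normalize_ww_err(int ret)
{
	return ret == -EINTR ? -ERESTARTSYS : ret;
}

/* Caller-side rule after the conversion: -EDEADLK from a ticketed lock
 * attempt means "drop all reservations and take the slow path"; before
 * this patch TTM signalled the same condition with -EAGAIN. */
static int needs_slowpath(int ret)
{
	return ret == -EDEADLK;
}
```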

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
---
 drivers/gpu/drm/nouveau/nouveau_gem.c  |   2 +-
 drivers/gpu/drm/qxl/qxl_object.h       |   5 -
 drivers/gpu/drm/ttm/ttm_bo.c           | 190 +++++++++------------------------
 drivers/gpu/drm/ttm/ttm_bo_util.c      |   6 +-
 drivers/gpu/drm/ttm/ttm_execbuf_util.c |  43 +++-----
 include/drm/ttm/ttm_bo_api.h           |  25 ++---
 include/drm/ttm/ttm_execbuf_util.h     |   1 -
 7 files changed, 79 insertions(+), 193 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
index e35d468..2b2077d 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
@@ -368,7 +368,7 @@ retry:
 		ret = ttm_bo_reserve(&nvbo->bo, true, false, true, &op->ticket);
 		if (ret) {
 			validate_fini_no_ticket(op, NULL);
-			if (unlikely(ret == -EAGAIN)) {
+			if (unlikely(ret == -EDEADLK)) {
 				ret = ttm_bo_reserve_slowpath(&nvbo->bo, true,
 							      &op->ticket);
 				if (!ret)
diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
index b4fd89f..ee7ad79 100644
--- a/drivers/gpu/drm/qxl/qxl_object.h
+++ b/drivers/gpu/drm/qxl/qxl_object.h
@@ -57,11 +57,6 @@ static inline unsigned long qxl_bo_size(struct qxl_bo *bo)
 	return bo->tbo.num_pages << PAGE_SHIFT;
 }
 
-static inline bool qxl_bo_is_reserved(struct qxl_bo *bo)
-{
-	return !!atomic_read(&bo->tbo.reserved);
-}
-
 static inline u64 qxl_bo_mmap_offset(struct qxl_bo *bo)
 {
 	return bo->tbo.addr_space_offset;
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index b912375..5f9fe80 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -150,6 +150,9 @@ static void ttm_bo_release_list(struct kref *list_kref)
 	if (bo->ttm)
 		ttm_tt_destroy(bo->ttm);
 	atomic_dec(&bo->glob->bo_count);
+	if (bo->resv == &bo->ttm_resv)
+		reservation_object_fini(&bo->ttm_resv);
+
 	if (bo->destroy)
 		bo->destroy(bo);
 	else {
@@ -158,18 +161,6 @@ static void ttm_bo_release_list(struct kref *list_kref)
 	ttm_mem_global_free(bdev->glob->mem_glob, acc_size);
 }
 
-static int ttm_bo_wait_unreserved(struct ttm_buffer_object *bo,
-				  bool interruptible)
-{
-	if (interruptible) {
-		return wait_event_interruptible(bo->event_queue,
-					       !ttm_bo_is_reserved(bo));
-	} else {
-		wait_event(bo->event_queue, !ttm_bo_is_reserved(bo));
-		return 0;
-	}
-}
-
 void ttm_bo_add_to_lru(struct ttm_buffer_object *bo)
 {
 	struct ttm_bo_device *bdev = bo->bdev;
@@ -218,65 +209,27 @@ int ttm_bo_reserve_nolru(struct ttm_buffer_object *bo,
 			  bool no_wait, bool use_ticket,
 			  struct ww_acquire_ctx *ticket)
 {
-	int ret;
+	int ret = 0;
 
-	while (unlikely(atomic_xchg(&bo->reserved, 1) != 0)) {
-		/**
-		 * Deadlock avoidance for multi-bo reserving.
-		 */
-		if (use_ticket && bo->seq_valid) {
-			/**
-			 * We've already reserved this one.
-			 */
-			if (unlikely(ticket->stamp == bo->val_seq))
-				return -EDEADLK;
-			/**
-			 * Already reserved by a thread that will not back
-			 * off for us. We need to back off.
-			 */
-			if (unlikely(ticket->stamp - bo->val_seq <= LONG_MAX))
-				return -EAGAIN;
-		}
+	if (no_wait) {
+		bool success;
 
-		if (no_wait)
+		/* not valid any more, fix your locking! */
+		if (WARN_ON(ticket))
 			return -EBUSY;
 
-		ret = ttm_bo_wait_unreserved(bo, interruptible);
-
-		if (unlikely(ret))
-			return ret;
-	}
-
-	if (use_ticket) {
-		bool wake_up = false;
-
-		/**
-		 * Wake up waiters that may need to recheck for deadlock,
-		 * if we decreased the sequence number.
-		 */
-		if (unlikely((bo->val_seq - ticket->stamp <= LONG_MAX)
-			     || !bo->seq_valid))
-			wake_up = true;
-
-		/*
-		 * In the worst case with memory ordering these values can be
-		 * seen in the wrong order. However since we call wake_up_all
-		 * in that case, this will hopefully not pose a problem,
-		 * and the worst case would only cause someone to accidentally
-		 * hit -EAGAIN in ttm_bo_reserve when they see old value of
-		 * val_seq. However this would only happen if seq_valid was
-		 * written before val_seq was, and just means some slightly
-		 * increased cpu usage
-		 */
-		bo->val_seq = ticket->stamp;
-		bo->seq_valid = true;
-		if (wake_up)
-			wake_up_all(&bo->event_queue);
-	} else {
-		bo->seq_valid = false;
+		success = ww_mutex_trylock(&bo->resv->lock);
+		return success ? 0 : -EBUSY;
 	}
 
-	return 0;
+	if (interruptible)
+		ret = ww_mutex_lock_interruptible(&bo->resv->lock,
+						  ticket);
+	else
+		ret = ww_mutex_lock(&bo->resv->lock, ticket);
+	if (ret == -EINTR)
+		return -ERESTARTSYS;
+	return ret;
 }
 EXPORT_SYMBOL(ttm_bo_reserve);
 
@@ -313,50 +266,27 @@ int ttm_bo_reserve(struct ttm_buffer_object *bo,
 	return ret;
 }
 
-int ttm_bo_reserve_slowpath_nolru(struct ttm_buffer_object *bo,
-				  bool interruptible,
-				  struct ww_acquire_ctx *ticket)
-{
-	bool wake_up = false;
-	int ret;
-
-	while (unlikely(atomic_xchg(&bo->reserved, 1) != 0)) {
-		WARN_ON(bo->seq_valid && ticket->stamp == bo->val_seq);
-
-		ret = ttm_bo_wait_unreserved(bo, interruptible);
-
-		if (unlikely(ret))
-			return ret;
-	}
-
-	if (bo->val_seq - ticket->stamp < LONG_MAX || !bo->seq_valid)
-		wake_up = true;
-
-	/**
-	 * Wake up waiters that may need to recheck for deadlock,
-	 * if we decreased the sequence number.
-	 */
-	bo->val_seq = ticket->stamp;
-	bo->seq_valid = true;
-	if (wake_up)
-		wake_up_all(&bo->event_queue);
-
-	return 0;
-}
-
 int ttm_bo_reserve_slowpath(struct ttm_buffer_object *bo,
 			    bool interruptible, struct ww_acquire_ctx *ticket)
 {
 	struct ttm_bo_global *glob = bo->glob;
-	int put_count, ret;
+	int put_count = 0;
+	int ret = 0;
+
+	if (interruptible)
+		ret = ww_mutex_lock_slow_interruptible(&bo->resv->lock,
+						       ticket);
+	else
+		ww_mutex_lock_slow(&bo->resv->lock, ticket);
 
-	ret = ttm_bo_reserve_slowpath_nolru(bo, interruptible, ticket);
-	if (likely(!ret)) {
+	if (likely(ret == 0)) {
 		spin_lock(&glob->lru_lock);
 		put_count = ttm_bo_del_from_lru(bo);
 		spin_unlock(&glob->lru_lock);
 		ttm_bo_list_ref_sub(bo, put_count, true);
-	}
+	} else if (ret == -EINTR)
+		ret = -ERESTARTSYS;
+
 	return ret;
 }
 EXPORT_SYMBOL(ttm_bo_reserve_slowpath);
@@ -364,8 +294,7 @@ EXPORT_SYMBOL(ttm_bo_reserve_slowpath);
 void ttm_bo_unreserve_ticket_locked(struct ttm_buffer_object *bo, struct ww_acquire_ctx *ticket)
 {
 	ttm_bo_add_to_lru(bo);
-	atomic_set(&bo->reserved, 0);
-	wake_up_all(&bo->event_queue);
+	ww_mutex_unlock(&bo->resv->lock);
 }
 
 void ttm_bo_unreserve(struct ttm_buffer_object *bo)
@@ -558,17 +487,7 @@ static void ttm_bo_cleanup_memtype_use(struct ttm_buffer_object *bo)
 	}
 	ttm_bo_mem_put(bo, &bo->mem);
 
-	atomic_set(&bo->reserved, 0);
-	wake_up_all(&bo->event_queue);
-
-	/*
-	 * Since the final reference to this bo may not be dropped by
-	 * the current task we have to put a memory barrier here to make
-	 * sure the changes done in this function are always visible.
-	 *
-	 * This function only needs protection against the final kref_put.
-	 */
-	smp_mb__before_atomic_dec();
+	ww_mutex_unlock(&bo->resv->lock);
 }
 
 static void ttm_bo_cleanup_refs_or_queue(struct ttm_buffer_object *bo)
@@ -600,10 +519,8 @@ static void ttm_bo_cleanup_refs_or_queue(struct ttm_buffer_object *bo)
 		sync_obj = driver->sync_obj_ref(bo->sync_obj);
 	spin_unlock(&bdev->fence_lock);
 
-	if (!ret) {
-		atomic_set(&bo->reserved, 0);
-		wake_up_all(&bo->event_queue);
-	}
+	if (!ret)
+		ww_mutex_unlock(&bo->resv->lock);
 
 	kref_get(&bo->list_kref);
 	list_add_tail(&bo->ddestroy, &bdev->ddestroy);
@@ -653,8 +570,7 @@ static int ttm_bo_cleanup_refs_and_unlock(struct ttm_buffer_object *bo,
 		sync_obj = driver->sync_obj_ref(bo->sync_obj);
 		spin_unlock(&bdev->fence_lock);
 
-		atomic_set(&bo->reserved, 0);
-		wake_up_all(&bo->event_queue);
+		ww_mutex_unlock(&bo->resv->lock);
 		spin_unlock(&glob->lru_lock);
 
 		ret = driver->sync_obj_wait(sync_obj, false, interruptible);
@@ -692,8 +608,7 @@ static int ttm_bo_cleanup_refs_and_unlock(struct ttm_buffer_object *bo,
 		spin_unlock(&bdev->fence_lock);
 
 	if (ret || unlikely(list_empty(&bo->ddestroy))) {
-		atomic_set(&bo->reserved, 0);
-		wake_up_all(&bo->event_queue);
+		ww_mutex_unlock(&bo->resv->lock);
 		spin_unlock(&glob->lru_lock);
 		return ret;
 	}
@@ -1253,6 +1168,7 @@ int ttm_bo_init(struct ttm_bo_device *bdev,
 	int ret = 0;
 	unsigned long num_pages;
 	struct ttm_mem_global *mem_glob = bdev->glob->mem_glob;
+	bool locked;
 
 	ret = ttm_mem_global_alloc(mem_glob, acc_size, false, false);
 	if (ret) {
@@ -1279,8 +1195,6 @@ int ttm_bo_init(struct ttm_bo_device *bdev,
 	kref_init(&bo->kref);
 	kref_init(&bo->list_kref);
 	atomic_set(&bo->cpu_writers, 0);
-	atomic_set(&bo->reserved, 1);
-	init_waitqueue_head(&bo->event_queue);
 	INIT_LIST_HEAD(&bo->lru);
 	INIT_LIST_HEAD(&bo->ddestroy);
 	INIT_LIST_HEAD(&bo->swap);
@@ -1298,37 +1212,34 @@ int ttm_bo_init(struct ttm_bo_device *bdev,
 	bo->mem.bus.io_reserved_count = 0;
 	bo->priv_flags = 0;
 	bo->mem.placement = (TTM_PL_FLAG_SYSTEM | TTM_PL_FLAG_CACHED);
-	bo->seq_valid = false;
 	bo->persistent_swap_storage = persistent_swap_storage;
 	bo->acc_size = acc_size;
 	bo->sg = sg;
+	bo->resv = &bo->ttm_resv;
+	reservation_object_init(bo->resv);
 	atomic_inc(&bo->glob->bo_count);
 
 	ret = ttm_bo_check_placement(bo, placement);
-	if (unlikely(ret != 0))
-		goto out_err;
 
 	/*
 	 * For ttm_bo_type_device buffers, allocate
 	 * address space from the device.
 	 */
-	if (bo->type == ttm_bo_type_device ||
-	    bo->type == ttm_bo_type_sg) {
+	if (likely(!ret) &&
+	    (bo->type == ttm_bo_type_device ||
+	     bo->type == ttm_bo_type_sg))
 		ret = ttm_bo_setup_vm(bo);
-		if (ret)
-			goto out_err;
-	}
 
-	ret = ttm_bo_validate(bo, placement, interruptible, false);
-	if (ret)
-		goto out_err;
+	locked = ww_mutex_trylock(&bo->resv->lock);
+	WARN_ON(!locked);
 
-	ttm_bo_unreserve(bo);
-	return 0;
+	if (likely(!ret))
+		ret = ttm_bo_validate(bo, placement, interruptible, false);
 
-out_err:
 	ttm_bo_unreserve(bo);
-	ttm_bo_unref(&bo);
+
+	if (unlikely(ret))
+		ttm_bo_unref(&bo);
 
 	return ret;
 }
@@ -1941,8 +1852,7 @@ out:
 	 * already swapped buffer.
 	 */
 
-	atomic_set(&bo->reserved, 0);
-	wake_up_all(&bo->event_queue);
+	ww_mutex_unlock(&bo->resv->lock);
 	kref_put(&bo->list_kref, ttm_bo_release_list);
 	return ret;
 }
diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
index af89458..319cf41 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -433,6 +433,7 @@ static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo,
 	struct ttm_buffer_object *fbo;
 	struct ttm_bo_device *bdev = bo->bdev;
 	struct ttm_bo_driver *driver = bdev->driver;
+	int ret;
 
 	fbo = kmalloc(sizeof(*fbo), GFP_KERNEL);
 	if (!fbo)
@@ -445,7 +446,6 @@ static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo,
 	 * TODO: Explicit member copy would probably be better here.
 	 */
 
-	init_waitqueue_head(&fbo->event_queue);
 	INIT_LIST_HEAD(&fbo->ddestroy);
 	INIT_LIST_HEAD(&fbo->lru);
 	INIT_LIST_HEAD(&fbo->swap);
@@ -463,6 +463,10 @@ static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo,
 	kref_init(&fbo->kref);
 	fbo->destroy = &ttm_transfered_destroy;
 	fbo->acc_size = 0;
+	fbo->resv = &fbo->ttm_resv;
+	reservation_object_init(fbo->resv);
+	ret = ww_mutex_trylock(&fbo->resv->lock);
+	WARN_ON(!ret);
 
 	*new_obj = fbo;
 	return 0;
diff --git a/drivers/gpu/drm/ttm/ttm_execbuf_util.c b/drivers/gpu/drm/ttm/ttm_execbuf_util.c
index efcb734..7392da5 100644
--- a/drivers/gpu/drm/ttm/ttm_execbuf_util.c
+++ b/drivers/gpu/drm/ttm/ttm_execbuf_util.c
@@ -48,8 +48,7 @@ static void ttm_eu_backoff_reservation_locked(struct list_head *list,
 			entry->removed = false;
 
 		} else {
-			atomic_set(&bo->reserved, 0);
-			wake_up_all(&bo->event_queue);
+			ww_mutex_unlock(&bo->resv->lock);
 		}
 	}
 }
@@ -134,8 +133,6 @@ int ttm_eu_reserve_buffers(struct ww_acquire_ctx *ticket,
 	glob = entry->bo->glob;
 
 	ww_acquire_init(ticket, &reservation_ww_class);
-	spin_lock(&glob->lru_lock);
-
 retry:
 	list_for_each_entry(entry, list, head) {
 		struct ttm_buffer_object *bo = entry->bo;
@@ -144,42 +141,34 @@ retry:
 		if (entry->reserved)
 			continue;
 
-		ret = ttm_bo_reserve_nolru(bo, true, true, true, ticket);
-		switch (ret) {
-		case 0:
-			break;
-		case -EBUSY:
-			ttm_eu_del_from_lru_locked(list);
-			spin_unlock(&glob->lru_lock);
-			ret = ttm_bo_reserve_nolru(bo, true, false,
-						   true, ticket);
-			spin_lock(&glob->lru_lock);
-
-			if (!ret)
-				break;
 
-			if (unlikely(ret != -EAGAIN))
-				goto err;
+		ret = ttm_bo_reserve_nolru(bo, true, false, true, ticket);
 
-			/* fallthrough */
-		case -EAGAIN:
+		if (ret == -EDEADLK) {
+			/* uh oh, we lost out, drop every reservation and try
+			 * to only reserve this buffer, then start over if
+			 * this succeeds.
+			 */
+			spin_lock(&glob->lru_lock);
 			ttm_eu_backoff_reservation_locked(list, ticket);
 			spin_unlock(&glob->lru_lock);
 			ttm_eu_list_ref_sub(list);
-			ret = ttm_bo_reserve_slowpath_nolru(bo, true, ticket);
-			if (unlikely(ret != 0))
+			ret = ww_mutex_lock_slow_interruptible(&bo->resv->lock,
+							       ticket);
+			if (unlikely(ret != 0)) {
+				if (ret == -EINTR)
+					ret = -ERESTARTSYS;
 				goto err_fini;
+			}
 
-			spin_lock(&glob->lru_lock);
 			entry->reserved = true;
 			if (unlikely(atomic_read(&bo->cpu_writers) > 0)) {
 				ret = -EBUSY;
 				goto err;
 			}
 			goto retry;
-		default:
+		} else if (ret)
 			goto err;
-		}
 
 		entry->reserved = true;
 		if (unlikely(atomic_read(&bo->cpu_writers) > 0)) {
@@ -189,12 +178,14 @@ retry:
 	}
 
 	ww_acquire_done(ticket);
+	spin_lock(&glob->lru_lock);
 	ttm_eu_del_from_lru_locked(list);
 	spin_unlock(&glob->lru_lock);
 	ttm_eu_list_ref_sub(list);
 	return 0;
 
 err:
+	spin_lock(&glob->lru_lock);
 	ttm_eu_backoff_reservation_locked(list, ticket);
 	spin_unlock(&glob->lru_lock);
 	ttm_eu_list_ref_sub(list);
diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
index 0a992b0..31ad860 100644
--- a/include/drm/ttm/ttm_bo_api.h
+++ b/include/drm/ttm/ttm_bo_api.h
@@ -39,6 +39,7 @@
 #include <linux/mm.h>
 #include <linux/rbtree.h>
 #include <linux/bitmap.h>
+#include <linux/reservation.h>
 
 struct ttm_bo_device;
 
@@ -153,7 +154,6 @@ struct ttm_tt;
  * Lru lists may keep one refcount, the delayed delete list, and kref != 0
  * keeps one refcount. When this refcount reaches zero,
  * the object is destroyed.
- * @event_queue: Queue for processes waiting on buffer object status change.
  * @mem: structure describing current placement.
  * @persistent_swap_storage: Usually the swap storage is deleted for buffers
  * pinned in physical memory. If this behaviour is not desired, this member
@@ -164,12 +164,6 @@ struct ttm_tt;
  * @lru: List head for the lru list.
  * @ddestroy: List head for the delayed destroy list.
  * @swap: List head for swap LRU list.
- * @val_seq: Sequence of the validation holding the @reserved lock.
- * Used to avoid starvation when many processes compete to validate the
- * buffer. This member is protected by the bo_device::lru_lock.
- * @seq_valid: The value of @val_seq is valid. This value is protected by
- * the bo_device::lru_lock.
- * @reserved: Deadlock-free lock used for synchronization state transitions.
  * @sync_obj: Pointer to a synchronization object.
  * @priv_flags: Flags describing buffer object internal state.
  * @vm_rb: Rb node for the vm rb tree.
@@ -209,10 +203,9 @@ struct ttm_buffer_object {
 
 	struct kref kref;
 	struct kref list_kref;
-	wait_queue_head_t event_queue;
 
 	/**
-	 * Members protected by the bo::reserved lock.
+	 * Members protected by the bo::resv::reserved lock.
 	 */
 
 	struct ttm_mem_reg mem;
@@ -234,15 +227,6 @@ struct ttm_buffer_object {
 	struct list_head ddestroy;
 	struct list_head swap;
 	struct list_head io_reserve_lru;
-	unsigned long val_seq;
-	bool seq_valid;
-
-	/**
-	 * Members protected by the bdev::lru_lock
-	 * only when written to.
-	 */
-
-	atomic_t reserved;
 
 	/**
 	 * Members protected by struct buffer_object_device::fence_lock
@@ -272,6 +256,9 @@ struct ttm_buffer_object {
 	uint32_t cur_placement;
 
 	struct sg_table *sg;
+
+	struct reservation_object *resv;
+	struct reservation_object ttm_resv;
 };
 
 /**
@@ -736,7 +723,7 @@ extern void ttm_bo_swapout_all(struct ttm_bo_device *bdev);
  */
 static inline bool ttm_bo_is_reserved(struct ttm_buffer_object *bo)
 {
-	return atomic_read(&bo->reserved);
+	return ww_mutex_is_locked(&bo->resv->lock);
 }
 
 #endif
diff --git a/include/drm/ttm/ttm_execbuf_util.h b/include/drm/ttm/ttm_execbuf_util.h
index ba71ef9..ec8a1d3 100644
--- a/include/drm/ttm/ttm_execbuf_util.h
+++ b/include/drm/ttm/ttm_execbuf_util.h
@@ -33,7 +33,6 @@
 
 #include <ttm/ttm_bo_api.h>
 #include <linux/list.h>
-#include <linux/reservation.h>
 
 /**
  * struct ttm_validate_buffer
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread
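The backoff-and-retry flow in the ttm_eu_reserve_buffers hunk above (drop every reservation, take the contended lock in the slow path, then start over) can be sketched in plain userspace C. This is a hypothetical simplification using pthreads, not the kernel API; real ww_mutexes additionally use ticket ages ("wounding") to guarantee forward progress, which is omitted here.

```c
#include <pthread.h>

struct buf {
	pthread_mutex_t lock;
};

/*
 * Try to reserve every buffer in the list.  On contention, back off by
 * dropping all locks already held, block on the contended lock (the
 * "slow path"), then retry the whole list from the start.
 */
int reserve_all(struct buf *bufs, int n)
{
	int held[16] = { 0 };

	if (n > 16)
		return -1;	/* sketch only handles small lists */

	for (;;) {
		int i, contended = -1;

		for (i = 0; i < n; i++) {
			if (held[i])
				continue;
			if (pthread_mutex_trylock(&bufs[i].lock)) {
				contended = i;
				break;
			}
			held[i] = 1;
		}
		if (contended < 0)
			return 0;	/* everything reserved */

		/* backoff: release every reservation we hold */
		for (i = 0; i < n; i++) {
			if (held[i]) {
				pthread_mutex_unlock(&bufs[i].lock);
				held[i] = 0;
			}
		}
		/* slow path: sleep on the contended lock, then retry */
		pthread_mutex_lock(&bufs[contended].lock);
		held[contended] = 1;
	}
}
```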

* [PATCH WW 05/13] drm/ast: inline reservations
  2013-06-27 11:48 [PATCH WW 00/13] Convert TTM to Wound/wait mutexes Maarten Lankhorst
                   ` (3 preceding siblings ...)
  2013-06-27 11:48 ` [PATCH WW 04/13] drm/ttm: convert to the reservation api Maarten Lankhorst
@ 2013-06-27 11:48 ` Maarten Lankhorst
  2013-06-27 11:48 ` [PATCH WW 06/13] drm/cirrus: " Maarten Lankhorst
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Maarten Lankhorst @ 2013-06-27 11:48 UTC (permalink / raw)
  To: Dave Airlie; +Cc: dri-devel

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
---
 drivers/gpu/drm/ast/ast_drv.h | 20 ++++++++++++++++++--
 drivers/gpu/drm/ast/ast_ttm.c | 18 ------------------
 2 files changed, 18 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
index 02e52d5..622d4ae 100644
--- a/drivers/gpu/drm/ast/ast_drv.h
+++ b/drivers/gpu/drm/ast/ast_drv.h
@@ -348,8 +348,24 @@ int ast_gem_create(struct drm_device *dev,
 int ast_bo_pin(struct ast_bo *bo, u32 pl_flag, u64 *gpu_addr);
 int ast_bo_unpin(struct ast_bo *bo);
 
-int ast_bo_reserve(struct ast_bo *bo, bool no_wait);
-void ast_bo_unreserve(struct ast_bo *bo);
+static inline int ast_bo_reserve(struct ast_bo *bo, bool no_wait)
+{
+	int ret;
+
+	ret = ttm_bo_reserve(&bo->bo, true, no_wait, false, 0);
+	if (ret) {
+		if (ret != -ERESTARTSYS && ret != -EBUSY)
+			DRM_ERROR("reserve failed %p\n", bo);
+		return ret;
+	}
+	return 0;
+}
+
+static inline void ast_bo_unreserve(struct ast_bo *bo)
+{
+	ttm_bo_unreserve(&bo->bo);
+}
+
 void ast_ttm_placement(struct ast_bo *bo, int domain);
 int ast_bo_push_sysram(struct ast_bo *bo);
 int ast_mmap(struct file *filp, struct vm_area_struct *vma);
diff --git a/drivers/gpu/drm/ast/ast_ttm.c b/drivers/gpu/drm/ast/ast_ttm.c
index a5a1a03..98d6708 100644
--- a/drivers/gpu/drm/ast/ast_ttm.c
+++ b/drivers/gpu/drm/ast/ast_ttm.c
@@ -303,24 +303,6 @@ void ast_ttm_placement(struct ast_bo *bo, int domain)
 	bo->placement.num_busy_placement = c;
 }
 
-int ast_bo_reserve(struct ast_bo *bo, bool no_wait)
-{
-	int ret;
-
-	ret = ttm_bo_reserve(&bo->bo, true, no_wait, false, 0);
-	if (ret) {
-		if (ret != -ERESTARTSYS && ret != -EBUSY)
-			DRM_ERROR("reserve failed %p\n", bo);
-		return ret;
-	}
-	return 0;
-}
-
-void ast_bo_unreserve(struct ast_bo *bo)
-{
-	ttm_bo_unreserve(&bo->bo);
-}
-
 int ast_bo_create(struct drm_device *dev, int size, int align,
 		  uint32_t flags, struct ast_bo **pastbo)
 {
-- 
1.8.3.1


* [PATCH WW 06/13] drm/cirrus: inline reservations
  2013-06-27 11:48 [PATCH WW 00/13] Convert TTM to Wound/wait mutexes Maarten Lankhorst
                   ` (4 preceding siblings ...)
  2013-06-27 11:48 ` [PATCH WW 05/13] drm/ast: inline reservations Maarten Lankhorst
@ 2013-06-27 11:48 ` Maarten Lankhorst
  2013-06-27 11:48 ` [PATCH WW 07/13] drm/mgag200: " Maarten Lankhorst
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Maarten Lankhorst @ 2013-06-27 11:48 UTC (permalink / raw)
  To: Dave Airlie; +Cc: dri-devel

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
---
 drivers/gpu/drm/cirrus/cirrus_drv.h | 21 +++++++++++++++++++--
 drivers/gpu/drm/cirrus/cirrus_ttm.c | 18 ------------------
 2 files changed, 19 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/cirrus/cirrus_drv.h b/drivers/gpu/drm/cirrus/cirrus_drv.h
index 7ca0595..bae5560 100644
--- a/drivers/gpu/drm/cirrus/cirrus_drv.h
+++ b/drivers/gpu/drm/cirrus/cirrus_drv.h
@@ -240,8 +240,25 @@ void cirrus_ttm_placement(struct cirrus_bo *bo, int domain);
 int cirrus_bo_create(struct drm_device *dev, int size, int align,
 		     uint32_t flags, struct cirrus_bo **pcirrusbo);
 int cirrus_mmap(struct file *filp, struct vm_area_struct *vma);
-int cirrus_bo_reserve(struct cirrus_bo *bo, bool no_wait);
-void cirrus_bo_unreserve(struct cirrus_bo *bo);
+
+static inline int cirrus_bo_reserve(struct cirrus_bo *bo, bool no_wait)
+{
+	int ret;
+
+	ret = ttm_bo_reserve(&bo->bo, true, no_wait, false, 0);
+	if (ret) {
+		if (ret != -ERESTARTSYS && ret != -EBUSY)
+			DRM_ERROR("reserve failed %p\n", bo);
+		return ret;
+	}
+	return 0;
+}
+
+static inline void cirrus_bo_unreserve(struct cirrus_bo *bo)
+{
+	ttm_bo_unreserve(&bo->bo);
+}
+
 int cirrus_bo_push_sysram(struct cirrus_bo *bo);
 int cirrus_bo_pin(struct cirrus_bo *bo, u32 pl_flag, u64 *gpu_addr);
 #endif				/* __CIRRUS_DRV_H__ */
diff --git a/drivers/gpu/drm/cirrus/cirrus_ttm.c b/drivers/gpu/drm/cirrus/cirrus_ttm.c
index 36b9b0b..0047012 100644
--- a/drivers/gpu/drm/cirrus/cirrus_ttm.c
+++ b/drivers/gpu/drm/cirrus/cirrus_ttm.c
@@ -308,24 +308,6 @@ void cirrus_ttm_placement(struct cirrus_bo *bo, int domain)
 	bo->placement.num_busy_placement = c;
 }
 
-int cirrus_bo_reserve(struct cirrus_bo *bo, bool no_wait)
-{
-	int ret;
-
-	ret = ttm_bo_reserve(&bo->bo, true, no_wait, false, 0);
-	if (ret) {
-		if (ret != -ERESTARTSYS && ret != -EBUSY)
-			DRM_ERROR("reserve failed %p\n", bo);
-		return ret;
-	}
-	return 0;
-}
-
-void cirrus_bo_unreserve(struct cirrus_bo *bo)
-{
-	ttm_bo_unreserve(&bo->bo);
-}
-
 int cirrus_bo_create(struct drm_device *dev, int size, int align,
 		  uint32_t flags, struct cirrus_bo **pcirrusbo)
 {
-- 
1.8.3.1


* [PATCH WW 07/13] drm/mgag200: inline reservations
  2013-06-27 11:48 [PATCH WW 00/13] Convert TTM to Wound/wait mutexes Maarten Lankhorst
                   ` (5 preceding siblings ...)
  2013-06-27 11:48 ` [PATCH WW 06/13] drm/cirrus: " Maarten Lankhorst
@ 2013-06-27 11:48 ` Maarten Lankhorst
  2013-06-27 11:48 ` [PATCH WW 08/13] drm/radeon: " Maarten Lankhorst
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Maarten Lankhorst @ 2013-06-27 11:48 UTC (permalink / raw)
  To: Dave Airlie; +Cc: dri-devel

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
---
 drivers/gpu/drm/mgag200/mgag200_drv.h | 20 ++++++++++++++++++--
 drivers/gpu/drm/mgag200/mgag200_ttm.c | 18 ------------------
 2 files changed, 18 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/mgag200/mgag200_drv.h b/drivers/gpu/drm/mgag200/mgag200_drv.h
index 364a05a..b4f955b 100644
--- a/drivers/gpu/drm/mgag200/mgag200_drv.h
+++ b/drivers/gpu/drm/mgag200/mgag200_drv.h
@@ -279,8 +279,24 @@ void mgag200_i2c_destroy(struct mga_i2c_chan *i2c);
 #define DRM_FILE_PAGE_OFFSET (0x100000000ULL >> PAGE_SHIFT)
 void mgag200_ttm_placement(struct mgag200_bo *bo, int domain);
 
-int mgag200_bo_reserve(struct mgag200_bo *bo, bool no_wait);
-void mgag200_bo_unreserve(struct mgag200_bo *bo);
+static inline int mgag200_bo_reserve(struct mgag200_bo *bo, bool no_wait)
+{
+	int ret;
+
+	ret = ttm_bo_reserve(&bo->bo, true, no_wait, false, 0);
+	if (ret) {
+		if (ret != -ERESTARTSYS && ret != -EBUSY)
+			DRM_ERROR("reserve failed %p\n", bo);
+		return ret;
+	}
+	return 0;
+}
+
+static inline void mgag200_bo_unreserve(struct mgag200_bo *bo)
+{
+	ttm_bo_unreserve(&bo->bo);
+}
+
 int mgag200_bo_create(struct drm_device *dev, int size, int align,
 		      uint32_t flags, struct mgag200_bo **pastbo);
 int mgag200_mm_init(struct mga_device *mdev);
diff --git a/drivers/gpu/drm/mgag200/mgag200_ttm.c b/drivers/gpu/drm/mgag200/mgag200_ttm.c
index 0004f77..3acb2b0 100644
--- a/drivers/gpu/drm/mgag200/mgag200_ttm.c
+++ b/drivers/gpu/drm/mgag200/mgag200_ttm.c
@@ -303,24 +303,6 @@ void mgag200_ttm_placement(struct mgag200_bo *bo, int domain)
 	bo->placement.num_busy_placement = c;
 }
 
-int mgag200_bo_reserve(struct mgag200_bo *bo, bool no_wait)
-{
-	int ret;
-
-	ret = ttm_bo_reserve(&bo->bo, true, no_wait, false, 0);
-	if (ret) {
-		if (ret != -ERESTARTSYS && ret != -EBUSY)
-			DRM_ERROR("reserve failed %p %d\n", bo, ret);
-		return ret;
-	}
-	return 0;
-}
-
-void mgag200_bo_unreserve(struct mgag200_bo *bo)
-{
-	ttm_bo_unreserve(&bo->bo);
-}
-
 int mgag200_bo_create(struct drm_device *dev, int size, int align,
 		  uint32_t flags, struct mgag200_bo **pmgabo)
 {
-- 
1.8.3.1


* [PATCH WW 08/13] drm/radeon: inline reservations
  2013-06-27 11:48 [PATCH WW 00/13] Convert TTM to Wound/wait mutexes Maarten Lankhorst
                   ` (6 preceding siblings ...)
  2013-06-27 11:48 ` [PATCH WW 07/13] drm/mgag200: " Maarten Lankhorst
@ 2013-06-27 11:48 ` Maarten Lankhorst
  2013-06-27 22:04   ` Jerome Glisse
  2013-06-27 11:48 ` [PATCH WW 09/13] drm/ttm: inline ttm_bo_reserve and related calls Maarten Lankhorst
                   ` (4 subsequent siblings)
  12 siblings, 1 reply; 21+ messages in thread
From: Maarten Lankhorst @ 2013-06-27 11:48 UTC (permalink / raw)
  To: Dave Airlie; +Cc: dri-devel

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
---
 drivers/gpu/drm/radeon/radeon_object.c | 23 -----------------------
 drivers/gpu/drm/radeon/radeon_object.h | 22 +++++++++++++++++++++-
 2 files changed, 21 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_object.c b/drivers/gpu/drm/radeon/radeon_object.c
index 71287bb..d850dc6 100644
--- a/drivers/gpu/drm/radeon/radeon_object.c
+++ b/drivers/gpu/drm/radeon/radeon_object.c
@@ -619,26 +619,3 @@ int radeon_bo_wait(struct radeon_bo *bo, u32 *mem_type, bool no_wait)
 	ttm_bo_unreserve(&bo->tbo);
 	return r;
 }
-
-
-/**
- * radeon_bo_reserve - reserve bo
- * @bo:		bo structure
- * @no_intr:	don't return -ERESTARTSYS on pending signal
- *
- * Returns:
- * -ERESTARTSYS: A wait for the buffer to become unreserved was interrupted by
- * a signal. Release all buffer reservations and return to user-space.
- */
-int radeon_bo_reserve(struct radeon_bo *bo, bool no_intr)
-{
-	int r;
-
-	r = ttm_bo_reserve(&bo->tbo, !no_intr, false, false, 0);
-	if (unlikely(r != 0)) {
-		if (r != -ERESTARTSYS)
-			dev_err(bo->rdev->dev, "%p reserve failed\n", bo);
-		return r;
-	}
-	return 0;
-}
diff --git a/drivers/gpu/drm/radeon/radeon_object.h b/drivers/gpu/drm/radeon/radeon_object.h
index 3e62a3a..456ad6b 100644
--- a/drivers/gpu/drm/radeon/radeon_object.h
+++ b/drivers/gpu/drm/radeon/radeon_object.h
@@ -52,7 +52,27 @@ static inline unsigned radeon_mem_type_to_domain(u32 mem_type)
 	return 0;
 }
 
-int radeon_bo_reserve(struct radeon_bo *bo, bool no_intr);
+/**
+ * radeon_bo_reserve - reserve bo
+ * @bo:		bo structure
+ * @no_intr:	don't return -ERESTARTSYS on pending signal
+ *
+ * Returns:
+ * -ERESTARTSYS: A wait for the buffer to become unreserved was interrupted by
+ * a signal. Release all buffer reservations and return to user-space.
+ */
+static inline int radeon_bo_reserve(struct radeon_bo *bo, bool no_intr)
+{
+	int r;
+
+	r = ttm_bo_reserve(&bo->tbo, !no_intr, false, false, 0);
+	if (unlikely(r != 0)) {
+		if (r != -ERESTARTSYS)
+			dev_err(bo->rdev->dev, "%p reserve failed\n", bo);
+		return r;
+	}
+	return 0;
+}
 
 static inline void radeon_bo_unreserve(struct radeon_bo *bo)
 {
-- 
1.8.3.1


* [PATCH WW 09/13] drm/ttm: inline ttm_bo_reserve and related calls
  2013-06-27 11:48 [PATCH WW 00/13] Convert TTM to Wound/wait mutexes Maarten Lankhorst
                   ` (7 preceding siblings ...)
  2013-06-27 11:48 ` [PATCH WW 08/13] drm/radeon: " Maarten Lankhorst
@ 2013-06-27 11:48 ` Maarten Lankhorst
  2013-06-27 12:21   ` Daniel Vetter
  2013-06-27 11:48 ` [PATCH WW 10/13] drm/ttm: get rid of ttm_bo_is_reserved usage Maarten Lankhorst
                   ` (3 subsequent siblings)
  12 siblings, 1 reply; 21+ messages in thread
From: Maarten Lankhorst @ 2013-06-27 11:48 UTC (permalink / raw)
  To: Dave Airlie; +Cc: dri-devel

Makes lockdep a lot more useful.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
---
 drivers/gpu/drm/ttm/ttm_bo.c           | 105 ++------------------
 drivers/gpu/drm/ttm/ttm_execbuf_util.c |   9 +-
 include/drm/ttm/ttm_bo_driver.h        | 175 ++++++++++++++++++++-------------
 3 files changed, 117 insertions(+), 172 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 5f9fe80..a8a27f5 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -182,6 +182,7 @@ void ttm_bo_add_to_lru(struct ttm_buffer_object *bo)
 		}
 	}
 }
+EXPORT_SYMBOL(ttm_bo_add_to_lru);
 
 int ttm_bo_del_from_lru(struct ttm_buffer_object *bo)
 {
@@ -204,35 +205,6 @@ int ttm_bo_del_from_lru(struct ttm_buffer_object *bo)
 	return put_count;
 }
 
-int ttm_bo_reserve_nolru(struct ttm_buffer_object *bo,
-			  bool interruptible,
-			  bool no_wait, bool use_ticket,
-			  struct ww_acquire_ctx *ticket)
-{
-	int ret = 0;
-
-	if (no_wait) {
-		bool success;
-
-		/* not valid any more, fix your locking! */
-		if (WARN_ON(ticket))
-			return -EBUSY;
-
-		success = ww_mutex_trylock(&bo->resv->lock);
-		return success ? 0 : -EBUSY;
-	}
-
-	if (interruptible)
-		ret = ww_mutex_lock_interruptible(&bo->resv->lock,
-						  ticket);
-	else
-		ret = ww_mutex_lock(&bo->resv->lock, ticket);
-	if (ret == -EINTR)
-		return -ERESTARTSYS;
-	return ret;
-}
-EXPORT_SYMBOL(ttm_bo_reserve);
-
 static void ttm_bo_ref_bug(struct kref *list_kref)
 {
 	BUG();
@@ -245,77 +217,16 @@ void ttm_bo_list_ref_sub(struct ttm_buffer_object *bo, int count,
 		 (never_free) ? ttm_bo_ref_bug : ttm_bo_release_list);
 }
 
-int ttm_bo_reserve(struct ttm_buffer_object *bo,
-		   bool interruptible,
-		   bool no_wait, bool use_ticket,
-		   struct ww_acquire_ctx *ticket)
-{
-	struct ttm_bo_global *glob = bo->glob;
-	int put_count = 0;
-	int ret;
-
-	ret = ttm_bo_reserve_nolru(bo, interruptible, no_wait, use_ticket,
-				    ticket);
-	if (likely(ret == 0)) {
-		spin_lock(&glob->lru_lock);
-		put_count = ttm_bo_del_from_lru(bo);
-		spin_unlock(&glob->lru_lock);
-		ttm_bo_list_ref_sub(bo, put_count, true);
-	}
-
-	return ret;
-}
-
-int ttm_bo_reserve_slowpath(struct ttm_buffer_object *bo,
-			    bool interruptible, struct ww_acquire_ctx *ticket)
-{
-	struct ttm_bo_global *glob = bo->glob;
-	int put_count = 0;
-	int ret = 0;
-
-	if (interruptible)
-		ret = ww_mutex_lock_slow_interruptible(&bo->resv->lock,
-						       ticket);
-	else
-		ww_mutex_lock_slow(&bo->resv->lock, ticket);
-
-	if (likely(ret == 0)) {
-		spin_lock(&glob->lru_lock);
-		put_count = ttm_bo_del_from_lru(bo);
-		spin_unlock(&glob->lru_lock);
-		ttm_bo_list_ref_sub(bo, put_count, true);
-	} else if (ret == -EINTR)
-		ret = -ERESTARTSYS;
-
-	return ret;
-}
-EXPORT_SYMBOL(ttm_bo_reserve_slowpath);
-
-void ttm_bo_unreserve_ticket_locked(struct ttm_buffer_object *bo, struct ww_acquire_ctx *ticket)
+void ttm_bo_del_sub_from_lru(struct ttm_buffer_object *bo)
 {
-	ttm_bo_add_to_lru(bo);
-	ww_mutex_unlock(&bo->resv->lock);
-}
-
-void ttm_bo_unreserve(struct ttm_buffer_object *bo)
-{
-	struct ttm_bo_global *glob = bo->glob;
-
-	spin_lock(&glob->lru_lock);
-	ttm_bo_unreserve_ticket_locked(bo, NULL);
-	spin_unlock(&glob->lru_lock);
-}
-EXPORT_SYMBOL(ttm_bo_unreserve);
-
-void ttm_bo_unreserve_ticket(struct ttm_buffer_object *bo, struct ww_acquire_ctx *ticket)
-{
-	struct ttm_bo_global *glob = bo->glob;
+	int put_count;
 
-	spin_lock(&glob->lru_lock);
-	ttm_bo_unreserve_ticket_locked(bo, ticket);
-	spin_unlock(&glob->lru_lock);
+	spin_lock(&bo->glob->lru_lock);
+	put_count = ttm_bo_del_from_lru(bo);
+	spin_unlock(&bo->glob->lru_lock);
+	ttm_bo_list_ref_sub(bo, put_count, true);
 }
-EXPORT_SYMBOL(ttm_bo_unreserve_ticket);
+EXPORT_SYMBOL(ttm_bo_del_sub_from_lru);
 
 /*
  * Call bo->mutex locked.
diff --git a/drivers/gpu/drm/ttm/ttm_execbuf_util.c b/drivers/gpu/drm/ttm/ttm_execbuf_util.c
index 7392da5..6c91178 100644
--- a/drivers/gpu/drm/ttm/ttm_execbuf_util.c
+++ b/drivers/gpu/drm/ttm/ttm_execbuf_util.c
@@ -44,12 +44,10 @@ static void ttm_eu_backoff_reservation_locked(struct list_head *list,
 
 		entry->reserved = false;
 		if (entry->removed) {
-			ttm_bo_unreserve_ticket_locked(bo, ticket);
+			ttm_bo_add_to_lru(bo);
 			entry->removed = false;
-
-		} else {
-			ww_mutex_unlock(&bo->resv->lock);
 		}
+		ww_mutex_unlock(&bo->resv->lock);
 	}
 }
 
@@ -220,7 +218,8 @@ void ttm_eu_fence_buffer_objects(struct ww_acquire_ctx *ticket,
 		bo = entry->bo;
 		entry->old_sync_obj = bo->sync_obj;
 		bo->sync_obj = driver->sync_obj_ref(sync_obj);
-		ttm_bo_unreserve_ticket_locked(bo, ticket);
+		ttm_bo_add_to_lru(bo);
+		ww_mutex_unlock(&bo->resv->lock);
 		entry->reserved = false;
 	}
 	spin_unlock(&bdev->fence_lock);
diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h
index ec18c5f..984fc2d 100644
--- a/include/drm/ttm/ttm_bo_driver.h
+++ b/include/drm/ttm/ttm_bo_driver.h
@@ -33,6 +33,7 @@
 #include <ttm/ttm_bo_api.h>
 #include <ttm/ttm_memory.h>
 #include <ttm/ttm_module.h>
+#include <ttm/ttm_placement.h>
 #include <drm/drm_mm.h>
 #include <drm/drm_global.h>
 #include <linux/workqueue.h>
@@ -772,6 +773,55 @@ extern int ttm_mem_io_lock(struct ttm_mem_type_manager *man,
 			   bool interruptible);
 extern void ttm_mem_io_unlock(struct ttm_mem_type_manager *man);
 
+extern void ttm_bo_del_sub_from_lru(struct ttm_buffer_object *bo);
+extern void ttm_bo_add_to_lru(struct ttm_buffer_object *bo);
+
+/**
+ * ttm_bo_reserve_nolru:
+ *
+ * @bo: A pointer to a struct ttm_buffer_object.
+ * @interruptible: Sleep interruptible if waiting.
+ * @no_wait: Don't sleep while trying to reserve, rather return -EBUSY.
+ * @use_ticket: If @bo is already reserved, Only sleep waiting for
+ * it to become unreserved if @ticket->stamp is older.
+ *
+ * Will not remove reserved buffers from the lru lists.
+ * Otherwise identical to ttm_bo_reserve.
+ *
+ * Returns:
+ * -EDEADLK: The reservation may cause a deadlock.
+ * Release all buffer reservations, wait for @bo to become unreserved and
+ * try again. (only if use_sequence == 1).
+ * -ERESTARTSYS: A wait for the buffer to become unreserved was interrupted by
+ * a signal. Release all buffer reservations and return to user-space.
+ * -EBUSY: The function needed to sleep, but @no_wait was true
+ * -EALREADY: Bo already reserved using @ticket. This error code will only
+ * be returned if @use_ticket is set to true.
+ */
+static inline int ttm_bo_reserve_nolru(struct ttm_buffer_object *bo,
+				       bool interruptible,
+				       bool no_wait, bool use_ticket,
+				       struct ww_acquire_ctx *ticket)
+{
+	int ret = 0;
+
+	if (no_wait) {
+		bool success;
+		if (WARN_ON(ticket))
+			return -EBUSY;
+
+		success = ww_mutex_trylock(&bo->resv->lock);
+		return success ? 0 : -EBUSY;
+	}
+
+	if (interruptible)
+		ret = ww_mutex_lock_interruptible(&bo->resv->lock, ticket);
+	else
+		ret = ww_mutex_lock(&bo->resv->lock, ticket);
+	if (ret == -EINTR)
+		return -ERESTARTSYS;
+	return ret;
+}
 
 /**
  * ttm_bo_reserve:
@@ -780,7 +830,7 @@ extern void ttm_mem_io_unlock(struct ttm_mem_type_manager *man);
  * @interruptible: Sleep interruptible if waiting.
  * @no_wait: Don't sleep while trying to reserve, rather return -EBUSY.
  * @use_ticket: If @bo is already reserved, Only sleep waiting for
- * it to become unreserved if @sequence < (@bo)->sequence.
+ * it to become unreserved if @ticket->stamp is older.
  *
  * Locks a buffer object for validation. (Or prevents other processes from
  * locking it for validation) and removes it from lru lists, while taking
@@ -794,7 +844,7 @@ extern void ttm_mem_io_unlock(struct ttm_mem_type_manager *man);
  * Processes attempting to reserve multiple buffers other than for eviction,
  * (typically execbuf), should first obtain a unique 32-bit
  * validation sequence number,
- * and call this function with @use_sequence == 1 and @sequence == the unique
+ * and call this function with @use_ticket == 1 and @ticket->stamp == the unique
  * sequence number. If upon call of this function, the buffer object is already
  * reserved, the validation sequence is checked against the validation
  * sequence of the process currently reserving the buffer,
@@ -809,37 +859,31 @@ extern void ttm_mem_io_unlock(struct ttm_mem_type_manager *man);
  * will eventually succeed, preventing both deadlocks and starvation.
  *
  * Returns:
- * -EAGAIN: The reservation may cause a deadlock.
+ * -EDEADLK: The reservation may cause a deadlock.
  * Release all buffer reservations, wait for @bo to become unreserved and
  * try again. (only if use_sequence == 1).
  * -ERESTARTSYS: A wait for the buffer to become unreserved was interrupted by
  * a signal. Release all buffer reservations and return to user-space.
  * -EBUSY: The function needed to sleep, but @no_wait was true
- * -EDEADLK: Bo already reserved using @sequence. This error code will only
- * be returned if @use_sequence is set to true.
+ * -EALREADY: Bo already reserved using @ticket. This error code will only
+ * be returned if @use_ticket is set to true.
  */
-extern int ttm_bo_reserve(struct ttm_buffer_object *bo,
-			  bool interruptible,
-			  bool no_wait, bool use_ticket,
-			  struct ww_acquire_ctx *ticket);
+static inline int ttm_bo_reserve(struct ttm_buffer_object *bo,
+				 bool interruptible,
+				 bool no_wait, bool use_ticket,
+				 struct ww_acquire_ctx *ticket)
+{
+	int ret;
 
-/**
- * ttm_bo_reserve_slowpath_nolru:
- * @bo: A pointer to a struct ttm_buffer_object.
- * @interruptible: Sleep interruptible if waiting.
- * @sequence: Set (@bo)->sequence to this value after lock
- *
- * This is called after ttm_bo_reserve returns -EAGAIN and we backed off
- * from all our other reservations. Because there are no other reservations
- * held by us, this function cannot deadlock any more.
- *
- * Will not remove reserved buffers from the lru lists.
- * Otherwise identical to ttm_bo_reserve_slowpath.
- */
-extern int ttm_bo_reserve_slowpath_nolru(struct ttm_buffer_object *bo,
-					 bool interruptible,
-					 struct ww_acquire_ctx *ticket);
+	WARN_ON(!atomic_read(&bo->kref.refcount));
+
+	ret = ttm_bo_reserve_nolru(bo, interruptible, no_wait, use_ticket,
+				    ticket);
+	if (likely(ret == 0))
+		ttm_bo_del_sub_from_lru(bo);
 
+	return ret;
+}
 
 /**
  * ttm_bo_reserve_slowpath:
@@ -851,45 +895,27 @@ extern int ttm_bo_reserve_slowpath_nolru(struct ttm_buffer_object *bo,
  * from all our other reservations. Because there are no other reservations
  * held by us, this function cannot deadlock any more.
  */
-extern int ttm_bo_reserve_slowpath(struct ttm_buffer_object *bo,
-				   bool interruptible,
-				   struct ww_acquire_ctx *ticket);
+static inline int ttm_bo_reserve_slowpath(struct ttm_buffer_object *bo,
+					  bool interruptible,
+					  struct ww_acquire_ctx *ticket)
+{
+	int ret = 0;
 
-/**
- * ttm_bo_reserve_nolru:
- *
- * @bo: A pointer to a struct ttm_buffer_object.
- * @interruptible: Sleep interruptible if waiting.
- * @no_wait: Don't sleep while trying to reserve, rather return -EBUSY.
- * @use_sequence: If @bo is already reserved, Only sleep waiting for
- * it to become unreserved if @sequence < (@bo)->sequence.
- *
- * Will not remove reserved buffers from the lru lists.
- * Otherwise identical to ttm_bo_reserve.
- *
- * Returns:
- * -EAGAIN: The reservation may cause a deadlock.
- * Release all buffer reservations, wait for @bo to become unreserved and
- * try again. (only if use_sequence == 1).
- * -ERESTARTSYS: A wait for the buffer to become unreserved was interrupted by
- * a signal. Release all buffer reservations and return to user-space.
- * -EBUSY: The function needed to sleep, but @no_wait was true
- * -EDEADLK: Bo already reserved using @sequence. This error code will only
- * be returned if @use_sequence is set to true.
- */
-extern int ttm_bo_reserve_nolru(struct ttm_buffer_object *bo,
-				 bool interruptible,
-				 bool no_wait, bool use_ticket,
-				 struct ww_acquire_ctx *ticket);
+	WARN_ON(!atomic_read(&bo->kref.refcount));
 
-/**
- * ttm_bo_unreserve
- *
- * @bo: A pointer to a struct ttm_buffer_object.
- *
- * Unreserve a previous reservation of @bo.
- */
-extern void ttm_bo_unreserve(struct ttm_buffer_object *bo);
+	if (interruptible)
+		ret = ww_mutex_lock_slow_interruptible(&bo->resv->lock,
+						       ticket);
+	else
+		ww_mutex_lock_slow(&bo->resv->lock, ticket);
+
+	if (likely(ret == 0))
+		ttm_bo_del_sub_from_lru(bo);
+	else if (ret == -EINTR)
+		ret = -ERESTARTSYS;
+
+	return ret;
+}
 
 /**
  * ttm_bo_unreserve_ticket
@@ -898,19 +924,28 @@ extern void ttm_bo_unreserve(struct ttm_buffer_object *bo);
  *
  * Unreserve a previous reservation of @bo made with @ticket.
  */
-extern void ttm_bo_unreserve_ticket(struct ttm_buffer_object *bo,
-				    struct ww_acquire_ctx *ticket);
+static inline void ttm_bo_unreserve_ticket(struct ttm_buffer_object *bo,
+					   struct ww_acquire_ctx *t)
+{
+	if (!(bo->mem.placement & TTM_PL_FLAG_NO_EVICT)) {
+		spin_lock(&bo->glob->lru_lock);
+		ttm_bo_add_to_lru(bo);
+		spin_unlock(&bo->glob->lru_lock);
+	}
+	ww_mutex_unlock(&bo->resv->lock);
+}
 
 /**
- * ttm_bo_unreserve_locked
+ * ttm_bo_unreserve
+ *
  * @bo: A pointer to a struct ttm_buffer_object.
- * @ticket: ww_acquire_ctx used for reserving, or NULL
  *
- * Unreserve a previous reservation of @bo made with @ticket.
- * Needs to be called with struct ttm_bo_global::lru_lock held.
+ * Unreserve a previous reservation of @bo.
  */
-extern void ttm_bo_unreserve_ticket_locked(struct ttm_buffer_object *bo,
-					   struct ww_acquire_ctx *ticket);
+static inline void ttm_bo_unreserve(struct ttm_buffer_object *bo)
+{
+	ttm_bo_unreserve_ticket(bo, NULL);
+}
 
 /*
  * ttm_bo_util.c
-- 
1.8.3.1

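The inlined ttm_bo_reserve_nolru above maps -EINTR from ww_mutex_lock_interruptible to -ERESTARTSYS. The convention can be illustrated with a hypothetical userspace sketch: the low-level lock reports a signal with -EINTR, and the driver-facing wrapper translates that to -ERESTARTSYS so the interrupted ioctl is transparently restarted once the signal has been handled. ERESTARTSYS is kernel-internal, so it is defined locally here with its kernel value.

```c
#include <errno.h>

#define ERESTARTSYS	512	/* kernel-internal value, defined for the sketch */

/* stand-in for ww_mutex_lock_interruptible(): returns 0 or -EINTR */
int lock_interruptible(int simulate_signal)
{
	return simulate_signal ? -EINTR : 0;
}

/* driver-facing wrapper: translate -EINTR into -ERESTARTSYS */
int reserve_interruptible(int simulate_signal)
{
	int ret = lock_interruptible(simulate_signal);

	if (ret == -EINTR)
		ret = -ERESTARTSYS;
	return ret;
}
```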

* [PATCH WW 10/13] drm/ttm: get rid of ttm_bo_is_reserved usage
  2013-06-27 11:48 [PATCH WW 00/13] Convert TTM to Wound/wait mutexes Maarten Lankhorst
                   ` (8 preceding siblings ...)
  2013-06-27 11:48 ` [PATCH WW 09/13] drm/ttm: inline ttm_bo_reserve and related calls Maarten Lankhorst
@ 2013-06-27 11:48 ` Maarten Lankhorst
  2013-06-27 12:23   ` Daniel Vetter
  2013-06-27 11:48 ` [PATCH WW 11/13] drm/radeon: " Maarten Lankhorst
                   ` (2 subsequent siblings)
  12 siblings, 1 reply; 21+ messages in thread
From: Maarten Lankhorst @ 2013-06-27 11:48 UTC (permalink / raw)
  To: Dave Airlie; +Cc: dri-devel

Use lockdep_assert_held instead.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
---
 drivers/gpu/drm/ttm/ttm_bo.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index a8a27f5..6e6975c 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -166,7 +166,7 @@ void ttm_bo_add_to_lru(struct ttm_buffer_object *bo)
 	struct ttm_bo_device *bdev = bo->bdev;
 	struct ttm_mem_type_manager *man;
 
-	BUG_ON(!ttm_bo_is_reserved(bo));
+	lockdep_assert_held(&bo->resv->lock.base);
 
 	if (!(bo->mem.placement & TTM_PL_FLAG_NO_EVICT)) {
 
@@ -671,7 +671,7 @@ static int ttm_bo_evict(struct ttm_buffer_object *bo, bool interruptible,
 		goto out;
 	}
 
-	BUG_ON(!ttm_bo_is_reserved(bo));
+	lockdep_assert_held(&bo->resv->lock.base);
 
 	evict_mem = bo->mem;
 	evict_mem.mm_node = NULL;
@@ -961,7 +961,7 @@ int ttm_bo_move_buffer(struct ttm_buffer_object *bo,
 	struct ttm_mem_reg mem;
 	struct ttm_bo_device *bdev = bo->bdev;
 
-	BUG_ON(!ttm_bo_is_reserved(bo));
+	lockdep_assert_held(&bo->resv->lock.base);
 
 	/*
 	 * FIXME: It's possible to pipeline buffer moves.
@@ -1020,7 +1020,7 @@ int ttm_bo_validate(struct ttm_buffer_object *bo,
 {
 	int ret;
 
-	BUG_ON(!ttm_bo_is_reserved(bo));
+	lockdep_assert_held(&bo->resv->lock.base);
 	/* Check that range is valid */
 	if (placement->lpfn || placement->fpfn)
 		if (placement->fpfn > placement->lpfn ||
-- 
1.8.3.1

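The replacement above trades a weaker check for a stronger one: BUG_ON(!ttm_bo_is_reserved(bo)) only verified that *somebody* held the reservation, while lockdep_assert_held verifies that the *current* context holds it (and compiles away when lockdep is disabled). A hypothetical userspace analogue, with explicit owner tracking purely for illustration:

```c
#include <assert.h>
#include <pthread.h>

struct tracked_lock {
	pthread_mutex_t m;
	pthread_t owner;
	int held;
};

void tl_lock(struct tracked_lock *l)
{
	pthread_mutex_lock(&l->m);
	l->owner = pthread_self();
	l->held = 1;
}

void tl_unlock(struct tracked_lock *l)
{
	l->held = 0;
	pthread_mutex_unlock(&l->m);
}

/* weak check, like ttm_bo_is_reserved(): the lock is held by somebody */
int tl_is_locked(struct tracked_lock *l)
{
	return l->held;
}

/* strong check, like lockdep_assert_held(): the lock is held by *us* */
void tl_assert_held(struct tracked_lock *l)
{
	assert(l->held && pthread_equal(l->owner, pthread_self()));
}
```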

* [PATCH WW 11/13] drm/radeon: get rid of ttm_bo_is_reserved usage
  2013-06-27 11:48 [PATCH WW 00/13] Convert TTM to Wound/wait mutexes Maarten Lankhorst
                   ` (9 preceding siblings ...)
  2013-06-27 11:48 ` [PATCH WW 10/13] drm/ttm: get rid of ttm_bo_is_reserved usage Maarten Lankhorst
@ 2013-06-27 11:48 ` Maarten Lankhorst
  2013-06-27 22:05   ` Jerome Glisse
  2013-06-27 11:48 ` [PATCH WW 12/13] drm/vmwgfx: " Maarten Lankhorst
  2013-06-27 11:48 ` [PATCH WW 13/13] drm/ttm: get rid of ttm_bo_is_reserved Maarten Lankhorst
  12 siblings, 1 reply; 21+ messages in thread
From: Maarten Lankhorst @ 2013-06-27 11:48 UTC (permalink / raw)
  To: Dave Airlie; +Cc: dri-devel

Use lockdep_assert_held, or other suitable alternatives, where possible.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
---
 drivers/gpu/drm/radeon/radeon_object.c |  8 ++--
 drivers/gpu/drm/radeon/radeon_object.h |  5 ---
 drivers/gpu/drm/radeon/radeon_test.c   | 75 +++++++++++++++++-----------------
 3 files changed, 43 insertions(+), 45 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_object.c b/drivers/gpu/drm/radeon/radeon_object.c
index d850dc6..0219d26 100644
--- a/drivers/gpu/drm/radeon/radeon_object.c
+++ b/drivers/gpu/drm/radeon/radeon_object.c
@@ -400,7 +400,7 @@ int radeon_bo_get_surface_reg(struct radeon_bo *bo)
 	int steal;
 	int i;
 
-	BUG_ON(!radeon_bo_is_reserved(bo));
+	lockdep_assert_held(&bo->tbo.resv->lock.base);
 
 	if (!bo->tiling_flags)
 		return 0;
@@ -526,7 +526,8 @@ void radeon_bo_get_tiling_flags(struct radeon_bo *bo,
 				uint32_t *tiling_flags,
 				uint32_t *pitch)
 {
-	BUG_ON(!radeon_bo_is_reserved(bo));
+	lockdep_assert_held(&bo->tbo.resv->lock.base);
+
 	if (tiling_flags)
 		*tiling_flags = bo->tiling_flags;
 	if (pitch)
@@ -536,7 +537,8 @@ void radeon_bo_get_tiling_flags(struct radeon_bo *bo,
 int radeon_bo_check_tiling(struct radeon_bo *bo, bool has_moved,
 				bool force_drop)
 {
-	BUG_ON(!radeon_bo_is_reserved(bo) && !force_drop);
+	if (!force_drop)
+		lockdep_assert_held(&bo->tbo.resv->lock.base);
 
 	if (!(bo->tiling_flags & RADEON_TILING_SURFACE))
 		return 0;
diff --git a/drivers/gpu/drm/radeon/radeon_object.h b/drivers/gpu/drm/radeon/radeon_object.h
index 456ad6b..91519a5 100644
--- a/drivers/gpu/drm/radeon/radeon_object.h
+++ b/drivers/gpu/drm/radeon/radeon_object.h
@@ -98,11 +98,6 @@ static inline unsigned long radeon_bo_size(struct radeon_bo *bo)
 	return bo->tbo.num_pages << PAGE_SHIFT;
 }
 
-static inline bool radeon_bo_is_reserved(struct radeon_bo *bo)
-{
-	return ttm_bo_is_reserved(&bo->tbo);
-}
-
 static inline unsigned radeon_bo_ngpu_pages(struct radeon_bo *bo)
 {
 	return (bo->tbo.num_pages << PAGE_SHIFT) / RADEON_GPU_PAGE_SIZE;
diff --git a/drivers/gpu/drm/radeon/radeon_test.c b/drivers/gpu/drm/radeon/radeon_test.c
index bbed4af..f4d6bce 100644
--- a/drivers/gpu/drm/radeon/radeon_test.c
+++ b/drivers/gpu/drm/radeon/radeon_test.c
@@ -35,7 +35,6 @@ static void radeon_do_test_moves(struct radeon_device *rdev, int flag)
 {
 	struct radeon_bo *vram_obj = NULL;
 	struct radeon_bo **gtt_obj = NULL;
-	struct radeon_fence *fence = NULL;
 	uint64_t gtt_addr, vram_addr;
 	unsigned i, n, size;
 	int r, ring;
@@ -81,37 +80,38 @@ static void radeon_do_test_moves(struct radeon_device *rdev, int flag)
 	}
 	r = radeon_bo_reserve(vram_obj, false);
 	if (unlikely(r != 0))
-		goto out_cleanup;
+		goto out_unref;
 	r = radeon_bo_pin(vram_obj, RADEON_GEM_DOMAIN_VRAM, &vram_addr);
 	if (r) {
 		DRM_ERROR("Failed to pin VRAM object\n");
-		goto out_cleanup;
+		goto out_unres;
 	}
 	for (i = 0; i < n; i++) {
 		void *gtt_map, *vram_map;
 		void **gtt_start, **gtt_end;
 		void **vram_start, **vram_end;
+		struct radeon_fence *fence = NULL;
 
 		r = radeon_bo_create(rdev, size, PAGE_SIZE, true,
 				     RADEON_GEM_DOMAIN_GTT, NULL, gtt_obj + i);
 		if (r) {
 			DRM_ERROR("Failed to create GTT object %d\n", i);
-			goto out_cleanup;
+			goto out_lclean;
 		}
 
 		r = radeon_bo_reserve(gtt_obj[i], false);
 		if (unlikely(r != 0))
-			goto out_cleanup;
+			goto out_lclean_unref;
 		r = radeon_bo_pin(gtt_obj[i], RADEON_GEM_DOMAIN_GTT, &gtt_addr);
 		if (r) {
 			DRM_ERROR("Failed to pin GTT object %d\n", i);
-			goto out_cleanup;
+			goto out_lclean_unres;
 		}
 
 		r = radeon_bo_kmap(gtt_obj[i], &gtt_map);
 		if (r) {
 			DRM_ERROR("Failed to map GTT object %d\n", i);
-			goto out_cleanup;
+			goto out_lclean_unpin;
 		}
 
 		for (gtt_start = gtt_map, gtt_end = gtt_map + size;
@@ -127,13 +127,13 @@ static void radeon_do_test_moves(struct radeon_device *rdev, int flag)
 			r = radeon_copy_blit(rdev, gtt_addr, vram_addr, size / RADEON_GPU_PAGE_SIZE, &fence);
 		if (r) {
 			DRM_ERROR("Failed GTT->VRAM copy %d\n", i);
-			goto out_cleanup;
+			goto out_lclean_unpin;
 		}
 
 		r = radeon_fence_wait(fence, false);
 		if (r) {
 			DRM_ERROR("Failed to wait for GTT->VRAM fence %d\n", i);
-			goto out_cleanup;
+			goto out_lclean_unpin;
 		}
 
 		radeon_fence_unref(&fence);
@@ -141,7 +141,7 @@ static void radeon_do_test_moves(struct radeon_device *rdev, int flag)
 		r = radeon_bo_kmap(vram_obj, &vram_map);
 		if (r) {
 			DRM_ERROR("Failed to map VRAM object after copy %d\n", i);
-			goto out_cleanup;
+			goto out_lclean_unpin;
 		}
 
 		for (gtt_start = gtt_map, gtt_end = gtt_map + size,
@@ -160,7 +160,7 @@ static void radeon_do_test_moves(struct radeon_device *rdev, int flag)
 					  (vram_addr - rdev->mc.vram_start +
 					   (void*)gtt_start - gtt_map));
 				radeon_bo_kunmap(vram_obj);
-				goto out_cleanup;
+				goto out_lclean_unpin;
 			}
 			*vram_start = vram_start;
 		}
@@ -173,13 +173,13 @@ static void radeon_do_test_moves(struct radeon_device *rdev, int flag)
 			r = radeon_copy_blit(rdev, vram_addr, gtt_addr, size / RADEON_GPU_PAGE_SIZE, &fence);
 		if (r) {
 			DRM_ERROR("Failed VRAM->GTT copy %d\n", i);
-			goto out_cleanup;
+			goto out_lclean_unpin;
 		}
 
 		r = radeon_fence_wait(fence, false);
 		if (r) {
 			DRM_ERROR("Failed to wait for VRAM->GTT fence %d\n", i);
-			goto out_cleanup;
+			goto out_lclean_unpin;
 		}
 
 		radeon_fence_unref(&fence);
@@ -187,7 +187,7 @@ static void radeon_do_test_moves(struct radeon_device *rdev, int flag)
 		r = radeon_bo_kmap(gtt_obj[i], &gtt_map);
 		if (r) {
 			DRM_ERROR("Failed to map GTT object after copy %d\n", i);
-			goto out_cleanup;
+			goto out_lclean_unpin;
 		}
 
 		for (gtt_start = gtt_map, gtt_end = gtt_map + size,
@@ -206,7 +206,7 @@ static void radeon_do_test_moves(struct radeon_device *rdev, int flag)
 					  (gtt_addr - rdev->mc.gtt_start +
 					   (void*)vram_start - vram_map));
 				radeon_bo_kunmap(gtt_obj[i]);
-				goto out_cleanup;
+				goto out_lclean_unpin;
 			}
 		}
 
@@ -214,31 +214,32 @@ static void radeon_do_test_moves(struct radeon_device *rdev, int flag)
 
 		DRM_INFO("Tested GTT->VRAM and VRAM->GTT copy for GTT offset 0x%llx\n",
 			 gtt_addr - rdev->mc.gtt_start);
+		continue;
+
+out_lclean_unpin:
+		radeon_bo_unpin(gtt_obj[i]);
+out_lclean_unres:
+		radeon_bo_unreserve(gtt_obj[i]);
+out_lclean_unref:
+		radeon_bo_unref(&gtt_obj[i]);
+out_lclean:
+		for (--i; i >= 0; --i) {
+			radeon_bo_unpin(gtt_obj[i]);
+			radeon_bo_unreserve(gtt_obj[i]);
+			radeon_bo_unref(&gtt_obj[i]);
+		}
+		if (fence)
+			radeon_fence_unref(&fence);
+		break;
 	}
 
+	radeon_bo_unpin(vram_obj);
+out_unres:
+	radeon_bo_unreserve(vram_obj);
+out_unref:
+	radeon_bo_unref(&vram_obj);
 out_cleanup:
-	if (vram_obj) {
-		if (radeon_bo_is_reserved(vram_obj)) {
-			radeon_bo_unpin(vram_obj);
-			radeon_bo_unreserve(vram_obj);
-		}
-		radeon_bo_unref(&vram_obj);
-	}
-	if (gtt_obj) {
-		for (i = 0; i < n; i++) {
-			if (gtt_obj[i]) {
-				if (radeon_bo_is_reserved(gtt_obj[i])) {
-					radeon_bo_unpin(gtt_obj[i]);
-					radeon_bo_unreserve(gtt_obj[i]);
-				}
-				radeon_bo_unref(&gtt_obj[i]);
-			}
-		}
-		kfree(gtt_obj);
-	}
-	if (fence) {
-		radeon_fence_unref(&fence);
-	}
+	kfree(gtt_obj);
 	if (r) {
 		printk(KERN_WARNING "Error while testing BO move.\n");
 	}
-- 
1.8.3.1


* [PATCH WW 12/13] drm/vmwgfx: get rid of ttm_bo_is_reserved usage
  2013-06-27 11:48 [PATCH WW 00/13] Convert TTM to Wound/wait mutexes Maarten Lankhorst
                   ` (10 preceding siblings ...)
  2013-06-27 11:48 ` [PATCH WW 11/13] drm/radeon: " Maarten Lankhorst
@ 2013-06-27 11:48 ` Maarten Lankhorst
  2013-06-27 11:48 ` [PATCH WW 13/13] drm/ttm: get rid of ttm_bo_is_reserved Maarten Lankhorst
  12 siblings, 0 replies; 21+ messages in thread
From: Maarten Lankhorst @ 2013-06-27 11:48 UTC (permalink / raw)
  To: Dave Airlie; +Cc: dri-devel

Use lockdep_assert_held instead.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
---
 drivers/gpu/drm/vmwgfx/vmwgfx_dmabuf.c   | 2 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_resource.c | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_dmabuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_dmabuf.c
index 5fae06a..d4e54fc 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_dmabuf.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_dmabuf.c
@@ -302,7 +302,7 @@ void vmw_bo_pin(struct ttm_buffer_object *bo, bool pin)
 	uint32_t old_mem_type = bo->mem.mem_type;
 	int ret;
 
-	BUG_ON(!ttm_bo_is_reserved(bo));
+	lockdep_assert_held(&bo->resv->lock.base);
 	BUG_ON(old_mem_type != TTM_PL_VRAM &&
 	       old_mem_type != VMW_PL_GMR);
 
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
index ced7946..7953d1f 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
@@ -958,13 +958,13 @@ void vmw_resource_unreserve(struct vmw_resource *res,
 	if (new_backup && new_backup != res->backup) {
 
 		if (res->backup) {
-			BUG_ON(!ttm_bo_is_reserved(&res->backup->base));
+			lockdep_assert_held(&res->backup->base.resv->lock.base);
 			list_del_init(&res->mob_head);
 			vmw_dmabuf_unreference(&res->backup);
 		}
 
 		res->backup = vmw_dmabuf_reference(new_backup);
-		BUG_ON(!ttm_bo_is_reserved(&new_backup->base));
+		lockdep_assert_held(&new_backup->base.resv->lock.base);
 		list_add_tail(&res->mob_head, &new_backup->res_list);
 	}
 	if (new_backup)
-- 
1.8.3.1


* [PATCH WW 13/13] drm/ttm: get rid of ttm_bo_is_reserved
  2013-06-27 11:48 [PATCH WW 00/13] Convert TTM to Wound/wait mutexes Maarten Lankhorst
                   ` (11 preceding siblings ...)
  2013-06-27 11:48 ` [PATCH WW 12/13] drm/vmwgfx: " Maarten Lankhorst
@ 2013-06-27 11:48 ` Maarten Lankhorst
  12 siblings, 0 replies; 21+ messages in thread
From: Maarten Lankhorst @ 2013-06-27 11:48 UTC (permalink / raw)
  To: Dave Airlie; +Cc: dri-devel

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
---
 include/drm/ttm/ttm_bo_api.h | 14 --------------
 1 file changed, 14 deletions(-)

diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
index 31ad860..8a6aa56 100644
--- a/include/drm/ttm/ttm_bo_api.h
+++ b/include/drm/ttm/ttm_bo_api.h
@@ -712,18 +712,4 @@ extern ssize_t ttm_bo_io(struct ttm_bo_device *bdev, struct file *filp,
 
 extern void ttm_bo_swapout_all(struct ttm_bo_device *bdev);
 
-/**
- * ttm_bo_is_reserved - return an indication if a ttm buffer object is reserved
- *
- * @bo:     The buffer object to check.
- *
- * This function returns an indication if a bo is reserved or not, and should
- * only be used to print an error when it is not from incorrect api usage, since
- * there's no guarantee that it is the caller that is holding the reservation.
- */
-static inline bool ttm_bo_is_reserved(struct ttm_buffer_object *bo)
-{
-	return ww_mutex_is_locked(&bo->resv->lock);
-}
-
 #endif
-- 
1.8.3.1


* Re: [PATCH WW 09/13] drm/ttm: inline ttm_bo_reserve and related calls
  2013-06-27 11:48 ` [PATCH WW 09/13] drm/ttm: inline ttm_bo_reserve and related calls Maarten Lankhorst
@ 2013-06-27 12:21   ` Daniel Vetter
  0 siblings, 0 replies; 21+ messages in thread
From: Daniel Vetter @ 2013-06-27 12:21 UTC (permalink / raw)
  To: Maarten Lankhorst; +Cc: Dave Airlie, dri-devel

On Thu, Jun 27, 2013 at 01:48:24PM +0200, Maarten Lankhorst wrote:
> Makes lockdep a lot more useful.
> 
> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
> ---
>  drivers/gpu/drm/ttm/ttm_bo.c           | 105 ++------------------
>  drivers/gpu/drm/ttm/ttm_execbuf_util.c |   9 +-
>  include/drm/ttm/ttm_bo_driver.h        | 175 ++++++++++++++++++++-------------

The function movement in the header makes the diff a bit hard to read. So
if possible I think that should be avoided. Anyway I think I've spotted a
few kerneldoc updates (like s/EAGAIN/EDEADLK/) which should be part of the
main conversion patch.
-Daniel

>  3 files changed, 117 insertions(+), 172 deletions(-)
> 
> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
> index 5f9fe80..a8a27f5 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
> @@ -182,6 +182,7 @@ void ttm_bo_add_to_lru(struct ttm_buffer_object *bo)
>  		}
>  	}
>  }
> +EXPORT_SYMBOL(ttm_bo_add_to_lru);
>  
>  int ttm_bo_del_from_lru(struct ttm_buffer_object *bo)
>  {
> @@ -204,35 +205,6 @@ int ttm_bo_del_from_lru(struct ttm_buffer_object *bo)
>  	return put_count;
>  }
>  
> -int ttm_bo_reserve_nolru(struct ttm_buffer_object *bo,
> -			  bool interruptible,
> -			  bool no_wait, bool use_ticket,
> -			  struct ww_acquire_ctx *ticket)
> -{
> -	int ret = 0;
> -
> -	if (no_wait) {
> -		bool success;
> -
> -		/* not valid any more, fix your locking! */
> -		if (WARN_ON(ticket))
> -			return -EBUSY;
> -
> -		success = ww_mutex_trylock(&bo->resv->lock);
> -		return success ? 0 : -EBUSY;
> -	}
> -
> -	if (interruptible)
> -		ret = ww_mutex_lock_interruptible(&bo->resv->lock,
> -						  ticket);
> -	else
> -		ret = ww_mutex_lock(&bo->resv->lock, ticket);
> -	if (ret == -EINTR)
> -		return -ERESTARTSYS;
> -	return ret;
> -}
> -EXPORT_SYMBOL(ttm_bo_reserve);
> -
>  static void ttm_bo_ref_bug(struct kref *list_kref)
>  {
>  	BUG();
> @@ -245,77 +217,16 @@ void ttm_bo_list_ref_sub(struct ttm_buffer_object *bo, int count,
>  		 (never_free) ? ttm_bo_ref_bug : ttm_bo_release_list);
>  }
>  
> -int ttm_bo_reserve(struct ttm_buffer_object *bo,
> -		   bool interruptible,
> -		   bool no_wait, bool use_ticket,
> -		   struct ww_acquire_ctx *ticket)
> -{
> -	struct ttm_bo_global *glob = bo->glob;
> -	int put_count = 0;
> -	int ret;
> -
> -	ret = ttm_bo_reserve_nolru(bo, interruptible, no_wait, use_ticket,
> -				    ticket);
> -	if (likely(ret == 0)) {
> -		spin_lock(&glob->lru_lock);
> -		put_count = ttm_bo_del_from_lru(bo);
> -		spin_unlock(&glob->lru_lock);
> -		ttm_bo_list_ref_sub(bo, put_count, true);
> -	}
> -
> -	return ret;
> -}
> -
> -int ttm_bo_reserve_slowpath(struct ttm_buffer_object *bo,
> -			    bool interruptible, struct ww_acquire_ctx *ticket)
> -{
> -	struct ttm_bo_global *glob = bo->glob;
> -	int put_count = 0;
> -	int ret = 0;
> -
> -	if (interruptible)
> -		ret = ww_mutex_lock_slow_interruptible(&bo->resv->lock,
> -						       ticket);
> -	else
> -		ww_mutex_lock_slow(&bo->resv->lock, ticket);
> -
> -	if (likely(ret == 0)) {
> -		spin_lock(&glob->lru_lock);
> -		put_count = ttm_bo_del_from_lru(bo);
> -		spin_unlock(&glob->lru_lock);
> -		ttm_bo_list_ref_sub(bo, put_count, true);
> -	} else if (ret == -EINTR)
> -		ret = -ERESTARTSYS;
> -
> -	return ret;
> -}
> -EXPORT_SYMBOL(ttm_bo_reserve_slowpath);
> -
> -void ttm_bo_unreserve_ticket_locked(struct ttm_buffer_object *bo, struct ww_acquire_ctx *ticket)
> +void ttm_bo_del_sub_from_lru(struct ttm_buffer_object *bo)
>  {
> -	ttm_bo_add_to_lru(bo);
> -	ww_mutex_unlock(&bo->resv->lock);
> -}
> -
> -void ttm_bo_unreserve(struct ttm_buffer_object *bo)
> -{
> -	struct ttm_bo_global *glob = bo->glob;
> -
> -	spin_lock(&glob->lru_lock);
> -	ttm_bo_unreserve_ticket_locked(bo, NULL);
> -	spin_unlock(&glob->lru_lock);
> -}
> -EXPORT_SYMBOL(ttm_bo_unreserve);
> -
> -void ttm_bo_unreserve_ticket(struct ttm_buffer_object *bo, struct ww_acquire_ctx *ticket)
> -{
> -	struct ttm_bo_global *glob = bo->glob;
> +	int put_count;
>  
> -	spin_lock(&glob->lru_lock);
> -	ttm_bo_unreserve_ticket_locked(bo, ticket);
> -	spin_unlock(&glob->lru_lock);
> +	spin_lock(&bo->glob->lru_lock);
> +	put_count = ttm_bo_del_from_lru(bo);
> +	spin_unlock(&bo->glob->lru_lock);
> +	ttm_bo_list_ref_sub(bo, put_count, true);
>  }
> -EXPORT_SYMBOL(ttm_bo_unreserve_ticket);
> +EXPORT_SYMBOL(ttm_bo_del_sub_from_lru);
>  
>  /*
>   * Call bo->mutex locked.
> diff --git a/drivers/gpu/drm/ttm/ttm_execbuf_util.c b/drivers/gpu/drm/ttm/ttm_execbuf_util.c
> index 7392da5..6c91178 100644
> --- a/drivers/gpu/drm/ttm/ttm_execbuf_util.c
> +++ b/drivers/gpu/drm/ttm/ttm_execbuf_util.c
> @@ -44,12 +44,10 @@ static void ttm_eu_backoff_reservation_locked(struct list_head *list,
>  
>  		entry->reserved = false;
>  		if (entry->removed) {
> -			ttm_bo_unreserve_ticket_locked(bo, ticket);
> +			ttm_bo_add_to_lru(bo);
>  			entry->removed = false;
> -
> -		} else {
> -			ww_mutex_unlock(&bo->resv->lock);
>  		}
> +		ww_mutex_unlock(&bo->resv->lock);
>  	}
>  }
>  
> @@ -220,7 +218,8 @@ void ttm_eu_fence_buffer_objects(struct ww_acquire_ctx *ticket,
>  		bo = entry->bo;
>  		entry->old_sync_obj = bo->sync_obj;
>  		bo->sync_obj = driver->sync_obj_ref(sync_obj);
> -		ttm_bo_unreserve_ticket_locked(bo, ticket);
> +		ttm_bo_add_to_lru(bo);
> +		ww_mutex_unlock(&bo->resv->lock);
>  		entry->reserved = false;
>  	}
>  	spin_unlock(&bdev->fence_lock);
> diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h
> index ec18c5f..984fc2d 100644
> --- a/include/drm/ttm/ttm_bo_driver.h
> +++ b/include/drm/ttm/ttm_bo_driver.h
> @@ -33,6 +33,7 @@
>  #include <ttm/ttm_bo_api.h>
>  #include <ttm/ttm_memory.h>
>  #include <ttm/ttm_module.h>
> +#include <ttm/ttm_placement.h>
>  #include <drm/drm_mm.h>
>  #include <drm/drm_global.h>
>  #include <linux/workqueue.h>
> @@ -772,6 +773,55 @@ extern int ttm_mem_io_lock(struct ttm_mem_type_manager *man,
>  			   bool interruptible);
>  extern void ttm_mem_io_unlock(struct ttm_mem_type_manager *man);
>  
> +extern void ttm_bo_del_sub_from_lru(struct ttm_buffer_object *bo);
> +extern void ttm_bo_add_to_lru(struct ttm_buffer_object *bo);
> +
> +/**
> + * ttm_bo_reserve_nolru:
> + *
> + * @bo: A pointer to a struct ttm_buffer_object.
> + * @interruptible: Sleep interruptible if waiting.
> + * @no_wait: Don't sleep while trying to reserve, rather return -EBUSY.
> + * @use_ticket: If @bo is already reserved, Only sleep waiting for
> + * it to become unreserved if @ticket->stamp is older.
> + *
> + * Will not remove reserved buffers from the lru lists.
> + * Otherwise identical to ttm_bo_reserve.
> + *
> + * Returns:
> + * -EDEADLK: The reservation may cause a deadlock.
> + * Release all buffer reservations, wait for @bo to become unreserved and
> + * try again. (only if use_sequence == 1).
> + * -ERESTARTSYS: A wait for the buffer to become unreserved was interrupted by
> + * a signal. Release all buffer reservations and return to user-space.
> + * -EBUSY: The function needed to sleep, but @no_wait was true
> + * -EALREADY: Bo already reserved using @ticket. This error code will only
> + * be returned if @use_ticket is set to true.
> + */
> +static inline int ttm_bo_reserve_nolru(struct ttm_buffer_object *bo,
> +				       bool interruptible,
> +				       bool no_wait, bool use_ticket,
> +				       struct ww_acquire_ctx *ticket)
> +{
> +	int ret = 0;
> +
> +	if (no_wait) {
> +		bool success;
> +		if (WARN_ON(ticket))
> +			return -EBUSY;
> +
> +		success = ww_mutex_trylock(&bo->resv->lock);
> +		return success ? 0 : -EBUSY;
> +	}
> +
> +	if (interruptible)
> +		ret = ww_mutex_lock_interruptible(&bo->resv->lock, ticket);
> +	else
> +		ret = ww_mutex_lock(&bo->resv->lock, ticket);
> +	if (ret == -EINTR)
> +		return -ERESTARTSYS;
> +	return ret;
> +}
>  
>  /**
>   * ttm_bo_reserve:
> @@ -780,7 +830,7 @@ extern void ttm_mem_io_unlock(struct ttm_mem_type_manager *man);
>   * @interruptible: Sleep interruptible if waiting.
>   * @no_wait: Don't sleep while trying to reserve, rather return -EBUSY.
>   * @use_ticket: If @bo is already reserved, Only sleep waiting for
> - * it to become unreserved if @sequence < (@bo)->sequence.
> + * it to become unreserved if @ticket->stamp is older.
>   *
>   * Locks a buffer object for validation. (Or prevents other processes from
>   * locking it for validation) and removes it from lru lists, while taking
> @@ -794,7 +844,7 @@ extern void ttm_mem_io_unlock(struct ttm_mem_type_manager *man);
>   * Processes attempting to reserve multiple buffers other than for eviction,
>   * (typically execbuf), should first obtain a unique 32-bit
>   * validation sequence number,
> - * and call this function with @use_sequence == 1 and @sequence == the unique
> + * and call this function with @use_ticket == 1 and @ticket->stamp == the unique
>   * sequence number. If upon call of this function, the buffer object is already
>   * reserved, the validation sequence is checked against the validation
>   * sequence of the process currently reserving the buffer,
> @@ -809,37 +859,31 @@ extern void ttm_mem_io_unlock(struct ttm_mem_type_manager *man);
>   * will eventually succeed, preventing both deadlocks and starvation.
>   *
>   * Returns:
> - * -EAGAIN: The reservation may cause a deadlock.
> + * -EDEADLK: The reservation may cause a deadlock.
>   * Release all buffer reservations, wait for @bo to become unreserved and
>   * try again. (only if use_sequence == 1).
>   * -ERESTARTSYS: A wait for the buffer to become unreserved was interrupted by
>   * a signal. Release all buffer reservations and return to user-space.
>   * -EBUSY: The function needed to sleep, but @no_wait was true
> - * -EDEADLK: Bo already reserved using @sequence. This error code will only
> - * be returned if @use_sequence is set to true.
> + * -EALREADY: Bo already reserved using @ticket. This error code will only
> + * be returned if @use_ticket is set to true.
>   */
> -extern int ttm_bo_reserve(struct ttm_buffer_object *bo,
> -			  bool interruptible,
> -			  bool no_wait, bool use_ticket,
> -			  struct ww_acquire_ctx *ticket);
> +static inline int ttm_bo_reserve(struct ttm_buffer_object *bo,
> +				 bool interruptible,
> +				 bool no_wait, bool use_ticket,
> +				 struct ww_acquire_ctx *ticket)
> +{
> +	int ret;
>  
> -/**
> - * ttm_bo_reserve_slowpath_nolru:
> - * @bo: A pointer to a struct ttm_buffer_object.
> - * @interruptible: Sleep interruptible if waiting.
> - * @sequence: Set (@bo)->sequence to this value after lock
> - *
> - * This is called after ttm_bo_reserve returns -EAGAIN and we backed off
> - * from all our other reservations. Because there are no other reservations
> - * held by us, this function cannot deadlock any more.
> - *
> - * Will not remove reserved buffers from the lru lists.
> - * Otherwise identical to ttm_bo_reserve_slowpath.
> - */
> -extern int ttm_bo_reserve_slowpath_nolru(struct ttm_buffer_object *bo,
> -					 bool interruptible,
> -					 struct ww_acquire_ctx *ticket);
> +	WARN_ON(!atomic_read(&bo->kref.refcount));
> +
> +	ret = ttm_bo_reserve_nolru(bo, interruptible, no_wait, use_ticket,
> +				    ticket);
> +	if (likely(ret == 0))
> +		ttm_bo_del_sub_from_lru(bo);
>  
> +	return ret;
> +}
>  
>  /**
>   * ttm_bo_reserve_slowpath:
> @@ -851,45 +895,27 @@ extern int ttm_bo_reserve_slowpath_nolru(struct ttm_buffer_object *bo,
>   * from all our other reservations. Because there are no other reservations
>   * held by us, this function cannot deadlock any more.
>   */
> -extern int ttm_bo_reserve_slowpath(struct ttm_buffer_object *bo,
> -				   bool interruptible,
> -				   struct ww_acquire_ctx *ticket);
> +static inline int ttm_bo_reserve_slowpath(struct ttm_buffer_object *bo,
> +					  bool interruptible,
> +					  struct ww_acquire_ctx *ticket)
> +{
> +	int ret = 0;
>  
> -/**
> - * ttm_bo_reserve_nolru:
> - *
> - * @bo: A pointer to a struct ttm_buffer_object.
> - * @interruptible: Sleep interruptible if waiting.
> - * @no_wait: Don't sleep while trying to reserve, rather return -EBUSY.
> - * @use_sequence: If @bo is already reserved, Only sleep waiting for
> - * it to become unreserved if @sequence < (@bo)->sequence.
> - *
> - * Will not remove reserved buffers from the lru lists.
> - * Otherwise identical to ttm_bo_reserve.
> - *
> - * Returns:
> - * -EAGAIN: The reservation may cause a deadlock.
> - * Release all buffer reservations, wait for @bo to become unreserved and
> - * try again. (only if use_sequence == 1).
> - * -ERESTARTSYS: A wait for the buffer to become unreserved was interrupted by
> - * a signal. Release all buffer reservations and return to user-space.
> - * -EBUSY: The function needed to sleep, but @no_wait was true
> - * -EDEADLK: Bo already reserved using @sequence. This error code will only
> - * be returned if @use_sequence is set to true.
> - */
> -extern int ttm_bo_reserve_nolru(struct ttm_buffer_object *bo,
> -				 bool interruptible,
> -				 bool no_wait, bool use_ticket,
> -				 struct ww_acquire_ctx *ticket);
> +	WARN_ON(!atomic_read(&bo->kref.refcount));
>  
> -/**
> - * ttm_bo_unreserve
> - *
> - * @bo: A pointer to a struct ttm_buffer_object.
> - *
> - * Unreserve a previous reservation of @bo.
> - */
> -extern void ttm_bo_unreserve(struct ttm_buffer_object *bo);
> +	if (interruptible)
> +		ret = ww_mutex_lock_slow_interruptible(&bo->resv->lock,
> +						       ticket);
> +	else
> +		ww_mutex_lock_slow(&bo->resv->lock, ticket);
> +
> +	if (likely(ret == 0))
> +		ttm_bo_del_sub_from_lru(bo);
> +	else if (ret == -EINTR)
> +		ret = -ERESTARTSYS;
> +
> +	return ret;
> +}
>  
>  /**
>   * ttm_bo_unreserve_ticket
> @@ -898,19 +924,28 @@ extern void ttm_bo_unreserve(struct ttm_buffer_object *bo);
>   *
>   * Unreserve a previous reservation of @bo made with @ticket.
>   */
> -extern void ttm_bo_unreserve_ticket(struct ttm_buffer_object *bo,
> -				    struct ww_acquire_ctx *ticket);
> +static inline void ttm_bo_unreserve_ticket(struct ttm_buffer_object *bo,
> +					   struct ww_acquire_ctx *t)
> +{
> +	if (!(bo->mem.placement & TTM_PL_FLAG_NO_EVICT)) {
> +		spin_lock(&bo->glob->lru_lock);
> +		ttm_bo_add_to_lru(bo);
> +		spin_unlock(&bo->glob->lru_lock);
> +	}
> +	ww_mutex_unlock(&bo->resv->lock);
> +}
>  
>  /**
> - * ttm_bo_unreserve_locked
> + * ttm_bo_unreserve
> + *
>   * @bo: A pointer to a struct ttm_buffer_object.
> - * @ticket: ww_acquire_ctx used for reserving, or NULL
>   *
> - * Unreserve a previous reservation of @bo made with @ticket.
> - * Needs to be called with struct ttm_bo_global::lru_lock held.
> + * Unreserve a previous reservation of @bo.
>   */
> -extern void ttm_bo_unreserve_ticket_locked(struct ttm_buffer_object *bo,
> -					   struct ww_acquire_ctx *ticket);
> +static inline void ttm_bo_unreserve(struct ttm_buffer_object *bo)
> +{
> +	ttm_bo_unreserve_ticket(bo, NULL);
> +}
>  
>  /*
>   * ttm_bo_util.c
> -- 
> 1.8.3.1
> 
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/dri-devel

-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch


* Re: [PATCH WW 10/13] drm/ttm: get rid of ttm_bo_is_reserved usage
  2013-06-27 11:48 ` [PATCH WW 10/13] drm/ttm: get rid of ttm_bo_is_reserved usage Maarten Lankhorst
@ 2013-06-27 12:23   ` Daniel Vetter
  0 siblings, 0 replies; 21+ messages in thread
From: Daniel Vetter @ 2013-06-27 12:23 UTC (permalink / raw)
  To: Maarten Lankhorst; +Cc: Dave Airlie, dri-devel

On Thu, Jun 27, 2013 at 01:48:25PM +0200, Maarten Lankhorst wrote:
> Use lockdep_assert_held instead.
> 
> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
> ---
>  drivers/gpu/drm/ttm/ttm_bo.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
> index a8a27f5..6e6975c 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
> @@ -166,7 +166,7 @@ void ttm_bo_add_to_lru(struct ttm_buffer_object *bo)
>  	struct ttm_bo_device *bdev = bo->bdev;
>  	struct ttm_mem_type_manager *man;
>  
> -	BUG_ON(!ttm_bo_is_reserved(bo));
> +	lockdep_assert_held(&bo->resv->lock.base);

What about assert_ttm_bo_is_reserved instead, to both do the right thing and
hide the pointer chasing a bit better? Maybe even add an
assert_reservation_is_held helper to the very first patch?

Same comment applies to following patches which roll this change out on
the drivers.
-Daniel

>  
>  	if (!(bo->mem.placement & TTM_PL_FLAG_NO_EVICT)) {
>  
> @@ -671,7 +671,7 @@ static int ttm_bo_evict(struct ttm_buffer_object *bo, bool interruptible,
>  		goto out;
>  	}
>  
> -	BUG_ON(!ttm_bo_is_reserved(bo));
> +	lockdep_assert_held(&bo->resv->lock.base);
>  
>  	evict_mem = bo->mem;
>  	evict_mem.mm_node = NULL;
> @@ -961,7 +961,7 @@ int ttm_bo_move_buffer(struct ttm_buffer_object *bo,
>  	struct ttm_mem_reg mem;
>  	struct ttm_bo_device *bdev = bo->bdev;
>  
> -	BUG_ON(!ttm_bo_is_reserved(bo));
> +	lockdep_assert_held(&bo->resv->lock.base);
>  
>  	/*
>  	 * FIXME: It's possible to pipeline buffer moves.
> @@ -1020,7 +1020,7 @@ int ttm_bo_validate(struct ttm_buffer_object *bo,
>  {
>  	int ret;
>  
> -	BUG_ON(!ttm_bo_is_reserved(bo));
> +	lockdep_assert_held(&bo->resv->lock.base);
>  	/* Check that range is valid */
>  	if (placement->lpfn || placement->fpfn)
>  		if (placement->fpfn > placement->lpfn ||
> -- 
> 1.8.3.1
> 
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/dri-devel

-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch


* Re: [PATCH WW 01/13] reservation: cross-device reservation support, v4
  2013-06-27 11:48 ` [PATCH WW 01/13] reservation: cross-device reservation support, v4 Maarten Lankhorst
@ 2013-06-27 21:45   ` Jerome Glisse
  0 siblings, 0 replies; 21+ messages in thread
From: Jerome Glisse @ 2013-06-27 21:45 UTC (permalink / raw)
  To: Maarten Lankhorst; +Cc: Dave Airlie, dri-devel

On Thu, Jun 27, 2013 at 01:48:16PM +0200, Maarten Lankhorst wrote:
> This adds support for a generic reservations framework that can be
> hooked up to ttm and dma-buf and allows easy sharing of reservations
> across devices.
> 
> The idea is that a dma-buf and ttm object both will get a pointer
> to a struct reservation_object, which has to be reserved before
> anything is done with the contents of the dma-buf.
> 
> Changes since v1:
>  - Fix locking issue in ticket_reserve, which could cause mutex_unlock
>    to be called too many times.
> Changes since v2:
>  - All fence related calls and members have been taken out for now,
>    what's left is the bare minimum to be useful for ttm locking conversion.
> Changes since v3:
>  - Removed helper functions too. The documentation has an example
>    implementation for locking. With the move to ww_mutex there is no
>    need to have much logic any more.
> 
> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>

Reviewed-by: Jerome Glisse <jglisse@redhat.com>

> ---
>  Documentation/DocBook/device-drivers.tmpl |  2 +
>  drivers/base/Makefile                     |  2 +-
>  drivers/base/reservation.c                | 39 +++++++++++++++++++
>  include/linux/reservation.h               | 62 +++++++++++++++++++++++++++++++
>  4 files changed, 104 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/base/reservation.c
>  create mode 100644 include/linux/reservation.h
> 
> diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl
> index c36892c..f0648a8 100644
> --- a/Documentation/DocBook/device-drivers.tmpl
> +++ b/Documentation/DocBook/device-drivers.tmpl
> @@ -126,6 +126,8 @@ X!Edrivers/base/interface.c
>       </sect1>
>       <sect1><title>Device Drivers DMA Management</title>
>  !Edrivers/base/dma-buf.c
> +!Edrivers/base/reservation.c
> +!Iinclude/linux/reservation.h
>  !Edrivers/base/dma-coherent.c
>  !Edrivers/base/dma-mapping.c
>       </sect1>
> diff --git a/drivers/base/Makefile b/drivers/base/Makefile
> index 4e22ce3..48029aa 100644
> --- a/drivers/base/Makefile
> +++ b/drivers/base/Makefile
> @@ -10,7 +10,7 @@ obj-$(CONFIG_CMA) += dma-contiguous.o
>  obj-y			+= power/
>  obj-$(CONFIG_HAS_DMA)	+= dma-mapping.o
>  obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o
> -obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o
> +obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o reservation.o
>  obj-$(CONFIG_ISA)	+= isa.o
>  obj-$(CONFIG_FW_LOADER)	+= firmware_class.o
>  obj-$(CONFIG_NUMA)	+= node.o
> diff --git a/drivers/base/reservation.c b/drivers/base/reservation.c
> new file mode 100644
> index 0000000..a73fbf3
> --- /dev/null
> +++ b/drivers/base/reservation.c
> @@ -0,0 +1,39 @@
> +/*
> + * Copyright (C) 2012-2013 Canonical Ltd
> + *
> + * Based on bo.c which bears the following copyright notice,
> + * but is dual licensed:
> + *
> + * Copyright (c) 2006-2009 VMware, Inc., Palo Alto, CA., USA
> + * All Rights Reserved.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the
> + * "Software"), to deal in the Software without restriction, including
> + * without limitation the rights to use, copy, modify, merge, publish,
> + * distribute, sub license, and/or sell copies of the Software, and to
> + * permit persons to whom the Software is furnished to do so, subject to
> + * the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the
> + * next paragraph) shall be included in all copies or substantial portions
> + * of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
> + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
> + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
> + * USE OR OTHER DEALINGS IN THE SOFTWARE.
> + *
> + **************************************************************************/
> +/*
> + * Authors: Thomas Hellstrom <thellstrom-at-vmware-dot-com>
> + */
> +
> +#include <linux/reservation.h>
> +#include <linux/export.h>
> +
> +DEFINE_WW_CLASS(reservation_ww_class);
> +EXPORT_SYMBOL(reservation_ww_class);
> diff --git a/include/linux/reservation.h b/include/linux/reservation.h
> new file mode 100644
> index 0000000..e9ee806
> --- /dev/null
> +++ b/include/linux/reservation.h
> @@ -0,0 +1,62 @@
> +/*
> + * Header file for reservations for dma-buf and ttm
> + *
> + * Copyright(C) 2011 Linaro Limited. All rights reserved.
> + * Copyright (C) 2012-2013 Canonical Ltd
> + * Copyright (C) 2012 Texas Instruments
> + *
> + * Authors:
> + * Rob Clark <rob.clark@linaro.org>
> + * Maarten Lankhorst <maarten.lankhorst@canonical.com>
> + * Thomas Hellstrom <thellstrom-at-vmware-dot-com>
> + *
> + * Based on bo.c which bears the following copyright notice,
> + * but is dual licensed:
> + *
> + * Copyright (c) 2006-2009 VMware, Inc., Palo Alto, CA., USA
> + * All Rights Reserved.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the
> + * "Software"), to deal in the Software without restriction, including
> + * without limitation the rights to use, copy, modify, merge, publish,
> + * distribute, sub license, and/or sell copies of the Software, and to
> + * permit persons to whom the Software is furnished to do so, subject to
> + * the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the
> + * next paragraph) shall be included in all copies or substantial portions
> + * of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
> + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
> + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
> + * USE OR OTHER DEALINGS IN THE SOFTWARE.
> + */
> +#ifndef _LINUX_RESERVATION_H
> +#define _LINUX_RESERVATION_H
> +
> +#include <linux/mutex.h>
> +
> +extern struct ww_class reservation_ww_class;
> +
> +struct reservation_object {
> +	struct ww_mutex lock;
> +};
> +
> +static inline void
> +reservation_object_init(struct reservation_object *obj)
> +{
> +	ww_mutex_init(&obj->lock, &reservation_ww_class);
> +}
> +
> +static inline void
> +reservation_object_fini(struct reservation_object *obj)
> +{
> +	ww_mutex_destroy(&obj->lock);
> +}
> +
> +#endif /* _LINUX_RESERVATION_H */
> -- 
> 1.8.3.1
> 
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/dri-devel


* Re: [PATCH WW 02/13] drm/ttm: make ttm reservation calls behave like reservation calls
  2013-06-27 11:48 ` [PATCH WW 02/13] drm/ttm: make ttm reservation calls behave like reservation calls Maarten Lankhorst
@ 2013-06-27 21:47   ` Jerome Glisse
  0 siblings, 0 replies; 21+ messages in thread
From: Jerome Glisse @ 2013-06-27 21:47 UTC (permalink / raw)
  To: Maarten Lankhorst; +Cc: Dave Airlie, dri-devel

On Thu, Jun 27, 2013 at 01:48:17PM +0200, Maarten Lankhorst wrote:
> This commit converts the source of the val_seq counter to
> the ww_mutex api. The reservation objects are converted later,
> because there is still a lockdep splat in nouveau that has to
> resolved first.
> 
> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>

Reviewed-by: Jerome Glisse <jglisse@redhat.com>

> ---
>  drivers/gpu/drm/nouveau/nouveau_gem.c    | 38 ++++++++++++++-------
>  drivers/gpu/drm/radeon/radeon.h          |  1 +
>  drivers/gpu/drm/radeon/radeon_cs.c       | 18 +++++-----
>  drivers/gpu/drm/radeon/radeon_object.c   |  5 +--
>  drivers/gpu/drm/radeon/radeon_object.h   |  3 +-
>  drivers/gpu/drm/radeon/radeon_uvd.c      | 27 +++++++--------
>  drivers/gpu/drm/ttm/ttm_bo.c             | 50 +++++++++++++++++----------
>  drivers/gpu/drm/ttm/ttm_execbuf_util.c   | 58 +++++++++++++++++---------------
>  drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c  | 14 ++++----
>  drivers/gpu/drm/vmwgfx/vmwgfx_resource.c | 23 ++++++++-----
>  include/drm/ttm/ttm_bo_api.h             |  2 +-
>  include/drm/ttm/ttm_bo_driver.h          | 32 +++++++++++++-----
>  include/drm/ttm/ttm_execbuf_util.h       | 13 +++++--
>  13 files changed, 172 insertions(+), 112 deletions(-)
> 
> diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
> index 7054706..e35d468 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_gem.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
> @@ -277,10 +277,12 @@ struct validate_op {
>  	struct list_head vram_list;
>  	struct list_head gart_list;
>  	struct list_head both_list;
> +	struct ww_acquire_ctx ticket;
>  };
>  
>  static void
> -validate_fini_list(struct list_head *list, struct nouveau_fence *fence)
> +validate_fini_list(struct list_head *list, struct nouveau_fence *fence,
> +		   struct ww_acquire_ctx *ticket)
>  {
>  	struct list_head *entry, *tmp;
>  	struct nouveau_bo *nvbo;
> @@ -297,17 +299,24 @@ validate_fini_list(struct list_head *list, struct nouveau_fence *fence)
>  
>  		list_del(&nvbo->entry);
>  		nvbo->reserved_by = NULL;
> -		ttm_bo_unreserve(&nvbo->bo);
> +		ttm_bo_unreserve_ticket(&nvbo->bo, ticket);
>  		drm_gem_object_unreference_unlocked(nvbo->gem);
>  	}
>  }
>  
>  static void
> -validate_fini(struct validate_op *op, struct nouveau_fence* fence)
> +validate_fini_no_ticket(struct validate_op *op, struct nouveau_fence *fence)
>  {
> -	validate_fini_list(&op->vram_list, fence);
> -	validate_fini_list(&op->gart_list, fence);
> -	validate_fini_list(&op->both_list, fence);
> +	validate_fini_list(&op->vram_list, fence, &op->ticket);
> +	validate_fini_list(&op->gart_list, fence, &op->ticket);
> +	validate_fini_list(&op->both_list, fence, &op->ticket);
> +}
> +
> +static void
> +validate_fini(struct validate_op *op, struct nouveau_fence *fence)
> +{
> +	validate_fini_no_ticket(op, fence);
> +	ww_acquire_fini(&op->ticket);
>  }
>  
>  static int
> @@ -317,13 +326,11 @@ validate_init(struct nouveau_channel *chan, struct drm_file *file_priv,
>  {
>  	struct nouveau_cli *cli = nouveau_cli(file_priv);
>  	struct drm_device *dev = chan->drm->dev;
> -	struct nouveau_drm *drm = nouveau_drm(dev);
> -	uint32_t sequence;
>  	int trycnt = 0;
>  	int ret, i;
>  	struct nouveau_bo *res_bo = NULL;
>  
> -	sequence = atomic_add_return(1, &drm->ttm.validate_sequence);
> +	ww_acquire_init(&op->ticket, &reservation_ww_class);
>  retry:
>  	if (++trycnt > 100000) {
>  		NV_ERROR(cli, "%s failed and gave up.\n", __func__);
> @@ -338,6 +345,7 @@ retry:
>  		gem = drm_gem_object_lookup(dev, file_priv, b->handle);
>  		if (!gem) {
>  			NV_ERROR(cli, "Unknown handle 0x%08x\n", b->handle);
> +			ww_acquire_done(&op->ticket);
>  			validate_fini(op, NULL);
>  			return -ENOENT;
>  		}
> @@ -352,21 +360,23 @@ retry:
>  			NV_ERROR(cli, "multiple instances of buffer %d on "
>  				      "validation list\n", b->handle);
>  			drm_gem_object_unreference_unlocked(gem);
> +			ww_acquire_done(&op->ticket);
>  			validate_fini(op, NULL);
>  			return -EINVAL;
>  		}
>  
> -		ret = ttm_bo_reserve(&nvbo->bo, true, false, true, sequence);
> +		ret = ttm_bo_reserve(&nvbo->bo, true, false, true, &op->ticket);
>  		if (ret) {
> -			validate_fini(op, NULL);
> +			validate_fini_no_ticket(op, NULL);
>  			if (unlikely(ret == -EAGAIN)) {
> -				sequence = atomic_add_return(1, &drm->ttm.validate_sequence);
>  				ret = ttm_bo_reserve_slowpath(&nvbo->bo, true,
> -							      sequence);
> +							      &op->ticket);
>  				if (!ret)
>  					res_bo = nvbo;
>  			}
>  			if (unlikely(ret)) {
> +				ww_acquire_done(&op->ticket);
> +				ww_acquire_fini(&op->ticket);
>  				drm_gem_object_unreference_unlocked(gem);
>  				if (ret != -ERESTARTSYS)
>  					NV_ERROR(cli, "fail reserve\n");
> @@ -390,6 +400,7 @@ retry:
>  			NV_ERROR(cli, "invalid valid domains: 0x%08x\n",
>  				 b->valid_domains);
>  			list_add_tail(&nvbo->entry, &op->both_list);
> +			ww_acquire_done(&op->ticket);
>  			validate_fini(op, NULL);
>  			return -EINVAL;
>  		}
> @@ -397,6 +408,7 @@ retry:
>  			goto retry;
>  	}
>  
> +	ww_acquire_done(&op->ticket);
>  	return 0;
>  }
>  
> diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
> index 142ce6c..eaf4ff5 100644
> --- a/drivers/gpu/drm/radeon/radeon.h
> +++ b/drivers/gpu/drm/radeon/radeon.h
> @@ -883,6 +883,7 @@ struct radeon_cs_parser {
>  	u32			cs_flags;
>  	u32			ring;
>  	s32			priority;
> +	struct ww_acquire_ctx	ticket;
>  };
>  
>  extern int radeon_cs_finish_pages(struct radeon_cs_parser *p);
> diff --git a/drivers/gpu/drm/radeon/radeon_cs.c b/drivers/gpu/drm/radeon/radeon_cs.c
> index 7e265a5..8a91901 100644
> --- a/drivers/gpu/drm/radeon/radeon_cs.c
> +++ b/drivers/gpu/drm/radeon/radeon_cs.c
> @@ -106,7 +106,7 @@ static int radeon_cs_parser_relocs(struct radeon_cs_parser *p)
>  		radeon_bo_list_add_object(&p->relocs[i].lobj,
>  					  &p->validated);
>  	}
> -	return radeon_bo_list_validate(&p->validated, p->ring);
> +	return radeon_bo_list_validate(&p->ticket, &p->validated, p->ring);
>  }
>  
>  static int radeon_cs_get_ring(struct radeon_cs_parser *p, u32 ring, s32 priority)
> @@ -314,15 +314,17 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, void *data)
>   * If error is set then unvalidate buffer, otherwise just free memory
>   * used by parsing context.
>   **/
> -static void radeon_cs_parser_fini(struct radeon_cs_parser *parser, int error)
> +static void radeon_cs_parser_fini(struct radeon_cs_parser *parser, int error, bool backoff)
>  {
>  	unsigned i;
>  
>  	if (!error) {
> -		ttm_eu_fence_buffer_objects(&parser->validated,
> +		ttm_eu_fence_buffer_objects(&parser->ticket,
> +					    &parser->validated,
>  					    parser->ib.fence);
> -	} else {
> -		ttm_eu_backoff_reservation(&parser->validated);
> +	} else if (backoff) {
> +		ttm_eu_backoff_reservation(&parser->ticket,
> +					   &parser->validated);
>  	}
>  
>  	if (parser->relocs != NULL) {
> @@ -535,7 +537,7 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
>  	r = radeon_cs_parser_init(&parser, data);
>  	if (r) {
>  		DRM_ERROR("Failed to initialize parser !\n");
> -		radeon_cs_parser_fini(&parser, r);
> +		radeon_cs_parser_fini(&parser, r, false);
>  		up_read(&rdev->exclusive_lock);
>  		r = radeon_cs_handle_lockup(rdev, r);
>  		return r;
> @@ -544,7 +546,7 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
>  	if (r) {
>  		if (r != -ERESTARTSYS)
>  			DRM_ERROR("Failed to parse relocation %d!\n", r);
> -		radeon_cs_parser_fini(&parser, r);
> +		radeon_cs_parser_fini(&parser, r, false);
>  		up_read(&rdev->exclusive_lock);
>  		r = radeon_cs_handle_lockup(rdev, r);
>  		return r;
> @@ -562,7 +564,7 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
>  		goto out;
>  	}
>  out:
> -	radeon_cs_parser_fini(&parser, r);
> +	radeon_cs_parser_fini(&parser, r, true);
>  	up_read(&rdev->exclusive_lock);
>  	r = radeon_cs_handle_lockup(rdev, r);
>  	return r;
> diff --git a/drivers/gpu/drm/radeon/radeon_object.c b/drivers/gpu/drm/radeon/radeon_object.c
> index 07af5a9..71287bb 100644
> --- a/drivers/gpu/drm/radeon/radeon_object.c
> +++ b/drivers/gpu/drm/radeon/radeon_object.c
> @@ -349,14 +349,15 @@ void radeon_bo_list_add_object(struct radeon_bo_list *lobj,
>  	}
>  }
>  
> -int radeon_bo_list_validate(struct list_head *head, int ring)
> +int radeon_bo_list_validate(struct ww_acquire_ctx *ticket,
> +			    struct list_head *head, int ring)
>  {
>  	struct radeon_bo_list *lobj;
>  	struct radeon_bo *bo;
>  	u32 domain;
>  	int r;
>  
> -	r = ttm_eu_reserve_buffers(head);
> +	r = ttm_eu_reserve_buffers(ticket, head);
>  	if (unlikely(r != 0)) {
>  		return r;
>  	}
> diff --git a/drivers/gpu/drm/radeon/radeon_object.h b/drivers/gpu/drm/radeon/radeon_object.h
> index e2cb80a..3e62a3a 100644
> --- a/drivers/gpu/drm/radeon/radeon_object.h
> +++ b/drivers/gpu/drm/radeon/radeon_object.h
> @@ -128,7 +128,8 @@ extern int radeon_bo_init(struct radeon_device *rdev);
>  extern void radeon_bo_fini(struct radeon_device *rdev);
>  extern void radeon_bo_list_add_object(struct radeon_bo_list *lobj,
>  				struct list_head *head);
> -extern int radeon_bo_list_validate(struct list_head *head, int ring);
> +extern int radeon_bo_list_validate(struct ww_acquire_ctx *ticket,
> +				   struct list_head *head, int ring);
>  extern int radeon_bo_fbdev_mmap(struct radeon_bo *bo,
>  				struct vm_area_struct *vma);
>  extern int radeon_bo_set_tiling_flags(struct radeon_bo *bo,
> diff --git a/drivers/gpu/drm/radeon/radeon_uvd.c b/drivers/gpu/drm/radeon/radeon_uvd.c
> index cad735d..0078cdf 100644
> --- a/drivers/gpu/drm/radeon/radeon_uvd.c
> +++ b/drivers/gpu/drm/radeon/radeon_uvd.c
> @@ -542,6 +542,7 @@ static int radeon_uvd_send_msg(struct radeon_device *rdev,
>  			       struct radeon_fence **fence)
>  {
>  	struct ttm_validate_buffer tv;
> +	struct ww_acquire_ctx ticket;
>  	struct list_head head;
>  	struct radeon_ib ib;
>  	uint64_t addr;
> @@ -553,7 +554,7 @@ static int radeon_uvd_send_msg(struct radeon_device *rdev,
>  	INIT_LIST_HEAD(&head);
>  	list_add(&tv.head, &head);
>  
> -	r = ttm_eu_reserve_buffers(&head);
> +	r = ttm_eu_reserve_buffers(&ticket, &head);
>  	if (r)
>  		return r;
>  
> @@ -561,16 +562,12 @@ static int radeon_uvd_send_msg(struct radeon_device *rdev,
>  	radeon_uvd_force_into_uvd_segment(bo);
>  
>  	r = ttm_bo_validate(&bo->tbo, &bo->placement, true, false);
> -	if (r) {
> -		ttm_eu_backoff_reservation(&head);
> -		return r;
> -	}
> +	if (r)
> +		goto err;
>  
>  	r = radeon_ib_get(rdev, ring, &ib, NULL, 16);
> -	if (r) {
> -		ttm_eu_backoff_reservation(&head);
> -		return r;
> -	}
> +	if (r)
> +		goto err;
>  
>  	addr = radeon_bo_gpu_offset(bo);
>  	ib.ptr[0] = PACKET0(UVD_GPCOM_VCPU_DATA0, 0);
> @@ -584,11 +581,9 @@ static int radeon_uvd_send_msg(struct radeon_device *rdev,
>  	ib.length_dw = 16;
>  
>  	r = radeon_ib_schedule(rdev, &ib, NULL);
> -	if (r) {
> -		ttm_eu_backoff_reservation(&head);
> -		return r;
> -	}
> -	ttm_eu_fence_buffer_objects(&head, ib.fence);
> +	if (r)
> +		goto err;
> +	ttm_eu_fence_buffer_objects(&ticket, &head, ib.fence);
>  
>  	if (fence)
>  		*fence = radeon_fence_ref(ib.fence);
> @@ -596,6 +591,10 @@ static int radeon_uvd_send_msg(struct radeon_device *rdev,
>  	radeon_ib_free(rdev, &ib);
>  	radeon_bo_unref(&bo);
>  	return 0;
> +
> +err:
> +	ttm_eu_backoff_reservation(&ticket, &head);
> +	return r;
>  }
>  
>  /* multiple fence commands without any stream commands in between can
> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
> index 9b07b7d..b912375 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
> @@ -215,7 +215,8 @@ int ttm_bo_del_from_lru(struct ttm_buffer_object *bo)
>  
>  int ttm_bo_reserve_nolru(struct ttm_buffer_object *bo,
>  			  bool interruptible,
> -			  bool no_wait, bool use_sequence, uint32_t sequence)
> +			  bool no_wait, bool use_ticket,
> +			  struct ww_acquire_ctx *ticket)
>  {
>  	int ret;
>  
> @@ -223,17 +224,17 @@ int ttm_bo_reserve_nolru(struct ttm_buffer_object *bo,
>  		/**
>  		 * Deadlock avoidance for multi-bo reserving.
>  		 */
> -		if (use_sequence && bo->seq_valid) {
> +		if (use_ticket && bo->seq_valid) {
>  			/**
>  			 * We've already reserved this one.
>  			 */
> -			if (unlikely(sequence == bo->val_seq))
> +			if (unlikely(ticket->stamp == bo->val_seq))
>  				return -EDEADLK;
>  			/**
>  			 * Already reserved by a thread that will not back
>  			 * off for us. We need to back off.
>  			 */
> -			if (unlikely(sequence - bo->val_seq < (1 << 31)))
> +			if (unlikely(ticket->stamp - bo->val_seq <= LONG_MAX))
>  				return -EAGAIN;
>  		}
>  
> @@ -246,13 +247,14 @@ int ttm_bo_reserve_nolru(struct ttm_buffer_object *bo,
>  			return ret;
>  	}
>  
> -	if (use_sequence) {
> +	if (use_ticket) {
>  		bool wake_up = false;
> +
>  		/**
>  		 * Wake up waiters that may need to recheck for deadlock,
>  		 * if we decreased the sequence number.
>  		 */
> -		if (unlikely((bo->val_seq - sequence < (1 << 31))
> +		if (unlikely((bo->val_seq - ticket->stamp <= LONG_MAX)
>  			     || !bo->seq_valid))
>  			wake_up = true;
>  
> @@ -266,7 +268,7 @@ int ttm_bo_reserve_nolru(struct ttm_buffer_object *bo,
>  		 * written before val_seq was, and just means some slightly
>  		 * increased cpu usage
>  		 */
> -		bo->val_seq = sequence;
> +		bo->val_seq = ticket->stamp;
>  		bo->seq_valid = true;
>  		if (wake_up)
>  			wake_up_all(&bo->event_queue);
> @@ -292,14 +294,15 @@ void ttm_bo_list_ref_sub(struct ttm_buffer_object *bo, int count,
>  
>  int ttm_bo_reserve(struct ttm_buffer_object *bo,
>  		   bool interruptible,
> -		   bool no_wait, bool use_sequence, uint32_t sequence)
> +		   bool no_wait, bool use_ticket,
> +		   struct ww_acquire_ctx *ticket)
>  {
>  	struct ttm_bo_global *glob = bo->glob;
>  	int put_count = 0;
>  	int ret;
>  
> -	ret = ttm_bo_reserve_nolru(bo, interruptible, no_wait, use_sequence,
> -				   sequence);
> +	ret = ttm_bo_reserve_nolru(bo, interruptible, no_wait, use_ticket,
> +				    ticket);
>  	if (likely(ret == 0)) {
>  		spin_lock(&glob->lru_lock);
>  		put_count = ttm_bo_del_from_lru(bo);
> @@ -311,13 +314,14 @@ int ttm_bo_reserve(struct ttm_buffer_object *bo,
>  }
>  
>  int ttm_bo_reserve_slowpath_nolru(struct ttm_buffer_object *bo,
> -				  bool interruptible, uint32_t sequence)
> +				  bool interruptible,
> +				  struct ww_acquire_ctx *ticket)
>  {
>  	bool wake_up = false;
>  	int ret;
>  
>  	while (unlikely(atomic_xchg(&bo->reserved, 1) != 0)) {
> -		WARN_ON(bo->seq_valid && sequence == bo->val_seq);
> +		WARN_ON(bo->seq_valid && ticket->stamp == bo->val_seq);
>  
>  		ret = ttm_bo_wait_unreserved(bo, interruptible);
>  
> @@ -325,14 +329,14 @@ int ttm_bo_reserve_slowpath_nolru(struct ttm_buffer_object *bo,
>  			return ret;
>  	}
>  
> -	if ((bo->val_seq - sequence < (1 << 31)) || !bo->seq_valid)
> +	if (bo->val_seq - ticket->stamp < LONG_MAX || !bo->seq_valid)
>  		wake_up = true;
>  
>  	/**
>  	 * Wake up waiters that may need to recheck for deadlock,
>  	 * if we decreased the sequence number.
>  	 */
> -	bo->val_seq = sequence;
> +	bo->val_seq = ticket->stamp;
>  	bo->seq_valid = true;
>  	if (wake_up)
>  		wake_up_all(&bo->event_queue);
> @@ -341,12 +345,12 @@ int ttm_bo_reserve_slowpath_nolru(struct ttm_buffer_object *bo,
>  }
>  
>  int ttm_bo_reserve_slowpath(struct ttm_buffer_object *bo,
> -			    bool interruptible, uint32_t sequence)
> +			    bool interruptible, struct ww_acquire_ctx *ticket)
>  {
>  	struct ttm_bo_global *glob = bo->glob;
>  	int put_count, ret;
>  
> -	ret = ttm_bo_reserve_slowpath_nolru(bo, interruptible, sequence);
> +	ret = ttm_bo_reserve_slowpath_nolru(bo, interruptible, ticket);
>  	if (likely(!ret)) {
>  		spin_lock(&glob->lru_lock);
>  		put_count = ttm_bo_del_from_lru(bo);
> @@ -357,7 +361,7 @@ int ttm_bo_reserve_slowpath(struct ttm_buffer_object *bo,
>  }
>  EXPORT_SYMBOL(ttm_bo_reserve_slowpath);
>  
> -void ttm_bo_unreserve_locked(struct ttm_buffer_object *bo)
> +void ttm_bo_unreserve_ticket_locked(struct ttm_buffer_object *bo, struct ww_acquire_ctx *ticket)
>  {
>  	ttm_bo_add_to_lru(bo);
>  	atomic_set(&bo->reserved, 0);
> @@ -369,11 +373,21 @@ void ttm_bo_unreserve(struct ttm_buffer_object *bo)
>  	struct ttm_bo_global *glob = bo->glob;
>  
>  	spin_lock(&glob->lru_lock);
> -	ttm_bo_unreserve_locked(bo);
> +	ttm_bo_unreserve_ticket_locked(bo, NULL);
>  	spin_unlock(&glob->lru_lock);
>  }
>  EXPORT_SYMBOL(ttm_bo_unreserve);
>  
> +void ttm_bo_unreserve_ticket(struct ttm_buffer_object *bo, struct ww_acquire_ctx *ticket)
> +{
> +	struct ttm_bo_global *glob = bo->glob;
> +
> +	spin_lock(&glob->lru_lock);
> +	ttm_bo_unreserve_ticket_locked(bo, ticket);
> +	spin_unlock(&glob->lru_lock);
> +}
> +EXPORT_SYMBOL(ttm_bo_unreserve_ticket);
> +
>  /*
>   * Call bo->mutex locked.
>   */
> diff --git a/drivers/gpu/drm/ttm/ttm_execbuf_util.c b/drivers/gpu/drm/ttm/ttm_execbuf_util.c
> index 7b90def..efcb734 100644
> --- a/drivers/gpu/drm/ttm/ttm_execbuf_util.c
> +++ b/drivers/gpu/drm/ttm/ttm_execbuf_util.c
> @@ -32,7 +32,8 @@
>  #include <linux/sched.h>
>  #include <linux/module.h>
>  
> -static void ttm_eu_backoff_reservation_locked(struct list_head *list)
> +static void ttm_eu_backoff_reservation_locked(struct list_head *list,
> +					      struct ww_acquire_ctx *ticket)
>  {
>  	struct ttm_validate_buffer *entry;
>  
> @@ -41,14 +42,15 @@ static void ttm_eu_backoff_reservation_locked(struct list_head *list)
>  		if (!entry->reserved)
>  			continue;
>  
> +		entry->reserved = false;
>  		if (entry->removed) {
> -			ttm_bo_add_to_lru(bo);
> +			ttm_bo_unreserve_ticket_locked(bo, ticket);
>  			entry->removed = false;
>  
> +		} else {
> +			atomic_set(&bo->reserved, 0);
> +			wake_up_all(&bo->event_queue);
>  		}
> -		entry->reserved = false;
> -		atomic_set(&bo->reserved, 0);
> -		wake_up_all(&bo->event_queue);
>  	}
>  }
>  
> @@ -82,7 +84,8 @@ static void ttm_eu_list_ref_sub(struct list_head *list)
>  	}
>  }
>  
> -void ttm_eu_backoff_reservation(struct list_head *list)
> +void ttm_eu_backoff_reservation(struct ww_acquire_ctx *ticket,
> +				struct list_head *list)
>  {
>  	struct ttm_validate_buffer *entry;
>  	struct ttm_bo_global *glob;
> @@ -93,7 +96,8 @@ void ttm_eu_backoff_reservation(struct list_head *list)
>  	entry = list_first_entry(list, struct ttm_validate_buffer, head);
>  	glob = entry->bo->glob;
>  	spin_lock(&glob->lru_lock);
> -	ttm_eu_backoff_reservation_locked(list);
> +	ttm_eu_backoff_reservation_locked(list, ticket);
> +	ww_acquire_fini(ticket);
>  	spin_unlock(&glob->lru_lock);
>  }
>  EXPORT_SYMBOL(ttm_eu_backoff_reservation);
> @@ -110,12 +114,12 @@ EXPORT_SYMBOL(ttm_eu_backoff_reservation);
>   * buffers in different orders.
>   */
>  
> -int ttm_eu_reserve_buffers(struct list_head *list)
> +int ttm_eu_reserve_buffers(struct ww_acquire_ctx *ticket,
> +			   struct list_head *list)
>  {
>  	struct ttm_bo_global *glob;
>  	struct ttm_validate_buffer *entry;
>  	int ret;
> -	uint32_t val_seq;
>  
>  	if (list_empty(list))
>  		return 0;
> @@ -129,8 +133,8 @@ int ttm_eu_reserve_buffers(struct list_head *list)
>  	entry = list_first_entry(list, struct ttm_validate_buffer, head);
>  	glob = entry->bo->glob;
>  
> +	ww_acquire_init(ticket, &reservation_ww_class);
>  	spin_lock(&glob->lru_lock);
> -	val_seq = entry->bo->bdev->val_seq++;
>  
>  retry:
>  	list_for_each_entry(entry, list, head) {
> @@ -140,7 +144,7 @@ retry:
>  		if (entry->reserved)
>  			continue;
>  
> -		ret = ttm_bo_reserve_nolru(bo, true, true, true, val_seq);
> +		ret = ttm_bo_reserve_nolru(bo, true, true, true, ticket);
>  		switch (ret) {
>  		case 0:
>  			break;
> @@ -148,8 +152,9 @@ retry:
>  			ttm_eu_del_from_lru_locked(list);
>  			spin_unlock(&glob->lru_lock);
>  			ret = ttm_bo_reserve_nolru(bo, true, false,
> -						   true, val_seq);
> +						   true, ticket);
>  			spin_lock(&glob->lru_lock);
> +
>  			if (!ret)
>  				break;
>  
> @@ -158,21 +163,13 @@ retry:
>  
>  			/* fallthrough */
>  		case -EAGAIN:
> -			ttm_eu_backoff_reservation_locked(list);
> -
> -			/*
> -			 * temporarily increase sequence number every retry,
> -			 * to prevent us from seeing our old reservation
> -			 * sequence when someone else reserved the buffer,
> -			 * but hasn't updated the seq_valid/seqno members yet.
> -			 */
> -			val_seq = entry->bo->bdev->val_seq++;
> -
> +			ttm_eu_backoff_reservation_locked(list, ticket);
>  			spin_unlock(&glob->lru_lock);
>  			ttm_eu_list_ref_sub(list);
> -			ret = ttm_bo_reserve_slowpath_nolru(bo, true, val_seq);
> +			ret = ttm_bo_reserve_slowpath_nolru(bo, true, ticket);
>  			if (unlikely(ret != 0))
> -				return ret;
> +				goto err_fini;
> +
>  			spin_lock(&glob->lru_lock);
>  			entry->reserved = true;
>  			if (unlikely(atomic_read(&bo->cpu_writers) > 0)) {
> @@ -191,21 +188,25 @@ retry:
>  		}
>  	}
>  
> +	ww_acquire_done(ticket);
>  	ttm_eu_del_from_lru_locked(list);
>  	spin_unlock(&glob->lru_lock);
>  	ttm_eu_list_ref_sub(list);
> -
>  	return 0;
>  
>  err:
> -	ttm_eu_backoff_reservation_locked(list);
> +	ttm_eu_backoff_reservation_locked(list, ticket);
>  	spin_unlock(&glob->lru_lock);
>  	ttm_eu_list_ref_sub(list);
> +err_fini:
> +	ww_acquire_done(ticket);
> +	ww_acquire_fini(ticket);
>  	return ret;
>  }
>  EXPORT_SYMBOL(ttm_eu_reserve_buffers);
>  
> -void ttm_eu_fence_buffer_objects(struct list_head *list, void *sync_obj)
> +void ttm_eu_fence_buffer_objects(struct ww_acquire_ctx *ticket,
> +				 struct list_head *list, void *sync_obj)
>  {
>  	struct ttm_validate_buffer *entry;
>  	struct ttm_buffer_object *bo;
> @@ -228,11 +229,12 @@ void ttm_eu_fence_buffer_objects(struct list_head *list, void *sync_obj)
>  		bo = entry->bo;
>  		entry->old_sync_obj = bo->sync_obj;
>  		bo->sync_obj = driver->sync_obj_ref(sync_obj);
> -		ttm_bo_unreserve_locked(bo);
> +		ttm_bo_unreserve_ticket_locked(bo, ticket);
>  		entry->reserved = false;
>  	}
>  	spin_unlock(&bdev->fence_lock);
>  	spin_unlock(&glob->lru_lock);
> +	ww_acquire_fini(ticket);
>  
>  	list_for_each_entry(entry, list, head) {
>  		if (entry->old_sync_obj)
> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
> index 394e647..599f646 100644
> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
> @@ -1432,6 +1432,7 @@ int vmw_execbuf_process(struct drm_file *file_priv,
>  	struct vmw_fence_obj *fence = NULL;
>  	struct vmw_resource *error_resource;
>  	struct list_head resource_list;
> +	struct ww_acquire_ctx ticket;
>  	uint32_t handle;
>  	void *cmd;
>  	int ret;
> @@ -1488,7 +1489,7 @@ int vmw_execbuf_process(struct drm_file *file_priv,
>  	if (unlikely(ret != 0))
>  		goto out_err;
>  
> -	ret = ttm_eu_reserve_buffers(&sw_context->validate_nodes);
> +	ret = ttm_eu_reserve_buffers(&ticket, &sw_context->validate_nodes);
>  	if (unlikely(ret != 0))
>  		goto out_err;
>  
> @@ -1537,7 +1538,7 @@ int vmw_execbuf_process(struct drm_file *file_priv,
>  		DRM_ERROR("Fence submission error. Syncing.\n");
>  
>  	vmw_resource_list_unreserve(&sw_context->resource_list, false);
> -	ttm_eu_fence_buffer_objects(&sw_context->validate_nodes,
> +	ttm_eu_fence_buffer_objects(&ticket, &sw_context->validate_nodes,
>  				    (void *) fence);
>  
>  	if (unlikely(dev_priv->pinned_bo != NULL &&
> @@ -1570,7 +1571,7 @@ int vmw_execbuf_process(struct drm_file *file_priv,
>  out_err:
>  	vmw_resource_relocations_free(&sw_context->res_relocations);
>  	vmw_free_relocations(sw_context);
> -	ttm_eu_backoff_reservation(&sw_context->validate_nodes);
> +	ttm_eu_backoff_reservation(&ticket, &sw_context->validate_nodes);
>  	vmw_resource_list_unreserve(&sw_context->resource_list, true);
>  	vmw_clear_validations(sw_context);
>  	if (unlikely(dev_priv->pinned_bo != NULL &&
> @@ -1644,6 +1645,7 @@ void __vmw_execbuf_release_pinned_bo(struct vmw_private *dev_priv,
>  	struct list_head validate_list;
>  	struct ttm_validate_buffer pinned_val, query_val;
>  	struct vmw_fence_obj *lfence = NULL;
> +	struct ww_acquire_ctx ticket;
>  
>  	if (dev_priv->pinned_bo == NULL)
>  		goto out_unlock;
> @@ -1657,7 +1659,7 @@ void __vmw_execbuf_release_pinned_bo(struct vmw_private *dev_priv,
>  	list_add_tail(&query_val.head, &validate_list);
>  
>  	do {
> -		ret = ttm_eu_reserve_buffers(&validate_list);
> +		ret = ttm_eu_reserve_buffers(&ticket, &validate_list);
>  	} while (ret == -ERESTARTSYS);
>  
>  	if (unlikely(ret != 0)) {
> @@ -1684,7 +1686,7 @@ void __vmw_execbuf_release_pinned_bo(struct vmw_private *dev_priv,
>  						  NULL);
>  		fence = lfence;
>  	}
> -	ttm_eu_fence_buffer_objects(&validate_list, (void *) fence);
> +	ttm_eu_fence_buffer_objects(&ticket, &validate_list, (void *) fence);
>  	if (lfence != NULL)
>  		vmw_fence_obj_unreference(&lfence);
>  
> @@ -1696,7 +1698,7 @@ out_unlock:
>  	return;
>  
>  out_no_emit:
> -	ttm_eu_backoff_reservation(&validate_list);
> +	ttm_eu_backoff_reservation(&ticket, &validate_list);
>  out_no_reserve:
>  	ttm_bo_unref(&query_val.bo);
>  	ttm_bo_unref(&pinned_val.bo);
> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
> index bc78425..ced7946 100644
> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
> @@ -990,9 +990,11 @@ void vmw_resource_unreserve(struct vmw_resource *res,
>   * @val_buf:        On successful return contains data about the
>   *                  reserved and validated backup buffer.
>   */
> -int vmw_resource_check_buffer(struct vmw_resource *res,
> -			      bool interruptible,
> -			      struct ttm_validate_buffer *val_buf)
> +static int
> +vmw_resource_check_buffer(struct vmw_resource *res,
> +			  struct ww_acquire_ctx *ticket,
> +			  bool interruptible,
> +			  struct ttm_validate_buffer *val_buf)
>  {
>  	struct list_head val_list;
>  	bool backup_dirty = false;
> @@ -1007,7 +1009,7 @@ int vmw_resource_check_buffer(struct vmw_resource *res,
>  	INIT_LIST_HEAD(&val_list);
>  	val_buf->bo = ttm_bo_reference(&res->backup->base);
>  	list_add_tail(&val_buf->head, &val_list);
> -	ret = ttm_eu_reserve_buffers(&val_list);
> +	ret = ttm_eu_reserve_buffers(ticket, &val_list);
>  	if (unlikely(ret != 0))
>  		goto out_no_reserve;
>  
> @@ -1025,7 +1027,7 @@ int vmw_resource_check_buffer(struct vmw_resource *res,
>  	return 0;
>  
>  out_no_validate:
> -	ttm_eu_backoff_reservation(&val_list);
> +	ttm_eu_backoff_reservation(ticket, &val_list);
>  out_no_reserve:
>  	ttm_bo_unref(&val_buf->bo);
>  	if (backup_dirty)
> @@ -1069,7 +1071,9 @@ int vmw_resource_reserve(struct vmw_resource *res, bool no_backup)
>   *.
>   * @val_buf:        Backup buffer information.
>   */
> -void vmw_resource_backoff_reservation(struct ttm_validate_buffer *val_buf)
> +static void
> +vmw_resource_backoff_reservation(struct ww_acquire_ctx *ticket,
> +				 struct ttm_validate_buffer *val_buf)
>  {
>  	struct list_head val_list;
>  
> @@ -1078,7 +1082,7 @@ void vmw_resource_backoff_reservation(struct ttm_validate_buffer *val_buf)
>  
>  	INIT_LIST_HEAD(&val_list);
>  	list_add_tail(&val_buf->head, &val_list);
> -	ttm_eu_backoff_reservation(&val_list);
> +	ttm_eu_backoff_reservation(ticket, &val_list);
>  	ttm_bo_unref(&val_buf->bo);
>  }
>  
> @@ -1092,12 +1096,13 @@ int vmw_resource_do_evict(struct vmw_resource *res)
>  {
>  	struct ttm_validate_buffer val_buf;
>  	const struct vmw_res_func *func = res->func;
> +	struct ww_acquire_ctx ticket;
>  	int ret;
>  
>  	BUG_ON(!func->may_evict);
>  
>  	val_buf.bo = NULL;
> -	ret = vmw_resource_check_buffer(res, true, &val_buf);
> +	ret = vmw_resource_check_buffer(res, &ticket, true, &val_buf);
>  	if (unlikely(ret != 0))
>  		return ret;
>  
> @@ -1112,7 +1117,7 @@ int vmw_resource_do_evict(struct vmw_resource *res)
>  	res->backup_dirty = true;
>  	res->res_dirty = false;
>  out_no_unbind:
> -	vmw_resource_backoff_reservation(&val_buf);
> +	vmw_resource_backoff_reservation(&ticket, &val_buf);
>  
>  	return ret;
>  }
> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> index 3cb5d84..0a992b0 100644
> --- a/include/drm/ttm/ttm_bo_api.h
> +++ b/include/drm/ttm/ttm_bo_api.h
> @@ -234,7 +234,7 @@ struct ttm_buffer_object {
>  	struct list_head ddestroy;
>  	struct list_head swap;
>  	struct list_head io_reserve_lru;
> -	uint32_t val_seq;
> +	unsigned long val_seq;
>  	bool seq_valid;
>  
>  	/**
> diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h
> index 9c8dca7..ec18c5f 100644
> --- a/include/drm/ttm/ttm_bo_driver.h
> +++ b/include/drm/ttm/ttm_bo_driver.h
> @@ -38,6 +38,7 @@
>  #include <linux/workqueue.h>
>  #include <linux/fs.h>
>  #include <linux/spinlock.h>
> +#include <linux/reservation.h>
>  
>  struct ttm_backend_func {
>  	/**
> @@ -778,7 +779,7 @@ extern void ttm_mem_io_unlock(struct ttm_mem_type_manager *man);
>   * @bo: A pointer to a struct ttm_buffer_object.
>   * @interruptible: Sleep interruptible if waiting.
>   * @no_wait: Don't sleep while trying to reserve, rather return -EBUSY.
> - * @use_sequence: If @bo is already reserved, Only sleep waiting for
> + * @use_ticket: If @bo is already reserved, Only sleep waiting for
>   * it to become unreserved if @sequence < (@bo)->sequence.
>   *
>   * Locks a buffer object for validation. (Or prevents other processes from
> @@ -819,7 +820,8 @@ extern void ttm_mem_io_unlock(struct ttm_mem_type_manager *man);
>   */
>  extern int ttm_bo_reserve(struct ttm_buffer_object *bo,
>  			  bool interruptible,
> -			  bool no_wait, bool use_sequence, uint32_t sequence);
> +			  bool no_wait, bool use_ticket,
> +			  struct ww_acquire_ctx *ticket);
>  
>  /**
>   * ttm_bo_reserve_slowpath_nolru:
> @@ -836,7 +838,7 @@ extern int ttm_bo_reserve(struct ttm_buffer_object *bo,
>   */
>  extern int ttm_bo_reserve_slowpath_nolru(struct ttm_buffer_object *bo,
>  					 bool interruptible,
> -					 uint32_t sequence);
> +					 struct ww_acquire_ctx *ticket);
>  
>  
>  /**
> @@ -850,7 +852,8 @@ extern int ttm_bo_reserve_slowpath_nolru(struct ttm_buffer_object *bo,
>   * held by us, this function cannot deadlock any more.
>   */
>  extern int ttm_bo_reserve_slowpath(struct ttm_buffer_object *bo,
> -				   bool interruptible, uint32_t sequence);
> +				   bool interruptible,
> +				   struct ww_acquire_ctx *ticket);
>  
>  /**
>   * ttm_bo_reserve_nolru:
> @@ -876,8 +879,8 @@ extern int ttm_bo_reserve_slowpath(struct ttm_buffer_object *bo,
>   */
>  extern int ttm_bo_reserve_nolru(struct ttm_buffer_object *bo,
>  				 bool interruptible,
> -				 bool no_wait, bool use_sequence,
> -				 uint32_t sequence);
> +				 bool no_wait, bool use_ticket,
> +				 struct ww_acquire_ctx *ticket);
>  
>  /**
>   * ttm_bo_unreserve
> @@ -889,14 +892,25 @@ extern int ttm_bo_reserve_nolru(struct ttm_buffer_object *bo,
>  extern void ttm_bo_unreserve(struct ttm_buffer_object *bo);
>  
>  /**
> - * ttm_bo_unreserve_locked
> + * ttm_bo_unreserve_ticket
> + * @bo: A pointer to a struct ttm_buffer_object.
> + * @ticket: ww_acquire_ctx used for reserving
>   *
> + * Unreserve a previous reservation of @bo made with @ticket.
> + */
> +extern void ttm_bo_unreserve_ticket(struct ttm_buffer_object *bo,
> +				    struct ww_acquire_ctx *ticket);
> +
> +/**
> + * ttm_bo_unreserve_locked
>   * @bo: A pointer to a struct ttm_buffer_object.
> + * @ticket: ww_acquire_ctx used for reserving, or NULL
>   *
> - * Unreserve a previous reservation of @bo.
> + * Unreserve a previous reservation of @bo made with @ticket.
>   * Needs to be called with struct ttm_bo_global::lru_lock held.
>   */
> -extern void ttm_bo_unreserve_locked(struct ttm_buffer_object *bo);
> +extern void ttm_bo_unreserve_ticket_locked(struct ttm_buffer_object *bo,
> +					   struct ww_acquire_ctx *ticket);
>  
>  /*
>   * ttm_bo_util.c
> diff --git a/include/drm/ttm/ttm_execbuf_util.h b/include/drm/ttm/ttm_execbuf_util.h
> index 547e19f..ba71ef9 100644
> --- a/include/drm/ttm/ttm_execbuf_util.h
> +++ b/include/drm/ttm/ttm_execbuf_util.h
> @@ -33,6 +33,7 @@
>  
>  #include <ttm/ttm_bo_api.h>
>  #include <linux/list.h>
> +#include <linux/reservation.h>
>  
>  /**
>   * struct ttm_validate_buffer
> @@ -57,17 +58,20 @@ struct ttm_validate_buffer {
>  /**
>   * function ttm_eu_backoff_reservation
>   *
> + * @ticket:   ww_acquire_ctx from reserve call
>   * @list:     thread private list of ttm_validate_buffer structs.
>   *
>   * Undoes all buffer validation reservations for bos pointed to by
>   * the list entries.
>   */
>  
> -extern void ttm_eu_backoff_reservation(struct list_head *list);
> +extern void ttm_eu_backoff_reservation(struct ww_acquire_ctx *ticket,
> +				       struct list_head *list);
>  
>  /**
>   * function ttm_eu_reserve_buffers
>   *
> + * @ticket:  [out] ww_acquire_ctx returned by call.
>   * @list:    thread private list of ttm_validate_buffer structs.
>   *
>   * Tries to reserve bos pointed to by the list entries for validation.
> @@ -90,11 +94,13 @@ extern void ttm_eu_backoff_reservation(struct list_head *list);
>   * has failed.
>   */
>  
> -extern int ttm_eu_reserve_buffers(struct list_head *list);
> +extern int ttm_eu_reserve_buffers(struct ww_acquire_ctx *ticket,
> +				  struct list_head *list);
>  
>  /**
>   * function ttm_eu_fence_buffer_objects.
>   *
> + * @ticket:      ww_acquire_ctx from reserve call
>   * @list:        thread private list of ttm_validate_buffer structs.
>   * @sync_obj:    The new sync object for the buffers.
>   *
> @@ -104,6 +110,7 @@ extern int ttm_eu_reserve_buffers(struct list_head *list);
>   *
>   */
>  
> -extern void ttm_eu_fence_buffer_objects(struct list_head *list, void *sync_obj);
> +extern void ttm_eu_fence_buffer_objects(struct ww_acquire_ctx *ticket,
> +					struct list_head *list, void *sync_obj);
>  
>  #endif
> -- 
> 1.8.3.1
> 
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH WW 04/13] drm/ttm: convert to the reservation api
  2013-06-27 11:48 ` [PATCH WW 04/13] drm/ttm: convert to the reservation api Maarten Lankhorst
@ 2013-06-27 22:03   ` Jerome Glisse
  0 siblings, 0 replies; 21+ messages in thread
From: Jerome Glisse @ 2013-06-27 22:03 UTC (permalink / raw)
  To: Maarten Lankhorst; +Cc: Dave Airlie, dri-devel

On Thu, Jun 27, 2013 at 01:48:19PM +0200, Maarten Lankhorst wrote:
> Now that the code is compatible in semantics, flip the switch.
> Use ww_mutex instead of the homegrown implementation.
> 
> ww_mutex uses -EDEADLK to signal that the caller has to back off,
> and -EALREADY to indicate this buffer is already held by the caller.
> 
> ttm used -EAGAIN and -EDEADLK for those, respectively. So some changes
> were needed to handle this correctly.
> 
> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>

This one changes the radeon cs kernel API. We will now return -EALREADY
if userspace tries to send a cs with the same bo twice on the list. That
is a bug in itself, but a bug that might have been abused by some to
force-trigger a gpu reset (even if I am convinced that I sent a patch at
one point to avoid that). I am not against the change, just thought I
should point it out.

Reviewed-by: Jerome Glisse <jglisse@redhat.com>

> ---
>  drivers/gpu/drm/nouveau/nouveau_gem.c  |   2 +-
>  drivers/gpu/drm/qxl/qxl_object.h       |   5 -
>  drivers/gpu/drm/ttm/ttm_bo.c           | 190 +++++++++------------------------
>  drivers/gpu/drm/ttm/ttm_bo_util.c      |   6 +-
>  drivers/gpu/drm/ttm/ttm_execbuf_util.c |  43 +++-----
>  include/drm/ttm/ttm_bo_api.h           |  25 ++---
>  include/drm/ttm/ttm_execbuf_util.h     |   1 -
>  7 files changed, 79 insertions(+), 193 deletions(-)
> 
> diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
> index e35d468..2b2077d 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_gem.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
> @@ -368,7 +368,7 @@ retry:
>  		ret = ttm_bo_reserve(&nvbo->bo, true, false, true, &op->ticket);
>  		if (ret) {
>  			validate_fini_no_ticket(op, NULL);
> -			if (unlikely(ret == -EAGAIN)) {
> +			if (unlikely(ret == -EDEADLK)) {
>  				ret = ttm_bo_reserve_slowpath(&nvbo->bo, true,
>  							      &op->ticket);
>  				if (!ret)
> diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
> index b4fd89f..ee7ad79 100644
> --- a/drivers/gpu/drm/qxl/qxl_object.h
> +++ b/drivers/gpu/drm/qxl/qxl_object.h
> @@ -57,11 +57,6 @@ static inline unsigned long qxl_bo_size(struct qxl_bo *bo)
>  	return bo->tbo.num_pages << PAGE_SHIFT;
>  }
>  
> -static inline bool qxl_bo_is_reserved(struct qxl_bo *bo)
> -{
> -	return !!atomic_read(&bo->tbo.reserved);
> -}
> -
>  static inline u64 qxl_bo_mmap_offset(struct qxl_bo *bo)
>  {
>  	return bo->tbo.addr_space_offset;
> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
> index b912375..5f9fe80 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
> @@ -150,6 +150,9 @@ static void ttm_bo_release_list(struct kref *list_kref)
>  	if (bo->ttm)
>  		ttm_tt_destroy(bo->ttm);
>  	atomic_dec(&bo->glob->bo_count);
> +	if (bo->resv == &bo->ttm_resv)
> +		reservation_object_fini(&bo->ttm_resv);
> +
>  	if (bo->destroy)
>  		bo->destroy(bo);
>  	else {
> @@ -158,18 +161,6 @@ static void ttm_bo_release_list(struct kref *list_kref)
>  	ttm_mem_global_free(bdev->glob->mem_glob, acc_size);
>  }
>  
> -static int ttm_bo_wait_unreserved(struct ttm_buffer_object *bo,
> -				  bool interruptible)
> -{
> -	if (interruptible) {
> -		return wait_event_interruptible(bo->event_queue,
> -					       !ttm_bo_is_reserved(bo));
> -	} else {
> -		wait_event(bo->event_queue, !ttm_bo_is_reserved(bo));
> -		return 0;
> -	}
> -}
> -
>  void ttm_bo_add_to_lru(struct ttm_buffer_object *bo)
>  {
>  	struct ttm_bo_device *bdev = bo->bdev;
> @@ -218,65 +209,27 @@ int ttm_bo_reserve_nolru(struct ttm_buffer_object *bo,
>  			  bool no_wait, bool use_ticket,
>  			  struct ww_acquire_ctx *ticket)
>  {
> -	int ret;
> +	int ret = 0;
>  
> -	while (unlikely(atomic_xchg(&bo->reserved, 1) != 0)) {
> -		/**
> -		 * Deadlock avoidance for multi-bo reserving.
> -		 */
> -		if (use_ticket && bo->seq_valid) {
> -			/**
> -			 * We've already reserved this one.
> -			 */
> -			if (unlikely(ticket->stamp == bo->val_seq))
> -				return -EDEADLK;
> -			/**
> -			 * Already reserved by a thread that will not back
> -			 * off for us. We need to back off.
> -			 */
> -			if (unlikely(ticket->stamp - bo->val_seq <= LONG_MAX))
> -				return -EAGAIN;
> -		}
> +	if (no_wait) {
> +		bool success;
>  
> -		if (no_wait)
> +		/* not valid any more, fix your locking! */
> +		if (WARN_ON(ticket))
>  			return -EBUSY;
>  
> -		ret = ttm_bo_wait_unreserved(bo, interruptible);
> -
> -		if (unlikely(ret))
> -			return ret;
> -	}
> -
> -	if (use_ticket) {
> -		bool wake_up = false;
> -
> -		/**
> -		 * Wake up waiters that may need to recheck for deadlock,
> -		 * if we decreased the sequence number.
> -		 */
> -		if (unlikely((bo->val_seq - ticket->stamp <= LONG_MAX)
> -			     || !bo->seq_valid))
> -			wake_up = true;
> -
> -		/*
> -		 * In the worst case with memory ordering these values can be
> -		 * seen in the wrong order. However since we call wake_up_all
> -		 * in that case, this will hopefully not pose a problem,
> -		 * and the worst case would only cause someone to accidentally
> -		 * hit -EAGAIN in ttm_bo_reserve when they see old value of
> -		 * val_seq. However this would only happen if seq_valid was
> -		 * written before val_seq was, and just means some slightly
> -		 * increased cpu usage
> -		 */
> -		bo->val_seq = ticket->stamp;
> -		bo->seq_valid = true;
> -		if (wake_up)
> -			wake_up_all(&bo->event_queue);
> -	} else {
> -		bo->seq_valid = false;
> +		success = ww_mutex_trylock(&bo->resv->lock);
> +		return success ? 0 : -EBUSY;
>  	}
>  
> -	return 0;
> +	if (interruptible)
> +		ret = ww_mutex_lock_interruptible(&bo->resv->lock,
> +						  ticket);
> +	else
> +		ret = ww_mutex_lock(&bo->resv->lock, ticket);
> +	if (ret == -EINTR)
> +		return -ERESTARTSYS;
> +	return ret;
>  }
>  EXPORT_SYMBOL(ttm_bo_reserve);
>  
> @@ -313,50 +266,27 @@ int ttm_bo_reserve(struct ttm_buffer_object *bo,
>  	return ret;
>  }
>  
> -int ttm_bo_reserve_slowpath_nolru(struct ttm_buffer_object *bo,
> -				  bool interruptible,
> -				  struct ww_acquire_ctx *ticket)
> -{
> -	bool wake_up = false;
> -	int ret;
> -
> -	while (unlikely(atomic_xchg(&bo->reserved, 1) != 0)) {
> -		WARN_ON(bo->seq_valid && ticket->stamp == bo->val_seq);
> -
> -		ret = ttm_bo_wait_unreserved(bo, interruptible);
> -
> -		if (unlikely(ret))
> -			return ret;
> -	}
> -
> -	if (bo->val_seq - ticket->stamp < LONG_MAX || !bo->seq_valid)
> -		wake_up = true;
> -
> -	/**
> -	 * Wake up waiters that may need to recheck for deadlock,
> -	 * if we decreased the sequence number.
> -	 */
> -	bo->val_seq = ticket->stamp;
> -	bo->seq_valid = true;
> -	if (wake_up)
> -		wake_up_all(&bo->event_queue);
> -
> -	return 0;
> -}
> -
>  int ttm_bo_reserve_slowpath(struct ttm_buffer_object *bo,
>  			    bool interruptible, struct ww_acquire_ctx *ticket)
>  {
>  	struct ttm_bo_global *glob = bo->glob;
> -	int put_count, ret;
> +	int put_count = 0;
> +	int ret = 0;
> +
> +	if (interruptible)
> +		ret = ww_mutex_lock_slow_interruptible(&bo->resv->lock,
> +						       ticket);
> +	else
> +		ww_mutex_lock_slow(&bo->resv->lock, ticket);
>  
> -	ret = ttm_bo_reserve_slowpath_nolru(bo, interruptible, ticket);
> -	if (likely(!ret)) {
> +	if (likely(ret == 0)) {
>  		spin_lock(&glob->lru_lock);
>  		put_count = ttm_bo_del_from_lru(bo);
>  		spin_unlock(&glob->lru_lock);
>  		ttm_bo_list_ref_sub(bo, put_count, true);
> -	}
> +	} else if (ret == -EINTR)
> +		ret = -ERESTARTSYS;
> +
>  	return ret;
>  }
>  EXPORT_SYMBOL(ttm_bo_reserve_slowpath);
> @@ -364,8 +294,7 @@ EXPORT_SYMBOL(ttm_bo_reserve_slowpath);
>  void ttm_bo_unreserve_ticket_locked(struct ttm_buffer_object *bo, struct ww_acquire_ctx *ticket)
>  {
>  	ttm_bo_add_to_lru(bo);
> -	atomic_set(&bo->reserved, 0);
> -	wake_up_all(&bo->event_queue);
> +	ww_mutex_unlock(&bo->resv->lock);
>  }
>  
>  void ttm_bo_unreserve(struct ttm_buffer_object *bo)
> @@ -558,17 +487,7 @@ static void ttm_bo_cleanup_memtype_use(struct ttm_buffer_object *bo)
>  	}
>  	ttm_bo_mem_put(bo, &bo->mem);
>  
> -	atomic_set(&bo->reserved, 0);
> -	wake_up_all(&bo->event_queue);
> -
> -	/*
> -	 * Since the final reference to this bo may not be dropped by
> -	 * the current task we have to put a memory barrier here to make
> -	 * sure the changes done in this function are always visible.
> -	 *
> -	 * This function only needs protection against the final kref_put.
> -	 */
> -	smp_mb__before_atomic_dec();
> +	ww_mutex_unlock (&bo->resv->lock);
>  }
>  
>  static void ttm_bo_cleanup_refs_or_queue(struct ttm_buffer_object *bo)
> @@ -600,10 +519,8 @@ static void ttm_bo_cleanup_refs_or_queue(struct ttm_buffer_object *bo)
>  		sync_obj = driver->sync_obj_ref(bo->sync_obj);
>  	spin_unlock(&bdev->fence_lock);
>  
> -	if (!ret) {
> -		atomic_set(&bo->reserved, 0);
> -		wake_up_all(&bo->event_queue);
> -	}
> +	if (!ret)
> +		ww_mutex_unlock(&bo->resv->lock);
>  
>  	kref_get(&bo->list_kref);
>  	list_add_tail(&bo->ddestroy, &bdev->ddestroy);
> @@ -653,8 +570,7 @@ static int ttm_bo_cleanup_refs_and_unlock(struct ttm_buffer_object *bo,
>  		sync_obj = driver->sync_obj_ref(bo->sync_obj);
>  		spin_unlock(&bdev->fence_lock);
>  
> -		atomic_set(&bo->reserved, 0);
> -		wake_up_all(&bo->event_queue);
> +		ww_mutex_unlock(&bo->resv->lock);
>  		spin_unlock(&glob->lru_lock);
>  
>  		ret = driver->sync_obj_wait(sync_obj, false, interruptible);
> @@ -692,8 +608,7 @@ static int ttm_bo_cleanup_refs_and_unlock(struct ttm_buffer_object *bo,
>  		spin_unlock(&bdev->fence_lock);
>  
>  	if (ret || unlikely(list_empty(&bo->ddestroy))) {
> -		atomic_set(&bo->reserved, 0);
> -		wake_up_all(&bo->event_queue);
> +		ww_mutex_unlock(&bo->resv->lock);
>  		spin_unlock(&glob->lru_lock);
>  		return ret;
>  	}
> @@ -1253,6 +1168,7 @@ int ttm_bo_init(struct ttm_bo_device *bdev,
>  	int ret = 0;
>  	unsigned long num_pages;
>  	struct ttm_mem_global *mem_glob = bdev->glob->mem_glob;
> +	bool locked;
>  
>  	ret = ttm_mem_global_alloc(mem_glob, acc_size, false, false);
>  	if (ret) {
> @@ -1279,8 +1195,6 @@ int ttm_bo_init(struct ttm_bo_device *bdev,
>  	kref_init(&bo->kref);
>  	kref_init(&bo->list_kref);
>  	atomic_set(&bo->cpu_writers, 0);
> -	atomic_set(&bo->reserved, 1);
> -	init_waitqueue_head(&bo->event_queue);
>  	INIT_LIST_HEAD(&bo->lru);
>  	INIT_LIST_HEAD(&bo->ddestroy);
>  	INIT_LIST_HEAD(&bo->swap);
> @@ -1298,37 +1212,34 @@ int ttm_bo_init(struct ttm_bo_device *bdev,
>  	bo->mem.bus.io_reserved_count = 0;
>  	bo->priv_flags = 0;
>  	bo->mem.placement = (TTM_PL_FLAG_SYSTEM | TTM_PL_FLAG_CACHED);
> -	bo->seq_valid = false;
>  	bo->persistent_swap_storage = persistent_swap_storage;
>  	bo->acc_size = acc_size;
>  	bo->sg = sg;
> +	bo->resv = &bo->ttm_resv;
> +	reservation_object_init(bo->resv);
>  	atomic_inc(&bo->glob->bo_count);
>  
>  	ret = ttm_bo_check_placement(bo, placement);
> -	if (unlikely(ret != 0))
> -		goto out_err;
>  
>  	/*
>  	 * For ttm_bo_type_device buffers, allocate
>  	 * address space from the device.
>  	 */
> -	if (bo->type == ttm_bo_type_device ||
> -	    bo->type == ttm_bo_type_sg) {
> +	if (likely(!ret) &&
> +	    (bo->type == ttm_bo_type_device ||
> +	     bo->type == ttm_bo_type_sg))
>  		ret = ttm_bo_setup_vm(bo);
> -		if (ret)
> -			goto out_err;
> -	}
>  
> -	ret = ttm_bo_validate(bo, placement, interruptible, false);
> -	if (ret)
> -		goto out_err;
> +	locked = ww_mutex_trylock(&bo->resv->lock);
> +	WARN_ON(!locked);
>  
> -	ttm_bo_unreserve(bo);
> -	return 0;
> +	if (likely(!ret))
> +		ret = ttm_bo_validate(bo, placement, interruptible, false);
>  
> -out_err:
>  	ttm_bo_unreserve(bo);
> -	ttm_bo_unref(&bo);
> +
> +	if (unlikely(ret))
> +		ttm_bo_unref(&bo);
>  
>  	return ret;
>  }
> @@ -1941,8 +1852,7 @@ out:
>  	 * already swapped buffer.
>  	 */
>  
> -	atomic_set(&bo->reserved, 0);
> -	wake_up_all(&bo->event_queue);
> +	ww_mutex_unlock(&bo->resv->lock);
>  	kref_put(&bo->list_kref, ttm_bo_release_list);
>  	return ret;
>  }
> diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
> index af89458..319cf41 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
> @@ -433,6 +433,7 @@ static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo,
>  	struct ttm_buffer_object *fbo;
>  	struct ttm_bo_device *bdev = bo->bdev;
>  	struct ttm_bo_driver *driver = bdev->driver;
> +	int ret;
>  
>  	fbo = kmalloc(sizeof(*fbo), GFP_KERNEL);
>  	if (!fbo)
> @@ -445,7 +446,6 @@ static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo,
>  	 * TODO: Explicit member copy would probably be better here.
>  	 */
>  
> -	init_waitqueue_head(&fbo->event_queue);
>  	INIT_LIST_HEAD(&fbo->ddestroy);
>  	INIT_LIST_HEAD(&fbo->lru);
>  	INIT_LIST_HEAD(&fbo->swap);
> @@ -463,6 +463,10 @@ static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo,
>  	kref_init(&fbo->kref);
>  	fbo->destroy = &ttm_transfered_destroy;
>  	fbo->acc_size = 0;
> +	fbo->resv = &fbo->ttm_resv;
> +	reservation_object_init(fbo->resv);
> +	ret = ww_mutex_trylock(&fbo->resv->lock);
> +	WARN_ON(!ret);
>  
>  	*new_obj = fbo;
>  	return 0;
> diff --git a/drivers/gpu/drm/ttm/ttm_execbuf_util.c b/drivers/gpu/drm/ttm/ttm_execbuf_util.c
> index efcb734..7392da5 100644
> --- a/drivers/gpu/drm/ttm/ttm_execbuf_util.c
> +++ b/drivers/gpu/drm/ttm/ttm_execbuf_util.c
> @@ -48,8 +48,7 @@ static void ttm_eu_backoff_reservation_locked(struct list_head *list,
>  			entry->removed = false;
>  
>  		} else {
> -			atomic_set(&bo->reserved, 0);
> -			wake_up_all(&bo->event_queue);
> +			ww_mutex_unlock(&bo->resv->lock);
>  		}
>  	}
>  }
> @@ -134,8 +133,6 @@ int ttm_eu_reserve_buffers(struct ww_acquire_ctx *ticket,
>  	glob = entry->bo->glob;
>  
>  	ww_acquire_init(ticket, &reservation_ww_class);
> -	spin_lock(&glob->lru_lock);
> -
>  retry:
>  	list_for_each_entry(entry, list, head) {
>  		struct ttm_buffer_object *bo = entry->bo;
> @@ -144,42 +141,34 @@ retry:
>  		if (entry->reserved)
>  			continue;
>  
> -		ret = ttm_bo_reserve_nolru(bo, true, true, true, ticket);
> -		switch (ret) {
> -		case 0:
> -			break;
> -		case -EBUSY:
> -			ttm_eu_del_from_lru_locked(list);
> -			spin_unlock(&glob->lru_lock);
> -			ret = ttm_bo_reserve_nolru(bo, true, false,
> -						   true, ticket);
> -			spin_lock(&glob->lru_lock);
> -
> -			if (!ret)
> -				break;
>  
> -			if (unlikely(ret != -EAGAIN))
> -				goto err;
> +		ret = ttm_bo_reserve_nolru(bo, true, false, true, ticket);
>  
> -			/* fallthrough */
> -		case -EAGAIN:
> +		if (ret == -EDEADLK) {
> +			/* uh oh, we lost out, drop every reservation and try
> +			 * to only reserve this buffer, then start over if
> +			 * this succeeds.
> +			 */
> +			spin_lock(&glob->lru_lock);
>  			ttm_eu_backoff_reservation_locked(list, ticket);
>  			spin_unlock(&glob->lru_lock);
>  			ttm_eu_list_ref_sub(list);
> -			ret = ttm_bo_reserve_slowpath_nolru(bo, true, ticket);
> -			if (unlikely(ret != 0))
> +			ret = ww_mutex_lock_slow_interruptible(&bo->resv->lock,
> +							       ticket);
> +			if (unlikely(ret != 0)) {
> +				if (ret == -EINTR)
> +					ret = -ERESTARTSYS;
>  				goto err_fini;
> +			}
>  
> -			spin_lock(&glob->lru_lock);
>  			entry->reserved = true;
>  			if (unlikely(atomic_read(&bo->cpu_writers) > 0)) {
>  				ret = -EBUSY;
>  				goto err;
>  			}
>  			goto retry;
> -		default:
> +		} else if (ret)
>  			goto err;
> -		}
>  
>  		entry->reserved = true;
>  		if (unlikely(atomic_read(&bo->cpu_writers) > 0)) {
> @@ -189,12 +178,14 @@ retry:
>  	}
>  
>  	ww_acquire_done(ticket);
> +	spin_lock(&glob->lru_lock);
>  	ttm_eu_del_from_lru_locked(list);
>  	spin_unlock(&glob->lru_lock);
>  	ttm_eu_list_ref_sub(list);
>  	return 0;
>  
>  err:
> +	spin_lock(&glob->lru_lock);
>  	ttm_eu_backoff_reservation_locked(list, ticket);
>  	spin_unlock(&glob->lru_lock);
>  	ttm_eu_list_ref_sub(list);
> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> index 0a992b0..31ad860 100644
> --- a/include/drm/ttm/ttm_bo_api.h
> +++ b/include/drm/ttm/ttm_bo_api.h
> @@ -39,6 +39,7 @@
>  #include <linux/mm.h>
>  #include <linux/rbtree.h>
>  #include <linux/bitmap.h>
> +#include <linux/reservation.h>
>  
>  struct ttm_bo_device;
>  
> @@ -153,7 +154,6 @@ struct ttm_tt;
>   * Lru lists may keep one refcount, the delayed delete list, and kref != 0
>   * keeps one refcount. When this refcount reaches zero,
>   * the object is destroyed.
> - * @event_queue: Queue for processes waiting on buffer object status change.
>   * @mem: structure describing current placement.
>   * @persistent_swap_storage: Usually the swap storage is deleted for buffers
>   * pinned in physical memory. If this behaviour is not desired, this member
> @@ -164,12 +164,6 @@ struct ttm_tt;
>   * @lru: List head for the lru list.
>   * @ddestroy: List head for the delayed destroy list.
>   * @swap: List head for swap LRU list.
> - * @val_seq: Sequence of the validation holding the @reserved lock.
> - * Used to avoid starvation when many processes compete to validate the
> - * buffer. This member is protected by the bo_device::lru_lock.
> - * @seq_valid: The value of @val_seq is valid. This value is protected by
> - * the bo_device::lru_lock.
> - * @reserved: Deadlock-free lock used for synchronization state transitions.
>   * @sync_obj: Pointer to a synchronization object.
>   * @priv_flags: Flags describing buffer object internal state.
>   * @vm_rb: Rb node for the vm rb tree.
> @@ -209,10 +203,9 @@ struct ttm_buffer_object {
>  
>  	struct kref kref;
>  	struct kref list_kref;
> -	wait_queue_head_t event_queue;
>  
>  	/**
> -	 * Members protected by the bo::reserved lock.
> +	 * Members protected by the bo::resv::reserved lock.
>  	 */
>  
>  	struct ttm_mem_reg mem;
> @@ -234,15 +227,6 @@ struct ttm_buffer_object {
>  	struct list_head ddestroy;
>  	struct list_head swap;
>  	struct list_head io_reserve_lru;
> -	unsigned long val_seq;
> -	bool seq_valid;
> -
> -	/**
> -	 * Members protected by the bdev::lru_lock
> -	 * only when written to.
> -	 */
> -
> -	atomic_t reserved;
>  
>  	/**
>  	 * Members protected by struct buffer_object_device::fence_lock
> @@ -272,6 +256,9 @@ struct ttm_buffer_object {
>  	uint32_t cur_placement;
>  
>  	struct sg_table *sg;
> +
> +	struct reservation_object *resv;
> +	struct reservation_object ttm_resv;
>  };
>  
>  /**
> @@ -736,7 +723,7 @@ extern void ttm_bo_swapout_all(struct ttm_bo_device *bdev);
>   */
>  static inline bool ttm_bo_is_reserved(struct ttm_buffer_object *bo)
>  {
> -	return atomic_read(&bo->reserved);
> +	return ww_mutex_is_locked(&bo->resv->lock);
>  }
>  
>  #endif
> diff --git a/include/drm/ttm/ttm_execbuf_util.h b/include/drm/ttm/ttm_execbuf_util.h
> index ba71ef9..ec8a1d3 100644
> --- a/include/drm/ttm/ttm_execbuf_util.h
> +++ b/include/drm/ttm/ttm_execbuf_util.h
> @@ -33,7 +33,6 @@
>  
>  #include <ttm/ttm_bo_api.h>
>  #include <linux/list.h>
> -#include <linux/reservation.h>
>  
>  /**
>   * struct ttm_validate_buffer
> -- 
> 1.8.3.1
> 

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH WW 08/13] drm/radeon: inline reservations
  2013-06-27 11:48 ` [PATCH WW 08/13] drm/radeon: " Maarten Lankhorst
@ 2013-06-27 22:04   ` Jerome Glisse
  0 siblings, 0 replies; 21+ messages in thread
From: Jerome Glisse @ 2013-06-27 22:04 UTC (permalink / raw)
  To: Maarten Lankhorst; +Cc: Dave Airlie, dri-devel

On Thu, Jun 27, 2013 at 01:48:23PM +0200, Maarten Lankhorst wrote:
> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>

Reviewed-by: Jerome Glisse <jglisse@redhat.com>

> ---
>  drivers/gpu/drm/radeon/radeon_object.c | 23 -----------------------
>  drivers/gpu/drm/radeon/radeon_object.h | 22 +++++++++++++++++++++-
>  2 files changed, 21 insertions(+), 24 deletions(-)
> 
> diff --git a/drivers/gpu/drm/radeon/radeon_object.c b/drivers/gpu/drm/radeon/radeon_object.c
> index 71287bb..d850dc6 100644
> --- a/drivers/gpu/drm/radeon/radeon_object.c
> +++ b/drivers/gpu/drm/radeon/radeon_object.c
> @@ -619,26 +619,3 @@ int radeon_bo_wait(struct radeon_bo *bo, u32 *mem_type, bool no_wait)
>  	ttm_bo_unreserve(&bo->tbo);
>  	return r;
>  }
> -
> -
> -/**
> - * radeon_bo_reserve - reserve bo
> - * @bo:		bo structure
> - * @no_intr:	don't return -ERESTARTSYS on pending signal
> - *
> - * Returns:
> - * -ERESTARTSYS: A wait for the buffer to become unreserved was interrupted by
> - * a signal. Release all buffer reservations and return to user-space.
> - */
> -int radeon_bo_reserve(struct radeon_bo *bo, bool no_intr)
> -{
> -	int r;
> -
> -	r = ttm_bo_reserve(&bo->tbo, !no_intr, false, false, 0);
> -	if (unlikely(r != 0)) {
> -		if (r != -ERESTARTSYS)
> -			dev_err(bo->rdev->dev, "%p reserve failed\n", bo);
> -		return r;
> -	}
> -	return 0;
> -}
> diff --git a/drivers/gpu/drm/radeon/radeon_object.h b/drivers/gpu/drm/radeon/radeon_object.h
> index 3e62a3a..456ad6b 100644
> --- a/drivers/gpu/drm/radeon/radeon_object.h
> +++ b/drivers/gpu/drm/radeon/radeon_object.h
> @@ -52,7 +52,27 @@ static inline unsigned radeon_mem_type_to_domain(u32 mem_type)
>  	return 0;
>  }
>  
> -int radeon_bo_reserve(struct radeon_bo *bo, bool no_intr);
> +/**
> + * radeon_bo_reserve - reserve bo
> + * @bo:		bo structure
> + * @no_intr:	don't return -ERESTARTSYS on pending signal
> + *
> + * Returns:
> + * -ERESTARTSYS: A wait for the buffer to become unreserved was interrupted by
> + * a signal. Release all buffer reservations and return to user-space.
> + */
> +static inline int radeon_bo_reserve(struct radeon_bo *bo, bool no_intr)
> +{
> +	int r;
> +
> +	r = ttm_bo_reserve(&bo->tbo, !no_intr, false, false, 0);
> +	if (unlikely(r != 0)) {
> +		if (r != -ERESTARTSYS)
> +			dev_err(bo->rdev->dev, "%p reserve failed\n", bo);
> +		return r;
> +	}
> +	return 0;
> +}
>  
>  static inline void radeon_bo_unreserve(struct radeon_bo *bo)
>  {
> -- 
> 1.8.3.1
> 

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH WW 11/13] drm/radeon: get rid of ttm_bo_is_reserved usage
  2013-06-27 11:48 ` [PATCH WW 11/13] drm/radeon: " Maarten Lankhorst
@ 2013-06-27 22:05   ` Jerome Glisse
  0 siblings, 0 replies; 21+ messages in thread
From: Jerome Glisse @ 2013-06-27 22:05 UTC (permalink / raw)
  To: Maarten Lankhorst; +Cc: Dave Airlie, dri-devel

On Thu, Jun 27, 2013 at 01:48:26PM +0200, Maarten Lankhorst wrote:
> Try to use lockdep_assert_held or other alternatives where possible.
> 
> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>

Reviewed-by: Jerome Glisse <jglisse@redhat.com>

> ---
>  drivers/gpu/drm/radeon/radeon_object.c |  8 ++--
>  drivers/gpu/drm/radeon/radeon_object.h |  5 ---
>  drivers/gpu/drm/radeon/radeon_test.c   | 75 +++++++++++++++++-----------------
>  3 files changed, 43 insertions(+), 45 deletions(-)
> 
> diff --git a/drivers/gpu/drm/radeon/radeon_object.c b/drivers/gpu/drm/radeon/radeon_object.c
> index d850dc6..0219d26 100644
> --- a/drivers/gpu/drm/radeon/radeon_object.c
> +++ b/drivers/gpu/drm/radeon/radeon_object.c
> @@ -400,7 +400,7 @@ int radeon_bo_get_surface_reg(struct radeon_bo *bo)
>  	int steal;
>  	int i;
>  
> -	BUG_ON(!radeon_bo_is_reserved(bo));
> +	lockdep_assert_held(&bo->tbo.resv->lock.base);
>  
>  	if (!bo->tiling_flags)
>  		return 0;
> @@ -526,7 +526,8 @@ void radeon_bo_get_tiling_flags(struct radeon_bo *bo,
>  				uint32_t *tiling_flags,
>  				uint32_t *pitch)
>  {
> -	BUG_ON(!radeon_bo_is_reserved(bo));
> +	lockdep_assert_held(&bo->tbo.resv->lock.base);
> +
>  	if (tiling_flags)
>  		*tiling_flags = bo->tiling_flags;
>  	if (pitch)
> @@ -536,7 +537,8 @@ void radeon_bo_get_tiling_flags(struct radeon_bo *bo,
>  int radeon_bo_check_tiling(struct radeon_bo *bo, bool has_moved,
>  				bool force_drop)
>  {
> -	BUG_ON(!radeon_bo_is_reserved(bo) && !force_drop);
> +	if (!force_drop)
> +		lockdep_assert_held(&bo->tbo.resv->lock.base);
>  
>  	if (!(bo->tiling_flags & RADEON_TILING_SURFACE))
>  		return 0;
> diff --git a/drivers/gpu/drm/radeon/radeon_object.h b/drivers/gpu/drm/radeon/radeon_object.h
> index 456ad6b..91519a5 100644
> --- a/drivers/gpu/drm/radeon/radeon_object.h
> +++ b/drivers/gpu/drm/radeon/radeon_object.h
> @@ -98,11 +98,6 @@ static inline unsigned long radeon_bo_size(struct radeon_bo *bo)
>  	return bo->tbo.num_pages << PAGE_SHIFT;
>  }
>  
> -static inline bool radeon_bo_is_reserved(struct radeon_bo *bo)
> -{
> -	return ttm_bo_is_reserved(&bo->tbo);
> -}
> -
>  static inline unsigned radeon_bo_ngpu_pages(struct radeon_bo *bo)
>  {
>  	return (bo->tbo.num_pages << PAGE_SHIFT) / RADEON_GPU_PAGE_SIZE;
> diff --git a/drivers/gpu/drm/radeon/radeon_test.c b/drivers/gpu/drm/radeon/radeon_test.c
> index bbed4af..f4d6bce 100644
> --- a/drivers/gpu/drm/radeon/radeon_test.c
> +++ b/drivers/gpu/drm/radeon/radeon_test.c
> @@ -35,7 +35,6 @@ static void radeon_do_test_moves(struct radeon_device *rdev, int flag)
>  {
>  	struct radeon_bo *vram_obj = NULL;
>  	struct radeon_bo **gtt_obj = NULL;
> -	struct radeon_fence *fence = NULL;
>  	uint64_t gtt_addr, vram_addr;
>  	unsigned i, n, size;
>  	int r, ring;
> @@ -81,37 +80,38 @@ static void radeon_do_test_moves(struct radeon_device *rdev, int flag)
>  	}
>  	r = radeon_bo_reserve(vram_obj, false);
>  	if (unlikely(r != 0))
> -		goto out_cleanup;
> +		goto out_unref;
>  	r = radeon_bo_pin(vram_obj, RADEON_GEM_DOMAIN_VRAM, &vram_addr);
>  	if (r) {
>  		DRM_ERROR("Failed to pin VRAM object\n");
> -		goto out_cleanup;
> +		goto out_unres;
>  	}
>  	for (i = 0; i < n; i++) {
>  		void *gtt_map, *vram_map;
>  		void **gtt_start, **gtt_end;
>  		void **vram_start, **vram_end;
> +		struct radeon_fence *fence = NULL;
>  
>  		r = radeon_bo_create(rdev, size, PAGE_SIZE, true,
>  				     RADEON_GEM_DOMAIN_GTT, NULL, gtt_obj + i);
>  		if (r) {
>  			DRM_ERROR("Failed to create GTT object %d\n", i);
> -			goto out_cleanup;
> +			goto out_lclean;
>  		}
>  
>  		r = radeon_bo_reserve(gtt_obj[i], false);
>  		if (unlikely(r != 0))
> -			goto out_cleanup;
> +			goto out_lclean_unref;
>  		r = radeon_bo_pin(gtt_obj[i], RADEON_GEM_DOMAIN_GTT, &gtt_addr);
>  		if (r) {
>  			DRM_ERROR("Failed to pin GTT object %d\n", i);
> -			goto out_cleanup;
> +			goto out_lclean_unres;
>  		}
>  
>  		r = radeon_bo_kmap(gtt_obj[i], &gtt_map);
>  		if (r) {
>  			DRM_ERROR("Failed to map GTT object %d\n", i);
> -			goto out_cleanup;
> +			goto out_lclean_unpin;
>  		}
>  
>  		for (gtt_start = gtt_map, gtt_end = gtt_map + size;
> @@ -127,13 +127,13 @@ static void radeon_do_test_moves(struct radeon_device *rdev, int flag)
>  			r = radeon_copy_blit(rdev, gtt_addr, vram_addr, size / RADEON_GPU_PAGE_SIZE, &fence);
>  		if (r) {
>  			DRM_ERROR("Failed GTT->VRAM copy %d\n", i);
> -			goto out_cleanup;
> +			goto out_lclean_unpin;
>  		}
>  
>  		r = radeon_fence_wait(fence, false);
>  		if (r) {
>  			DRM_ERROR("Failed to wait for GTT->VRAM fence %d\n", i);
> -			goto out_cleanup;
> +			goto out_lclean_unpin;
>  		}
>  
>  		radeon_fence_unref(&fence);
> @@ -141,7 +141,7 @@ static void radeon_do_test_moves(struct radeon_device *rdev, int flag)
>  		r = radeon_bo_kmap(vram_obj, &vram_map);
>  		if (r) {
>  			DRM_ERROR("Failed to map VRAM object after copy %d\n", i);
> -			goto out_cleanup;
> +			goto out_lclean_unpin;
>  		}
>  
>  		for (gtt_start = gtt_map, gtt_end = gtt_map + size,
> @@ -160,7 +160,7 @@ static void radeon_do_test_moves(struct radeon_device *rdev, int flag)
>  					  (vram_addr - rdev->mc.vram_start +
>  					   (void*)gtt_start - gtt_map));
>  				radeon_bo_kunmap(vram_obj);
> -				goto out_cleanup;
> +				goto out_lclean_unpin;
>  			}
>  			*vram_start = vram_start;
>  		}
> @@ -173,13 +173,13 @@ static void radeon_do_test_moves(struct radeon_device *rdev, int flag)
>  			r = radeon_copy_blit(rdev, vram_addr, gtt_addr, size / RADEON_GPU_PAGE_SIZE, &fence);
>  		if (r) {
>  			DRM_ERROR("Failed VRAM->GTT copy %d\n", i);
> -			goto out_cleanup;
> +			goto out_lclean_unpin;
>  		}
>  
>  		r = radeon_fence_wait(fence, false);
>  		if (r) {
>  			DRM_ERROR("Failed to wait for VRAM->GTT fence %d\n", i);
> -			goto out_cleanup;
> +			goto out_lclean_unpin;
>  		}
>  
>  		radeon_fence_unref(&fence);
> @@ -187,7 +187,7 @@ static void radeon_do_test_moves(struct radeon_device *rdev, int flag)
>  		r = radeon_bo_kmap(gtt_obj[i], &gtt_map);
>  		if (r) {
>  			DRM_ERROR("Failed to map GTT object after copy %d\n", i);
> -			goto out_cleanup;
> +			goto out_lclean_unpin;
>  		}
>  
>  		for (gtt_start = gtt_map, gtt_end = gtt_map + size,
> @@ -206,7 +206,7 @@ static void radeon_do_test_moves(struct radeon_device *rdev, int flag)
>  					  (gtt_addr - rdev->mc.gtt_start +
>  					   (void*)vram_start - vram_map));
>  				radeon_bo_kunmap(gtt_obj[i]);
> -				goto out_cleanup;
> +				goto out_lclean_unpin;
>  			}
>  		}
>  
> @@ -214,31 +214,32 @@ static void radeon_do_test_moves(struct radeon_device *rdev, int flag)
>  
>  		DRM_INFO("Tested GTT->VRAM and VRAM->GTT copy for GTT offset 0x%llx\n",
>  			 gtt_addr - rdev->mc.gtt_start);
> +		continue;
> +
> +out_lclean_unpin:
> +		radeon_bo_unpin(gtt_obj[i]);
> +out_lclean_unres:
> +		radeon_bo_unreserve(gtt_obj[i]);
> +out_lclean_unref:
> +		radeon_bo_unref(&gtt_obj[i]);
> +out_lclean:
> +		for (--i; i >= 0; --i) {
> +			radeon_bo_unpin(gtt_obj[i]);
> +			radeon_bo_unreserve(gtt_obj[i]);
> +			radeon_bo_unref(&gtt_obj[i]);
> +		}
> +		if (fence)
> +			radeon_fence_unref(&fence);
> +		break;
>  	}
>  
> +	radeon_bo_unpin(vram_obj);
> +out_unres:
> +	radeon_bo_unreserve(vram_obj);
> +out_unref:
> +	radeon_bo_unref(&vram_obj);
>  out_cleanup:
> -	if (vram_obj) {
> -		if (radeon_bo_is_reserved(vram_obj)) {
> -			radeon_bo_unpin(vram_obj);
> -			radeon_bo_unreserve(vram_obj);
> -		}
> -		radeon_bo_unref(&vram_obj);
> -	}
> -	if (gtt_obj) {
> -		for (i = 0; i < n; i++) {
> -			if (gtt_obj[i]) {
> -				if (radeon_bo_is_reserved(gtt_obj[i])) {
> -					radeon_bo_unpin(gtt_obj[i]);
> -					radeon_bo_unreserve(gtt_obj[i]);
> -				}
> -				radeon_bo_unref(&gtt_obj[i]);
> -			}
> -		}
> -		kfree(gtt_obj);
> -	}
> -	if (fence) {
> -		radeon_fence_unref(&fence);
> -	}
> +	kfree(gtt_obj);
>  	if (r) {
>  		printk(KERN_WARNING "Error while testing BO move.\n");
>  	}
> -- 
> 1.8.3.1
> 
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/dri-devel


Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-06-27 11:48 [PATCH WW 00/13] Convert TTM to Wound/wait mutexes Maarten Lankhorst
2013-06-27 11:48 ` [PATCH WW 01/13] reservation: cross-device reservation support, v4 Maarten Lankhorst
2013-06-27 21:45   ` Jerome Glisse
2013-06-27 11:48 ` [PATCH WW 02/13] drm/ttm: make ttm reservation calls behave like reservation calls Maarten Lankhorst
2013-06-27 21:47   ` Jerome Glisse
2013-06-27 11:48 ` [PATCH WW 03/13] drm/nouveau: make flipping lockdep safe Maarten Lankhorst
2013-06-27 11:48 ` [PATCH WW 04/13] drm/ttm: convert to the reservation api Maarten Lankhorst
2013-06-27 22:03   ` Jerome Glisse
2013-06-27 11:48 ` [PATCH WW 05/13] drm/ast: inline reservations Maarten Lankhorst
2013-06-27 11:48 ` [PATCH WW 06/13] drm/cirrus: " Maarten Lankhorst
2013-06-27 11:48 ` [PATCH WW 07/13] drm/mgag200: " Maarten Lankhorst
2013-06-27 11:48 ` [PATCH WW 08/13] drm/radeon: " Maarten Lankhorst
2013-06-27 22:04   ` Jerome Glisse
2013-06-27 11:48 ` [PATCH WW 09/13] drm/ttm: inline ttm_bo_reserve and related calls Maarten Lankhorst
2013-06-27 12:21   ` Daniel Vetter
2013-06-27 11:48 ` [PATCH WW 10/13] drm/ttm: get rid of ttm_bo_is_reserved usage Maarten Lankhorst
2013-06-27 12:23   ` Daniel Vetter
2013-06-27 11:48 ` [PATCH WW 11/13] drm/radeon: " Maarten Lankhorst
2013-06-27 22:05   ` Jerome Glisse
2013-06-27 11:48 ` [PATCH WW 12/13] drm/vmwgfx: " Maarten Lankhorst
2013-06-27 11:48 ` [PATCH WW 13/13] drm/ttm: get rid of ttm_bo_is_reserved Maarten Lankhorst
